From a1a2c957da58793c30d5c854df5b18bbd6e405fe Mon Sep 17 00:00:00 2001 From: Schspa Shi Date: Sun, 8 May 2022 23:02:47 +0800 Subject: [PATCH 0001/1842] usb: gadget: fix race when gadget driver register via ioctl commit 5f0b5f4d50fa0faa8c76ef9d42a42e8d43f98b44 upstream. usb_gadget_register_driver() can be called multiple times by two threads via the USB_RAW_IOCTL_RUN ioctl syscall, which will lead to multiple registrations. Call trace: driver_register+0x220/0x3a0 drivers/base/driver.c:171 usb_gadget_register_driver_owner+0xfb/0x1e0 drivers/usb/gadget/udc/core.c:1546 raw_ioctl_run drivers/usb/gadget/legacy/raw_gadget.c:513 [inline] raw_ioctl+0x1883/0x2730 drivers/usb/gadget/legacy/raw_gadget.c:1220 ioctl USB_RAW_IOCTL_RUN This routine allows two processes to register the same driver instance via the ioctl syscall, which leads to a race condition. Please refer to the following scenario. T1 T2 ------------------------------------------------------------------ usb_gadget_register_driver_owner driver_register driver_register driver_find driver_find bus_add_driver bus_add_driver priv alloced drv->p = priv; kobject_init_and_add // refcount = 1; //couldn't find an available UDC or it's busy priv alloced drv->priv = priv; kobject_init_and_add ---> refcount = 1 <------ // register success ===================== another ioctl/process ====================== driver_register driver_find k = kset_find_obj() ---> refcount = 2 <------ driver_unregister // drv->p become T2's priv ---> refcount = 1 <------ kobject_put(k) ---> refcount = 0 <------ return priv->driver; --------UAF here---------- There will be a UAF in this scenario. We can fix it by adding a new STATE_DEV_REGISTERING device state to avoid double registration. Reported-by: syzbot+dc7c3ca638e773db07f6@syzkaller.appspotmail.com Link: https://lore.kernel.org/all/000000000000e66c2805de55b15a@google.com/ Reviewed-by: Andrey Konovalov Signed-off-by: Schspa Shi Link: https://lore.kernel.org/r/20220508150247.38204-1-schspa@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/gadget/legacy/raw_gadget.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c index 33efa6915b91..34cecd3660bf 100644 --- a/drivers/usb/gadget/legacy/raw_gadget.c +++ b/drivers/usb/gadget/legacy/raw_gadget.c @@ -144,6 +144,7 @@ enum dev_state { STATE_DEV_INVALID = 0, STATE_DEV_OPENED, STATE_DEV_INITIALIZED, + STATE_DEV_REGISTERING, STATE_DEV_RUNNING, STATE_DEV_CLOSED, STATE_DEV_FAILED @@ -507,6 +508,7 @@ static int raw_ioctl_run(struct raw_dev *dev, unsigned long value) ret = -EINVAL; goto out_unlock; } + dev->state = STATE_DEV_REGISTERING; spin_unlock_irqrestore(&dev->lock, flags); ret = usb_gadget_probe_driver(&dev->driver); From 3c48558be571e01f67e65edcf03193484eeb2b79 Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Thu, 19 May 2022 06:05:27 -0600 Subject: [PATCH 0002/1842] io_uring: always grab file table for deferred statx Lee reports that there's a use-after-free of the process file table. There's an assumption that we don't need the file table for some variants of statx invocation, but that turns out to be false and we end up not grabbing a reference for the request even if the deferred execution uses it. Get rid of the REQ_F_NO_FILE_TABLE optimization for statx, and always grab that reference. This issue doesn't exist upstream, since the native workers were introduced in 5.12.
Link: https://lore.kernel.org/io-uring/YoOJ%2FT4QRKC+fAZE@google.com/ Reported-by: Lee Jones Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- fs/io_uring.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index 4330603eae35..3ecf71151fb1 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -4252,12 +4252,8 @@ static int io_statx(struct io_kiocb *req, bool force_nonblock) struct io_statx *ctx = &req->statx; int ret; - if (force_nonblock) { - /* only need file table for an actual valid fd */ - if (ctx->dfd == -1 || ctx->dfd == AT_FDCWD) - req->flags |= REQ_F_NO_FILE_TABLE; + if (force_nonblock) return -EAGAIN; - } ret = do_statx(ctx->dfd, ctx->filename, ctx->flags, ctx->mask, ctx->buffer); From 911b36267855501f7f80a75927c128c0ac03fe58 Mon Sep 17 00:00:00 2001 From: Willy Tarreau Date: Sun, 8 May 2022 11:37:07 +0200 Subject: [PATCH 0003/1842] floppy: use a statically allocated error counter commit f71f01394f742fc4558b3f9f4c7ef4c4cf3b07c8 upstream. Interrupt handler bad_flp_intr() may cause a UAF on the recently freed request just to increment the error count. There's no point keeping that one in the request anyway, and since the interrupt handler uses a static pointer to the error which cannot be kept in sync with the pending request, better make it use a static error counter that's reset for each new request. This reset now happens when entering redo_fd_request() for a new request via set_next_request(). One initial concern about a single error counter was that errors on one floppy drive could be reported on another one, but this problem is not real given that the driver uses a single drive at a time, as that PC-compatible controllers also have this limitation by using shared signals. As such the error count is always for the "current" drive. Reported-by: Minh Yuan Suggested-by: Linus Torvalds Tested-by: Denis Efremov Signed-off-by: Willy Tarreau Signed-off-by: Linus Torvalds Signed-off-by: Denis Efremov Signed-off-by: Greg Kroah-Hartman --- drivers/block/floppy.c | 20 +++++++++----------- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c index c9411fe2f0af..4ef407a33996 100644 --- a/drivers/block/floppy.c +++ b/drivers/block/floppy.c @@ -509,8 +509,8 @@ static unsigned long fdc_busy; static DECLARE_WAIT_QUEUE_HEAD(fdc_wait); static DECLARE_WAIT_QUEUE_HEAD(command_done); -/* Errors during formatting are counted here. */ -static int format_errors; +/* errors encountered on the current (or last) request */ +static int floppy_errors; /* Format request descriptor. 
*/ static struct format_descr format_req; @@ -530,7 +530,6 @@ static struct format_descr format_req; static char *floppy_track_buffer; static int max_buffer_sectors; -static int *errors; typedef void (*done_f)(int); static const struct cont_t { void (*interrupt)(void); @@ -1455,7 +1454,7 @@ static int interpret_errors(void) if (drive_params[current_drive].flags & FTD_MSG) DPRINT("Over/Underrun - retrying\n"); bad = 0; - } else if (*errors >= drive_params[current_drive].max_errors.reporting) { + } else if (floppy_errors >= drive_params[current_drive].max_errors.reporting) { print_errors(); } if (reply_buffer[ST2] & ST2_WC || reply_buffer[ST2] & ST2_BC) @@ -2095,7 +2094,7 @@ static void bad_flp_intr(void) if (!next_valid_format(current_drive)) return; } - err_count = ++(*errors); + err_count = ++floppy_errors; INFBOUND(write_errors[current_drive].badness, err_count); if (err_count > drive_params[current_drive].max_errors.abort) cont->done(0); @@ -2240,9 +2239,8 @@ static int do_format(int drive, struct format_descr *tmp_format_req) return -EINVAL; } format_req = *tmp_format_req; - format_errors = 0; cont = &format_cont; - errors = &format_errors; + floppy_errors = 0; ret = wait_til_done(redo_format, true); if (ret == -EINTR) return -EINTR; @@ -2721,7 +2719,7 @@ static int make_raw_rw_request(void) */ if (!direct || (indirect * 2 > direct * 3 && - *errors < drive_params[current_drive].max_errors.read_track && + floppy_errors < drive_params[current_drive].max_errors.read_track && ((!probing || (drive_params[current_drive].read_track & (1 << drive_state[current_drive].probed_format)))))) { max_size = blk_rq_sectors(current_req); @@ -2846,10 +2844,11 @@ static int set_next_request(void) current_req = list_first_entry_or_null(&floppy_reqs, struct request, queuelist); if (current_req) { - current_req->error_count = 0; + floppy_errors = 0; list_del_init(¤t_req->queuelist); + return 1; } - return current_req != NULL; + return 0; } /* Starts or continues processing request. Will automatically unlock the @@ -2908,7 +2907,6 @@ static void redo_fd_request(void) _floppy = floppy_type + drive_params[current_drive].autodetect[drive_state[current_drive].probed_format]; } else probing = 0; - errors = &(current_req->error_count); tmp = make_raw_rw_request(); if (tmp < 2) { request_done(tmp); From 55c820c1b2b615103a25486217724461f95c9c8d Mon Sep 17 00:00:00 2001 From: Greg Thelen Date: Mon, 16 May 2022 17:08:35 -0700 Subject: [PATCH 0004/1842] Revert "drm/i915/opregion: check port number bounds for SWSCI display power state" This reverts commit b84857c06ef9e72d09fadafdbb3ce9af64af954f. 5.10 stable contains 2 identical commits: 1. commit eb7bf11e8ef1 ("drm/i915/opregion: check port number bounds for SWSCI display power state") 2. commit b84857c06ef9 ("drm/i915/opregion: check port number bounds for SWSCI display power state") Both commits add separate checks for the same condition. Revert the 2nd redundant check to match upstream, which only has one check. 
Signed-off-by: Greg Thelen Signed-off-by: Yu Liao Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/i915/display/intel_opregion.c | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_opregion.c b/drivers/gpu/drm/i915/display/intel_opregion.c index 6d083b98f6ae..abff2d6cedd1 100644 --- a/drivers/gpu/drm/i915/display/intel_opregion.c +++ b/drivers/gpu/drm/i915/display/intel_opregion.c @@ -376,21 +376,6 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder, return -EINVAL; } - /* - * The port numbering and mapping here is bizarre. The now-obsolete - * swsci spec supports ports numbered [0..4]. Port E is handled as a - * special case, but port F and beyond are not. The functionality is - * supposed to be obsolete for new platforms. Just bail out if the port - * number is out of bounds after mapping. - */ - if (port > 4) { - drm_dbg_kms(&dev_priv->drm, - "[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n", - intel_encoder->base.base.id, intel_encoder->base.name, - port_name(intel_encoder->port), port); - return -EINVAL; - } - if (!enable) parm |= 4 << 8; From 170110adbecc1c603baa57246c15d38ef1faa0fa Mon Sep 17 00:00:00 2001 From: Sasha Neftin Date: Wed, 7 Jul 2021 08:14:40 +0300 Subject: [PATCH 0005/1842] igc: Remove _I_PHY_ID checking commit 7c496de538eebd8212dc2a3c9a468386b264d0d4 upstream. i225 devices have only one PHY vendor. There is no point checking _I_PHY_ID during the link establishment and auto-negotiation process. This patch comes to clean up these pointless checkings. Signed-off-by: Sasha Neftin Tested-by: Dvora Fuxbrumer Signed-off-by: Tony Nguyen Signed-off-by: Nobuhiro Iwamatsu Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/intel/igc/igc_base.c | 10 +--------- drivers/net/ethernet/intel/igc/igc_main.c | 3 +-- drivers/net/ethernet/intel/igc/igc_phy.c | 6 ++---- 3 files changed, 4 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c index fd37d2c203af..7f3523f0d196 100644 --- a/drivers/net/ethernet/intel/igc/igc_base.c +++ b/drivers/net/ethernet/intel/igc/igc_base.c @@ -187,15 +187,7 @@ static s32 igc_init_phy_params_base(struct igc_hw *hw) igc_check_for_copper_link(hw); - /* Verify phy id and set remaining function pointers */ - switch (phy->id) { - case I225_I_PHY_ID: - phy->type = igc_phy_i225; - break; - default: - ret_val = -IGC_ERR_PHY; - goto out; - } + phy->type = igc_phy_i225; out: return ret_val; diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 61cebb7df6bc..da39380ab205 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -4189,8 +4189,7 @@ bool igc_has_link(struct igc_adapter *adapter) break; } - if (hw->mac.type == igc_i225 && - hw->phy.id == I225_I_PHY_ID) { + if (hw->mac.type == igc_i225) { if (!netif_carrier_ok(adapter->netdev)) { adapter->flags &= ~IGC_FLAG_NEED_LINK_UPDATE; } else if (!(adapter->flags & IGC_FLAG_NEED_LINK_UPDATE)) { diff --git a/drivers/net/ethernet/intel/igc/igc_phy.c b/drivers/net/ethernet/intel/igc/igc_phy.c index 8de4de2e5636..3a103406eadb 100644 --- a/drivers/net/ethernet/intel/igc/igc_phy.c +++ b/drivers/net/ethernet/intel/igc/igc_phy.c @@ -249,8 +249,7 @@ static s32 igc_phy_setup_autoneg(struct igc_hw *hw) return ret_val; } - if ((phy->autoneg_mask & ADVERTISE_2500_FULL) && - hw->phy.id == I225_I_PHY_ID) { + if (phy->autoneg_mask & 
ADVERTISE_2500_FULL) { /* Read the MULTI GBT AN Control Register - reg 7.32 */ ret_val = phy->ops.read_reg(hw, (STANDARD_AN_REG_MASK << MMD_DEVADDR_SHIFT) | @@ -390,8 +389,7 @@ static s32 igc_phy_setup_autoneg(struct igc_hw *hw) ret_val = phy->ops.write_reg(hw, PHY_1000T_CTRL, mii_1000t_ctrl_reg); - if ((phy->autoneg_mask & ADVERTISE_2500_FULL) && - hw->phy.id == I225_I_PHY_ID) + if (phy->autoneg_mask & ADVERTISE_2500_FULL) ret_val = phy->ops.write_reg(hw, (STANDARD_AN_REG_MASK << MMD_DEVADDR_SHIFT) | From d0229838b63c1f5f33a267c067e7e66e126929aa Mon Sep 17 00:00:00 2001 From: Sasha Neftin Date: Sat, 10 Jul 2021 20:57:50 +0300 Subject: [PATCH 0006/1842] igc: Remove phy->type checking commit 47bca7de6a4fb8dcb564c7ca14d885c91ed19e03 upstream. i225 devices have only one phy->type: copper. There is no point checking phy->type during the igc_has_link method from the watchdog that invoked every 2 seconds. This patch comes to clean up these pointless checkings. Signed-off-by: Sasha Neftin Tested-by: Dvora Fuxbrumer Signed-off-by: Tony Nguyen Signed-off-by: Nobuhiro Iwamatsu Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/intel/igc/igc_main.c | 15 ++++----------- 1 file changed, 4 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index da39380ab205..fd9257c7059a 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -4177,17 +4177,10 @@ bool igc_has_link(struct igc_adapter *adapter) * false until the igc_check_for_link establishes link * for copper adapters ONLY */ - switch (hw->phy.media_type) { - case igc_media_type_copper: - if (!hw->mac.get_link_status) - return true; - hw->mac.ops.check_for_link(hw); - link_active = !hw->mac.get_link_status; - break; - default: - case igc_media_type_unknown: - break; - } + if (!hw->mac.get_link_status) + return true; + hw->mac.ops.check_for_link(hw); + link_active = !hw->mac.get_link_status; if (hw->mac.type == igc_i225) { if (!netif_carrier_ok(adapter->netdev)) { From 67136fff5b9acc5c30e1b6db568206bb9bc394eb Mon Sep 17 00:00:00 2001 From: Sasha Neftin Date: Thu, 9 Sep 2021 20:49:04 +0300 Subject: [PATCH 0007/1842] igc: Update I226_K device ID commit 79cc8322b6d82747cb63ea464146c0bf5b5a6bc1 upstream. The device ID for I226_K was incorrectly assigned, update the device ID to the correct one. Fixes: bfa5e98c9de4 ("igc: Add new device ID") Signed-off-by: Sasha Neftin Tested-by: Nechama Kraus Signed-off-by: Tony Nguyen Signed-off-by: Nobuhiro Iwamatsu Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/intel/igc/igc_hw.h | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/ethernet/intel/igc/igc_hw.h b/drivers/net/ethernet/intel/igc/igc_hw.h index 55dae7c4703f..7e29f41f70e0 100644 --- a/drivers/net/ethernet/intel/igc/igc_hw.h +++ b/drivers/net/ethernet/intel/igc/igc_hw.h @@ -22,6 +22,7 @@ #define IGC_DEV_ID_I220_V 0x15F7 #define IGC_DEV_ID_I225_K 0x3100 #define IGC_DEV_ID_I225_K2 0x3101 +#define IGC_DEV_ID_I226_K 0x3102 #define IGC_DEV_ID_I225_LMVP 0x5502 #define IGC_DEV_ID_I225_IT 0x0D9F #define IGC_DEV_ID_I226_LM 0x125B From 2b4e5a2d7da04e6c80a5869fe209caee322a65b6 Mon Sep 17 00:00:00 2001 From: Vincent Whitchurch Date: Fri, 10 Dec 2021 17:09:51 +0100 Subject: [PATCH 0008/1842] rtc: fix use-after-free on device removal [ Upstream commit c8fa17d9f08a448184f03d352145099b5beb618e ] If the irqwork is still scheduled or running while the RTC device is removed, a use-after-free occurs in rtc_timer_do_work(). 
Cleanup the timerqueue and ensure the work is stopped to fix this. BUG: KASAN: use-after-free in mutex_lock+0x94/0x110 Write of size 8 at addr ffffff801d846338 by task kworker/3:1/41 Workqueue: events rtc_timer_do_work Call trace: mutex_lock+0x94/0x110 rtc_timer_do_work+0xec/0x630 process_one_work+0x5fc/0x1344 ... Allocated by task 551: kmem_cache_alloc_trace+0x384/0x6e0 devm_rtc_allocate_device+0xf0/0x574 devm_rtc_device_register+0x2c/0x12c ... Freed by task 572: kfree+0x114/0x4d0 rtc_device_release+0x64/0x80 device_release+0x8c/0x1f4 kobject_put+0x1c4/0x4b0 put_device+0x20/0x30 devm_rtc_release_device+0x1c/0x30 devm_action_release+0x54/0x90 release_nodes+0x124/0x310 devres_release_group+0x170/0x240 i2c_device_remove+0xd8/0x314 ... Last potentially related work creation: insert_work+0x5c/0x330 queue_work_on+0xcc/0x154 rtc_set_time+0x188/0x5bc rtc_dev_ioctl+0x2ac/0xbd0 ... Signed-off-by: Vincent Whitchurch Signed-off-by: Alexandre Belloni Link: https://lore.kernel.org/r/20211210160951.7718-1-vincent.whitchurch@axis.com Signed-off-by: Sasha Levin --- drivers/rtc/class.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c index 7c88d190c51f..625effe6cb65 100644 --- a/drivers/rtc/class.c +++ b/drivers/rtc/class.c @@ -26,6 +26,15 @@ struct class *rtc_class; static void rtc_device_release(struct device *dev) { struct rtc_device *rtc = to_rtc_device(dev); + struct timerqueue_head *head = &rtc->timerqueue; + struct timerqueue_node *node; + + mutex_lock(&rtc->ops_lock); + while ((node = timerqueue_getnext(head))) + timerqueue_del(head, node); + mutex_unlock(&rtc->ops_lock); + + cancel_work_sync(&rtc->irqwork); ida_simple_remove(&rtc_ida, rtc->id); kfree(rtc); From c39b91fcd5e35b62da3dc649ad4d239ae50389cb Mon Sep 17 00:00:00 2001 From: Hugo Villeneuve Date: Tue, 8 Feb 2022 11:29:07 -0500 Subject: [PATCH 0009/1842] rtc: pcf2127: fix bug when reading alarm registers [ Upstream commit 73ce05302007eece23a6acb7dc124c92a2209087 ] The first bug is that reading the 5 alarm registers results in a read operation of 20 bytes. The reason is because the destination buffer is defined as an array of "unsigned int", and we use the sizeof() operator on this array to define the bulk read count. The second bug is that the read value is invalid, because we are indexing the destination buffer as integers (4 bytes), instead of indexing it as u8. Changing the destination buffer type to u8 fixes both problems. 
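As a minimal stand-alone illustration of the sizeof() pitfall described above (plain user-space C with hypothetical buffer names, not the driver code): an array of unsigned int makes a sizeof()-sized bulk read ask for 20 bytes on common ABIs, while an array of bytes yields the intended 5.

    #include <stdio.h>

    int main(void)
    {
            unsigned int wide_buf[5];   /* the old declaration: 5 registers, 4 bytes each */
            unsigned char byte_buf[5];  /* the fixed declaration (u8 in the kernel)       */

            /* A bulk read sized with sizeof() would request 20 bytes here, not 5. */
            printf("sizeof(wide_buf) = %zu\n", sizeof(wide_buf));
            printf("sizeof(byte_buf) = %zu\n", sizeof(byte_buf));
            return 0;
    }

Indexing such a widened buffer as integers also places each register value four bytes apart, which is the second problem the patch describes.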
Signed-off-by: Hugo Villeneuve Signed-off-by: Alexandre Belloni Link: https://lore.kernel.org/r/20220208162908.3182581-1-hugo@hugovil.com Signed-off-by: Sasha Levin --- drivers/rtc/rtc-pcf2127.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/rtc/rtc-pcf2127.c b/drivers/rtc/rtc-pcf2127.c index f0a6861ff3ae..715513311ece 100644 --- a/drivers/rtc/rtc-pcf2127.c +++ b/drivers/rtc/rtc-pcf2127.c @@ -366,7 +366,8 @@ static int pcf2127_watchdog_init(struct device *dev, struct pcf2127 *pcf2127) static int pcf2127_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm) { struct pcf2127 *pcf2127 = dev_get_drvdata(dev); - unsigned int buf[5], ctrl2; + u8 buf[5]; + unsigned int ctrl2; int ret; ret = regmap_read(pcf2127->regmap, PCF2127_REG_CTRL2, &ctrl2); From ea6a86886caa1ec659dd08b38ca879bacde47123 Mon Sep 17 00:00:00 2001 From: David Gow Date: Thu, 10 Feb 2022 11:43:53 +0800 Subject: [PATCH 0010/1842] um: Cleanup syscall_handler_t definition/cast, fix warning MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit f4f03f299a56ce4d73c5431e0327b3b6cb55ebb9 ] The syscall_handler_t type for x86_64 was defined as 'long (*)(void)', but always cast to 'long (*)(long, long, long, long, long, long)' before use. This now triggers a warning (see below). Define syscall_handler_t as the latter instead, and remove the cast. This simplifies the code, and fixes the warning. Warning: In file included from ../arch/um/include/asm/processor-generic.h:13 from ../arch/x86/um/asm/processor.h:41 from ../include/linux/rcupdate.h:30 from ../include/linux/rculist.h:11 from ../include/linux/pid.h:5 from ../include/linux/sched.h:14 from ../include/linux/ptrace.h:6 from ../arch/um/kernel/skas/syscall.c:7: ../arch/um/kernel/skas/syscall.c: In function ‘handle_syscall’: ../arch/x86/um/shared/sysdep/syscalls_64.h:18:11: warning: cast between incompatible function types from ‘long int (*)(void)’ to ‘long int (*)(long int, long int, long int, long int, long int, long int)’ [ -Wcast-function-type] 18 | (((long (*)(long, long, long, long, long, long)) \ | ^ ../arch/x86/um/asm/ptrace.h:36:62: note: in definition of macro ‘PT_REGS_SET_SYSCALL_RETURN’ 36 | #define PT_REGS_SET_SYSCALL_RETURN(r, res) (PT_REGS_AX(r) = (res)) | ^~~ ../arch/um/kernel/skas/syscall.c:46:33: note: in expansion of macro ‘EXECUTE_SYSCALL’ 46 | EXECUTE_SYSCALL(syscall, regs)); | ^~~~~~~~~~~~~~~ Signed-off-by: David Gow Signed-off-by: Richard Weinberger Signed-off-by: Sasha Levin --- arch/x86/um/shared/sysdep/syscalls_64.h | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/arch/x86/um/shared/sysdep/syscalls_64.h b/arch/x86/um/shared/sysdep/syscalls_64.h index 8a7d5e1da98e..1e6875b4ffd8 100644 --- a/arch/x86/um/shared/sysdep/syscalls_64.h +++ b/arch/x86/um/shared/sysdep/syscalls_64.h @@ -10,13 +10,12 @@ #include #include -typedef long syscall_handler_t(void); +typedef long syscall_handler_t(long, long, long, long, long, long); extern syscall_handler_t *sys_call_table[]; #define EXECUTE_SYSCALL(syscall, regs) \ - (((long (*)(long, long, long, long, long, long)) \ - (*sys_call_table[syscall]))(UPT_SYSCALL_ARG1(®s->regs), \ + (((*sys_call_table[syscall]))(UPT_SYSCALL_ARG1(®s->regs), \ UPT_SYSCALL_ARG2(®s->regs), \ UPT_SYSCALL_ARG3(®s->regs), \ UPT_SYSCALL_ARG4(®s->regs), \ From d5e88c2d76efa9d7bb7ceffaec60fe6c76c748d7 Mon Sep 17 00:00:00 2001 From: Jeff LaBundy Date: Sun, 20 Mar 2022 21:55:27 -0700 Subject: [PATCH 0011/1842] Input: add bounds checking to 
input_set_capability() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 409353cbe9fe48f6bc196114c442b1cff05a39bc ] Update input_set_capability() to prevent kernel panic in case the event code exceeds the bitmap for the given event type. Suggested-by: Tomasz Moń Signed-off-by: Jeff LaBundy Reviewed-by: Tomasz Moń Link: https://lore.kernel.org/r/20220320032537.545250-1-jeff@labundy.com Signed-off-by: Dmitry Torokhov Signed-off-by: Sasha Levin --- drivers/input/input.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/drivers/input/input.c b/drivers/input/input.c index 3cfd2c18eebd..49504dcd5dc6 100644 --- a/drivers/input/input.c +++ b/drivers/input/input.c @@ -47,6 +47,17 @@ static DEFINE_MUTEX(input_mutex); static const struct input_value input_value_sync = { EV_SYN, SYN_REPORT, 1 }; +static const unsigned int input_max_code[EV_CNT] = { + [EV_KEY] = KEY_MAX, + [EV_REL] = REL_MAX, + [EV_ABS] = ABS_MAX, + [EV_MSC] = MSC_MAX, + [EV_SW] = SW_MAX, + [EV_LED] = LED_MAX, + [EV_SND] = SND_MAX, + [EV_FF] = FF_MAX, +}; + static inline int is_event_supported(unsigned int code, unsigned long *bm, unsigned int max) { @@ -1976,6 +1987,14 @@ EXPORT_SYMBOL(input_get_timestamp); */ void input_set_capability(struct input_dev *dev, unsigned int type, unsigned int code) { + if (type < EV_CNT && input_max_code[type] && + code > input_max_code[type]) { + pr_err("%s: invalid code %u for type %u\n", __func__, code, + type); + dump_stack(); + return; + } + switch (type) { case EV_KEY: __set_bit(code, dev->keybit); From 5565fc538ded8961c9d885ad37200ee72a7bd1a2 Mon Sep 17 00:00:00 2001 From: Zheng Yongjun Date: Sun, 20 Mar 2022 21:56:38 -0700 Subject: [PATCH 0012/1842] Input: stmfts - fix reference leak in stmfts_input_open [ Upstream commit 26623eea0da3476446909af96c980768df07bbd9 ] pm_runtime_get_sync() will increment pm usage counter even it failed. Forgetting to call pm_runtime_put_noidle will result in reference leak in stmfts_input_open, so we should fix it. Signed-off-by: Zheng Yongjun Link: https://lore.kernel.org/r/20220317131604.53538-1-zhengyongjun3@huawei.com Signed-off-by: Dmitry Torokhov Signed-off-by: Sasha Levin --- drivers/input/touchscreen/stmfts.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c index 9a64e1dbc04a..64b690a72d10 100644 --- a/drivers/input/touchscreen/stmfts.c +++ b/drivers/input/touchscreen/stmfts.c @@ -339,11 +339,11 @@ static int stmfts_input_open(struct input_dev *dev) err = pm_runtime_get_sync(&sdata->client->dev); if (err < 0) - return err; + goto out; err = i2c_smbus_write_byte(sdata->client, STMFTS_MS_MT_SENSE_ON); if (err) - return err; + goto out; mutex_lock(&sdata->mutex); sdata->running = true; @@ -366,7 +366,9 @@ static int stmfts_input_open(struct input_dev *dev) "failed to enable touchkey\n"); } - return 0; +out: + pm_runtime_put_noidle(&sdata->client->dev); + return err; } static void stmfts_input_close(struct input_dev *dev) From e803f12ea27f8b690e553d2432aebd9b816c34c1 Mon Sep 17 00:00:00 2001 From: Monish Kumar R Date: Wed, 16 Mar 2022 13:24:49 +0530 Subject: [PATCH 0013/1842] nvme-pci: add quirks for Samsung X5 SSDs [ Upstream commit bc360b0b1611566e1bd47384daf49af6a1c51837 ] Add quirks to not fail the initialization and to have quick resume latency after cold/warm reboot. 
Signed-off-by: Monish Kumar R Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/pci.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 6939b03a16c5..a36db0701d17 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -3265,7 +3265,10 @@ static const struct pci_device_id nvme_id_table[] = { NVME_QUIRK_128_BYTES_SQES | NVME_QUIRK_SHARED_TAGS | NVME_QUIRK_SKIP_CID_GEN }, - + { PCI_DEVICE(0x144d, 0xa808), /* Samsung X5 */ + .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY| + NVME_QUIRK_NO_DEEPEST_PS | + NVME_QUIRK_IGNORE_DEV_SUBNQN, }, { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) }, { 0, } }; From bab037ebbe7dbb5f631dfce5b989fcf9e5653c0c Mon Sep 17 00:00:00 2001 From: Andreas Gruenbacher Date: Mon, 14 Mar 2022 18:32:02 +0100 Subject: [PATCH 0014/1842] gfs2: Disable page faults during lockless buffered reads [ Upstream commit 52f3f033a5dbd023307520af1ff551cadfd7f037 ] During lockless buffered reads, filemap_read() holds page cache page references while trying to copy data to the user-space buffer. The calling process isn't holding the inode glock, but the page references it holds prevent those pages from being removed from the page cache, and that prevents the underlying inode glock from being moved to another node. Thus, we can end up in the same kinds of distributed deadlock situations as with normal (non-lockless) buffered reads. Fix that by disabling page faults during lockless reads as well. Signed-off-by: Andreas Gruenbacher Signed-off-by: Sasha Levin --- fs/gfs2/file.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c index 2e6f622ed428..55a8eb3c1963 100644 --- a/fs/gfs2/file.c +++ b/fs/gfs2/file.c @@ -858,14 +858,16 @@ static ssize_t gfs2_file_read_iter(struct kiocb *iocb, struct iov_iter *to) return ret; iocb->ki_flags &= ~IOCB_DIRECT; } + pagefault_disable(); iocb->ki_flags |= IOCB_NOIO; ret = generic_file_read_iter(iocb, to); iocb->ki_flags &= ~IOCB_NOIO; + pagefault_enable(); if (ret >= 0) { if (!iov_iter_count(to)) return ret; written = ret; - } else { + } else if (ret != -EFAULT) { if (ret != -EAGAIN) return ret; if (iocb->ki_flags & IOCB_NOWAIT) From 703c80ff4330377a70aa5662db256d55ee12961e Mon Sep 17 00:00:00 2001 From: Andre Przywara Date: Fri, 11 Feb 2022 12:26:28 +0000 Subject: [PATCH 0015/1842] rtc: sun6i: Fix time overflow handling [ Upstream commit 9f6cd82eca7e91a0d0311242a87c6aa3c2737968 ] Using "unsigned long" for UNIX timestamps is never a good idea, and comparing the value of such a variable against U32_MAX does not do anything useful on 32-bit systems. Use the proper time64_t type when dealing with timestamps, and avoid cutting down the time range unnecessarily. This also fixes the flawed check for the alarm time being too far into the future. The check for this condition is actually somewhat theoretical, as the RTC counts till 2033 only anyways, and 2^32 seconds from now is not before the year 2157 - at which point I hope nobody will be using this hardware anymore. 
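A small stand-alone sketch of the arithmetic above, with uint32_t standing in for a 32-bit unsigned long and hypothetical timestamp values: once the gap is held in a 32-bit variable it can never exceed U32_MAX, so the "too far in the future" check is dead code, whereas a 64-bit difference keeps the comparison meaningful.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            int64_t time_now = 2000000000LL;           /* hypothetical "now"            */
            int64_t time_set = time_now + (1LL << 33); /* alarm far beyond 2^32 seconds */

            /* Old scheme: the difference is truncated to 32 bits, so the range
             * check can never trigger (here the gap even wraps to 0). */
            uint32_t gap32 = (uint32_t)(time_set - time_now);
            printf("32-bit gap: %u, check fires: %d\n",
                   (unsigned int)gap32, gap32 > UINT32_MAX);

            /* New scheme: keep the full 64-bit difference before comparing. */
            int64_t gap64 = time_set - time_now;
            printf("64-bit gap: %lld, check fires: %d\n",
                   (long long)gap64, gap64 > UINT32_MAX);
            return 0;
    }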
Signed-off-by: Andre Przywara Reviewed-by: Jernej Skrabec Signed-off-by: Alexandre Belloni Link: https://lore.kernel.org/r/20220211122643.1343315-4-andre.przywara@arm.com Signed-off-by: Sasha Levin --- drivers/rtc/rtc-sun6i.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-) diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c index f2818cdd11d8..52b36b7c6129 100644 --- a/drivers/rtc/rtc-sun6i.c +++ b/drivers/rtc/rtc-sun6i.c @@ -138,7 +138,7 @@ struct sun6i_rtc_dev { const struct sun6i_rtc_clk_data *data; void __iomem *base; int irq; - unsigned long alarm; + time64_t alarm; struct clk_hw hw; struct clk_hw *int_osc; @@ -510,10 +510,8 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm) struct sun6i_rtc_dev *chip = dev_get_drvdata(dev); struct rtc_time *alrm_tm = &wkalrm->time; struct rtc_time tm_now; - unsigned long time_now = 0; - unsigned long time_set = 0; - unsigned long time_gap = 0; - int ret = 0; + time64_t time_now, time_set; + int ret; ret = sun6i_rtc_gettime(dev, &tm_now); if (ret < 0) { @@ -528,9 +526,7 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm) return -EINVAL; } - time_gap = time_set - time_now; - - if (time_gap > U32_MAX) { + if ((time_set - time_now) > U32_MAX) { dev_err(dev, "Date too far in the future\n"); return -EINVAL; } @@ -539,7 +535,7 @@ static int sun6i_rtc_setalarm(struct device *dev, struct rtc_wkalrm *wkalrm) writel(0, chip->base + SUN6I_ALRM_COUNTER); usleep_range(100, 300); - writel(time_gap, chip->base + SUN6I_ALRM_COUNTER); + writel(time_set - time_now, chip->base + SUN6I_ALRM_COUNTER); chip->alarm = time_set; sun6i_rtc_setaie(wkalrm->enabled, chip); From 39acee8aea3d02f649fb840bf0d72e3603fc9c83 Mon Sep 17 00:00:00 2001 From: Zheng Yongjun Date: Thu, 17 Mar 2022 13:16:13 +0000 Subject: [PATCH 0016/1842] crypto: stm32 - fix reference leak in stm32_crc_remove [ Upstream commit e9a36feecee0ee5845f2e0656f50f9942dd0bed3 ] pm_runtime_get_sync() will increment pm usage counter even it failed. Forgetting to call pm_runtime_put_noidle will result in reference leak in stm32_crc_remove, so we should fix it. Signed-off-by: Zheng Yongjun Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin --- drivers/crypto/stm32/stm32-crc32.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/stm32/stm32-crc32.c b/drivers/crypto/stm32/stm32-crc32.c index be1bf39a317d..90a920e7f664 100644 --- a/drivers/crypto/stm32/stm32-crc32.c +++ b/drivers/crypto/stm32/stm32-crc32.c @@ -384,8 +384,10 @@ static int stm32_crc_remove(struct platform_device *pdev) struct stm32_crc *crc = platform_get_drvdata(pdev); int ret = pm_runtime_get_sync(crc->dev); - if (ret < 0) + if (ret < 0) { + pm_runtime_put_noidle(crc->dev); return ret; + } spin_lock(&crc_list.lock); list_del(&crc->list); From a59450656bcda7fbee9f892d5a65715ad846ce29 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 22 Mar 2022 12:48:10 +0100 Subject: [PATCH 0017/1842] crypto: x86/chacha20 - Avoid spurious jumps to other functions [ Upstream commit 4327d168515fd8b5b92fa1efdf1d219fb6514460 ] The chacha_Nblock_xor_avx512vl() functions all have their own, identical, .LdoneN label, however in one particular spot {2,4} jump to the 8 version instead of their own. 
Resulting in: arch/x86/crypto/chacha-x86_64.o: warning: objtool: chacha_2block_xor_avx512vl() falls through to next function chacha_8block_xor_avx512vl() arch/x86/crypto/chacha-x86_64.o: warning: objtool: chacha_4block_xor_avx512vl() falls through to next function chacha_8block_xor_avx512vl() Make each function consistently use its own done label. Reported-by: Stephen Rothwell Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Martin Willi Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin --- arch/x86/crypto/chacha-avx512vl-x86_64.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/crypto/chacha-avx512vl-x86_64.S b/arch/x86/crypto/chacha-avx512vl-x86_64.S index bb193fde123a..8713c16c2501 100644 --- a/arch/x86/crypto/chacha-avx512vl-x86_64.S +++ b/arch/x86/crypto/chacha-avx512vl-x86_64.S @@ -172,7 +172,7 @@ SYM_FUNC_START(chacha_2block_xor_avx512vl) # xor remaining bytes from partial register into output mov %rcx,%rax and $0xf,%rcx - jz .Ldone8 + jz .Ldone2 mov %rax,%r9 and $~0xf,%r9 @@ -438,7 +438,7 @@ SYM_FUNC_START(chacha_4block_xor_avx512vl) # xor remaining bytes from partial register into output mov %rcx,%rax and $0xf,%rcx - jz .Ldone8 + jz .Ldone4 mov %rax,%r9 and $~0xf,%r9 From 7d3f69cbdec8529cbad3905a41c7b27a1ee39279 Mon Sep 17 00:00:00 2001 From: Kai-Heng Feng Date: Sat, 26 Mar 2022 00:05:00 +0800 Subject: [PATCH 0018/1842] ALSA: hda/realtek: Enable headset mic on Lenovo P360 [ Upstream commit 5a8738571747c1e275a40b69a608657603867b7e ] Lenovo P360 is another platform equipped with ALC897, and it needs ALC897_FIXUP_HEADSET_MIC_PIN quirk to make its headset mic work. Signed-off-by: Kai-Heng Feng Link: https://lore.kernel.org/r/20220325160501.705221-1-kai.heng.feng@canonical.com Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index a5b1fd62a99c..3f880c4fd5e0 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -10876,6 +10876,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = { SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD), SND_PCI_QUIRK(0x14cd, 0x5003, "USI", ALC662_FIXUP_USI_HEADSET_MODE), SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC662_FIXUP_LENOVO_MULTI_CODECS), + SND_PCI_QUIRK(0x17aa, 0x1057, "Lenovo P360", ALC897_FIXUP_HEADSET_MIC_PIN), SND_PCI_QUIRK(0x17aa, 0x32ca, "Lenovo ThinkCentre M80", ALC897_FIXUP_HEADSET_MIC_PIN), SND_PCI_QUIRK(0x17aa, 0x32cb, "Lenovo ThinkCentre M70", ALC897_FIXUP_HEADSET_MIC_PIN), SND_PCI_QUIRK(0x17aa, 0x32cf, "Lenovo ThinkCentre M950", ALC897_FIXUP_HEADSET_MIC_PIN), From f0931ee125ffc4e8be8ef27c36ca5811cff2a80b Mon Sep 17 00:00:00 2001 From: Niklas Schnelle Date: Mon, 20 Sep 2021 09:32:21 +0200 Subject: [PATCH 0019/1842] s390/pci: improve zpci_dev reference counting [ Upstream commit c122383d221dfa2f41cfe5e672540595de986fde ] Currently zpci_dev uses kref based reference counting but only accounts for one original reference plus one reference from an added pci_dev to its underlying zpci_dev. Counting just the original reference worked until the pci_dev reference was added in commit 2a671f77ee49 ("s390/pci: fix use after free of zpci_dev") because once a zpci_dev goes away, i.e. enters the reserved state, it would immediately get released. However with the pci_dev reference this is no longer the case and the zpci_dev may still appear in multiple availability events indicating that it was reserved. 
This was solved by detecting when the zpci_dev is already on its way out but still hanging around. This has however shown some light on how unusual our zpci_dev reference counting is. Improve upon this by modelling zpci_dev reference counting on pci_dev. Analogous to pci_get_slot() increment the reference count in get_zdev_by_fid(). Thus all users of get_zdev_by_fid() must drop the reference once they are done with the zpci_dev. Similar to pci_scan_single_device(), zpci_create_device() returns the device with an initial count of 1 and the device added to the zpci_list (analogous to the PCI bus' device_list). In turn users of zpci_create_device() must only drop the reference once the device is gone from the point of view of the zPCI subsystem, it might still be referenced by the common PCI subsystem though. Reviewed-by: Matthew Rosato Signed-off-by: Niklas Schnelle Signed-off-by: Vasily Gorbik Signed-off-by: Sasha Levin --- arch/s390/pci/pci.c | 1 + arch/s390/pci/pci_bus.h | 3 ++- arch/s390/pci/pci_clp.c | 9 +++++++-- arch/s390/pci/pci_event.c | 7 ++++++- 4 files changed, 16 insertions(+), 4 deletions(-) diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c index e14e4a3a647a..74799439b259 100644 --- a/arch/s390/pci/pci.c +++ b/arch/s390/pci/pci.c @@ -69,6 +69,7 @@ struct zpci_dev *get_zdev_by_fid(u32 fid) list_for_each_entry(tmp, &zpci_list, entry) { if (tmp->fid == fid) { zdev = tmp; + zpci_zdev_get(zdev); break; } } diff --git a/arch/s390/pci/pci_bus.h b/arch/s390/pci/pci_bus.h index 55c9488e504c..8d2fcd091ca7 100644 --- a/arch/s390/pci/pci_bus.h +++ b/arch/s390/pci/pci_bus.h @@ -13,7 +13,8 @@ void zpci_bus_device_unregister(struct zpci_dev *zdev); void zpci_release_device(struct kref *kref); static inline void zpci_zdev_put(struct zpci_dev *zdev) { - kref_put(&zdev->kref, zpci_release_device); + if (zdev) + kref_put(&zdev->kref, zpci_release_device); } static inline void zpci_zdev_get(struct zpci_dev *zdev) diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c index 0a0e8b8293be..d1a5c80a41cb 100644 --- a/arch/s390/pci/pci_clp.c +++ b/arch/s390/pci/pci_clp.c @@ -22,6 +22,8 @@ #include #include +#include "pci_bus.h" + bool zpci_unique_uid; void update_uid_checking(bool new) @@ -372,8 +374,11 @@ static void __clp_add(struct clp_fh_list_entry *entry, void *data) return; zdev = get_zdev_by_fid(entry->fid); - if (!zdev) - zpci_create_device(entry->fid, entry->fh, entry->config_state); + if (zdev) { + zpci_zdev_put(zdev); + return; + } + zpci_create_device(entry->fid, entry->fh, entry->config_state); } int clp_scan_pci_devices(void) diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c index b7cfde7e80a8..6ced44b5be8a 100644 --- a/arch/s390/pci/pci_event.c +++ b/arch/s390/pci/pci_event.c @@ -61,10 +61,12 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf) pdev ? 
pci_name(pdev) : "n/a", ccdf->pec, ccdf->fid); if (!pdev) - return; + goto no_pdev; pdev->error_state = pci_channel_io_perm_failure; pci_dev_put(pdev); +no_pdev: + zpci_zdev_put(zdev); } void zpci_event_error(void *data) @@ -76,6 +78,7 @@ void zpci_event_error(void *data) static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf) { struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid); + bool existing_zdev = !!zdev; enum zpci_state state; struct pci_dev *pdev; int ret; @@ -161,6 +164,8 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf) default: break; } + if (existing_zdev) + zpci_zdev_put(zdev); } void zpci_event_availability(void *data) From 5a4cbcb3df45be164bdf9257ea7e0dd89e447cf3 Mon Sep 17 00:00:00 2001 From: Zhu Lingshan Date: Tue, 22 Feb 2022 19:54:25 +0800 Subject: [PATCH 0020/1842] vhost_vdpa: don't setup irq offloading when irq_num < 0 [ Upstream commit cce0ab2b2a39072d81f98017f7b076f3410ef740 ] When irq number is negative(e.g., -EINVAL), the virtqueue may be disabled or the virtqueues are sharing a device irq. In such case, we should not setup irq offloading for a virtqueue. Signed-off-by: Zhu Lingshan Link: https://lore.kernel.org/r/20220222115428.998334-3-lingshan.zhu@intel.com Signed-off-by: Michael S. Tsirkin Signed-off-by: Sasha Levin --- drivers/vhost/vdpa.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c index e4d60009d908..04578aa87e4d 100644 --- a/drivers/vhost/vdpa.c +++ b/drivers/vhost/vdpa.c @@ -97,8 +97,11 @@ static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid) return; irq = ops->get_vq_irq(vdpa, qid); + if (irq < 0) + return; + irq_bypass_unregister_producer(&vq->call_ctx.producer); - if (!vq->call_ctx.ctx || irq < 0) + if (!vq->call_ctx.ctx) return; vq->call_ctx.producer.token = vq->call_ctx.ctx; From 3663d6023aa2198f7d3eb55cc7797b1f9fff7659 Mon Sep 17 00:00:00 2001 From: "Michael S. Tsirkin" Date: Sun, 20 Mar 2022 07:02:14 -0400 Subject: [PATCH 0021/1842] tools/virtio: compile with -pthread [ Upstream commit f03560a57c1f60db6ac23ffd9714e1c69e2f95c7 ] When using pthreads, one has to compile and link with -lpthread, otherwise e.g. glibc is not guaranteed to be reentrant. This replaces -lpthread. Reported-by: Matthew Wilcox Signed-off-by: Michael S. Tsirkin Signed-off-by: Sasha Levin --- tools/virtio/Makefile | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/tools/virtio/Makefile b/tools/virtio/Makefile index 0d7bbe49359d..1b25cc7c64bb 100644 --- a/tools/virtio/Makefile +++ b/tools/virtio/Makefile @@ -5,7 +5,8 @@ virtio_test: virtio_ring.o virtio_test.o vringh_test: vringh_test.o vringh.o virtio_ring.o CFLAGS += -g -O2 -Werror -Wno-maybe-uninitialized -Wall -I. -I../include/ -I ../../usr/include/ -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -include ../../include/linux/kconfig.h -LDFLAGS += -lpthread +CFLAGS += -pthread +LDFLAGS += -pthread vpath %.c ../../drivers/virtio ../../drivers/vhost mod: ${MAKE} -C `pwd`/../.. M=`pwd`/vhost_test V=${V} From 87fd0dd43e9c09155f67716ba0f9306fd3d077b8 Mon Sep 17 00:00:00 2001 From: Anton Eidelman Date: Thu, 24 Mar 2022 13:05:11 -0600 Subject: [PATCH 0022/1842] nvme-multipath: fix hang when disk goes live over reconnect [ Upstream commit a4a6f3c8f61c3cfbda4998ad94596059ad7e4332 ] nvme_mpath_init_identify() invoked from nvme_init_identify() fetches a fresh ANA log from the ctrl. 
This is essential to have an up to date path states for both existing namespaces and for those scan_work may discover once the ctrl is up. This happens in the following cases: 1) A new ctrl is being connected. 2) An existing ctrl is successfully reconnected. 3) An existing ctrl is being reset. While in (1) ctrl->namespaces is empty, (2 & 3) may have namespaces, and nvme_read_ana_log() may call nvme_update_ns_ana_state(). This result in a hang when the ANA state of an existing namespace changes and makes the disk live: nvme_mpath_set_live() issues IO to the namespace through the ctrl, which does NOT have IO queues yet. See sample hang below. Solution: - nvme_update_ns_ana_state() to call set_live only if ctrl is live - nvme_read_ana_log() call from nvme_mpath_init_identify() therefore only fetches and parses the ANA log; any erros in this process will fail the ctrl setup as appropriate; - a separate function nvme_mpath_update() is called in nvme_start_ctrl(); this parses the ANA log without fetching it. At this point the ctrl is live, therefore, disks can be set live normally. Sample failure: nvme nvme0: starting error recovery nvme nvme0: Reconnecting in 10 seconds... block nvme0n6: no usable path - requeuing I/O INFO: task kworker/u8:3:312 blocked for more than 122 seconds. Tainted: G E 5.14.5-1.el7.elrepo.x86_64 #1 Workqueue: nvme-wq nvme_tcp_reconnect_ctrl_work [nvme_tcp] Call Trace: __schedule+0x2a2/0x7e0 schedule+0x4e/0xb0 io_schedule+0x16/0x40 wait_on_page_bit_common+0x15c/0x3e0 do_read_cache_page+0x1e0/0x410 read_cache_page+0x12/0x20 read_part_sector+0x46/0x100 read_lba+0x121/0x240 efi_partition+0x1d2/0x6a0 bdev_disk_changed.part.0+0x1df/0x430 bdev_disk_changed+0x18/0x20 blkdev_get_whole+0x77/0xe0 blkdev_get_by_dev+0xd2/0x3a0 __device_add_disk+0x1ed/0x310 device_add_disk+0x13/0x20 nvme_mpath_set_live+0x138/0x1b0 [nvme_core] nvme_update_ns_ana_state+0x2b/0x30 [nvme_core] nvme_update_ana_state+0xca/0xe0 [nvme_core] nvme_parse_ana_log+0xac/0x170 [nvme_core] nvme_read_ana_log+0x7d/0xe0 [nvme_core] nvme_mpath_init_identify+0x105/0x150 [nvme_core] nvme_init_identify+0x2df/0x4d0 [nvme_core] nvme_init_ctrl_finish+0x8d/0x3b0 [nvme_core] nvme_tcp_setup_ctrl+0x337/0x390 [nvme_tcp] nvme_tcp_reconnect_ctrl_work+0x24/0x40 [nvme_tcp] process_one_work+0x1bd/0x360 worker_thread+0x50/0x3d0 Signed-off-by: Anton Eidelman Reviewed-by: Sagi Grimberg Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 1 + drivers/nvme/host/multipath.c | 25 +++++++++++++++++++++++-- drivers/nvme/host/nvme.h | 4 ++++ 3 files changed, 28 insertions(+), 2 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index ad4f1cfbad2e..e73a5c62a858 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -4420,6 +4420,7 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl) if (ctrl->queue_count > 1) { nvme_queue_scan(ctrl); nvme_start_queues(ctrl); + nvme_mpath_update(ctrl); } } EXPORT_SYMBOL_GPL(nvme_start_ctrl); diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index 18a756444d5a..a9e15c8f907b 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -484,8 +484,17 @@ static void nvme_update_ns_ana_state(struct nvme_ana_group_desc *desc, ns->ana_grpid = le32_to_cpu(desc->grpid); ns->ana_state = desc->state; clear_bit(NVME_NS_ANA_PENDING, &ns->flags); - - if (nvme_state_is_live(ns->ana_state)) + /* + * nvme_mpath_set_live() will trigger I/O to the multipath path device + * and in turn to this path device. 
However we cannot accept this I/O + * if the controller is not live. This may deadlock if called from + * nvme_mpath_init_identify() and the ctrl will never complete + * initialization, preventing I/O from completing. For this case we + * will reprocess the ANA log page in nvme_mpath_update() once the + * controller is ready. + */ + if (nvme_state_is_live(ns->ana_state) && + ns->ctrl->state == NVME_CTRL_LIVE) nvme_mpath_set_live(ns); } @@ -572,6 +581,18 @@ static void nvme_ana_work(struct work_struct *work) nvme_read_ana_log(ctrl); } +void nvme_mpath_update(struct nvme_ctrl *ctrl) +{ + u32 nr_change_groups = 0; + + if (!ctrl->ana_log_buf) + return; + + mutex_lock(&ctrl->ana_lock); + nvme_parse_ana_log(ctrl, &nr_change_groups, nvme_update_ana_state); + mutex_unlock(&ctrl->ana_lock); +} + static void nvme_anatt_timeout(struct timer_list *t) { struct nvme_ctrl *ctrl = from_timer(ctrl, t, anatt_timer); diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 10e5ae3a8c0d..95b9657cabaf 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -712,6 +712,7 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id); void nvme_mpath_remove_disk(struct nvme_ns_head *head); int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id); void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl); +void nvme_mpath_update(struct nvme_ctrl *ctrl); void nvme_mpath_uninit(struct nvme_ctrl *ctrl); void nvme_mpath_stop(struct nvme_ctrl *ctrl); bool nvme_mpath_clear_current_path(struct nvme_ns *ns); @@ -798,6 +799,9 @@ static inline int nvme_mpath_init_identify(struct nvme_ctrl *ctrl, "Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.\n"); return 0; } +static inline void nvme_mpath_update(struct nvme_ctrl *ctrl) +{ +} static inline void nvme_mpath_uninit(struct nvme_ctrl *ctrl) { } From 00d8b06a4ed438a0cfe66ebadca1bad5c8dcd616 Mon Sep 17 00:00:00 2001 From: Mario Limonciello Date: Tue, 11 Jan 2022 16:57:50 -0600 Subject: [PATCH 0023/1842] rtc: mc146818-lib: Fix the AltCentury for AMD platforms [ Upstream commit 3ae8fd41573af4fb3a490c9ed947fc936ba87190 ] Setting the century forward has been failing on AMD platforms. There was a previous attempt at fixing this for family 0x17 as part of commit 7ad295d5196a ("rtc: Fix the AltCentury value on AMD/Hygon platform") but this was later reverted due to some problems reported that appeared to stem from an FW bug on a family 0x17 desktop system. The same comments mentioned in the previous commit continue to apply to the newer platforms as well. ``` MC146818 driver use function mc146818_set_time() to set register RTC_FREQ_SELECT(RTC_REG_A)'s bit4-bit6 field which means divider stage reset value on Intel platform to 0x7. While AMD/Hygon RTC_REG_A(0Ah)'s bit4 is defined as DV0 [Reference]: DV0 = 0 selects Bank 0, DV0 = 1 selects Bank 1. Bit5-bit6 is defined as reserved. DV0 is set to 1, it will select Bank 1, which will disable AltCentury register(0x32) access. As UEFI pass acpi_gbl_FADT.century 0x32 (AltCentury), the CMOS write will be failed on code: CMOS_WRITE(century, acpi_gbl_FADT.century). Correct RTC_REG_A bank select bit(DV0) to 0 on AMD/Hygon CPUs, it will enable AltCentury(0x32) register writing and finally setup century as expected. ``` However in closer examination the change previously submitted was also modifying bits 5 & 6 which are declared reserved in the AMD documentation. So instead modify just the DV0 bank selection bit. 
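A tiny stand-alone sketch of the bit arithmetic involved, using the mask values from the patch below (the starting register value is hypothetical): the legacy write ORs in RTC_DIV_RESET2 (0x70) and therefore also sets bit 4, the DV0 bank-select bit on AMD/Hygon parts, while the AMD-specific path only clears that one bit.

    #include <stdio.h>

    int main(void)
    {
            unsigned int reg_a           = 0x36; /* hypothetical RTC_FREQ_SELECT contents */
            unsigned int div_reset2      = 0x70; /* RTC_DIV_RESET2: bits 4-6              */
            unsigned int amd_bank_select = 0x10; /* DV0 bank-select bit (bit 4)           */

            printf("legacy write:   0x%02x\n", reg_a | div_reset2);       /* DV0 ends up set  */
            printf("AMD-safe write: 0x%02x\n", reg_a & ~amd_bank_select); /* only DV0 cleared */
            return 0;
    }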
Being cognizant that there was a failure reported before, split the code change out to a static function that can also be used for exclusions if any regressions such as Mikhail's pop up again. Cc: Jinke Fan Cc: Mikhail Gavrilov Link: https://lore.kernel.org/all/CABXGCsMLob0DC25JS8wwAYydnDoHBSoMh2_YLPfqm3TTvDE-Zw@mail.gmail.com/ Link: https://www.amd.com/system/files/TechDocs/51192_Bolton_FCH_RRG.pdf Signed-off-by: Raul E Rangel Signed-off-by: Mario Limonciello Signed-off-by: Alexandre Belloni Link: https://lore.kernel.org/r/20220111225750.1699-1-mario.limonciello@amd.com Signed-off-by: Sasha Levin --- drivers/rtc/rtc-mc146818-lib.c | 16 +++++++++++++++- include/linux/mc146818rtc.h | 2 ++ 2 files changed, 17 insertions(+), 1 deletion(-) diff --git a/drivers/rtc/rtc-mc146818-lib.c b/drivers/rtc/rtc-mc146818-lib.c index 5add637c9ad2..b036ff33fbe6 100644 --- a/drivers/rtc/rtc-mc146818-lib.c +++ b/drivers/rtc/rtc-mc146818-lib.c @@ -99,6 +99,17 @@ unsigned int mc146818_get_time(struct rtc_time *time) } EXPORT_SYMBOL_GPL(mc146818_get_time); +/* AMD systems don't allow access to AltCentury with DV1 */ +static bool apply_amd_register_a_behavior(void) +{ +#ifdef CONFIG_X86 + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) + return true; +#endif + return false; +} + /* Set the current date and time in the real time clock. */ int mc146818_set_time(struct rtc_time *time) { @@ -172,7 +183,10 @@ int mc146818_set_time(struct rtc_time *time) save_control = CMOS_READ(RTC_CONTROL); CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL); save_freq_select = CMOS_READ(RTC_FREQ_SELECT); - CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT); + if (apply_amd_register_a_behavior()) + CMOS_WRITE((save_freq_select & ~RTC_AMD_BANK_SELECT), RTC_FREQ_SELECT); + else + CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT); #ifdef CONFIG_MACH_DECSTATION CMOS_WRITE(real_yrs, RTC_DEC_YEAR); diff --git a/include/linux/mc146818rtc.h b/include/linux/mc146818rtc.h index 0661af17a758..1e0205811394 100644 --- a/include/linux/mc146818rtc.h +++ b/include/linux/mc146818rtc.h @@ -86,6 +86,8 @@ struct cmos_rtc_board_info { /* 2 values for divider stage reset, others for "testing purposes only" */ # define RTC_DIV_RESET1 0x60 # define RTC_DIV_RESET2 0x70 + /* In AMD BKDG bit 5 and 6 are reserved, bit 4 is for select dv0 bank */ +# define RTC_AMD_BANK_SELECT 0x10 /* Periodic intr. / Square wave rate select. 0=none, 1=32.8kHz,... 15=2Hz */ # define RTC_RATE_SELECT 0x0F From 05c073b1ad2565a9c4eeb05a3414aef19985fe57 Mon Sep 17 00:00:00 2001 From: Guo Xuenan Date: Wed, 30 Mar 2022 09:49:28 -0700 Subject: [PATCH 0024/1842] fs: fix an infinite loop in iomap_fiemap [ Upstream commit 49df34221804cfd6384135b28b03c9461a31d024 ] When a fiemap request starts at MAX_LFS_FILESIZE, the check (maxbytes - *len) < start is always true, so *len is clamped to zero. Because the start offset is beyond the file size, erofs then always returns an iomap.length of zero, and the iomap iteration enters an infinite loop. This corner case needs to be covered to avoid that situation.
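A short worked example of that clamp, as stand-alone C with hypothetical sizes rather than kernel code: with the old check, a request starting exactly at the limit is accepted and its length is clamped to zero, which is the degenerate request that lets the iteration spin; the corrected check (start >= maxbytes) rejects it up front.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t maxbytes = 1000;   /* hypothetical filesystem limit       */
            uint64_t start    = 1000;   /* request begins right at the limit   */
            uint64_t len      = 100;

            if (start > maxbytes)       /* old check: false, request accepted  */
                    return 1;
            if (maxbytes - len < start) /* always true for such a start offset */
                    len = maxbytes - start;   /* clamps the length to 0        */
            printf("clamped len = %llu\n", (unsigned long long)len);

            /* With the fix, start >= maxbytes bails out with -EFBIG instead. */
            return 0;
    }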
------------[ cut here ]------------ WARNING: CPU: 7 PID: 905 at fs/iomap/iter.c:35 iomap_iter+0x97f/0xc70 Modules linked in: xfs erofs CPU: 7 PID: 905 Comm: iomap Tainted: G W 5.17.0-rc8 #27 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014 RIP: 0010:iomap_iter+0x97f/0xc70 Code: 85 a1 fc ff ff e8 71 be 9c ff 0f 1f 44 00 00 e9 92 fc ff ff e8 62 be 9c ff 0f 0b b8 fb ff ff ff e9 fc f8 ff ff e8 51 be 9c ff <0f> 0b e9 2b fc ff ff e8 45 be 9c ff 0f 0b e9 e1 fb ff ff e8 39 be RSP: 0018:ffff888060a37ab0 EFLAGS: 00010293 RAX: 0000000000000000 RBX: ffff888060a37bb0 RCX: 0000000000000000 RDX: ffff88807e19a900 RSI: ffffffff81a7da7f RDI: ffff888060a37be0 RBP: 7fffffffffffffff R08: 0000000000000000 R09: ffff888060a37c20 R10: ffff888060a37c67 R11: ffffed100c146f8c R12: 7fffffffffffffff R13: 0000000000000000 R14: ffff888060a37bd8 R15: ffff888060a37c20 FS: 00007fd3cca01540(0000) GS:ffff888108780000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020010820 CR3: 0000000054b92000 CR4: 00000000000006e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: iomap_fiemap+0x1c9/0x2f0 erofs_fiemap+0x64/0x90 [erofs] do_vfs_ioctl+0x40d/0x12e0 __x64_sys_ioctl+0xaa/0x1c0 do_syscall_64+0x35/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae ---[ end trace 0000000000000000 ]--- watchdog: BUG: soft lockup - CPU#7 stuck for 26s! [iomap:905] Reported-by: Hulk Robot Signed-off-by: Guo Xuenan Reviewed-by: Christoph Hellwig [djwong: fix some typos] Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Sasha Levin --- fs/ioctl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/ioctl.c b/fs/ioctl.c index 4e6cc0a7d69c..7bcc60091287 100644 --- a/fs/ioctl.c +++ b/fs/ioctl.c @@ -170,7 +170,7 @@ int fiemap_prep(struct inode *inode, struct fiemap_extent_info *fieinfo, if (*len == 0) return -EINVAL; - if (start > maxbytes) + if (start >= maxbytes) return -EFBIG; /* From 9b7f3211064dcb7c59f0fc2f10b26652267d9800 Mon Sep 17 00:00:00 2001 From: Xiaoke Wang Date: Fri, 25 Mar 2022 19:49:41 +0800 Subject: [PATCH 0025/1842] MIPS: lantiq: check the return value of kzalloc() [ Upstream commit 34123208bbcc8c884a0489f543a23fe9eebb5514 ] kzalloc() is a memory allocation function which can return NULL when some internal memory errors happen. So it is better to check the return value of it to prevent potential wrong memory access or memory leak. 
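The general shape of the fix, sketched as stand-alone user-space C rather than the lantiq code itself (calloc() stands in for kzalloc(), and the struct and function names here are illustrative): every allocation is checked before it is dereferenced, and an earlier allocation for the same object is released when a later one fails, mirroring the clkdev_add_clkout() change in the patch below.

    #include <stdio.h>
    #include <stdlib.h>

    struct clk_stub {
            char name[16];
    };

    /* Register one dummy clock; returns 0 on success, -1 on allocation failure. */
    static int register_clkout_stub(int index)
    {
            char *name = calloc(1, sizeof("clkout0"));
            struct clk_stub *clk;

            if (!name)                  /* bail out instead of writing through NULL */
                    return -1;
            snprintf(name, sizeof("clkout0"), "clkout%d", index);

            clk = calloc(1, sizeof(*clk));
            if (!clk) {
                    free(name);         /* also release the earlier allocation      */
                    return -1;
            }
            snprintf(clk->name, sizeof(clk->name), "%s", name);
            printf("registered %s\n", clk->name);
            free(clk);
            free(name);
            return 0;
    }

    int main(void)
    {
            return register_clkout_stub(0) ? 1 : 0;
    }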
Signed-off-by: Xiaoke Wang Signed-off-by: Thomas Bogendoerfer Signed-off-by: Sasha Levin --- arch/mips/lantiq/falcon/sysctrl.c | 2 ++ arch/mips/lantiq/xway/gptu.c | 2 ++ arch/mips/lantiq/xway/sysctrl.c | 46 ++++++++++++++++++++----------- 3 files changed, 34 insertions(+), 16 deletions(-) diff --git a/arch/mips/lantiq/falcon/sysctrl.c b/arch/mips/lantiq/falcon/sysctrl.c index 42222f849bd2..446a2536999b 100644 --- a/arch/mips/lantiq/falcon/sysctrl.c +++ b/arch/mips/lantiq/falcon/sysctrl.c @@ -167,6 +167,8 @@ static inline void clkdev_add_sys(const char *dev, unsigned int module, { struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL); + if (!clk) + return; clk->cl.dev_id = dev; clk->cl.con_id = NULL; clk->cl.clk = clk; diff --git a/arch/mips/lantiq/xway/gptu.c b/arch/mips/lantiq/xway/gptu.c index 3d5683e75cf1..200fe9ff641d 100644 --- a/arch/mips/lantiq/xway/gptu.c +++ b/arch/mips/lantiq/xway/gptu.c @@ -122,6 +122,8 @@ static inline void clkdev_add_gptu(struct device *dev, const char *con, { struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL); + if (!clk) + return; clk->cl.dev_id = dev_name(dev); clk->cl.con_id = con; clk->cl.clk = clk; diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c index 917fac1636b7..084f6caba5f2 100644 --- a/arch/mips/lantiq/xway/sysctrl.c +++ b/arch/mips/lantiq/xway/sysctrl.c @@ -315,6 +315,8 @@ static void clkdev_add_pmu(const char *dev, const char *con, bool deactivate, { struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL); + if (!clk) + return; clk->cl.dev_id = dev; clk->cl.con_id = con; clk->cl.clk = clk; @@ -338,6 +340,8 @@ static void clkdev_add_cgu(const char *dev, const char *con, { struct clk *clk = kzalloc(sizeof(struct clk), GFP_KERNEL); + if (!clk) + return; clk->cl.dev_id = dev; clk->cl.con_id = con; clk->cl.clk = clk; @@ -356,24 +360,28 @@ static void clkdev_add_pci(void) struct clk *clk_ext = kzalloc(sizeof(struct clk), GFP_KERNEL); /* main pci clock */ - clk->cl.dev_id = "17000000.pci"; - clk->cl.con_id = NULL; - clk->cl.clk = clk; - clk->rate = CLOCK_33M; - clk->rates = valid_pci_rates; - clk->enable = pci_enable; - clk->disable = pmu_disable; - clk->module = 0; - clk->bits = PMU_PCI; - clkdev_add(&clk->cl); + if (clk) { + clk->cl.dev_id = "17000000.pci"; + clk->cl.con_id = NULL; + clk->cl.clk = clk; + clk->rate = CLOCK_33M; + clk->rates = valid_pci_rates; + clk->enable = pci_enable; + clk->disable = pmu_disable; + clk->module = 0; + clk->bits = PMU_PCI; + clkdev_add(&clk->cl); + } /* use internal/external bus clock */ - clk_ext->cl.dev_id = "17000000.pci"; - clk_ext->cl.con_id = "external"; - clk_ext->cl.clk = clk_ext; - clk_ext->enable = pci_ext_enable; - clk_ext->disable = pci_ext_disable; - clkdev_add(&clk_ext->cl); + if (clk_ext) { + clk_ext->cl.dev_id = "17000000.pci"; + clk_ext->cl.con_id = "external"; + clk_ext->cl.clk = clk_ext; + clk_ext->enable = pci_ext_enable; + clk_ext->disable = pci_ext_disable; + clkdev_add(&clk_ext->cl); + } } /* xway socs can generate clocks on gpio pins */ @@ -393,9 +401,15 @@ static void clkdev_add_clkout(void) char *name; name = kzalloc(sizeof("clkout0"), GFP_KERNEL); + if (!name) + continue; sprintf(name, "clkout%d", i); clk = kzalloc(sizeof(struct clk), GFP_KERNEL); + if (!clk) { + kfree(name); + continue; + } clk->cl.dev_id = "1f103000.cgu"; clk->cl.con_id = name; clk->cl.clk = clk; From 5a19f3c2d3b6e5f5237cb79849626c85831a5fa9 Mon Sep 17 00:00:00 2001 From: Jakob Koschel Date: Fri, 1 Apr 2022 00:03:48 +0200 Subject: [PATCH 0026/1842] drbd: remove usage of list 
iterator variable after loop [ Upstream commit 901aeda62efa21f2eae937bccb71b49ae531be06 ] In preparation to limit the scope of a list iterator to the list traversal loop, use a dedicated pointer to iterate through the list [1]. Since that variable should not be used past the loop iteration, a separate variable is used to 'remember the current location within the loop'. To either continue iterating from that position or skip the iteration (if the previous iteration was complete) list_prepare_entry() is used. Link: https://lore.kernel.org/all/CAHk-=wgRr_D8CB-D9Kg-c=EHreAsk5SqXPwr9Y7k9sA6cWXJ6w@mail.gmail.com/ [1] Signed-off-by: Jakob Koschel Link: https://lore.kernel.org/r/20220331220349.885126-1-jakobkoschel@gmail.com Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- drivers/block/drbd/drbd_main.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c index 65b95aef8dbc..3cdbd81f983f 100644 --- a/drivers/block/drbd/drbd_main.c +++ b/drivers/block/drbd/drbd_main.c @@ -184,7 +184,7 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr, unsigned int set_size) { struct drbd_request *r; - struct drbd_request *req = NULL; + struct drbd_request *req = NULL, *tmp = NULL; int expect_epoch = 0; int expect_size = 0; @@ -238,8 +238,11 @@ void tl_release(struct drbd_connection *connection, unsigned int barrier_nr, * to catch requests being barrier-acked "unexpectedly". * It usually should find the same req again, or some READ preceding it. */ list_for_each_entry(req, &connection->transfer_log, tl_requests) - if (req->epoch == expect_epoch) + if (req->epoch == expect_epoch) { + tmp = req; break; + } + req = list_prepare_entry(tmp, &connection->transfer_log, tl_requests); list_for_each_entry_safe_from(req, r, &connection->transfer_log, tl_requests) { if (req->epoch != expect_epoch) break; From 0acaf9cacd4feb99c11c2303cb3c04a63a32f048 Mon Sep 17 00:00:00 2001 From: Tzung-Bi Shih Date: Wed, 9 Feb 2022 13:11:30 +0800 Subject: [PATCH 0027/1842] platform/chrome: cros_ec_debugfs: detach log reader wq from devm [ Upstream commit 0e8eb5e8acbad19ac2e1856b2fb2320184299b33 ] Debugfs console_log uses devm memory (e.g. debug_info in cros_ec_console_log_poll()). However, lifecycles of device and debugfs are independent. An use-after-free issue is observed if userland program operates the debugfs after the memory has been freed. The call trace: do_raw_spin_lock _raw_spin_lock_irqsave remove_wait_queue ep_unregister_pollwait ep_remove do_epoll_ctl A Python example to reproduce the issue: ... import select ... p = select.epoll() ... f = open('/sys/kernel/debug/cros_scp/console_log') ... p.register(f, select.POLLIN) ... p.poll(1) [(4, 1)] # 4=fd, 1=select.POLLIN [ shutdown cros_scp at the point ] ... p.poll(1) [(4, 16)] # 4=fd, 16=select.POLLHUP ... p.unregister(f) An use-after-free issue raises here. It called epoll_ctl with EPOLL_CTL_DEL which in turn to use the workqueue in the devm (i.e. log_wq). Detaches log reader's workqueue from devm to make sure it is persistent even if the device has been removed. 
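The fix boils down to giving the wait queue head a lifetime independent of the device: a file-scope static DECLARE_WAIT_QUEUE_HEAD() survives device removal, so waiters registered through epoll never end up referencing freed devm memory. A minimal sketch of that shape, with illustrative names rather than the actual cros_ec code, assuming the usual poll/wake pairing:

#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/wait.h>

/* Static storage: stays valid after the device and its devm data are gone. */
static DECLARE_WAIT_QUEUE_HEAD(example_log_wq);

static __poll_t example_log_poll(struct file *file, poll_table *wait)
{
	/*
	 * epoll/select records a reference to example_log_wq; because it is
	 * static data, a later EPOLL_CTL_DEL cannot touch freed memory.
	 */
	poll_wait(file, &example_log_wq, wait);
	return 0;	/* a real handler would report EPOLLIN when data is queued */
}

static void example_log_new_data(void)
{
	wake_up(&example_log_wq);	/* safe regardless of device state */
}

The trade-off is that one static head is shared by every instance, so a wake-up on one device also wakes pollers of the others; that is harmless here because woken readers simply re-check their own buffers.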
Signed-off-by: Tzung-Bi Shih Reviewed-by: Guenter Roeck Link: https://lore.kernel.org/r/20220209051130.386175-1-tzungbi@google.com Signed-off-by: Benson Leung Signed-off-by: Sasha Levin --- drivers/platform/chrome/cros_ec_debugfs.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/platform/chrome/cros_ec_debugfs.c b/drivers/platform/chrome/cros_ec_debugfs.c index 272c89837d74..0dbceee87a4b 100644 --- a/drivers/platform/chrome/cros_ec_debugfs.c +++ b/drivers/platform/chrome/cros_ec_debugfs.c @@ -25,6 +25,9 @@ #define CIRC_ADD(idx, size, value) (((idx) + (value)) & ((size) - 1)) +/* waitqueue for log readers */ +static DECLARE_WAIT_QUEUE_HEAD(cros_ec_debugfs_log_wq); + /** * struct cros_ec_debugfs - EC debugging information. * @@ -33,7 +36,6 @@ * @log_buffer: circular buffer for console log information * @read_msg: preallocated EC command and buffer to read console log * @log_mutex: mutex to protect circular buffer - * @log_wq: waitqueue for log readers * @log_poll_work: recurring task to poll EC for new console log data * @panicinfo_blob: panicinfo debugfs blob */ @@ -44,7 +46,6 @@ struct cros_ec_debugfs { struct circ_buf log_buffer; struct cros_ec_command *read_msg; struct mutex log_mutex; - wait_queue_head_t log_wq; struct delayed_work log_poll_work; /* EC panicinfo */ struct debugfs_blob_wrapper panicinfo_blob; @@ -107,7 +108,7 @@ static void cros_ec_console_log_work(struct work_struct *__work) buf_space--; } - wake_up(&debug_info->log_wq); + wake_up(&cros_ec_debugfs_log_wq); } mutex_unlock(&debug_info->log_mutex); @@ -141,7 +142,7 @@ static ssize_t cros_ec_console_log_read(struct file *file, char __user *buf, mutex_unlock(&debug_info->log_mutex); - ret = wait_event_interruptible(debug_info->log_wq, + ret = wait_event_interruptible(cros_ec_debugfs_log_wq, CIRC_CNT(cb->head, cb->tail, LOG_SIZE)); if (ret < 0) return ret; @@ -173,7 +174,7 @@ static __poll_t cros_ec_console_log_poll(struct file *file, struct cros_ec_debugfs *debug_info = file->private_data; __poll_t mask = 0; - poll_wait(file, &debug_info->log_wq, wait); + poll_wait(file, &cros_ec_debugfs_log_wq, wait); mutex_lock(&debug_info->log_mutex); if (CIRC_CNT(debug_info->log_buffer.head, @@ -377,7 +378,6 @@ static int cros_ec_create_console_log(struct cros_ec_debugfs *debug_info) debug_info->log_buffer.tail = 0; mutex_init(&debug_info->log_mutex); - init_waitqueue_head(&debug_info->log_wq); debugfs_create_file("console_log", S_IFREG | 0444, debug_info->dir, debug_info, &cros_ec_console_log_fops); From aca18bacdb7174455f89adc3ca8ee86fae09bfc1 Mon Sep 17 00:00:00 2001 From: linyujun Date: Fri, 1 Apr 2022 10:52:47 +0100 Subject: [PATCH 0028/1842] ARM: 9191/1: arm/stacktrace, kasan: Silence KASAN warnings in unwind_frame() [ Upstream commit 9be4c88bb7924f68f88cfd47d925c2d046f51a73 ] The following KASAN warning is detected by QEMU. 
================================================================== BUG: KASAN: stack-out-of-bounds in unwind_frame+0x508/0x870 Read of size 4 at addr c36bba90 by task cat/163 CPU: 1 PID: 163 Comm: cat Not tainted 5.10.0-rc1 #40 Hardware name: ARM-Versatile Express [] (unwind_backtrace) from [] (show_stack+0x10/0x14) [] (show_stack) from [] (dump_stack+0x98/0xb0) [] (dump_stack) from [] (print_address_description.constprop.0+0x58/0x4bc) [] (print_address_description.constprop.0) from [] (kasan_report+0x154/0x170) [] (kasan_report) from [] (unwind_frame+0x508/0x870) [] (unwind_frame) from [] (__save_stack_trace+0x110/0x134) [] (__save_stack_trace) from [] (stack_trace_save+0x8c/0xb4) [] (stack_trace_save) from [] (kasan_set_track+0x38/0x60) [] (kasan_set_track) from [] (kasan_set_free_info+0x20/0x2c) [] (kasan_set_free_info) from [] (__kasan_slab_free+0xec/0x120) [] (__kasan_slab_free) from [] (kmem_cache_free+0x7c/0x334) [] (kmem_cache_free) from [] (rcu_core+0x390/0xccc) [] (rcu_core) from [] (__do_softirq+0x180/0x518) [] (__do_softirq) from [] (irq_exit+0x9c/0xe0) [] (irq_exit) from [] (__handle_domain_irq+0xb0/0x110) [] (__handle_domain_irq) from [] (gic_handle_irq+0xa0/0xb8) [] (gic_handle_irq) from [] (__irq_svc+0x6c/0x94) Exception stack(0xc36bb928 to 0xc36bb970) b920: c36bb9c0 00000000 c0126919 c0101228 c36bb9c0 b76d7730 b940: c36b8000 c36bb9a0 c3335b00 c01ce0d8 00000003 c36bba3c c36bb940 c36bb978 b960: c010e298 c011373c 60000013 ffffffff [] (__irq_svc) from [] (unwind_frame+0x0/0x870) [] (unwind_frame) from [<00000000>] (0x0) The buggy address belongs to the page: page:(ptrval) refcount:0 mapcount:0 mapping:00000000 index:0x0 pfn:0x636bb flags: 0x0() raw: 00000000 00000000 ef867764 00000000 00000000 00000000 ffffffff 00000000 page dumped because: kasan: bad access detected addr c36bba90 is located in stack of task cat/163 at offset 48 in frame: stack_trace_save+0x0/0xb4 this frame has 1 object: [32, 48) 'trace' Memory state around the buggy address: c36bb980: f1 f1 f1 f1 00 04 f2 f2 00 00 f3 f3 00 00 00 00 c36bba00: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 >c36bba80: 00 00 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00 ^ c36bbb00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 c36bbb80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ================================================================== There is a same issue on x86 and has been resolved by the commit f7d27c35ddff ("x86/mm, kasan: Silence KASAN warnings in get_wchan()"). The solution could be applied to arm architecture too. 
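The change (in the diff that follows) wraps the loads of saved frame-pointer and return-address slots in READ_ONCE_NOCHECK(), which performs the read without KASAN instrumentation, since the unwinder legitimately reads stack slots that KASAN considers out of bounds for the current frame. A hedged sketch of the idea, using an illustrative helper name and assuming the caller has already range-checked fp exactly as unwind_frame() does:

#include <linux/compiler.h>

static inline unsigned long stack_slot_peek(unsigned long fp, int offset)
{
	/* Read one word from a saved stack slot without a KASAN check. */
	return READ_ONCE_NOCHECK(*(unsigned long *)(fp + offset));
}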
Signed-off-by: Lin Yujun Reported-by: He Ying Signed-off-by: Russell King (Oracle) Signed-off-by: Sasha Levin --- arch/arm/kernel/stacktrace.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c index db798eac7431..824774999825 100644 --- a/arch/arm/kernel/stacktrace.c +++ b/arch/arm/kernel/stacktrace.c @@ -53,17 +53,17 @@ int notrace unwind_frame(struct stackframe *frame) return -EINVAL; frame->sp = frame->fp; - frame->fp = *(unsigned long *)(fp); - frame->pc = *(unsigned long *)(fp + 4); + frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp)); + frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 4)); #else /* check current frame pointer is within bounds */ if (fp < low + 12 || fp > high - 4) return -EINVAL; /* restore the registers from the stack frame */ - frame->fp = *(unsigned long *)(fp - 12); - frame->sp = *(unsigned long *)(fp - 8); - frame->pc = *(unsigned long *)(fp - 4); + frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 12)); + frame->sp = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 8)); + frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp - 4)); #endif return 0; From d626fcdabea2258be395a775bdbe09270e9bf73d Mon Sep 17 00:00:00 2001 From: Ryusuke Konishi Date: Fri, 1 Apr 2022 11:28:18 -0700 Subject: [PATCH 0029/1842] nilfs2: fix lockdep warnings in page operations for btree nodes [ Upstream commit e897be17a441fa637cd166fc3de1445131e57692 ] Patch series "nilfs2 lockdep warning fixes". The first two are to resolve the lockdep warning issue, and the last one is the accompanying cleanup and low priority. Based on your comment, this series solves the issue by separating inode object as needed. Since I was worried about the impact of the object composition changes, I tested the series carefully not to cause regressions especially for delicate functions such like disk space reclamation and snapshots. This patch (of 3): If CONFIG_LOCKDEP is enabled, nilfs2 hits lockdep warnings at inode_to_wb() during page/folio operations for btree nodes: WARNING: CPU: 0 PID: 6575 at include/linux/backing-dev.h:269 inode_to_wb include/linux/backing-dev.h:269 [inline] WARNING: CPU: 0 PID: 6575 at include/linux/backing-dev.h:269 folio_account_dirtied mm/page-writeback.c:2460 [inline] WARNING: CPU: 0 PID: 6575 at include/linux/backing-dev.h:269 __folio_mark_dirty+0xa7c/0xe30 mm/page-writeback.c:2509 Modules linked in: ... RIP: 0010:inode_to_wb include/linux/backing-dev.h:269 [inline] RIP: 0010:folio_account_dirtied mm/page-writeback.c:2460 [inline] RIP: 0010:__folio_mark_dirty+0xa7c/0xe30 mm/page-writeback.c:2509 ... 
Call Trace: __set_page_dirty include/linux/pagemap.h:834 [inline] mark_buffer_dirty+0x4e6/0x650 fs/buffer.c:1145 nilfs_btree_propagate_p fs/nilfs2/btree.c:1889 [inline] nilfs_btree_propagate+0x4ae/0xea0 fs/nilfs2/btree.c:2085 nilfs_bmap_propagate+0x73/0x170 fs/nilfs2/bmap.c:337 nilfs_collect_dat_data+0x45/0xd0 fs/nilfs2/segment.c:625 nilfs_segctor_apply_buffers+0x14a/0x470 fs/nilfs2/segment.c:1009 nilfs_segctor_scan_file+0x47a/0x700 fs/nilfs2/segment.c:1048 nilfs_segctor_collect_blocks fs/nilfs2/segment.c:1224 [inline] nilfs_segctor_collect fs/nilfs2/segment.c:1494 [inline] nilfs_segctor_do_construct+0x14f3/0x6c60 fs/nilfs2/segment.c:2036 nilfs_segctor_construct+0x7a7/0xb30 fs/nilfs2/segment.c:2372 nilfs_segctor_thread_construct fs/nilfs2/segment.c:2480 [inline] nilfs_segctor_thread+0x3c3/0xf90 fs/nilfs2/segment.c:2563 kthread+0x405/0x4f0 kernel/kthread.c:327 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295 This is because nilfs2 uses two page caches for each inode and inode->i_mapping never points to one of them, the btree node cache. This causes inode_to_wb(inode) to refer to a different page cache than the caller page/folio operations such like __folio_start_writeback(), __folio_end_writeback(), or __folio_mark_dirty() acquired the lock. This patch resolves the issue by allocating and using an additional inode to hold the page cache of btree nodes. The inode is attached one-to-one to the traditional nilfs2 inode if it requires a block mapping with b-tree. This setup change is in memory only and does not affect the disk format. Link: https://lkml.kernel.org/r/1647867427-30498-1-git-send-email-konishi.ryusuke@gmail.com Link: https://lkml.kernel.org/r/1647867427-30498-2-git-send-email-konishi.ryusuke@gmail.com Link: https://lore.kernel.org/r/YXrYvIo8YRnAOJCj@casper.infradead.org Link: https://lore.kernel.org/r/9a20b33d-b38f-b4a2-4742-c1eb5b8e4d6c@redhat.com Signed-off-by: Ryusuke Konishi Reported-by: syzbot+0d5b462a6f07447991b3@syzkaller.appspotmail.com Reported-by: syzbot+34ef28bb2aeb28724aa0@syzkaller.appspotmail.com Reported-by: Hao Sun Reported-by: David Hildenbrand Tested-by: Ryusuke Konishi Cc: Matthew Wilcox Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- fs/nilfs2/btnode.c | 23 ++++++++-- fs/nilfs2/btnode.h | 1 + fs/nilfs2/btree.c | 27 ++++++++---- fs/nilfs2/gcinode.c | 7 +-- fs/nilfs2/inode.c | 104 ++++++++++++++++++++++++++++++++++++++------ fs/nilfs2/mdt.c | 7 +-- fs/nilfs2/nilfs.h | 14 +++--- fs/nilfs2/page.c | 7 ++- fs/nilfs2/segment.c | 9 ++-- fs/nilfs2/super.c | 5 +-- 10 files changed, 154 insertions(+), 50 deletions(-) diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c index 4391fd3abd8f..e00e184b1261 100644 --- a/fs/nilfs2/btnode.c +++ b/fs/nilfs2/btnode.c @@ -20,6 +20,23 @@ #include "page.h" #include "btnode.h" + +/** + * nilfs_init_btnc_inode - initialize B-tree node cache inode + * @btnc_inode: inode to be initialized + * + * nilfs_init_btnc_inode() sets up an inode for B-tree node cache. 
+ */ +void nilfs_init_btnc_inode(struct inode *btnc_inode) +{ + struct nilfs_inode_info *ii = NILFS_I(btnc_inode); + + btnc_inode->i_mode = S_IFREG; + ii->i_flags = 0; + memset(&ii->i_bmap_data, 0, sizeof(struct nilfs_bmap)); + mapping_set_gfp_mask(btnc_inode->i_mapping, GFP_NOFS); +} + void nilfs_btnode_cache_clear(struct address_space *btnc) { invalidate_mapping_pages(btnc, 0, -1); @@ -29,7 +46,7 @@ void nilfs_btnode_cache_clear(struct address_space *btnc) struct buffer_head * nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr) { - struct inode *inode = NILFS_BTNC_I(btnc); + struct inode *inode = btnc->host; struct buffer_head *bh; bh = nilfs_grab_buffer(inode, btnc, blocknr, BIT(BH_NILFS_Node)); @@ -57,7 +74,7 @@ int nilfs_btnode_submit_block(struct address_space *btnc, __u64 blocknr, struct buffer_head **pbh, sector_t *submit_ptr) { struct buffer_head *bh; - struct inode *inode = NILFS_BTNC_I(btnc); + struct inode *inode = btnc->host; struct page *page; int err; @@ -157,7 +174,7 @@ int nilfs_btnode_prepare_change_key(struct address_space *btnc, struct nilfs_btnode_chkey_ctxt *ctxt) { struct buffer_head *obh, *nbh; - struct inode *inode = NILFS_BTNC_I(btnc); + struct inode *inode = btnc->host; __u64 oldkey = ctxt->oldkey, newkey = ctxt->newkey; int err; diff --git a/fs/nilfs2/btnode.h b/fs/nilfs2/btnode.h index 0f88dbc9bcb3..05ab64d354dc 100644 --- a/fs/nilfs2/btnode.h +++ b/fs/nilfs2/btnode.h @@ -30,6 +30,7 @@ struct nilfs_btnode_chkey_ctxt { struct buffer_head *newbh; }; +void nilfs_init_btnc_inode(struct inode *btnc_inode); void nilfs_btnode_cache_clear(struct address_space *); struct buffer_head *nilfs_btnode_create_block(struct address_space *btnc, __u64 blocknr); diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c index f42ab57201e7..77efd69213a3 100644 --- a/fs/nilfs2/btree.c +++ b/fs/nilfs2/btree.c @@ -58,7 +58,8 @@ static void nilfs_btree_free_path(struct nilfs_btree_path *path) static int nilfs_btree_get_new_block(const struct nilfs_bmap *btree, __u64 ptr, struct buffer_head **bhp) { - struct address_space *btnc = &NILFS_BMAP_I(btree)->i_btnode_cache; + struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode; + struct address_space *btnc = btnc_inode->i_mapping; struct buffer_head *bh; bh = nilfs_btnode_create_block(btnc, ptr); @@ -470,7 +471,8 @@ static int __nilfs_btree_get_block(const struct nilfs_bmap *btree, __u64 ptr, struct buffer_head **bhp, const struct nilfs_btree_readahead_info *ra) { - struct address_space *btnc = &NILFS_BMAP_I(btree)->i_btnode_cache; + struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode; + struct address_space *btnc = btnc_inode->i_mapping; struct buffer_head *bh, *ra_bh; sector_t submit_ptr = 0; int ret; @@ -1742,6 +1744,10 @@ nilfs_btree_prepare_convert_and_insert(struct nilfs_bmap *btree, __u64 key, dat = nilfs_bmap_get_dat(btree); } + ret = nilfs_attach_btree_node_cache(&NILFS_BMAP_I(btree)->vfs_inode); + if (ret < 0) + return ret; + ret = nilfs_bmap_prepare_alloc_ptr(btree, dreq, dat); if (ret < 0) return ret; @@ -1914,7 +1920,7 @@ static int nilfs_btree_prepare_update_v(struct nilfs_bmap *btree, path[level].bp_ctxt.newkey = path[level].bp_newreq.bpr_ptr; path[level].bp_ctxt.bh = path[level].bp_bh; ret = nilfs_btnode_prepare_change_key( - &NILFS_BMAP_I(btree)->i_btnode_cache, + NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping, &path[level].bp_ctxt); if (ret < 0) { nilfs_dat_abort_update(dat, @@ -1940,7 +1946,7 @@ static void nilfs_btree_commit_update_v(struct nilfs_bmap *btree, if 
(buffer_nilfs_node(path[level].bp_bh)) { nilfs_btnode_commit_change_key( - &NILFS_BMAP_I(btree)->i_btnode_cache, + NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping, &path[level].bp_ctxt); path[level].bp_bh = path[level].bp_ctxt.bh; } @@ -1959,7 +1965,7 @@ static void nilfs_btree_abort_update_v(struct nilfs_bmap *btree, &path[level].bp_newreq.bpr_req); if (buffer_nilfs_node(path[level].bp_bh)) nilfs_btnode_abort_change_key( - &NILFS_BMAP_I(btree)->i_btnode_cache, + NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping, &path[level].bp_ctxt); } @@ -2135,7 +2141,8 @@ static void nilfs_btree_add_dirty_buffer(struct nilfs_bmap *btree, static void nilfs_btree_lookup_dirty_buffers(struct nilfs_bmap *btree, struct list_head *listp) { - struct address_space *btcache = &NILFS_BMAP_I(btree)->i_btnode_cache; + struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode; + struct address_space *btcache = btnc_inode->i_mapping; struct list_head lists[NILFS_BTREE_LEVEL_MAX]; struct pagevec pvec; struct buffer_head *bh, *head; @@ -2189,12 +2196,12 @@ static int nilfs_btree_assign_p(struct nilfs_bmap *btree, path[level].bp_ctxt.newkey = blocknr; path[level].bp_ctxt.bh = *bh; ret = nilfs_btnode_prepare_change_key( - &NILFS_BMAP_I(btree)->i_btnode_cache, + NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping, &path[level].bp_ctxt); if (ret < 0) return ret; nilfs_btnode_commit_change_key( - &NILFS_BMAP_I(btree)->i_btnode_cache, + NILFS_BMAP_I(btree)->i_assoc_inode->i_mapping, &path[level].bp_ctxt); *bh = path[level].bp_ctxt.bh; } @@ -2399,6 +2406,10 @@ int nilfs_btree_init(struct nilfs_bmap *bmap) if (nilfs_btree_root_broken(nilfs_btree_get_root(bmap), bmap->b_inode)) ret = -EIO; + else + ret = nilfs_attach_btree_node_cache( + &NILFS_BMAP_I(bmap)->vfs_inode); + return ret; } diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c index 448320496856..aadea660c66c 100644 --- a/fs/nilfs2/gcinode.c +++ b/fs/nilfs2/gcinode.c @@ -126,9 +126,10 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff, int nilfs_gccache_submit_read_node(struct inode *inode, sector_t pbn, __u64 vbn, struct buffer_head **out_bh) { + struct inode *btnc_inode = NILFS_I(inode)->i_assoc_inode; int ret; - ret = nilfs_btnode_submit_block(&NILFS_I(inode)->i_btnode_cache, + ret = nilfs_btnode_submit_block(btnc_inode->i_mapping, vbn ? 
: pbn, pbn, REQ_OP_READ, 0, out_bh, &pbn); if (ret == -EEXIST) /* internal code (cache hit) */ @@ -170,7 +171,7 @@ int nilfs_init_gcinode(struct inode *inode) ii->i_flags = 0; nilfs_bmap_init_gc(ii->i_bmap); - return 0; + return nilfs_attach_btree_node_cache(inode); } /** @@ -185,7 +186,7 @@ void nilfs_remove_all_gcinodes(struct the_nilfs *nilfs) ii = list_first_entry(head, struct nilfs_inode_info, i_dirty); list_del_init(&ii->i_dirty); truncate_inode_pages(&ii->vfs_inode.i_data, 0); - nilfs_btnode_cache_clear(&ii->i_btnode_cache); + nilfs_btnode_cache_clear(ii->i_assoc_inode->i_mapping); iput(&ii->vfs_inode); } } diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c index 745d371d6fea..c3e4e677c679 100644 --- a/fs/nilfs2/inode.c +++ b/fs/nilfs2/inode.c @@ -29,12 +29,14 @@ * @cno: checkpoint number * @root: pointer on NILFS root object (mounted checkpoint) * @for_gc: inode for GC flag + * @for_btnc: inode for B-tree node cache flag */ struct nilfs_iget_args { u64 ino; __u64 cno; struct nilfs_root *root; - int for_gc; + bool for_gc; + bool for_btnc; }; static int nilfs_iget_test(struct inode *inode, void *opaque); @@ -314,7 +316,8 @@ static int nilfs_insert_inode_locked(struct inode *inode, unsigned long ino) { struct nilfs_iget_args args = { - .ino = ino, .root = root, .cno = 0, .for_gc = 0 + .ino = ino, .root = root, .cno = 0, .for_gc = false, + .for_btnc = false }; return insert_inode_locked4(inode, ino, nilfs_iget_test, &args); @@ -527,6 +530,13 @@ static int nilfs_iget_test(struct inode *inode, void *opaque) return 0; ii = NILFS_I(inode); + if (test_bit(NILFS_I_BTNC, &ii->i_state)) { + if (!args->for_btnc) + return 0; + } else if (args->for_btnc) { + return 0; + } + if (!test_bit(NILFS_I_GCINODE, &ii->i_state)) return !args->for_gc; @@ -538,15 +548,15 @@ static int nilfs_iget_set(struct inode *inode, void *opaque) struct nilfs_iget_args *args = opaque; inode->i_ino = args->ino; - if (args->for_gc) { + NILFS_I(inode)->i_cno = args->cno; + NILFS_I(inode)->i_root = args->root; + if (args->root && args->ino == NILFS_ROOT_INO) + nilfs_get_root(args->root); + + if (args->for_gc) NILFS_I(inode)->i_state = BIT(NILFS_I_GCINODE); - NILFS_I(inode)->i_cno = args->cno; - NILFS_I(inode)->i_root = NULL; - } else { - if (args->root && args->ino == NILFS_ROOT_INO) - nilfs_get_root(args->root); - NILFS_I(inode)->i_root = args->root; - } + if (args->for_btnc) + NILFS_I(inode)->i_state |= BIT(NILFS_I_BTNC); return 0; } @@ -554,7 +564,8 @@ struct inode *nilfs_ilookup(struct super_block *sb, struct nilfs_root *root, unsigned long ino) { struct nilfs_iget_args args = { - .ino = ino, .root = root, .cno = 0, .for_gc = 0 + .ino = ino, .root = root, .cno = 0, .for_gc = false, + .for_btnc = false }; return ilookup5(sb, ino, nilfs_iget_test, &args); @@ -564,7 +575,8 @@ struct inode *nilfs_iget_locked(struct super_block *sb, struct nilfs_root *root, unsigned long ino) { struct nilfs_iget_args args = { - .ino = ino, .root = root, .cno = 0, .for_gc = 0 + .ino = ino, .root = root, .cno = 0, .for_gc = false, + .for_btnc = false }; return iget5_locked(sb, ino, nilfs_iget_test, nilfs_iget_set, &args); @@ -595,7 +607,8 @@ struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino, __u64 cno) { struct nilfs_iget_args args = { - .ino = ino, .root = NULL, .cno = cno, .for_gc = 1 + .ino = ino, .root = NULL, .cno = cno, .for_gc = true, + .for_btnc = false }; struct inode *inode; int err; @@ -615,6 +628,68 @@ struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino, return inode; } +/** + * 
nilfs_attach_btree_node_cache - attach a B-tree node cache to the inode + * @inode: inode object + * + * nilfs_attach_btree_node_cache() attaches a B-tree node cache to @inode, + * or does nothing if the inode already has it. This function allocates + * an additional inode to maintain page cache of B-tree nodes one-on-one. + * + * Return Value: On success, 0 is returned. On errors, one of the following + * negative error code is returned. + * + * %-ENOMEM - Insufficient memory available. + */ +int nilfs_attach_btree_node_cache(struct inode *inode) +{ + struct nilfs_inode_info *ii = NILFS_I(inode); + struct inode *btnc_inode; + struct nilfs_iget_args args; + + if (ii->i_assoc_inode) + return 0; + + args.ino = inode->i_ino; + args.root = ii->i_root; + args.cno = ii->i_cno; + args.for_gc = test_bit(NILFS_I_GCINODE, &ii->i_state) != 0; + args.for_btnc = true; + + btnc_inode = iget5_locked(inode->i_sb, inode->i_ino, nilfs_iget_test, + nilfs_iget_set, &args); + if (unlikely(!btnc_inode)) + return -ENOMEM; + if (btnc_inode->i_state & I_NEW) { + nilfs_init_btnc_inode(btnc_inode); + unlock_new_inode(btnc_inode); + } + NILFS_I(btnc_inode)->i_assoc_inode = inode; + NILFS_I(btnc_inode)->i_bmap = ii->i_bmap; + ii->i_assoc_inode = btnc_inode; + + return 0; +} + +/** + * nilfs_detach_btree_node_cache - detach the B-tree node cache from the inode + * @inode: inode object + * + * nilfs_detach_btree_node_cache() detaches the B-tree node cache and its + * holder inode bound to @inode, or does nothing if @inode doesn't have it. + */ +void nilfs_detach_btree_node_cache(struct inode *inode) +{ + struct nilfs_inode_info *ii = NILFS_I(inode); + struct inode *btnc_inode = ii->i_assoc_inode; + + if (btnc_inode) { + NILFS_I(btnc_inode)->i_assoc_inode = NULL; + ii->i_assoc_inode = NULL; + iput(btnc_inode); + } +} + void nilfs_write_inode_common(struct inode *inode, struct nilfs_inode *raw_inode, int has_bmap) { @@ -762,7 +837,8 @@ static void nilfs_clear_inode(struct inode *inode) if (test_bit(NILFS_I_BMAP, &ii->i_state)) nilfs_bmap_clear(ii->i_bmap); - nilfs_btnode_cache_clear(&ii->i_btnode_cache); + if (!test_bit(NILFS_I_BTNC, &ii->i_state)) + nilfs_detach_btree_node_cache(inode); if (ii->i_root && inode->i_ino == NILFS_ROOT_INO) nilfs_put_root(ii->i_root); diff --git a/fs/nilfs2/mdt.c b/fs/nilfs2/mdt.c index c0361ce45f62..d3f6cb9c32a0 100644 --- a/fs/nilfs2/mdt.c +++ b/fs/nilfs2/mdt.c @@ -531,7 +531,7 @@ int nilfs_mdt_save_to_shadow_map(struct inode *inode) goto out; ret = nilfs_copy_dirty_pages(&shadow->frozen_btnodes, - &ii->i_btnode_cache); + ii->i_assoc_inode->i_mapping); if (ret) goto out; @@ -622,8 +622,9 @@ void nilfs_mdt_restore_from_shadow_map(struct inode *inode) nilfs_clear_dirty_pages(inode->i_mapping, true); nilfs_copy_back_pages(inode->i_mapping, &shadow->frozen_data); - nilfs_clear_dirty_pages(&ii->i_btnode_cache, true); - nilfs_copy_back_pages(&ii->i_btnode_cache, &shadow->frozen_btnodes); + nilfs_clear_dirty_pages(ii->i_assoc_inode->i_mapping, true); + nilfs_copy_back_pages(ii->i_assoc_inode->i_mapping, + &shadow->frozen_btnodes); nilfs_bmap_restore(ii->i_bmap, &shadow->bmap_store); diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h index f8450ee3fd06..635383b30d67 100644 --- a/fs/nilfs2/nilfs.h +++ b/fs/nilfs2/nilfs.h @@ -28,7 +28,7 @@ * @i_xattr: * @i_dir_start_lookup: page index of last successful search * @i_cno: checkpoint number for GC inode - * @i_btnode_cache: cached pages of b-tree nodes + * @i_assoc_inode: associated inode (B-tree node cache holder or back pointer) * @i_dirty: list for 
connecting dirty files * @xattr_sem: semaphore for extended attributes processing * @i_bh: buffer contains disk inode @@ -43,7 +43,7 @@ struct nilfs_inode_info { __u64 i_xattr; /* sector_t ??? */ __u32 i_dir_start_lookup; __u64 i_cno; /* check point number for GC inode */ - struct address_space i_btnode_cache; + struct inode *i_assoc_inode; struct list_head i_dirty; /* List for connecting dirty files */ #ifdef CONFIG_NILFS_XATTR @@ -75,13 +75,6 @@ NILFS_BMAP_I(const struct nilfs_bmap *bmap) return container_of(bmap, struct nilfs_inode_info, i_bmap_data); } -static inline struct inode *NILFS_BTNC_I(struct address_space *btnc) -{ - struct nilfs_inode_info *ii = - container_of(btnc, struct nilfs_inode_info, i_btnode_cache); - return &ii->vfs_inode; -} - /* * Dynamic state flags of NILFS on-memory inode (i_state) */ @@ -98,6 +91,7 @@ enum { NILFS_I_INODE_SYNC, /* dsync is not allowed for inode */ NILFS_I_BMAP, /* has bmap and btnode_cache */ NILFS_I_GCINODE, /* inode for GC, on memory only */ + NILFS_I_BTNC, /* inode for btree node cache */ }; /* @@ -264,6 +258,8 @@ struct inode *nilfs_iget(struct super_block *sb, struct nilfs_root *root, unsigned long ino); extern struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino, __u64 cno); +int nilfs_attach_btree_node_cache(struct inode *inode); +void nilfs_detach_btree_node_cache(struct inode *inode); extern void nilfs_update_inode(struct inode *, struct buffer_head *, int); extern void nilfs_truncate(struct inode *); extern void nilfs_evict_inode(struct inode *); diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c index 171fb5cd427f..d1a148f0cae3 100644 --- a/fs/nilfs2/page.c +++ b/fs/nilfs2/page.c @@ -448,10 +448,9 @@ void nilfs_mapping_init(struct address_space *mapping, struct inode *inode) /* * NILFS2 needs clear_page_dirty() in the following two cases: * - * 1) For B-tree node pages and data pages of the dat/gcdat, NILFS2 clears - * page dirty flags when it copies back pages from the shadow cache - * (gcdat->{i_mapping,i_btnode_cache}) to its original cache - * (dat->{i_mapping,i_btnode_cache}). + * 1) For B-tree node pages and data pages of DAT file, NILFS2 clears dirty + * flag of pages when it copies back pages from shadow cache to the + * original cache. * * 2) Some B-tree operations like insertion or deletion may dispose buffers * in dirty state, and this needs to cancel the dirty state of their pages. 
diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c index e3726aca28ed..8350c2eaee75 100644 --- a/fs/nilfs2/segment.c +++ b/fs/nilfs2/segment.c @@ -738,15 +738,18 @@ static void nilfs_lookup_dirty_node_buffers(struct inode *inode, struct list_head *listp) { struct nilfs_inode_info *ii = NILFS_I(inode); - struct address_space *mapping = &ii->i_btnode_cache; + struct inode *btnc_inode = ii->i_assoc_inode; struct pagevec pvec; struct buffer_head *bh, *head; unsigned int i; pgoff_t index = 0; + if (!btnc_inode) + return; + pagevec_init(&pvec); - while (pagevec_lookup_tag(&pvec, mapping, &index, + while (pagevec_lookup_tag(&pvec, btnc_inode->i_mapping, &index, PAGECACHE_TAG_DIRTY)) { for (i = 0; i < pagevec_count(&pvec); i++) { bh = head = page_buffers(pvec.pages[i]); @@ -2415,7 +2418,7 @@ nilfs_remove_written_gcinodes(struct the_nilfs *nilfs, struct list_head *head) continue; list_del_init(&ii->i_dirty); truncate_inode_pages(&ii->vfs_inode.i_data, 0); - nilfs_btnode_cache_clear(&ii->i_btnode_cache); + nilfs_btnode_cache_clear(ii->i_assoc_inode->i_mapping); iput(&ii->vfs_inode); } } diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c index 4abd928b0bc8..b9d30e8c43b0 100644 --- a/fs/nilfs2/super.c +++ b/fs/nilfs2/super.c @@ -157,7 +157,8 @@ struct inode *nilfs_alloc_inode(struct super_block *sb) ii->i_bh = NULL; ii->i_state = 0; ii->i_cno = 0; - nilfs_mapping_init(&ii->i_btnode_cache, &ii->vfs_inode); + ii->i_assoc_inode = NULL; + ii->i_bmap = &ii->i_bmap_data; return &ii->vfs_inode; } @@ -1377,8 +1378,6 @@ static void nilfs_inode_init_once(void *obj) #ifdef CONFIG_NILFS_XATTR init_rwsem(&ii->xattr_sem); #endif - address_space_init_once(&ii->i_btnode_cache); - ii->i_bmap = &ii->i_bmap_data; inode_init_once(&ii->vfs_inode); } From fe5ac3da50a92764bd534a39f851c744f0d4e2c3 Mon Sep 17 00:00:00 2001 From: Ryusuke Konishi Date: Fri, 1 Apr 2022 11:28:21 -0700 Subject: [PATCH 0030/1842] nilfs2: fix lockdep warnings during disk space reclamation [ Upstream commit 6e211930f79aa45d422009a5f2e5467d2369ffe5 ] During disk space reclamation, nilfs2 still emits the following lockdep warning due to page/folio operations on shadowed page caches that nilfs2 uses to get a snapshot of DAT file in memory: WARNING: CPU: 0 PID: 2643 at include/linux/backing-dev.h:272 __folio_mark_dirty+0x645/0x670 ... RIP: 0010:__folio_mark_dirty+0x645/0x670 ... Call Trace: filemap_dirty_folio+0x74/0xd0 __set_page_dirty_nobuffers+0x85/0xb0 nilfs_copy_dirty_pages+0x288/0x510 [nilfs2] nilfs_mdt_save_to_shadow_map+0x50/0xe0 [nilfs2] nilfs_clean_segments+0xee/0x5d0 [nilfs2] nilfs_ioctl_clean_segments.isra.19+0xb08/0xf40 [nilfs2] nilfs_ioctl+0xc52/0xfb0 [nilfs2] __x64_sys_ioctl+0x11d/0x170 This fixes the remaining warning by using inode objects to hold those page caches. 
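Both nilfs2 patches rely on the same mechanism: obtain an extra in-memory inode through iget5_locked(), whose test/set callbacks let several inodes share one inode number while staying distinguishable by purpose (regular, B-tree node cache, shadow). A simplified sketch of that lookup, assuming an illustrative discriminator kept in i_private rather than the NILFS_I(inode)->i_state bits the real code uses:

#include <linux/err.h>
#include <linux/fs.h>

struct aux_iget_args {
	unsigned long ino;
	bool for_aux;		/* true: auxiliary page-cache holder */
};

static int aux_iget_test(struct inode *inode, void *opaque)
{
	struct aux_iget_args *args = opaque;

	/* Match only an inode created for the same purpose. */
	return inode->i_ino == args->ino &&
	       (inode->i_private != NULL) == args->for_aux;
}

static int aux_iget_set(struct inode *inode, void *opaque)
{
	struct aux_iget_args *args = opaque;

	inode->i_ino = args->ino;
	inode->i_private = args->for_aux ? (void *)1UL : NULL; /* non-NULL marker */
	return 0;
}

static struct inode *aux_inode_get(struct super_block *sb, unsigned long ino)
{
	struct aux_iget_args args = { .ino = ino, .for_aux = true };
	struct inode *inode;

	inode = iget5_locked(sb, ino, aux_iget_test, aux_iget_set, &args);
	if (!inode)
		return ERR_PTR(-ENOMEM);
	if (inode->i_state & I_NEW)
		unlock_new_inode(inode);	/* first lookup created it */
	return inode;
}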
Link: https://lkml.kernel.org/r/1647867427-30498-3-git-send-email-konishi.ryusuke@gmail.com Signed-off-by: Ryusuke Konishi Tested-by: Ryusuke Konishi Cc: Matthew Wilcox Cc: David Hildenbrand Cc: Hao Sun Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- fs/nilfs2/dat.c | 4 ++- fs/nilfs2/inode.c | 63 ++++++++++++++++++++++++++++++++++++++++++++--- fs/nilfs2/mdt.c | 38 +++++++++++++++++++--------- fs/nilfs2/mdt.h | 6 ++--- fs/nilfs2/nilfs.h | 2 ++ 5 files changed, 92 insertions(+), 21 deletions(-) diff --git a/fs/nilfs2/dat.c b/fs/nilfs2/dat.c index 8bccdf1158fc..1a3d183027b9 100644 --- a/fs/nilfs2/dat.c +++ b/fs/nilfs2/dat.c @@ -497,7 +497,9 @@ int nilfs_dat_read(struct super_block *sb, size_t entry_size, di = NILFS_DAT_I(dat); lockdep_set_class(&di->mi.mi_sem, &dat_lock_key); nilfs_palloc_setup_cache(dat, &di->palloc_cache); - nilfs_mdt_setup_shadow_map(dat, &di->shadow); + err = nilfs_mdt_setup_shadow_map(dat, &di->shadow); + if (err) + goto failed; err = nilfs_read_inode_common(dat, raw_inode); if (err) diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c index c3e4e677c679..95684fa3c985 100644 --- a/fs/nilfs2/inode.c +++ b/fs/nilfs2/inode.c @@ -30,6 +30,7 @@ * @root: pointer on NILFS root object (mounted checkpoint) * @for_gc: inode for GC flag * @for_btnc: inode for B-tree node cache flag + * @for_shadow: inode for shadowed page cache flag */ struct nilfs_iget_args { u64 ino; @@ -37,6 +38,7 @@ struct nilfs_iget_args { struct nilfs_root *root; bool for_gc; bool for_btnc; + bool for_shadow; }; static int nilfs_iget_test(struct inode *inode, void *opaque); @@ -317,7 +319,7 @@ static int nilfs_insert_inode_locked(struct inode *inode, { struct nilfs_iget_args args = { .ino = ino, .root = root, .cno = 0, .for_gc = false, - .for_btnc = false + .for_btnc = false, .for_shadow = false }; return insert_inode_locked4(inode, ino, nilfs_iget_test, &args); @@ -536,6 +538,12 @@ static int nilfs_iget_test(struct inode *inode, void *opaque) } else if (args->for_btnc) { return 0; } + if (test_bit(NILFS_I_SHADOW, &ii->i_state)) { + if (!args->for_shadow) + return 0; + } else if (args->for_shadow) { + return 0; + } if (!test_bit(NILFS_I_GCINODE, &ii->i_state)) return !args->for_gc; @@ -557,6 +565,8 @@ static int nilfs_iget_set(struct inode *inode, void *opaque) NILFS_I(inode)->i_state = BIT(NILFS_I_GCINODE); if (args->for_btnc) NILFS_I(inode)->i_state |= BIT(NILFS_I_BTNC); + if (args->for_shadow) + NILFS_I(inode)->i_state |= BIT(NILFS_I_SHADOW); return 0; } @@ -565,7 +575,7 @@ struct inode *nilfs_ilookup(struct super_block *sb, struct nilfs_root *root, { struct nilfs_iget_args args = { .ino = ino, .root = root, .cno = 0, .for_gc = false, - .for_btnc = false + .for_btnc = false, .for_shadow = false }; return ilookup5(sb, ino, nilfs_iget_test, &args); @@ -576,7 +586,7 @@ struct inode *nilfs_iget_locked(struct super_block *sb, struct nilfs_root *root, { struct nilfs_iget_args args = { .ino = ino, .root = root, .cno = 0, .for_gc = false, - .for_btnc = false + .for_btnc = false, .for_shadow = false }; return iget5_locked(sb, ino, nilfs_iget_test, nilfs_iget_set, &args); @@ -608,7 +618,7 @@ struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino, { struct nilfs_iget_args args = { .ino = ino, .root = NULL, .cno = cno, .for_gc = true, - .for_btnc = false + .for_btnc = false, .for_shadow = false }; struct inode *inode; int err; @@ -655,6 +665,7 @@ int nilfs_attach_btree_node_cache(struct inode *inode) args.cno = ii->i_cno; args.for_gc = 
test_bit(NILFS_I_GCINODE, &ii->i_state) != 0; args.for_btnc = true; + args.for_shadow = test_bit(NILFS_I_SHADOW, &ii->i_state) != 0; btnc_inode = iget5_locked(inode->i_sb, inode->i_ino, nilfs_iget_test, nilfs_iget_set, &args); @@ -690,6 +701,50 @@ void nilfs_detach_btree_node_cache(struct inode *inode) } } +/** + * nilfs_iget_for_shadow - obtain inode for shadow mapping + * @inode: inode object that uses shadow mapping + * + * nilfs_iget_for_shadow() allocates a pair of inodes that holds page + * caches for shadow mapping. The page cache for data pages is set up + * in one inode and the one for b-tree node pages is set up in the + * other inode, which is attached to the former inode. + * + * Return Value: On success, a pointer to the inode for data pages is + * returned. On errors, one of the following negative error code is returned + * in a pointer type. + * + * %-ENOMEM - Insufficient memory available. + */ +struct inode *nilfs_iget_for_shadow(struct inode *inode) +{ + struct nilfs_iget_args args = { + .ino = inode->i_ino, .root = NULL, .cno = 0, .for_gc = false, + .for_btnc = false, .for_shadow = true + }; + struct inode *s_inode; + int err; + + s_inode = iget5_locked(inode->i_sb, inode->i_ino, nilfs_iget_test, + nilfs_iget_set, &args); + if (unlikely(!s_inode)) + return ERR_PTR(-ENOMEM); + if (!(s_inode->i_state & I_NEW)) + return inode; + + NILFS_I(s_inode)->i_flags = 0; + memset(NILFS_I(s_inode)->i_bmap, 0, sizeof(struct nilfs_bmap)); + mapping_set_gfp_mask(s_inode->i_mapping, GFP_NOFS); + + err = nilfs_attach_btree_node_cache(s_inode); + if (unlikely(err)) { + iget_failed(s_inode); + return ERR_PTR(err); + } + unlock_new_inode(s_inode); + return s_inode; +} + void nilfs_write_inode_common(struct inode *inode, struct nilfs_inode *raw_inode, int has_bmap) { diff --git a/fs/nilfs2/mdt.c b/fs/nilfs2/mdt.c index d3f6cb9c32a0..e80ef2c0a785 100644 --- a/fs/nilfs2/mdt.c +++ b/fs/nilfs2/mdt.c @@ -469,9 +469,18 @@ int nilfs_mdt_init(struct inode *inode, gfp_t gfp_mask, size_t objsz) void nilfs_mdt_clear(struct inode *inode) { struct nilfs_mdt_info *mdi = NILFS_MDT(inode); + struct nilfs_shadow_map *shadow = mdi->mi_shadow; if (mdi->mi_palloc_cache) nilfs_palloc_destroy_cache(inode); + + if (shadow) { + struct inode *s_inode = shadow->inode; + + shadow->inode = NULL; + iput(s_inode); + mdi->mi_shadow = NULL; + } } /** @@ -505,12 +514,15 @@ int nilfs_mdt_setup_shadow_map(struct inode *inode, struct nilfs_shadow_map *shadow) { struct nilfs_mdt_info *mi = NILFS_MDT(inode); + struct inode *s_inode; INIT_LIST_HEAD(&shadow->frozen_buffers); - address_space_init_once(&shadow->frozen_data); - nilfs_mapping_init(&shadow->frozen_data, inode); - address_space_init_once(&shadow->frozen_btnodes); - nilfs_mapping_init(&shadow->frozen_btnodes, inode); + + s_inode = nilfs_iget_for_shadow(inode); + if (IS_ERR(s_inode)) + return PTR_ERR(s_inode); + + shadow->inode = s_inode; mi->mi_shadow = shadow; return 0; } @@ -524,13 +536,14 @@ int nilfs_mdt_save_to_shadow_map(struct inode *inode) struct nilfs_mdt_info *mi = NILFS_MDT(inode); struct nilfs_inode_info *ii = NILFS_I(inode); struct nilfs_shadow_map *shadow = mi->mi_shadow; + struct inode *s_inode = shadow->inode; int ret; - ret = nilfs_copy_dirty_pages(&shadow->frozen_data, inode->i_mapping); + ret = nilfs_copy_dirty_pages(s_inode->i_mapping, inode->i_mapping); if (ret) goto out; - ret = nilfs_copy_dirty_pages(&shadow->frozen_btnodes, + ret = nilfs_copy_dirty_pages(NILFS_I(s_inode)->i_assoc_inode->i_mapping, ii->i_assoc_inode->i_mapping); if (ret) goto out; @@ 
-547,7 +560,7 @@ int nilfs_mdt_freeze_buffer(struct inode *inode, struct buffer_head *bh) struct page *page; int blkbits = inode->i_blkbits; - page = grab_cache_page(&shadow->frozen_data, bh->b_page->index); + page = grab_cache_page(shadow->inode->i_mapping, bh->b_page->index); if (!page) return -ENOMEM; @@ -579,7 +592,7 @@ nilfs_mdt_get_frozen_buffer(struct inode *inode, struct buffer_head *bh) struct page *page; int n; - page = find_lock_page(&shadow->frozen_data, bh->b_page->index); + page = find_lock_page(shadow->inode->i_mapping, bh->b_page->index); if (page) { if (page_has_buffers(page)) { n = bh_offset(bh) >> inode->i_blkbits; @@ -620,11 +633,11 @@ void nilfs_mdt_restore_from_shadow_map(struct inode *inode) nilfs_palloc_clear_cache(inode); nilfs_clear_dirty_pages(inode->i_mapping, true); - nilfs_copy_back_pages(inode->i_mapping, &shadow->frozen_data); + nilfs_copy_back_pages(inode->i_mapping, shadow->inode->i_mapping); nilfs_clear_dirty_pages(ii->i_assoc_inode->i_mapping, true); nilfs_copy_back_pages(ii->i_assoc_inode->i_mapping, - &shadow->frozen_btnodes); + NILFS_I(shadow->inode)->i_assoc_inode->i_mapping); nilfs_bmap_restore(ii->i_bmap, &shadow->bmap_store); @@ -639,10 +652,11 @@ void nilfs_mdt_clear_shadow_map(struct inode *inode) { struct nilfs_mdt_info *mi = NILFS_MDT(inode); struct nilfs_shadow_map *shadow = mi->mi_shadow; + struct inode *shadow_btnc_inode = NILFS_I(shadow->inode)->i_assoc_inode; down_write(&mi->mi_sem); nilfs_release_frozen_buffers(shadow); - truncate_inode_pages(&shadow->frozen_data, 0); - truncate_inode_pages(&shadow->frozen_btnodes, 0); + truncate_inode_pages(shadow->inode->i_mapping, 0); + truncate_inode_pages(shadow_btnc_inode->i_mapping, 0); up_write(&mi->mi_sem); } diff --git a/fs/nilfs2/mdt.h b/fs/nilfs2/mdt.h index e77aea4bb921..9d8ac0d27c16 100644 --- a/fs/nilfs2/mdt.h +++ b/fs/nilfs2/mdt.h @@ -18,14 +18,12 @@ /** * struct nilfs_shadow_map - shadow mapping of meta data file * @bmap_store: shadow copy of bmap state - * @frozen_data: shadowed dirty data pages - * @frozen_btnodes: shadowed dirty b-tree nodes' pages + * @inode: holder of page caches used in shadow mapping * @frozen_buffers: list of frozen buffers */ struct nilfs_shadow_map { struct nilfs_bmap_store bmap_store; - struct address_space frozen_data; - struct address_space frozen_btnodes; + struct inode *inode; struct list_head frozen_buffers; }; diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h index 635383b30d67..9ca165bc97d2 100644 --- a/fs/nilfs2/nilfs.h +++ b/fs/nilfs2/nilfs.h @@ -92,6 +92,7 @@ enum { NILFS_I_BMAP, /* has bmap and btnode_cache */ NILFS_I_GCINODE, /* inode for GC, on memory only */ NILFS_I_BTNC, /* inode for btree node cache */ + NILFS_I_SHADOW, /* inode for shadowed page cache */ }; /* @@ -260,6 +261,7 @@ extern struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino, __u64 cno); int nilfs_attach_btree_node_cache(struct inode *inode); void nilfs_detach_btree_node_cache(struct inode *inode); +struct inode *nilfs_iget_for_shadow(struct inode *inode); extern void nilfs_update_inode(struct inode *, struct buffer_head *, int); extern void nilfs_truncate(struct inode *); extern void nilfs_evict_inode(struct inode *); From e2cfa7b0935c2445cd7a2919f91831a0f9821c3d Mon Sep 17 00:00:00 2001 From: Sasha Levin Date: Wed, 18 May 2022 15:28:18 -0400 Subject: [PATCH 0031/1842] Revert "swiotlb: fix info leak with DMA_FROM_DEVICE" This reverts commit d4d975e7921079f877f828099bb8260af335508f. Upstream had a follow-up fix, revert, and a semi-reverted-revert. 
Instead of going through this chain which is more painful to backport, I'm just going to revert this original commit and pick the final one. Signed-off-by: Sasha Levin --- Documentation/core-api/dma-attributes.rst | 8 -------- include/linux/dma-mapping.h | 8 -------- kernel/dma/swiotlb.c | 3 +-- 3 files changed, 1 insertion(+), 18 deletions(-) diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst index 17706dc91ec9..1887d92e8e92 100644 --- a/Documentation/core-api/dma-attributes.rst +++ b/Documentation/core-api/dma-attributes.rst @@ -130,11 +130,3 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged subsystem that the buffer is fully accessible at the elevated privilege level (and ideally inaccessible or at least read-only at the lesser-privileged levels). - -DMA_ATTR_OVERWRITE ------------------- - -This is a hint to the DMA-mapping subsystem that the device is expected to -overwrite the entire mapped size, thus the caller does not require any of the -previous buffer contents to be preserved. This allows bounce-buffering -implementations to optimise DMA_FROM_DEVICE transfers. diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index a9361178c5db..a7d70cdee25e 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -61,14 +61,6 @@ */ #define DMA_ATTR_PRIVILEGED (1UL << 9) -/* - * This is a hint to the DMA-mapping subsystem that the device is expected - * to overwrite the entire mapped size, thus the caller does not require any - * of the previous buffer contents to be preserved. This allows - * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers. - */ -#define DMA_ATTR_OVERWRITE (1UL << 10) - /* * A dma_addr_t can hold any valid DMA or bus address for the platform. It can * be given to a device to use as a DMA source or target. It is specific to a diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 62b1e5fa8673..0ed0e1f215c7 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -598,8 +598,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, tlb_addr = slot_addr(io_tlb_start, index) + offset; if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && - (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE || - dir == DMA_BIDIRECTIONAL)) + (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE); return tlb_addr; } From f3f2247ac31cb71d1f05f56536df5946c6652f4a Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Mon, 28 Mar 2022 11:37:05 -0700 Subject: [PATCH 0032/1842] Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE"" MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 901c7280ca0d5e2b4a8929fbe0bfb007ac2a6544 ] Halil Pasic points out [1] that the full revert of that commit (revert in bddac7c1e02b), and that a partial revert that only reverts the problematic case, but still keeps some of the cleanups is probably better.  And that partial revert [2] had already been verified by Oleksandr Natalenko to also fix the issue, I had just missed that in the long discussion. So let's reinstate the cleanups from commit aa6f8dcbab47 ("swiotlb: rework "fix info leak with DMA_FROM_DEVICE""), and effectively only revert the part that caused problems. 
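The behaviour being reinstated below is the unconditional copy of the caller's buffer into the bounce slot at map time, even for device-to-memory transfers: if the device writes only part of the mapping, the untouched remainder of the slot is later copied back to the caller, so it must never contain stale swiotlb pool contents (old kernel memory). A deliberately simplified userspace analogue of that reasoning, not kernel code and not the swiotlb API:

#include <stddef.h>
#include <string.h>

struct bounce_slot {
	unsigned char data[256];	/* may still hold data from a previous user */
};

/* Map: always seed the slot from the original buffer ("unconditional bounce"). */
static void bounce_map(struct bounce_slot *slot,
		       const unsigned char *orig, size_t len)
{
	memcpy(slot->data, orig, len);
}

/* Unmap after a device-to-memory transfer: the whole mapping is copied back,
 * including any bytes the device never wrote. */
static void bounce_unmap(struct bounce_slot *slot,
			 unsigned char *orig, size_t len)
{
	memcpy(orig, slot->data, len);
}

If bounce_map() skipped the copy, whatever the slot previously held would leak into the caller's buffer through bounce_unmap() whenever the device wrote fewer than len bytes.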
Link: https://lore.kernel.org/all/20220328013731.017ae3e3.pasic@linux.ibm.com/ [1] Link: https://lore.kernel.org/all/20220324055732.GB12078@lst.de/ [2] Link: https://lore.kernel.org/all/4386660.LvFx2qVVIh@natalenko.name/ [3] Suggested-by: Halil Pasic Tested-by: Oleksandr Natalenko Cc: Christoph Hellwig" Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- kernel/dma/swiotlb.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c index 0ed0e1f215c7..274587a57717 100644 --- a/kernel/dma/swiotlb.c +++ b/kernel/dma/swiotlb.c @@ -597,9 +597,14 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr, io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i); tlb_addr = slot_addr(io_tlb_start, index) + offset; - if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && - (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) - swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE); + /* + * When dir == DMA_FROM_DEVICE we could omit the copy from the orig + * to the tlb buffer, if we knew for sure the device will + * overwirte the entire current content. But we don't. Thus + * unconditional bounce may prevent leaking swiotlb content (i.e. + * kernel memory) to user-space. + */ + swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE); return tlb_addr; } From a34d018b6eaba7ffcbafd0f68e66df6003321995 Mon Sep 17 00:00:00 2001 From: Takashi Iwai Date: Mon, 16 May 2022 12:31:12 +0200 Subject: [PATCH 0033/1842] ALSA: usb-audio: Restore Rane SL-1 quirk commit 5c62383c06837b5719cd5447a5758b791279e653 upstream. At cleaning up and moving the device rename from the quirk table to its own table, we removed the entry for Rane SL-1 as we thought it's only for renaming. It turned out, however, that the quirk is required for matching with the device that declares itself as no standard audio but only as vendor-specific. Restore the quirk entry for Rane SL-1 to fix the regression. BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215887 Fixes: 5436f59bc5bc ("ALSA: usb-audio: Move device rename and profile quirks to an internal table") Cc: Link: https://lore.kernel.org/r/20220516103112.12950-1-tiwai@suse.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/usb/quirks-table.h | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h index aabd3a10ec5b..1ac91c46da3c 100644 --- a/sound/usb/quirks-table.h +++ b/sound/usb/quirks-table.h @@ -3208,6 +3208,15 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), } }, +/* Rane SL-1 */ +{ + USB_DEVICE(0x13e5, 0x0001), + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { + .ifnum = QUIRK_ANY_INTERFACE, + .type = QUIRK_AUDIO_STANDARD_INTERFACE + } +}, + /* disabled due to regression for other devices; * see https://bugzilla.kernel.org/show_bug.cgi?id=199905 */ From 3eaf770163b771ca218482614a1c84281254bb54 Mon Sep 17 00:00:00 2001 From: Takashi Iwai Date: Tue, 10 May 2022 12:36:26 +0200 Subject: [PATCH 0034/1842] ALSA: wavefront: Proper check of get_user() error commit a34ae6c0660d3b96b0055f68ef74dc9478852245 upstream. The antient ISA wavefront driver reads its sample patch data (uploaded over an ioctl) via __get_user() with no good reason; likely just for some performance optimizations in the past. Let's change this to the standard get_user() and the error check for handling the fault case properly. 
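A short sketch of the pattern the fix applies, with illustrative names rather than the wavefront code itself: fetch each 16-bit sample with get_user(), which validates the user pointer and reports a fault, and stop on error instead of continuing with an undefined value.

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

static int copy_samples(u16 *dst, const u16 __user *src, unsigned int count)
{
	unsigned int i;
	u16 sample;

	for (i = 0; i < count; i++) {
		if (get_user(sample, src + i))
			return -EFAULT;		/* user pointer faulted */
		dst[i] = sample;
	}
	return 0;
}

For a plain bulk transfer copy_from_user() would be the usual choice; the driver reads one sample at a time because each value may still need conversion (for example the unsigned-data case visible in the hunk below) before it reaches the hardware.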
Reported-by: Linus Torvalds Cc: Link: https://lore.kernel.org/r/20220510103626.16635-1-tiwai@suse.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/isa/wavefront/wavefront_synth.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sound/isa/wavefront/wavefront_synth.c b/sound/isa/wavefront/wavefront_synth.c index d6420d224d09..09b368761cc0 100644 --- a/sound/isa/wavefront/wavefront_synth.c +++ b/sound/isa/wavefront/wavefront_synth.c @@ -1088,7 +1088,8 @@ wavefront_send_sample (snd_wavefront_t *dev, if (dataptr < data_end) { - __get_user (sample_short, dataptr); + if (get_user(sample_short, dataptr)) + return -EFAULT; dataptr += skip; if (data_is_unsigned) { /* GUS ? */ From 18fb7d533c79909675749f1795dc312246143c6d Mon Sep 17 00:00:00 2001 From: Werner Sembach Date: Thu, 12 May 2022 20:09:56 +0200 Subject: [PATCH 0035/1842] ALSA: hda/realtek: Add quirk for TongFang devices with pop noise commit 8b3b2392ed68bcd17c7eb84ca615ce1e5f115b99 upstream. When audio stops playing there is an audible "pop"-noise when using headphones on the TongFang GMxMRxx, GKxNRxx, GMxZGxx, GMxTGxx and GMxAGxx. This quirk fixes this mostly. Signed-off-by: Werner Sembach Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220512180956.281804-1-wse@tuxedocomputers.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 3f880c4fd5e0..f630f9257ad1 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -9011,6 +9011,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802), SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X), SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS), + SND_PCI_QUIRK(0x1d05, 0x1096, "TongFang GMxMRxx", ALC269_FIXUP_NO_SHUTUP), + SND_PCI_QUIRK(0x1d05, 0x1100, "TongFang GKxNRxx", ALC269_FIXUP_NO_SHUTUP), + SND_PCI_QUIRK(0x1d05, 0x1111, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP), + SND_PCI_QUIRK(0x1d05, 0x1119, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP), + SND_PCI_QUIRK(0x1d05, 0x1129, "TongFang GMxZGxx", ALC269_FIXUP_NO_SHUTUP), + SND_PCI_QUIRK(0x1d05, 0x1147, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP), + SND_PCI_QUIRK(0x1d05, 0x115c, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP), + SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP), SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC), From 3ee8e109c3c316073a3e0f83ec0769c7ee8a7375 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 20 May 2022 20:38:06 +0200 Subject: [PATCH 0036/1842] perf: Fix sys_perf_event_open() race against self commit 3ac6487e584a1eb54071dbe1212e05b884136704 upstream. Norbert reported that it's possible to race sys_perf_event_open() such that the looser ends up in another context from the group leader, triggering many WARNs. The move_group case checks for races against itself, but the !move_group case doesn't, seemingly relying on the previous group_leader->ctx == ctx check. However, that check is racy due to not holding any locks at that time. Therefore, re-check the result after acquiring locks and bailing if they no longer match. 
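A generic sketch of that "re-validate under the lock" pattern; the names are illustrative and this is not the perf code itself. The unlocked comparison is only a fast-path filter, and the authoritative comparison is repeated once the mutex is held, bailing out if a concurrent caller changed the relationship in between:

#include <linux/errno.h>
#include <linux/mutex.h>

struct example_ctx {
	struct mutex lock;
};

struct example_event {
	struct example_ctx *ctx;	/* may be changed by a concurrent syscall */
};

static int attach_to_group(struct example_event *event,
			   struct example_event *group_leader,
			   struct example_ctx *ctx)
{
	int err = 0;

	/* Early check: racy, no locks held. */
	if (group_leader->ctx != ctx)
		return -EINVAL;

	mutex_lock(&ctx->lock);
	/* Re-read under the lock: the leader may have moved since the
	 * check above. */
	if (group_leader->ctx != ctx) {
		err = -EINVAL;
		goto out_unlock;
	}

	/* ... safe to attach 'event' to the leader's group here ... */

out_unlock:
	mutex_unlock(&ctx->lock);
	return err;
}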
Additionally, clarify the not_move_group case from the move_group-vs-move_group race. Fixes: f63a8daa5812 ("perf: Fix event->ctx locking") Reported-by: Norbert Slusarek Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- kernel/events/core.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/kernel/events/core.c b/kernel/events/core.c index 9aa6563587d8..8ba155a7b59e 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -11946,6 +11946,9 @@ SYSCALL_DEFINE5(perf_event_open, * Do not allow to attach to a group in a different task * or CPU context. If we're moving SW events, we'll fix * this up later, so allow that. + * + * Racy, not holding group_leader->ctx->mutex, see comment with + * perf_event_ctx_lock(). */ if (!move_group && group_leader->ctx != ctx) goto err_context; @@ -12013,6 +12016,7 @@ SYSCALL_DEFINE5(perf_event_open, } else { perf_event_ctx_unlock(group_leader, gctx); move_group = 0; + goto not_move_group; } } @@ -12029,7 +12033,17 @@ SYSCALL_DEFINE5(perf_event_open, } } else { mutex_lock(&ctx->mutex); + + /* + * Now that we hold ctx->lock, (re)validate group_leader->ctx == ctx, + * see the group_leader && !move_group test earlier. + */ + if (group_leader && group_leader->ctx != ctx) { + err = -EINVAL; + goto err_locked; + } } +not_move_group: if (ctx->task == TASK_TOMBSTONE) { err = -ESRCH; From b42e5e3a84ddbcbb104d7c69003aac806cdcdf26 Mon Sep 17 00:00:00 2001 From: Ondrej Mosnacek Date: Tue, 17 May 2022 14:08:16 +0200 Subject: [PATCH 0037/1842] selinux: fix bad cleanup on error in hashtab_duplicate() commit 6254bd3db316c9ccb3b05caa8b438be63245466f upstream. The code attempts to free the 'new' pointer using kmem_cache_free(), which is wrong because this function isn't responsible of freeing it. Instead, the function should free new->htable and clear the contents of *new (to prevent double-free). Cc: stable@vger.kernel.org Fixes: c7c556f1e81b ("selinux: refactor changing booleans") Reported-by: Wander Lairson Costa Signed-off-by: Ondrej Mosnacek Signed-off-by: Paul Moore Signed-off-by: Greg Kroah-Hartman --- security/selinux/ss/hashtab.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/security/selinux/ss/hashtab.c b/security/selinux/ss/hashtab.c index 7335f67ce54e..e8960a59586c 100644 --- a/security/selinux/ss/hashtab.c +++ b/security/selinux/ss/hashtab.c @@ -178,7 +178,8 @@ int hashtab_duplicate(struct hashtab *new, struct hashtab *orig, kmem_cache_free(hashtab_node_cachep, cur); } } - kmem_cache_free(hashtab_node_cachep, new); + kfree(new->htable); + memset(new, 0, sizeof(*new)); return -ENOMEM; } From ec0d801d1a44d9259377142c6218885ecd685e41 Mon Sep 17 00:00:00 2001 From: Al Viro Date: Mon, 16 May 2022 16:42:13 +0800 Subject: [PATCH 0038/1842] Fix double fget() in vhost_net_set_backend() commit fb4554c2232e44d595920f4d5c66cf8f7d13f9bc upstream. Descriptor table is a shared resource; two fget() on the same descriptor may return different struct file references. get_tap_ptr_ring() is called after we'd found (and pinned) the socket we'll be using and it tries to find the private tun/tap data structures associated with it. Redoing the lookup by the same file descriptor we'd used to get the socket is racy - we need to same struct file. Thanks to Jason for spotting a braino in the original variant of patch - I'd missed the use of fd == -1 for disabling backend, and in that case we can end up with sock == NULL and sock != oldsock. Cc: stable@kernel.org Acked-by: Michael S. 
Tsirkin Signed-off-by: Jason Wang Signed-off-by: Al Viro Signed-off-by: Greg Kroah-Hartman --- drivers/vhost/net.c | 15 +++++++-------- 1 file changed, 7 insertions(+), 8 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index e303f6f073d2..5beb20768b20 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -1450,13 +1450,9 @@ static struct socket *get_raw_socket(int fd) return ERR_PTR(r); } -static struct ptr_ring *get_tap_ptr_ring(int fd) +static struct ptr_ring *get_tap_ptr_ring(struct file *file) { struct ptr_ring *ring; - struct file *file = fget(fd); - - if (!file) - return NULL; ring = tun_get_tx_ring(file); if (!IS_ERR(ring)) goto out; @@ -1465,7 +1461,6 @@ static struct ptr_ring *get_tap_ptr_ring(int fd) goto out; ring = NULL; out: - fput(file); return ring; } @@ -1552,8 +1547,12 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd) r = vhost_net_enable_vq(n, vq); if (r) goto err_used; - if (index == VHOST_NET_VQ_RX) - nvq->rx_ring = get_tap_ptr_ring(fd); + if (index == VHOST_NET_VQ_RX) { + if (sock) + nvq->rx_ring = get_tap_ptr_ring(sock->file); + else + nvq->rx_ring = NULL; + } oldubufs = nvq->ubufs; nvq->ubufs = ubufs; From 146128ba265d27b65398faccbf97040f308a3892 Mon Sep 17 00:00:00 2001 From: "Rafael J. Wysocki" Date: Thu, 31 Mar 2022 19:38:51 +0200 Subject: [PATCH 0039/1842] PCI/PM: Avoid putting Elo i2 PCIe Ports in D3cold commit 92597f97a40bf661bebceb92e26ff87c76d562d4 upstream. If a Root Port on Elo i2 is put into D3cold and then back into D0, the downstream device becomes permanently inaccessible, so add a bridge D3 DMI quirk for that system. This was exposed by 14858dcc3b35 ("PCI: Use pci_update_current_state() in pci_enable_device_flags()"), but before that commit the Root Port in question had never been put into D3cold for real due to a mismatch between its power state retrieved from the PCI_PM_CTRL register (which was accessible even though the platform firmware indicated that the port was in D3cold) and the state of an ACPI power resource involved in its power management. BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215715 Link: https://lore.kernel.org/r/11980172.O9o76ZdvQC@kreacher Reported-by: Stefan Gottwald Signed-off-by: Rafael J. Wysocki Signed-off-by: Bjorn Helgaas Cc: stable@vger.kernel.org # v5.15+ Signed-off-by: Greg Kroah-Hartman --- drivers/pci/pci.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 0d7109018a91..cda17c615148 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -2829,6 +2829,16 @@ static const struct dmi_system_id bridge_d3_blacklist[] = { DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."), DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"), }, + /* + * Downstream device is not accessible after putting a root port + * into D3cold and back into D0 on Elo i2. + */ + .ident = "Elo i2", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Elo Touch Solutions"), + DMI_MATCH(DMI_PRODUCT_NAME, "Elo i2"), + DMI_MATCH(DMI_PRODUCT_VERSION, "RevB"), + }, }, #endif { } From b49bc8d615eea1043cb238318af04f8876e846b3 Mon Sep 17 00:00:00 2001 From: Sean Christopherson Date: Wed, 11 May 2022 14:51:22 +0000 Subject: [PATCH 0040/1842] KVM: x86/mmu: Update number of zapped pages even if page list is stable commit b28cb0cd2c5e80a8c0feb408a0e4b0dbb6d132c5 upstream. 
When zapping obsolete pages, update the running count of zapped pages regardless of whether or not the list has become unstable due to zapping a shadow page with its own child shadow pages. If the VM is backed by mostly 4kb pages, KVM can zap an absurd number of SPTEs without bumping the batch count and thus without yielding. In the worst case scenario, this can cause a soft lokcup. watchdog: BUG: soft lockup - CPU#12 stuck for 22s! [dirty_log_perf_:13020] RIP: 0010:workingset_activation+0x19/0x130 mark_page_accessed+0x266/0x2e0 kvm_set_pfn_accessed+0x31/0x40 mmu_spte_clear_track_bits+0x136/0x1c0 drop_spte+0x1a/0xc0 mmu_page_zap_pte+0xef/0x120 __kvm_mmu_prepare_zap_page+0x205/0x5e0 kvm_mmu_zap_all_fast+0xd7/0x190 kvm_mmu_invalidate_zap_pages_in_memslot+0xe/0x10 kvm_page_track_flush_slot+0x5c/0x80 kvm_arch_flush_shadow_memslot+0xe/0x10 kvm_set_memslot+0x1a8/0x5d0 __kvm_set_memory_region+0x337/0x590 kvm_vm_ioctl+0xb08/0x1040 Fixes: fbb158cb88b6 ("KVM: x86/mmu: Revert "Revert "KVM: MMU: zap pages in batch""") Reported-by: David Matlack Reviewed-by: Ben Gardon Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson Message-Id: <20220511145122.3133334-1-seanjc@google.com> Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/mmu/mmu.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 70ef5b542681..306268f90455 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5375,6 +5375,7 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm) { struct kvm_mmu_page *sp, *node; int nr_zapped, batch = 0; + bool unstable; restart: list_for_each_entry_safe_reverse(sp, node, @@ -5406,11 +5407,12 @@ static void kvm_zap_obsolete_pages(struct kvm *kvm) goto restart; } - if (__kvm_mmu_prepare_zap_page(kvm, sp, - &kvm->arch.zapped_obsolete_pages, &nr_zapped)) { - batch += nr_zapped; + unstable = __kvm_mmu_prepare_zap_page(kvm, sp, + &kvm->arch.zapped_obsolete_pages, &nr_zapped); + batch += nr_zapped; + + if (unstable) goto restart; - } } /* From a817f78ed69bba10d724587dab84a164dac05900 Mon Sep 17 00:00:00 2001 From: Prakruthi Deepak Heragu Date: Fri, 13 May 2022 10:46:54 -0700 Subject: [PATCH 0041/1842] arm64: paravirt: Use RCU read locks to guard stolen_time commit 19bef63f951e47dd4ba54810e6f7c7ff9344a3ef upstream. During hotplug, the stolen time data structure is unmapped and memset. There is a possibility of the timer IRQ being triggered before memset and stolen time is getting updated as part of this timer IRQ handler. This causes the below crash in timer handler - [ 3457.473139][ C5] Unable to handle kernel paging request at virtual address ffffffc03df05148 ... 
[ 3458.154398][ C5] Call trace: [ 3458.157648][ C5] para_steal_clock+0x30/0x50 [ 3458.162319][ C5] irqtime_account_process_tick+0x30/0x194 [ 3458.168148][ C5] account_process_tick+0x3c/0x280 [ 3458.173274][ C5] update_process_times+0x5c/0xf4 [ 3458.178311][ C5] tick_sched_timer+0x180/0x384 [ 3458.183164][ C5] __run_hrtimer+0x160/0x57c [ 3458.187744][ C5] hrtimer_interrupt+0x258/0x684 [ 3458.192698][ C5] arch_timer_handler_virt+0x5c/0xa0 [ 3458.198002][ C5] handle_percpu_devid_irq+0xdc/0x414 [ 3458.203385][ C5] handle_domain_irq+0xa8/0x168 [ 3458.208241][ C5] gic_handle_irq.34493+0x54/0x244 [ 3458.213359][ C5] call_on_irq_stack+0x40/0x70 [ 3458.218125][ C5] do_interrupt_handler+0x60/0x9c [ 3458.223156][ C5] el1_interrupt+0x34/0x64 [ 3458.227560][ C5] el1h_64_irq_handler+0x1c/0x2c [ 3458.232503][ C5] el1h_64_irq+0x7c/0x80 [ 3458.236736][ C5] free_vmap_area_noflush+0x108/0x39c [ 3458.242126][ C5] remove_vm_area+0xbc/0x118 [ 3458.246714][ C5] vm_remove_mappings+0x48/0x2a4 [ 3458.251656][ C5] __vunmap+0x154/0x278 [ 3458.255796][ C5] stolen_time_cpu_down_prepare+0xc0/0xd8 [ 3458.261542][ C5] cpuhp_invoke_callback+0x248/0xc34 [ 3458.266842][ C5] cpuhp_thread_fun+0x1c4/0x248 [ 3458.271696][ C5] smpboot_thread_fn+0x1b0/0x400 [ 3458.276638][ C5] kthread+0x17c/0x1e0 [ 3458.280691][ C5] ret_from_fork+0x10/0x20 As a fix, introduce rcu lock to update stolen time structure. Fixes: 75df529bec91 ("arm64: paravirt: Initialize steal time when cpu is online") Cc: stable@vger.kernel.org Suggested-by: Will Deacon Signed-off-by: Prakruthi Deepak Heragu Signed-off-by: Elliot Berman Reviewed-by: Srivatsa S. Bhat (VMware) Link: https://lore.kernel.org/r/20220513174654.362169-1-quic_eberman@quicinc.com Signed-off-by: Will Deacon Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/paravirt.c | 29 +++++++++++++++++++++-------- 1 file changed, 21 insertions(+), 8 deletions(-) diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c index c07d7a034941..69ec670bcc70 100644 --- a/arch/arm64/kernel/paravirt.c +++ b/arch/arm64/kernel/paravirt.c @@ -30,7 +30,7 @@ struct paravirt_patch_template pv_ops; EXPORT_SYMBOL_GPL(pv_ops); struct pv_time_stolen_time_region { - struct pvclock_vcpu_stolen_time *kaddr; + struct pvclock_vcpu_stolen_time __rcu *kaddr; }; static DEFINE_PER_CPU(struct pv_time_stolen_time_region, stolen_time_region); @@ -47,7 +47,9 @@ early_param("no-steal-acc", parse_no_stealacc); /* return stolen time in ns by asking the hypervisor */ static u64 pv_steal_clock(int cpu) { + struct pvclock_vcpu_stolen_time *kaddr = NULL; struct pv_time_stolen_time_region *reg; + u64 ret = 0; reg = per_cpu_ptr(&stolen_time_region, cpu); @@ -56,28 +58,37 @@ static u64 pv_steal_clock(int cpu) * online notification callback runs. Until the callback * has run we just return zero. 
*/ - if (!reg->kaddr) + rcu_read_lock(); + kaddr = rcu_dereference(reg->kaddr); + if (!kaddr) { + rcu_read_unlock(); return 0; + } - return le64_to_cpu(READ_ONCE(reg->kaddr->stolen_time)); + ret = le64_to_cpu(READ_ONCE(kaddr->stolen_time)); + rcu_read_unlock(); + return ret; } static int stolen_time_cpu_down_prepare(unsigned int cpu) { + struct pvclock_vcpu_stolen_time *kaddr = NULL; struct pv_time_stolen_time_region *reg; reg = this_cpu_ptr(&stolen_time_region); if (!reg->kaddr) return 0; - memunmap(reg->kaddr); - memset(reg, 0, sizeof(*reg)); + kaddr = rcu_replace_pointer(reg->kaddr, NULL, true); + synchronize_rcu(); + memunmap(kaddr); return 0; } static int stolen_time_cpu_online(unsigned int cpu) { + struct pvclock_vcpu_stolen_time *kaddr = NULL; struct pv_time_stolen_time_region *reg; struct arm_smccc_res res; @@ -88,17 +99,19 @@ static int stolen_time_cpu_online(unsigned int cpu) if (res.a0 == SMCCC_RET_NOT_SUPPORTED) return -EINVAL; - reg->kaddr = memremap(res.a0, + kaddr = memremap(res.a0, sizeof(struct pvclock_vcpu_stolen_time), MEMREMAP_WB); + rcu_assign_pointer(reg->kaddr, kaddr); + if (!reg->kaddr) { pr_warn("Failed to map stolen time data structure\n"); return -ENOMEM; } - if (le32_to_cpu(reg->kaddr->revision) != 0 || - le32_to_cpu(reg->kaddr->attributes) != 0) { + if (le32_to_cpu(kaddr->revision) != 0 || + le32_to_cpu(kaddr->attributes) != 0) { pr_warn_once("Unexpected revision or attributes in stolen time data\n"); return -ENXIO; } From 6013ef5f51e0861a904b74a694e621bb451fe7ae Mon Sep 17 00:00:00 2001 From: Catalin Marinas Date: Tue, 17 May 2022 10:35:32 +0100 Subject: [PATCH 0042/1842] arm64: mte: Ensure the cleared tags are visible before setting the PTE commit 1d0cb4c8864addc362bae98e8ffa5500c87e1227 upstream. As an optimisation, only pages mapped with PROT_MTE in user space have the MTE tags zeroed. This is done lazily at the set_pte_at() time via mte_sync_tags(). However, this function is missing a barrier and another CPU may see the PTE updated before the zeroed tags are visible. Add an smp_wmb() barrier if the mapping is Normal Tagged. Signed-off-by: Catalin Marinas Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE") Cc: # 5.10.x Reported-by: Vladimir Murzin Cc: Will Deacon Reviewed-by: Steven Price Tested-by: Vladimir Murzin Link: https://lore.kernel.org/r/20220517093532.127095-1-catalin.marinas@arm.com Signed-off-by: Will Deacon Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kernel/mte.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c index 7a66a7d9c1ff..4a069f85bd91 100644 --- a/arch/arm64/kernel/mte.c +++ b/arch/arm64/kernel/mte.c @@ -45,6 +45,9 @@ void mte_sync_tags(pte_t *ptep, pte_t pte) if (!test_and_set_bit(PG_mte_tagged, &page->flags)) mte_sync_page_tags(page, ptep, check_swap); } + + /* ensure the tags are visible before the PTE is set */ + smp_wmb(); } int memcmp_pages(struct page *page1, struct page *page2) From 233a3cc60e7a8fe0be8cf9934ae7b67ba25a866c Mon Sep 17 00:00:00 2001 From: Ondrej Mosnacek Date: Tue, 3 May 2022 13:50:10 +0200 Subject: [PATCH 0043/1842] crypto: qcom-rng - fix infinite loop on requests not multiple of WORD_SZ commit 16287397ec5c08aa58db6acf7dbc55470d78087d upstream. The commit referenced in the Fixes tag removed the 'break' from the else branch in qcom_rng_read(), causing an infinite loop whenever 'max' is not a multiple of WORD_SZ. This can be reproduced e.g. 
by running: kcapi-rng -b 67 >/dev/null There are many ways to fix this without adding back the 'break', but they all seem more awkward than simply adding it back, so do just that. Tested on a machine with Qualcomm Amberwing processor. Fixes: a680b1832ced ("crypto: qcom-rng - ensure buffer for generate is completely filled") Cc: stable@vger.kernel.org Signed-off-by: Ondrej Mosnacek Reviewed-by: Brian Masney Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman --- drivers/crypto/qcom-rng.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/crypto/qcom-rng.c b/drivers/crypto/qcom-rng.c index 11f30fd48c14..031b5f701a0a 100644 --- a/drivers/crypto/qcom-rng.c +++ b/drivers/crypto/qcom-rng.c @@ -65,6 +65,7 @@ static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max) } else { /* copy only remaining bytes */ memcpy(data, &val, max - currsize); + break; } } while (currsize < max); From 8ceca1a0693a314712e7b1b727795b7be55f32e9 Mon Sep 17 00:00:00 2001 From: Ilya Dryomov Date: Sat, 14 May 2022 12:16:47 +0200 Subject: [PATCH 0044/1842] libceph: fix potential use-after-free on linger ping and resends commit 75dbb685f4e8786c33ddef8279bab0eadfb0731f upstream. request_reinit() is not only ugly as the comment rightfully suggests, but also unsafe. Even though it is called with osdc->lock held for write in all cases, resetting the OSD request refcount can still race with handle_reply() and result in use-after-free. Taking linger ping as an example: handle_timeout thread handle_reply thread down_read(&osdc->lock) req = lookup_request(...) ... finish_request(req) # unregisters up_read(&osdc->lock) __complete_request(req) linger_ping_cb(req) # req->r_kref == 2 because handle_reply still holds its ref down_write(&osdc->lock) send_linger_ping(lreq) req = lreq->ping_req # same req # cancel_linger_request is NOT # called - handle_reply already # unregistered request_reinit(req) WARN_ON(req->r_kref != 1) # fires request_init(req) kref_init(req->r_kref) # req->r_kref == 1 after kref_init ceph_osdc_put_request(req) kref_put(req->r_kref) # req->r_kref == 0 after kref_put, req is freed !!! This happens because send_linger_ping() always (re)uses the same OSD request for watch ping requests, relying on cancel_linger_request() to unregister it from the OSD client and rip its messages out from the messenger. send_linger() does the same for watch/notify registration and watch reconnect requests. Unfortunately cancel_request() doesn't guarantee that after it returns the OSD client would be completely done with the OSD request -- a ref could still be held and the callback (if specified) could still be invoked too. The original motivation for request_reinit() was inability to deal with allocation failures in send_linger() and send_linger_ping(). Switching to using osdc->req_mempool (currently only used by CephFS) respects that and allows us to get rid of request_reinit(). 
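To see why re-initializing a live, refcounted object is unsafe, here is a deliberately tiny, single-threaded replay of the interleaving above; the toy struct req and its helpers stand in for ceph_osd_request/kref purely for illustration and are not the real API.

  #include <stdio.h>
  #include <stdlib.h>

  struct req {
          int refcnt;
  };

  static struct req *req_alloc(void)
  {
          struct req *r = calloc(1, sizeof(*r));
          r->refcnt = 1;
          return r;
  }

  static void req_get(struct req *r) { r->refcnt++; }

  static void req_put(struct req *r)
  {
          if (--r->refcnt == 0) {
                  printf("freeing req %p\n", (void *)r);
                  free(r);
          }
  }

  int main(void)
  {
          struct req *ping = req_alloc();  /* like lreq->ping_req, refcnt == 1 */

          req_get(ping);                   /* handle_reply still holds a ref: 2 */

          /* request_reinit()-style reset: blindly restart the count at 1
           * even though another holder exists. */
          ping->refcnt = 1;

          req_put(ping);                   /* the other holder drops its ref: freed */
          /* ping now dangles; any further use is a use-after-free.
           * Allocating a fresh request per (re)send and only ever putting
           * the old one avoids this. */
          return 0;
  }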
Cc: stable@vger.kernel.org Signed-off-by: Ilya Dryomov Reviewed-by: Xiubo Li Acked-by: Jeff Layton Signed-off-by: Greg Kroah-Hartman --- include/linux/ceph/osd_client.h | 3 + net/ceph/osd_client.c | 310 +++++++++++++------------------- 2 files changed, 126 insertions(+), 187 deletions(-) diff --git a/include/linux/ceph/osd_client.h b/include/linux/ceph/osd_client.h index 83fa08a06507..787fff5ec7f5 100644 --- a/include/linux/ceph/osd_client.h +++ b/include/linux/ceph/osd_client.h @@ -287,6 +287,9 @@ struct ceph_osd_linger_request { rados_watcherrcb_t errcb; void *data; + struct ceph_pagelist *request_pl; + struct page **notify_id_pages; + struct page ***preply_pages; size_t *preply_len; }; diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c index 7901ab6c79fd..1e9fab79e245 100644 --- a/net/ceph/osd_client.c +++ b/net/ceph/osd_client.c @@ -537,43 +537,6 @@ static void request_init(struct ceph_osd_request *req) target_init(&req->r_t); } -/* - * This is ugly, but it allows us to reuse linger registration and ping - * requests, keeping the structure of the code around send_linger{_ping}() - * reasonable. Setting up a min_nr=2 mempool for each linger request - * and dealing with copying ops (this blasts req only, watch op remains - * intact) isn't any better. - */ -static void request_reinit(struct ceph_osd_request *req) -{ - struct ceph_osd_client *osdc = req->r_osdc; - bool mempool = req->r_mempool; - unsigned int num_ops = req->r_num_ops; - u64 snapid = req->r_snapid; - struct ceph_snap_context *snapc = req->r_snapc; - bool linger = req->r_linger; - struct ceph_msg *request_msg = req->r_request; - struct ceph_msg *reply_msg = req->r_reply; - - dout("%s req %p\n", __func__, req); - WARN_ON(kref_read(&req->r_kref) != 1); - request_release_checks(req); - - WARN_ON(kref_read(&request_msg->kref) != 1); - WARN_ON(kref_read(&reply_msg->kref) != 1); - target_destroy(&req->r_t); - - request_init(req); - req->r_osdc = osdc; - req->r_mempool = mempool; - req->r_num_ops = num_ops; - req->r_snapid = snapid; - req->r_snapc = snapc; - req->r_linger = linger; - req->r_request = request_msg; - req->r_reply = reply_msg; -} - struct ceph_osd_request *ceph_osdc_alloc_request(struct ceph_osd_client *osdc, struct ceph_snap_context *snapc, unsigned int num_ops, @@ -918,14 +881,30 @@ EXPORT_SYMBOL(osd_req_op_xattr_init); * @watch_opcode: CEPH_OSD_WATCH_OP_* */ static void osd_req_op_watch_init(struct ceph_osd_request *req, int which, - u64 cookie, u8 watch_opcode) + u8 watch_opcode, u64 cookie, u32 gen) { struct ceph_osd_req_op *op; op = osd_req_op_init(req, which, CEPH_OSD_OP_WATCH, 0); op->watch.cookie = cookie; op->watch.op = watch_opcode; - op->watch.gen = 0; + op->watch.gen = gen; +} + +/* + * prot_ver, timeout and notify payload (may be empty) should already be + * encoded in @request_pl + */ +static void osd_req_op_notify_init(struct ceph_osd_request *req, int which, + u64 cookie, struct ceph_pagelist *request_pl) +{ + struct ceph_osd_req_op *op; + + op = osd_req_op_init(req, which, CEPH_OSD_OP_NOTIFY, 0); + op->notify.cookie = cookie; + + ceph_osd_data_pagelist_init(&op->notify.request_data, request_pl); + op->indata_len = request_pl->length; } /* @@ -2727,10 +2706,13 @@ static void linger_release(struct kref *kref) WARN_ON(!list_empty(&lreq->pending_lworks)); WARN_ON(lreq->osd); - if (lreq->reg_req) - ceph_osdc_put_request(lreq->reg_req); - if (lreq->ping_req) - ceph_osdc_put_request(lreq->ping_req); + if (lreq->request_pl) + ceph_pagelist_release(lreq->request_pl); + if (lreq->notify_id_pages) + 
ceph_release_page_vector(lreq->notify_id_pages, 1); + + ceph_osdc_put_request(lreq->reg_req); + ceph_osdc_put_request(lreq->ping_req); target_destroy(&lreq->t); kfree(lreq); } @@ -2999,6 +2981,12 @@ static void linger_commit_cb(struct ceph_osd_request *req) struct ceph_osd_linger_request *lreq = req->r_priv; mutex_lock(&lreq->lock); + if (req != lreq->reg_req) { + dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n", + __func__, lreq, lreq->linger_id, req, lreq->reg_req); + goto out; + } + dout("%s lreq %p linger_id %llu result %d\n", __func__, lreq, lreq->linger_id, req->r_result); linger_reg_commit_complete(lreq, req->r_result); @@ -3022,6 +3010,7 @@ static void linger_commit_cb(struct ceph_osd_request *req) } } +out: mutex_unlock(&lreq->lock); linger_put(lreq); } @@ -3044,6 +3033,12 @@ static void linger_reconnect_cb(struct ceph_osd_request *req) struct ceph_osd_linger_request *lreq = req->r_priv; mutex_lock(&lreq->lock); + if (req != lreq->reg_req) { + dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n", + __func__, lreq, lreq->linger_id, req, lreq->reg_req); + goto out; + } + dout("%s lreq %p linger_id %llu result %d last_error %d\n", __func__, lreq, lreq->linger_id, req->r_result, lreq->last_error); if (req->r_result < 0) { @@ -3053,46 +3048,64 @@ static void linger_reconnect_cb(struct ceph_osd_request *req) } } +out: mutex_unlock(&lreq->lock); linger_put(lreq); } static void send_linger(struct ceph_osd_linger_request *lreq) { - struct ceph_osd_request *req = lreq->reg_req; - struct ceph_osd_req_op *op = &req->r_ops[0]; + struct ceph_osd_client *osdc = lreq->osdc; + struct ceph_osd_request *req; + int ret; - verify_osdc_wrlocked(req->r_osdc); + verify_osdc_wrlocked(osdc); + mutex_lock(&lreq->lock); dout("%s lreq %p linger_id %llu\n", __func__, lreq, lreq->linger_id); - if (req->r_osd) - cancel_linger_request(req); + if (lreq->reg_req) { + if (lreq->reg_req->r_osd) + cancel_linger_request(lreq->reg_req); + ceph_osdc_put_request(lreq->reg_req); + } + + req = ceph_osdc_alloc_request(osdc, NULL, 1, true, GFP_NOIO); + BUG_ON(!req); - request_reinit(req); target_copy(&req->r_t, &lreq->t); req->r_mtime = lreq->mtime; - mutex_lock(&lreq->lock); if (lreq->is_watch && lreq->committed) { - WARN_ON(op->op != CEPH_OSD_OP_WATCH || - op->watch.cookie != lreq->linger_id); - op->watch.op = CEPH_OSD_WATCH_OP_RECONNECT; - op->watch.gen = ++lreq->register_gen; + osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_RECONNECT, + lreq->linger_id, ++lreq->register_gen); dout("lreq %p reconnect register_gen %u\n", lreq, - op->watch.gen); + req->r_ops[0].watch.gen); req->r_callback = linger_reconnect_cb; } else { - if (!lreq->is_watch) + if (lreq->is_watch) { + osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_WATCH, + lreq->linger_id, 0); + } else { lreq->notify_id = 0; - else - WARN_ON(op->watch.op != CEPH_OSD_WATCH_OP_WATCH); + + refcount_inc(&lreq->request_pl->refcnt); + osd_req_op_notify_init(req, 0, lreq->linger_id, + lreq->request_pl); + ceph_osd_data_pages_init( + osd_req_op_data(req, 0, notify, response_data), + lreq->notify_id_pages, PAGE_SIZE, 0, false, false); + } dout("lreq %p register\n", lreq); req->r_callback = linger_commit_cb; } - mutex_unlock(&lreq->lock); + + ret = ceph_osdc_alloc_messages(req, GFP_NOIO); + BUG_ON(ret); req->r_priv = linger_get(lreq); req->r_linger = true; + lreq->reg_req = req; + mutex_unlock(&lreq->lock); submit_request(req, true); } @@ -3102,6 +3115,12 @@ static void linger_ping_cb(struct ceph_osd_request *req) struct ceph_osd_linger_request *lreq = req->r_priv; 
mutex_lock(&lreq->lock); + if (req != lreq->ping_req) { + dout("%s lreq %p linger_id %llu unknown req (%p != %p)\n", + __func__, lreq, lreq->linger_id, req, lreq->ping_req); + goto out; + } + dout("%s lreq %p linger_id %llu result %d ping_sent %lu last_error %d\n", __func__, lreq, lreq->linger_id, req->r_result, lreq->ping_sent, lreq->last_error); @@ -3117,6 +3136,7 @@ static void linger_ping_cb(struct ceph_osd_request *req) lreq->register_gen, req->r_ops[0].watch.gen); } +out: mutex_unlock(&lreq->lock); linger_put(lreq); } @@ -3124,8 +3144,8 @@ static void linger_ping_cb(struct ceph_osd_request *req) static void send_linger_ping(struct ceph_osd_linger_request *lreq) { struct ceph_osd_client *osdc = lreq->osdc; - struct ceph_osd_request *req = lreq->ping_req; - struct ceph_osd_req_op *op = &req->r_ops[0]; + struct ceph_osd_request *req; + int ret; if (ceph_osdmap_flag(osdc, CEPH_OSDMAP_PAUSERD)) { dout("%s PAUSERD\n", __func__); @@ -3137,19 +3157,26 @@ static void send_linger_ping(struct ceph_osd_linger_request *lreq) __func__, lreq, lreq->linger_id, lreq->ping_sent, lreq->register_gen); - if (req->r_osd) - cancel_linger_request(req); + if (lreq->ping_req) { + if (lreq->ping_req->r_osd) + cancel_linger_request(lreq->ping_req); + ceph_osdc_put_request(lreq->ping_req); + } + + req = ceph_osdc_alloc_request(osdc, NULL, 1, true, GFP_NOIO); + BUG_ON(!req); - request_reinit(req); target_copy(&req->r_t, &lreq->t); - - WARN_ON(op->op != CEPH_OSD_OP_WATCH || - op->watch.cookie != lreq->linger_id || - op->watch.op != CEPH_OSD_WATCH_OP_PING); - op->watch.gen = lreq->register_gen; + osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_PING, lreq->linger_id, + lreq->register_gen); req->r_callback = linger_ping_cb; + + ret = ceph_osdc_alloc_messages(req, GFP_NOIO); + BUG_ON(ret); + req->r_priv = linger_get(lreq); req->r_linger = true; + lreq->ping_req = req; ceph_osdc_get_request(req); account_request(req); @@ -3165,12 +3192,6 @@ static void linger_submit(struct ceph_osd_linger_request *lreq) down_write(&osdc->lock); linger_register(lreq); - if (lreq->is_watch) { - lreq->reg_req->r_ops[0].watch.cookie = lreq->linger_id; - lreq->ping_req->r_ops[0].watch.cookie = lreq->linger_id; - } else { - lreq->reg_req->r_ops[0].notify.cookie = lreq->linger_id; - } calc_target(osdc, &lreq->t, false); osd = lookup_create_osd(osdc, lreq->t.osd, true); @@ -3202,9 +3223,9 @@ static void cancel_linger_map_check(struct ceph_osd_linger_request *lreq) */ static void __linger_cancel(struct ceph_osd_linger_request *lreq) { - if (lreq->is_watch && lreq->ping_req->r_osd) + if (lreq->ping_req && lreq->ping_req->r_osd) cancel_linger_request(lreq->ping_req); - if (lreq->reg_req->r_osd) + if (lreq->reg_req && lreq->reg_req->r_osd) cancel_linger_request(lreq->reg_req); cancel_linger_map_check(lreq); unlink_linger(lreq->osd, lreq); @@ -4651,43 +4672,6 @@ void ceph_osdc_sync(struct ceph_osd_client *osdc) } EXPORT_SYMBOL(ceph_osdc_sync); -static struct ceph_osd_request * -alloc_linger_request(struct ceph_osd_linger_request *lreq) -{ - struct ceph_osd_request *req; - - req = ceph_osdc_alloc_request(lreq->osdc, NULL, 1, false, GFP_NOIO); - if (!req) - return NULL; - - ceph_oid_copy(&req->r_base_oid, &lreq->t.base_oid); - ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc); - return req; -} - -static struct ceph_osd_request * -alloc_watch_request(struct ceph_osd_linger_request *lreq, u8 watch_opcode) -{ - struct ceph_osd_request *req; - - req = alloc_linger_request(lreq); - if (!req) - return NULL; - - /* - * Pass 0 for cookie because we don't 
know it yet, it will be - * filled in by linger_submit(). - */ - osd_req_op_watch_init(req, 0, 0, watch_opcode); - - if (ceph_osdc_alloc_messages(req, GFP_NOIO)) { - ceph_osdc_put_request(req); - return NULL; - } - - return req; -} - /* * Returns a handle, caller owns a ref. */ @@ -4717,18 +4701,6 @@ ceph_osdc_watch(struct ceph_osd_client *osdc, lreq->t.flags = CEPH_OSD_FLAG_WRITE; ktime_get_real_ts64(&lreq->mtime); - lreq->reg_req = alloc_watch_request(lreq, CEPH_OSD_WATCH_OP_WATCH); - if (!lreq->reg_req) { - ret = -ENOMEM; - goto err_put_lreq; - } - - lreq->ping_req = alloc_watch_request(lreq, CEPH_OSD_WATCH_OP_PING); - if (!lreq->ping_req) { - ret = -ENOMEM; - goto err_put_lreq; - } - linger_submit(lreq); ret = linger_reg_commit_wait(lreq); if (ret) { @@ -4766,8 +4738,8 @@ int ceph_osdc_unwatch(struct ceph_osd_client *osdc, ceph_oloc_copy(&req->r_base_oloc, &lreq->t.base_oloc); req->r_flags = CEPH_OSD_FLAG_WRITE; ktime_get_real_ts64(&req->r_mtime); - osd_req_op_watch_init(req, 0, lreq->linger_id, - CEPH_OSD_WATCH_OP_UNWATCH); + osd_req_op_watch_init(req, 0, CEPH_OSD_WATCH_OP_UNWATCH, + lreq->linger_id, 0); ret = ceph_osdc_alloc_messages(req, GFP_NOIO); if (ret) @@ -4853,35 +4825,6 @@ int ceph_osdc_notify_ack(struct ceph_osd_client *osdc, } EXPORT_SYMBOL(ceph_osdc_notify_ack); -static int osd_req_op_notify_init(struct ceph_osd_request *req, int which, - u64 cookie, u32 prot_ver, u32 timeout, - void *payload, u32 payload_len) -{ - struct ceph_osd_req_op *op; - struct ceph_pagelist *pl; - int ret; - - op = osd_req_op_init(req, which, CEPH_OSD_OP_NOTIFY, 0); - op->notify.cookie = cookie; - - pl = ceph_pagelist_alloc(GFP_NOIO); - if (!pl) - return -ENOMEM; - - ret = ceph_pagelist_encode_32(pl, 1); /* prot_ver */ - ret |= ceph_pagelist_encode_32(pl, timeout); - ret |= ceph_pagelist_encode_32(pl, payload_len); - ret |= ceph_pagelist_append(pl, payload, payload_len); - if (ret) { - ceph_pagelist_release(pl); - return -ENOMEM; - } - - ceph_osd_data_pagelist_init(&op->notify.request_data, pl); - op->indata_len = pl->length; - return 0; -} - /* * @timeout: in seconds * @@ -4900,7 +4843,6 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc, size_t *preply_len) { struct ceph_osd_linger_request *lreq; - struct page **pages; int ret; WARN_ON(!timeout); @@ -4913,6 +4855,29 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc, if (!lreq) return -ENOMEM; + lreq->request_pl = ceph_pagelist_alloc(GFP_NOIO); + if (!lreq->request_pl) { + ret = -ENOMEM; + goto out_put_lreq; + } + + ret = ceph_pagelist_encode_32(lreq->request_pl, 1); /* prot_ver */ + ret |= ceph_pagelist_encode_32(lreq->request_pl, timeout); + ret |= ceph_pagelist_encode_32(lreq->request_pl, payload_len); + ret |= ceph_pagelist_append(lreq->request_pl, payload, payload_len); + if (ret) { + ret = -ENOMEM; + goto out_put_lreq; + } + + /* for notify_id */ + lreq->notify_id_pages = ceph_alloc_page_vector(1, GFP_NOIO); + if (IS_ERR(lreq->notify_id_pages)) { + ret = PTR_ERR(lreq->notify_id_pages); + lreq->notify_id_pages = NULL; + goto out_put_lreq; + } + lreq->preply_pages = preply_pages; lreq->preply_len = preply_len; @@ -4920,35 +4885,6 @@ int ceph_osdc_notify(struct ceph_osd_client *osdc, ceph_oloc_copy(&lreq->t.base_oloc, oloc); lreq->t.flags = CEPH_OSD_FLAG_READ; - lreq->reg_req = alloc_linger_request(lreq); - if (!lreq->reg_req) { - ret = -ENOMEM; - goto out_put_lreq; - } - - /* - * Pass 0 for cookie because we don't know it yet, it will be - * filled in by linger_submit(). 
- */ - ret = osd_req_op_notify_init(lreq->reg_req, 0, 0, 1, timeout, - payload, payload_len); - if (ret) - goto out_put_lreq; - - /* for notify_id */ - pages = ceph_alloc_page_vector(1, GFP_NOIO); - if (IS_ERR(pages)) { - ret = PTR_ERR(pages); - goto out_put_lreq; - } - ceph_osd_data_pages_init(osd_req_op_data(lreq->reg_req, 0, notify, - response_data), - pages, PAGE_SIZE, 0, false, true); - - ret = ceph_osdc_alloc_messages(lreq->reg_req, GFP_NOIO); - if (ret) - goto out_put_lreq; - linger_submit(lreq); ret = linger_reg_commit_wait(lreq); if (!ret) From e5289affbacc0d88214111774adb02d771d1213d Mon Sep 17 00:00:00 2001 From: Hangyu Hua Date: Mon, 16 May 2022 11:20:42 +0800 Subject: [PATCH 0045/1842] drm/dp/mst: fix a possible memory leak in fetch_monitor_name() commit 6e03b13cc7d9427c2c77feed1549191015615202 upstream. drm_dp_mst_get_edid call kmemdup to create mst_edid. So mst_edid need to be freed after use. Signed-off-by: Hangyu Hua Reviewed-by: Lyude Paul Signed-off-by: Lyude Paul Cc: stable@vger.kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20220516032042.13166-1-hbh25y@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/drm_dp_mst_topology.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c index 1f54e9470165..ab423b0413ee 100644 --- a/drivers/gpu/drm/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/drm_dp_mst_topology.c @@ -4792,6 +4792,7 @@ static void fetch_monitor_name(struct drm_dp_mst_topology_mgr *mgr, mst_edid = drm_dp_mst_get_edid(port->connector, mgr, port); drm_edid_get_monitor_name(mst_edid, name, namelen); + kfree(mst_edid); } /** From 3fc28460998a7a197ce7bc469effa78e32f34872 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?J=C3=A9r=C3=B4me=20Pouiller?= Date: Tue, 17 May 2022 09:27:08 +0200 Subject: [PATCH 0046/1842] dma-buf: fix use of DMA_BUF_SET_NAME_{A,B} in userspace MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 7c3e9fcad9c7d8bb5d69a576044fb16b1d2e8a01 upstream. The typedefs u32 and u64 are not available in userspace. Thus user get an error he try to use DMA_BUF_SET_NAME_A or DMA_BUF_SET_NAME_B: $ gcc -Wall -c -MMD -c -o ioctls_list.o ioctls_list.c In file included from /usr/include/x86_64-linux-gnu/asm/ioctl.h:1, from /usr/include/linux/ioctl.h:5, from /usr/include/asm-generic/ioctls.h:5, from ioctls_list.c:11: ioctls_list.c:463:29: error: ‘u32’ undeclared here (not in a function) 463 | { "DMA_BUF_SET_NAME_A", DMA_BUF_SET_NAME_A, -1, -1 }, // linux/dma-buf.h | ^~~~~~~~~~~~~~~~~~ ioctls_list.c:464:29: error: ‘u64’ undeclared here (not in a function) 464 | { "DMA_BUF_SET_NAME_B", DMA_BUF_SET_NAME_B, -1, -1 }, // linux/dma-buf.h | ^~~~~~~~~~~~~~~~~~ The issue was initially reported here[1]. [1]: https://github.com/jerome-pouiller/ioctl/pull/14 Signed-off-by: Jérôme Pouiller Reviewed-by: Christian König Fixes: a5bff92eaac4 ("dma-buf: Fix SET_NAME ioctl uapi") CC: stable@vger.kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20220517072708.245265-1-Jerome.Pouiller@silabs.com Signed-off-by: Christian König Signed-off-by: Greg Kroah-Hartman --- include/uapi/linux/dma-buf.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index 7f30393b92c3..f76d11725c6c 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -44,7 +44,7 @@ struct dma_buf_sync { * between them in actual uapi, they're just different numbers. 
*/ #define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) -#define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32) -#define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64) +#define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, __u32) +#define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, __u64) #endif From d8ca684c3d3bc9155353fc24b84674e21128f1f6 Mon Sep 17 00:00:00 2001 From: Jae Hyun Yoo Date: Tue, 29 Mar 2022 10:39:26 -0700 Subject: [PATCH 0047/1842] ARM: dts: aspeed-g6: remove FWQSPID group in pinctrl dtsi [ Upstream commit efddaa397cceefb61476e383c26fafd1f8ab6356 ] FWSPIDQ2 and FWSPIDQ3 are not part of FWSPI18 interface so remove FWQSPID group in pinctrl dtsi. These pins must be used with the FWSPI pins that are dedicated for boot SPI interface which provides same 3.3v logic level. Fixes: 2f6edb6bcb2f ("ARM: dts: aspeed: Fix AST2600 quad spi group") Signed-off-by: Jae Hyun Yoo Reviewed-by: Andrew Jeffery Link: https://lore.kernel.org/r/20220329173932.2588289-2-quic_jaehyoo@quicinc.com Signed-off-by: Joel Stanley Signed-off-by: Sasha Levin --- arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi | 5 ----- 1 file changed, 5 deletions(-) diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi index a362714ae9fc..546ce37f3f4e 100644 --- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi +++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi @@ -117,11 +117,6 @@ pinctrl_fwspid_default: fwspid_default { groups = "FWSPID"; }; - pinctrl_fwqspid_default: fwqspid_default { - function = "FWSPID"; - groups = "FWQSPID"; - }; - pinctrl_fwspiwp_default: fwspiwp_default { function = "FWSPIWP"; groups = "FWSPIWP"; From 0a2847d44812d765623d6676b40a8f0cc612e512 Mon Sep 17 00:00:00 2001 From: Jae Hyun Yoo Date: Tue, 29 Mar 2022 10:39:27 -0700 Subject: [PATCH 0048/1842] pinctrl: pinctrl-aspeed-g6: remove FWQSPID group in pinctrl [ Upstream commit 3eef2f48ba0933ba995529f522554ad5c276c39b ] FWSPIDQ2 and FWSPIDQ3 are not part of FWSPI18 interface so remove FWQSPID group in pinctrl. These pins must be used with the FWSPI pins that are dedicated for boot SPI interface which provides same 3.3v logic level. 
Fixes: 2eda1cdec49f ("pinctrl: aspeed: Add AST2600 pinmux support") Signed-off-by: Jae Hyun Yoo Reviewed-by: Andrew Jeffery Link: https://lore.kernel.org/r/20220329173932.2588289-3-quic_jaehyoo@quicinc.com Signed-off-by: Joel Stanley Signed-off-by: Sasha Levin --- drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c | 14 +++----------- 1 file changed, 3 insertions(+), 11 deletions(-) diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c index 5c1a109842a7..c2ba4064ce5b 100644 --- a/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c +++ b/drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c @@ -1224,18 +1224,12 @@ FUNC_GROUP_DECL(SALT8, AA12); FUNC_GROUP_DECL(WDTRST4, AA12); #define AE12 196 -SIG_EXPR_LIST_DECL_SEMG(AE12, FWSPIDQ2, FWQSPID, FWSPID, - SIG_DESC_SET(SCU438, 4)); SIG_EXPR_LIST_DECL_SESG(AE12, GPIOY4, GPIOY4); -PIN_DECL_(AE12, SIG_EXPR_LIST_PTR(AE12, FWSPIDQ2), - SIG_EXPR_LIST_PTR(AE12, GPIOY4)); +PIN_DECL_(AE12, SIG_EXPR_LIST_PTR(AE12, GPIOY4)); #define AF12 197 -SIG_EXPR_LIST_DECL_SEMG(AF12, FWSPIDQ3, FWQSPID, FWSPID, - SIG_DESC_SET(SCU438, 5)); SIG_EXPR_LIST_DECL_SESG(AF12, GPIOY5, GPIOY5); -PIN_DECL_(AF12, SIG_EXPR_LIST_PTR(AF12, FWSPIDQ3), - SIG_EXPR_LIST_PTR(AF12, GPIOY5)); +PIN_DECL_(AF12, SIG_EXPR_LIST_PTR(AF12, GPIOY5)); #define AC12 198 SSSF_PIN_DECL(AC12, GPIOY6, FWSPIABR, SIG_DESC_SET(SCU438, 6)); @@ -1508,9 +1502,8 @@ SIG_EXPR_LIST_DECL_SEMG(Y4, EMMCDAT7, EMMCG8, EMMC, SIG_DESC_SET(SCU404, 3)); PIN_DECL_3(Y4, GPIO18E3, FWSPIDMISO, VBMISO, EMMCDAT7); GROUP_DECL(FWSPID, Y1, Y2, Y3, Y4); -GROUP_DECL(FWQSPID, Y1, Y2, Y3, Y4, AE12, AF12); GROUP_DECL(EMMCG8, AB4, AA4, AC4, AA5, Y5, AB5, AB6, AC5, Y1, Y2, Y3, Y4); -FUNC_DECL_2(FWSPID, FWSPID, FWQSPID); +FUNC_DECL_1(FWSPID, FWSPID); FUNC_GROUP_DECL(VB, Y1, Y2, Y3, Y4); FUNC_DECL_3(EMMC, EMMCG1, EMMCG4, EMMCG8); /* @@ -1906,7 +1899,6 @@ static const struct aspeed_pin_group aspeed_g6_groups[] = { ASPEED_PINCTRL_GROUP(FSI2), ASPEED_PINCTRL_GROUP(FWSPIABR), ASPEED_PINCTRL_GROUP(FWSPID), - ASPEED_PINCTRL_GROUP(FWQSPID), ASPEED_PINCTRL_GROUP(FWSPIWP), ASPEED_PINCTRL_GROUP(GPIT0), ASPEED_PINCTRL_GROUP(GPIT1), From 0599d5a8b4e1a60cc2f8f8af400288f1990ffcc9 Mon Sep 17 00:00:00 2001 From: Jae Hyun Yoo Date: Tue, 29 Mar 2022 10:39:32 -0700 Subject: [PATCH 0049/1842] ARM: dts: aspeed-g6: fix SPI1/SPI2 quad pin group [ Upstream commit 890362d41b244536ab63591f813393f5fdf59ed7 ] Fix incorrect function mappings in pinctrl_qspi1_default and pinctrl_qspi2_default since their function should be SPI1 and SPI2 respectively. 
Fixes: f510f04c8c83 ("ARM: dts: aspeed: Add AST2600 pinmux nodes") Signed-off-by: Jae Hyun Yoo Reviewed-by: Andrew Jeffery Link: https://lore.kernel.org/r/20220329173932.2588289-8-quic_jaehyoo@quicinc.com Signed-off-by: Joel Stanley Signed-off-by: Sasha Levin --- arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi index 546ce37f3f4e..1ef89dd55d92 100644 --- a/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi +++ b/arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi @@ -648,12 +648,12 @@ pinctrl_pwm9g1_default: pwm9g1_default { }; pinctrl_qspi1_default: qspi1_default { - function = "QSPI1"; + function = "SPI1"; groups = "QSPI1"; }; pinctrl_qspi2_default: qspi2_default { - function = "QSPI2"; + function = "SPI2"; groups = "QSPI2"; }; From 998e305bd160c72b462f4f4c05763c7c6cf9f187 Mon Sep 17 00:00:00 2001 From: Alex Elder Date: Thu, 12 May 2022 10:10:32 -0500 Subject: [PATCH 0050/1842] net: ipa: record proper RX transaction count [ Upstream commit d8290cbe1111105f92f0c8ab455bec8bf98d0630 ] Each time we are notified that some number of transactions on an RX channel has completed, we record the number of bytes that have been transferred since the previous notification. We also track the number of transactions completed, but that is not currently being calculated correctly; we're currently counting the number of such notifications, but each notification can represent many transaction completions. Fix this. Fixes: 650d1603825d8 ("soc: qcom: ipa: the generic software interface") Signed-off-by: Alex Elder Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ipa/gsi.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c index 2a65efd3e8da..fe91b72eca36 100644 --- a/drivers/net/ipa/gsi.c +++ b/drivers/net/ipa/gsi.c @@ -1209,9 +1209,10 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index) struct gsi_event *event_done; struct gsi_event *event; struct gsi_trans *trans; + u32 trans_count = 0; u32 byte_count = 0; - u32 old_index; u32 event_avail; + u32 old_index; trans_info = &channel->trans_info; @@ -1232,6 +1233,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index) do { trans->len = __le16_to_cpu(event->len); byte_count += trans->len; + trans_count++; /* Move on to the next event and transaction */ if (--event_avail) @@ -1243,7 +1245,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index) /* We record RX bytes when they are received */ channel->byte_count += byte_count; - channel->trans_count++; + channel->trans_count += trans_count; } /* Initialize a ring, including allocating DMA memory for its entries */ From 1bc27eb71b55e738d5e54ee16a7c561b30441cf5 Mon Sep 17 00:00:00 2001 From: Harini Katakam Date: Thu, 12 May 2022 22:49:00 +0530 Subject: [PATCH 0051/1842] net: macb: Increment rx bd head after allocating skb and buffer [ Upstream commit 9500acc631dbb8b73166e25700e656b11f6007b6 ] In gem_rx_refill rx_prepared_head is incremented at the beginning of the while loop preparing the skb and data buffers. If the skb or data buffer allocation fails, this BD will be unusable BDs until the head loops back to the same BD (and obviously buffer allocation succeeds). In the unlikely event that there's a string of allocation failures, there will be an equal number of unusable BDs and an inconsistent RX BD chain. 
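As a hedged illustration (a userspace-only sketch with invented types, not the macb driver itself), the refill loop wants the shape below, where the producer index advances only once the slot is actually backed by a buffer, so an allocation failure simply retries the same descriptor on the next pass:

  #include <stdlib.h>

  #define RING_SIZE 64

  struct ring {
          void *buf[RING_SIZE];
          unsigned int prepared_head;     /* producer index */
          unsigned int tail;              /* consumer index */
  };

  static void ring_refill(struct ring *r)
  {
          while (r->prepared_head - r->tail < RING_SIZE) {
                  unsigned int entry = r->prepared_head % RING_SIZE;

                  if (!r->buf[entry]) {
                          void *b = malloc(2048);
                          if (!b)
                                  break;  /* slot untouched, retried later */
                          r->buf[entry] = b;
                  }
                  /* Only now is the slot known to be usable. */
                  r->prepared_head++;
          }
  }

  int main(void)
  {
          struct ring r = { 0 };

          ring_refill(&r);        /* toy sketch: buffers intentionally leak */
          return 0;
  }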
Hence increment the head at the end of the while loop to be clean. Fixes: 4df95131ea80 ("net/macb: change RX path for GEM") Signed-off-by: Harini Katakam Signed-off-by: Michal Simek Signed-off-by: Radhey Shyam Pandey Reviewed-by: Claudiu Beznea Link: https://lore.kernel.org/r/20220512171900.32593-1-harini.katakam@xilinx.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/cadence/macb_main.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c index bd13f91efe7c..792c8147c2c4 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -1092,7 +1092,6 @@ static void gem_rx_refill(struct macb_queue *queue) /* Make hw descriptor updates visible to CPU */ rmb(); - queue->rx_prepared_head++; desc = macb_rx_desc(queue, entry); if (!queue->rx_skbuff[entry]) { @@ -1131,6 +1130,7 @@ static void gem_rx_refill(struct macb_queue *queue) dma_wmb(); desc->addr &= ~MACB_BIT(RX_USED); } + queue->rx_prepared_head++; } /* Make descriptor updates visible to hardware */ From 243e72e20446b25496887304f3e01e26702b0ac7 Mon Sep 17 00:00:00 2001 From: Vincent Bernat Date: Sat, 7 Nov 2020 20:35:15 +0100 Subject: [PATCH 0052/1842] net: evaluate net.ipvX.conf.all.disable_policy and disable_xfrm [ Upstream commit 62679a8d3aa4ba15ff63574a43e5686078d7b804 ] The disable_policy and disable_xfrm are a per-interface sysctl to disable IPsec policy or encryption on an interface. However, while a "all" variant is exposed, it was a noop since it was never evaluated. We use the usual "or" logic for this kind of sysctls. Signed-off-by: Vincent Bernat Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/ipv4/route.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 4080e3c6c50d..9bd3cd2177f4 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -1776,7 +1776,7 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, flags |= RTCF_LOCAL; rth = rt_dst_alloc(dev_net(dev)->loopback_dev, flags, RTN_MULTICAST, - IN_DEV_CONF_GET(in_dev, NOPOLICY), false); + IN_DEV_ORCONF(in_dev, NOPOLICY), false); if (!rth) return -ENOBUFS; @@ -1893,8 +1893,8 @@ static int __mkroute_input(struct sk_buff *skb, } rth = rt_dst_alloc(out_dev->dev, 0, res->type, - IN_DEV_CONF_GET(in_dev, NOPOLICY), - IN_DEV_CONF_GET(out_dev, NOXFRM)); + IN_DEV_ORCONF(in_dev, NOPOLICY), + IN_DEV_ORCONF(out_dev, NOXFRM)); if (!rth) { err = -ENOBUFS; goto cleanup; @@ -2276,7 +2276,7 @@ out: return err; rth = rt_dst_alloc(ip_rt_get_dev(net, res), flags | RTCF_LOCAL, res->type, - IN_DEV_CONF_GET(in_dev, NOPOLICY), false); + IN_DEV_ORCONF(in_dev, NOPOLICY), false); if (!rth) goto e_nobufs; @@ -2499,8 +2499,8 @@ static struct rtable *__mkroute_output(const struct fib_result *res, add: rth = rt_dst_alloc(dev_out, flags, type, - IN_DEV_CONF_GET(in_dev, NOPOLICY), - IN_DEV_CONF_GET(in_dev, NOXFRM)); + IN_DEV_ORCONF(in_dev, NOPOLICY), + IN_DEV_ORCONF(in_dev, NOXFRM)); if (!rth) return ERR_PTR(-ENOBUFS); From 5b7f84b1f9f46327360a64c529433fa0d68cc3f4 Mon Sep 17 00:00:00 2001 From: Steffen Klassert Date: Sun, 18 Jul 2021 09:11:06 +0200 Subject: [PATCH 0053/1842] xfrm: Add possibility to set the default to block if we have no policy [ Upstream commit 2d151d39073aff498358543801fca0f670fea981 ] As the default we assume the traffic to pass, if we have no matching IPsec policy. 
With this patch, we have a possibility to change this default from allow to block. It can be configured via netlink. Each direction (input/output/forward) can be configured separately. With the default to block configuered, we need allow policies for all packet flows we accept. We do not use default policy lookup for the loopback device. v1->v2 - fix compiling when XFRM is disabled - Reported-by: kernel test robot Co-developed-by: Christian Langrock Signed-off-by: Christian Langrock Co-developed-by: Antony Antony Signed-off-by: Antony Antony Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- include/net/netns/xfrm.h | 7 ++++++ include/net/xfrm.h | 36 ++++++++++++++++++++++----- include/uapi/linux/xfrm.h | 10 ++++++++ net/xfrm/xfrm_policy.c | 16 ++++++++++++ net/xfrm/xfrm_user.c | 52 +++++++++++++++++++++++++++++++++++++++ 5 files changed, 115 insertions(+), 6 deletions(-) diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h index 22e1bc72b979..b694ff0963cc 100644 --- a/include/net/netns/xfrm.h +++ b/include/net/netns/xfrm.h @@ -64,6 +64,13 @@ struct netns_xfrm { u32 sysctl_aevent_rseqth; int sysctl_larval_drop; u32 sysctl_acq_expires; + + u8 policy_default; +#define XFRM_POL_DEFAULT_IN 1 +#define XFRM_POL_DEFAULT_OUT 2 +#define XFRM_POL_DEFAULT_FWD 4 +#define XFRM_POL_DEFAULT_MASK 7 + #ifdef CONFIG_SYSCTL struct ctl_table_header *sysctl_hdr; #endif diff --git a/include/net/xfrm.h b/include/net/xfrm.h index 0049a7459649..988886f95e5b 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1088,6 +1088,22 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, un } #ifdef CONFIG_XFRM +static inline bool +xfrm_default_allow(struct net *net, int dir) +{ + u8 def = net->xfrm.policy_default; + + switch (dir) { + case XFRM_POLICY_IN: + return def & XFRM_POL_DEFAULT_IN ? false : true; + case XFRM_POLICY_OUT: + return def & XFRM_POL_DEFAULT_OUT ? false : true; + case XFRM_POLICY_FWD: + return def & XFRM_POL_DEFAULT_FWD ? 
false : true; + } + return false; +} + int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb, unsigned short family); @@ -1101,9 +1117,13 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir, if (sk && sk->sk_policy[XFRM_POLICY_IN]) return __xfrm_policy_check(sk, ndir, skb, family); - return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) || - (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || - __xfrm_policy_check(sk, ndir, skb, family); + if (xfrm_default_allow(net, dir)) + return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) || + (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || + __xfrm_policy_check(sk, ndir, skb, family); + else + return (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || + __xfrm_policy_check(sk, ndir, skb, family); } static inline int xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, unsigned short family) @@ -1155,9 +1175,13 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family) { struct net *net = dev_net(skb->dev); - return !net->xfrm.policy_count[XFRM_POLICY_OUT] || - (skb_dst(skb)->flags & DST_NOXFRM) || - __xfrm_route_forward(skb, family); + if (xfrm_default_allow(net, XFRM_POLICY_FWD)) + return !net->xfrm.policy_count[XFRM_POLICY_OUT] || + (skb_dst(skb)->flags & DST_NOXFRM) || + __xfrm_route_forward(skb, family); + else + return (skb_dst(skb)->flags & DST_NOXFRM) || + __xfrm_route_forward(skb, family); } static inline int xfrm4_route_forward(struct sk_buff *skb) diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index 90ddb49fce84..b963e1acf65a 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -213,6 +213,11 @@ enum { XFRM_MSG_GETSPDINFO, #define XFRM_MSG_GETSPDINFO XFRM_MSG_GETSPDINFO + XFRM_MSG_SETDEFAULT, +#define XFRM_MSG_SETDEFAULT XFRM_MSG_SETDEFAULT + XFRM_MSG_GETDEFAULT, +#define XFRM_MSG_GETDEFAULT XFRM_MSG_GETDEFAULT + XFRM_MSG_MAPPING, #define XFRM_MSG_MAPPING XFRM_MSG_MAPPING __XFRM_MSG_MAX @@ -515,6 +520,11 @@ struct xfrm_user_offload { #define XFRM_OFFLOAD_IPV6 1 #define XFRM_OFFLOAD_INBOUND 2 +struct xfrm_userpolicy_default { + __u8 dirmask; + __u8 action; +}; + #ifndef __KERNEL__ /* backwards compatibility for userspace */ #define XFRMGRP_ACQUIRE 1 diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index 3d0ffd927004..2c701fb3a61b 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -3161,6 +3161,11 @@ struct dst_entry *xfrm_lookup_with_ifid(struct net *net, return dst; nopol: + if (!(dst_orig->dev->flags & IFF_LOOPBACK) && + !xfrm_default_allow(net, dir)) { + err = -EPERM; + goto error; + } if (!(flags & XFRM_LOOKUP_ICMP)) { dst = dst_orig; goto ok; @@ -3608,6 +3613,11 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, } if (!pol) { + if (!xfrm_default_allow(net, dir)) { + XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS); + return 0; + } + if (sp && secpath_has_nontransport(sp, 0, &xerr_idx)) { xfrm_secpath_reject(xerr_idx, skb, &fl); XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS); @@ -3662,6 +3672,12 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, tpp[ti++] = &pols[pi]->xfrm_vec[i]; } xfrm_nr = ti; + + if (!xfrm_default_allow(net, dir) && !xfrm_nr) { + XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES); + goto reject; + } + if (npols > 1) { xfrm_tmpl_sort(stp, tpp, xfrm_nr, family); tpp = stp; diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 1ece01cd67a4..dec24f280e83 100644 --- a/net/xfrm/xfrm_user.c +++ 
b/net/xfrm/xfrm_user.c @@ -1914,6 +1914,54 @@ static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb, return skb; } +static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, + struct nlattr **attrs) +{ + struct net *net = sock_net(skb->sk); + struct xfrm_userpolicy_default *up = nlmsg_data(nlh); + u8 dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK; + u8 old_default = net->xfrm.policy_default; + + net->xfrm.policy_default = (old_default & (0xff ^ dirmask)) + | (up->action << up->dirmask); + + rt_genid_bump_all(net); + + return 0; +} + +static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh, + struct nlattr **attrs) +{ + struct sk_buff *r_skb; + struct nlmsghdr *r_nlh; + struct net *net = sock_net(skb->sk); + struct xfrm_userpolicy_default *r_up, *up; + int len = NLMSG_ALIGN(sizeof(struct xfrm_userpolicy_default)); + u32 portid = NETLINK_CB(skb).portid; + u32 seq = nlh->nlmsg_seq; + + up = nlmsg_data(nlh); + + r_skb = nlmsg_new(len, GFP_ATOMIC); + if (!r_skb) + return -ENOMEM; + + r_nlh = nlmsg_put(r_skb, portid, seq, XFRM_MSG_GETDEFAULT, sizeof(*r_up), 0); + if (!r_nlh) { + kfree_skb(r_skb); + return -EMSGSIZE; + } + + r_up = nlmsg_data(r_nlh); + + r_up->action = ((net->xfrm.policy_default & (1 << up->dirmask)) >> up->dirmask); + r_up->dirmask = up->dirmask; + nlmsg_end(r_skb, r_nlh); + + return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid); +} + static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh, struct nlattr **attrs) { @@ -2621,6 +2669,8 @@ const int xfrm_msg_min[XFRM_NR_MSGTYPES] = { [XFRM_MSG_GETSADINFO - XFRM_MSG_BASE] = sizeof(u32), [XFRM_MSG_NEWSPDINFO - XFRM_MSG_BASE] = sizeof(u32), [XFRM_MSG_GETSPDINFO - XFRM_MSG_BASE] = sizeof(u32), + [XFRM_MSG_SETDEFAULT - XFRM_MSG_BASE] = XMSGSIZE(xfrm_userpolicy_default), + [XFRM_MSG_GETDEFAULT - XFRM_MSG_BASE] = XMSGSIZE(xfrm_userpolicy_default), }; EXPORT_SYMBOL_GPL(xfrm_msg_min); @@ -2700,6 +2750,8 @@ static const struct xfrm_link { .nla_pol = xfrma_spd_policy, .nla_max = XFRMA_SPD_MAX }, [XFRM_MSG_GETSPDINFO - XFRM_MSG_BASE] = { .doit = xfrm_get_spdinfo }, + [XFRM_MSG_SETDEFAULT - XFRM_MSG_BASE] = { .doit = xfrm_set_default }, + [XFRM_MSG_GETDEFAULT - XFRM_MSG_BASE] = { .doit = xfrm_get_default }, }; static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, From ab610ee1d1a1d5f9f6e326c21962cb3fc8b81a5b Mon Sep 17 00:00:00 2001 From: Pavel Skripkin Date: Wed, 28 Jul 2021 19:38:18 +0300 Subject: [PATCH 0054/1842] net: xfrm: fix shift-out-of-bounce [ Upstream commit 5d8dbb7fb82b8661c16d496644b931c0e2e3a12e ] We need to check up->dirmask to avoid shift-out-of-bounce bug, since up->dirmask comes from userspace. 
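As a standalone illustration of the bug class (a hedged sketch: DIRMASK_MAX and set_default() are local stand-ins, not the kernel code), shifting by a count greater than or equal to the operand's width is undefined behaviour in C, so a count taken from an untrusted source has to be range-checked first:

  #include <stdint.h>
  #include <stdio.h>

  #define DIRMASK_MAX (sizeof(uint8_t) * 8)   /* like XFRM_USERPOLICY_DIRMASK_MAX */

  static int set_default(uint8_t dirmask_from_user)
  {
          if (dirmask_from_user >= DIRMASK_MAX)
                  return -1;                   /* reject, like -EINVAL */

          /* now provably in range, the shift is well defined */
          uint8_t dirmask = 1u << dirmask_from_user;

          printf("dirmask bit: 0x%02x\n", dirmask);
          return 0;
  }

  int main(void)
  {
          set_default(2);      /* ok */
          set_default(200);    /* would have been UB without the check */
          return 0;
  }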
Also, added XFRM_USERPOLICY_DIRMASK_MAX constant to uapi to inform user-space that up->dirmask has maximum possible value Fixes: 2d151d39073a ("xfrm: Add possibility to set the default to block if we have no policy") Reported-and-tested-by: syzbot+9cd5837a045bbee5b810@syzkaller.appspotmail.com Signed-off-by: Pavel Skripkin Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- include/uapi/linux/xfrm.h | 1 + net/xfrm/xfrm_user.c | 7 ++++++- 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index b963e1acf65a..2a2c4dcb015f 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -521,6 +521,7 @@ struct xfrm_user_offload { #define XFRM_OFFLOAD_INBOUND 2 struct xfrm_userpolicy_default { +#define XFRM_USERPOLICY_DIRMASK_MAX (sizeof(__u8) * 8) __u8 dirmask; __u8 action; }; diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index dec24f280e83..026f29f80f88 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1919,9 +1919,14 @@ static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, { struct net *net = sock_net(skb->sk); struct xfrm_userpolicy_default *up = nlmsg_data(nlh); - u8 dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK; + u8 dirmask; u8 old_default = net->xfrm.policy_default; + if (up->dirmask >= XFRM_USERPOLICY_DIRMASK_MAX) + return -EINVAL; + + dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK; + net->xfrm.policy_default = (old_default & (0xff ^ dirmask)) | (up->action << up->dirmask); From 20fd28df40494400babe79c89549bd8317b7dd9f Mon Sep 17 00:00:00 2001 From: Nicolas Dichtel Date: Tue, 14 Sep 2021 16:46:33 +0200 Subject: [PATCH 0055/1842] xfrm: make user policy API complete [ Upstream commit f8d858e607b2a36808ac6d4218f5f5203d7a7d63 ] >From a userland POV, this API was based on some magic values: - dirmask and action were bitfields but meaning of bits (XFRM_POL_DEFAULT_*) are not exported; - action is confusing, if a bit is set, does it mean drop or accept? Let's try to simplify this uapi by using explicit field and macros. 
Fixes: 2d151d39073a ("xfrm: Add possibility to set the default to block if we have no policy") Signed-off-by: Nicolas Dichtel Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- include/uapi/linux/xfrm.h | 9 ++++++--- net/xfrm/xfrm_user.c | 31 +++++++++++++++++++------------ 2 files changed, 25 insertions(+), 15 deletions(-) diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index 2a2c4dcb015f..6bae68645148 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -521,9 +521,12 @@ struct xfrm_user_offload { #define XFRM_OFFLOAD_INBOUND 2 struct xfrm_userpolicy_default { -#define XFRM_USERPOLICY_DIRMASK_MAX (sizeof(__u8) * 8) - __u8 dirmask; - __u8 action; +#define XFRM_USERPOLICY_UNSPEC 0 +#define XFRM_USERPOLICY_BLOCK 1 +#define XFRM_USERPOLICY_ACCEPT 2 + __u8 in; + __u8 fwd; + __u8 out; }; #ifndef __KERNEL__ diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 026f29f80f88..261953e081fb 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1919,16 +1919,21 @@ static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, { struct net *net = sock_net(skb->sk); struct xfrm_userpolicy_default *up = nlmsg_data(nlh); - u8 dirmask; - u8 old_default = net->xfrm.policy_default; - if (up->dirmask >= XFRM_USERPOLICY_DIRMASK_MAX) - return -EINVAL; + if (up->in == XFRM_USERPOLICY_BLOCK) + net->xfrm.policy_default |= XFRM_POL_DEFAULT_IN; + else if (up->in == XFRM_USERPOLICY_ACCEPT) + net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_IN; - dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK; + if (up->fwd == XFRM_USERPOLICY_BLOCK) + net->xfrm.policy_default |= XFRM_POL_DEFAULT_FWD; + else if (up->fwd == XFRM_USERPOLICY_ACCEPT) + net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_FWD; - net->xfrm.policy_default = (old_default & (0xff ^ dirmask)) - | (up->action << up->dirmask); + if (up->out == XFRM_USERPOLICY_BLOCK) + net->xfrm.policy_default |= XFRM_POL_DEFAULT_OUT; + else if (up->out == XFRM_USERPOLICY_ACCEPT) + net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_OUT; rt_genid_bump_all(net); @@ -1941,13 +1946,11 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh, struct sk_buff *r_skb; struct nlmsghdr *r_nlh; struct net *net = sock_net(skb->sk); - struct xfrm_userpolicy_default *r_up, *up; + struct xfrm_userpolicy_default *r_up; int len = NLMSG_ALIGN(sizeof(struct xfrm_userpolicy_default)); u32 portid = NETLINK_CB(skb).portid; u32 seq = nlh->nlmsg_seq; - up = nlmsg_data(nlh); - r_skb = nlmsg_new(len, GFP_ATOMIC); if (!r_skb) return -ENOMEM; @@ -1960,8 +1963,12 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh, r_up = nlmsg_data(r_nlh); - r_up->action = ((net->xfrm.policy_default & (1 << up->dirmask)) >> up->dirmask); - r_up->dirmask = up->dirmask; + r_up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + r_up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + r_up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; nlmsg_end(r_skb, r_nlh); return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid); From 9856c3a129dd625c26df92fc58dda739741204ff Mon Sep 17 00:00:00 2001 From: Nicolas Dichtel Date: Tue, 14 Sep 2021 16:46:34 +0200 Subject: [PATCH 0056/1842] xfrm: notify default policy on update [ Upstream commit 88d0adb5f13b1c52fbb7d755f6f79db18c2f0c2c ] This configuration knob is very sensible, it should be notified when changing. 
Fixes: 2d151d39073a ("xfrm: Add possibility to set the default to block if we have no policy") Signed-off-by: Nicolas Dichtel Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- net/xfrm/xfrm_user.c | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 261953e081fb..4152f6399205 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1914,6 +1914,36 @@ static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb, return skb; } +static int xfrm_notify_userpolicy(struct net *net) +{ + struct xfrm_userpolicy_default *up; + int len = NLMSG_ALIGN(sizeof(*up)); + struct nlmsghdr *nlh; + struct sk_buff *skb; + + skb = nlmsg_new(len, GFP_ATOMIC); + if (skb == NULL) + return -ENOMEM; + + nlh = nlmsg_put(skb, 0, 0, XFRM_MSG_GETDEFAULT, sizeof(*up), 0); + if (nlh == NULL) { + kfree_skb(skb); + return -EMSGSIZE; + } + + up = nlmsg_data(nlh); + up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + + nlmsg_end(skb, nlh); + + return xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY); +} + static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, struct nlattr **attrs) { @@ -1937,6 +1967,7 @@ static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, rt_genid_bump_all(net); + xfrm_notify_userpolicy(net); return 0; } From 57c1bbe7098b516d535295a9fa762a44c871a74c Mon Sep 17 00:00:00 2001 From: Nicolas Dichtel Date: Mon, 22 Nov 2021 11:33:13 +0100 Subject: [PATCH 0057/1842] xfrm: fix dflt policy check when there is no policy configured [ Upstream commit ec3bb890817e4398f2d46e12e2e205495b116be9 ] When there is no policy configured on the system, the default policy is checked in xfrm_route_forward. However, it was done with the wrong direction (XFRM_POLICY_FWD instead of XFRM_POLICY_OUT). The default policy for XFRM_POLICY_FWD was checked just before, with a call to xfrm[46]_policy_check(). CC: stable@vger.kernel.org Fixes: 2d151d39073a ("xfrm: Add possibility to set the default to block if we have no policy") Signed-off-by: Nicolas Dichtel Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- include/net/xfrm.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/xfrm.h b/include/net/xfrm.h index 988886f95e5b..6a9e3b4c8a35 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1175,7 +1175,7 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family) { struct net *net = dev_net(skb->dev); - if (xfrm_default_allow(net, XFRM_POLICY_FWD)) + if (xfrm_default_allow(net, XFRM_POLICY_OUT)) return !net->xfrm.policy_count[XFRM_POLICY_OUT] || (skb_dst(skb)->flags & DST_NOXFRM) || __xfrm_route_forward(skb, family); From 0d2e9d8000efbe733ef23e6de5aba956c783014b Mon Sep 17 00:00:00 2001 From: Nicolas Dichtel Date: Mon, 14 Mar 2022 11:38:22 +0100 Subject: [PATCH 0058/1842] xfrm: rework default policy structure [ Upstream commit b58b1f563ab78955d37e9e43e02790a85c66ac05 ] This is a follow up of commit f8d858e607b2 ("xfrm: make user policy API complete"). The goal is to align userland API to the internal structures. 
Signed-off-by: Nicolas Dichtel Reviewed-by: Antony Antony Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- include/net/netns/xfrm.h | 6 +---- include/net/xfrm.h | 48 +++++++++++++++------------------------- net/xfrm/xfrm_policy.c | 10 ++++++--- net/xfrm/xfrm_user.c | 43 +++++++++++++++-------------------- 4 files changed, 44 insertions(+), 63 deletions(-) diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h index b694ff0963cc..69e4161462fb 100644 --- a/include/net/netns/xfrm.h +++ b/include/net/netns/xfrm.h @@ -65,11 +65,7 @@ struct netns_xfrm { int sysctl_larval_drop; u32 sysctl_acq_expires; - u8 policy_default; -#define XFRM_POL_DEFAULT_IN 1 -#define XFRM_POL_DEFAULT_OUT 2 -#define XFRM_POL_DEFAULT_FWD 4 -#define XFRM_POL_DEFAULT_MASK 7 + u8 policy_default[XFRM_POLICY_MAX]; #ifdef CONFIG_SYSCTL struct ctl_table_header *sysctl_hdr; diff --git a/include/net/xfrm.h b/include/net/xfrm.h index 6a9e3b4c8a35..86e5d1aa9628 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1088,25 +1088,18 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, un } #ifdef CONFIG_XFRM -static inline bool -xfrm_default_allow(struct net *net, int dir) -{ - u8 def = net->xfrm.policy_default; - - switch (dir) { - case XFRM_POLICY_IN: - return def & XFRM_POL_DEFAULT_IN ? false : true; - case XFRM_POLICY_OUT: - return def & XFRM_POL_DEFAULT_OUT ? false : true; - case XFRM_POLICY_FWD: - return def & XFRM_POL_DEFAULT_FWD ? false : true; - } - return false; -} - int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb, unsigned short family); +static inline bool __xfrm_check_nopolicy(struct net *net, struct sk_buff *skb, + int dir) +{ + if (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) + return net->xfrm.policy_default[dir] == XFRM_USERPOLICY_ACCEPT; + + return false; +} + static inline int __xfrm_policy_check2(struct sock *sk, int dir, struct sk_buff *skb, unsigned int family, int reverse) @@ -1117,13 +1110,9 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir, if (sk && sk->sk_policy[XFRM_POLICY_IN]) return __xfrm_policy_check(sk, ndir, skb, family); - if (xfrm_default_allow(net, dir)) - return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) || - (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || - __xfrm_policy_check(sk, ndir, skb, family); - else - return (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || - __xfrm_policy_check(sk, ndir, skb, family); + return __xfrm_check_nopolicy(net, skb, dir) || + (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || + __xfrm_policy_check(sk, ndir, skb, family); } static inline int xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, unsigned short family) @@ -1175,13 +1164,12 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family) { struct net *net = dev_net(skb->dev); - if (xfrm_default_allow(net, XFRM_POLICY_OUT)) - return !net->xfrm.policy_count[XFRM_POLICY_OUT] || - (skb_dst(skb)->flags & DST_NOXFRM) || - __xfrm_route_forward(skb, family); - else - return (skb_dst(skb)->flags & DST_NOXFRM) || - __xfrm_route_forward(skb, family); + if (!net->xfrm.policy_count[XFRM_POLICY_OUT] && + net->xfrm.policy_default[XFRM_POLICY_OUT] == XFRM_USERPOLICY_ACCEPT) + return true; + + return (skb_dst(skb)->flags & DST_NOXFRM) || + __xfrm_route_forward(skb, family); } static inline int xfrm4_route_forward(struct sk_buff *skb) diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index 2c701fb3a61b..93cbcc8f9b39 100644 --- 
a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -3162,7 +3162,7 @@ struct dst_entry *xfrm_lookup_with_ifid(struct net *net, nopol: if (!(dst_orig->dev->flags & IFF_LOOPBACK) && - !xfrm_default_allow(net, dir)) { + net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) { err = -EPERM; goto error; } @@ -3613,7 +3613,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, } if (!pol) { - if (!xfrm_default_allow(net, dir)) { + if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) { XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS); return 0; } @@ -3673,7 +3673,8 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, } xfrm_nr = ti; - if (!xfrm_default_allow(net, dir) && !xfrm_nr) { + if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK && + !xfrm_nr) { XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES); goto reject; } @@ -4162,6 +4163,9 @@ static int __net_init xfrm_net_init(struct net *net) spin_lock_init(&net->xfrm.xfrm_policy_lock); seqcount_spinlock_init(&net->xfrm.xfrm_policy_hash_generation, &net->xfrm.xfrm_policy_lock); mutex_init(&net->xfrm.xfrm_cfg_mutex); + net->xfrm.policy_default[XFRM_POLICY_IN] = XFRM_USERPOLICY_ACCEPT; + net->xfrm.policy_default[XFRM_POLICY_FWD] = XFRM_USERPOLICY_ACCEPT; + net->xfrm.policy_default[XFRM_POLICY_OUT] = XFRM_USERPOLICY_ACCEPT; rv = xfrm_statistics_init(net); if (rv < 0) diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 4152f6399205..d9841f44487f 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1932,38 +1932,35 @@ static int xfrm_notify_userpolicy(struct net *net) } up = nlmsg_data(nlh); - up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ? 
- XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + up->in = net->xfrm.policy_default[XFRM_POLICY_IN]; + up->fwd = net->xfrm.policy_default[XFRM_POLICY_FWD]; + up->out = net->xfrm.policy_default[XFRM_POLICY_OUT]; nlmsg_end(skb, nlh); return xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY); } +static bool xfrm_userpolicy_is_valid(__u8 policy) +{ + return policy == XFRM_USERPOLICY_BLOCK || + policy == XFRM_USERPOLICY_ACCEPT; +} + static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, struct nlattr **attrs) { struct net *net = sock_net(skb->sk); struct xfrm_userpolicy_default *up = nlmsg_data(nlh); - if (up->in == XFRM_USERPOLICY_BLOCK) - net->xfrm.policy_default |= XFRM_POL_DEFAULT_IN; - else if (up->in == XFRM_USERPOLICY_ACCEPT) - net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_IN; + if (xfrm_userpolicy_is_valid(up->in)) + net->xfrm.policy_default[XFRM_POLICY_IN] = up->in; - if (up->fwd == XFRM_USERPOLICY_BLOCK) - net->xfrm.policy_default |= XFRM_POL_DEFAULT_FWD; - else if (up->fwd == XFRM_USERPOLICY_ACCEPT) - net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_FWD; + if (xfrm_userpolicy_is_valid(up->fwd)) + net->xfrm.policy_default[XFRM_POLICY_FWD] = up->fwd; - if (up->out == XFRM_USERPOLICY_BLOCK) - net->xfrm.policy_default |= XFRM_POL_DEFAULT_OUT; - else if (up->out == XFRM_USERPOLICY_ACCEPT) - net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_OUT; + if (xfrm_userpolicy_is_valid(up->out)) + net->xfrm.policy_default[XFRM_POLICY_OUT] = up->out; rt_genid_bump_all(net); @@ -1993,13 +1990,9 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh, } r_up = nlmsg_data(r_nlh); - - r_up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - r_up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - r_up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + r_up->in = net->xfrm.policy_default[XFRM_POLICY_IN]; + r_up->fwd = net->xfrm.policy_default[XFRM_POLICY_FWD]; + r_up->out = net->xfrm.policy_default[XFRM_POLICY_OUT]; nlmsg_end(r_skb, r_nlh); return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid); From 47f04f95edb1aa00ac2d9a9fb0f1e50031965618 Mon Sep 17 00:00:00 2001 From: Eyal Birger Date: Fri, 13 May 2022 23:34:02 +0300 Subject: [PATCH 0059/1842] xfrm: fix "disable_policy" flag use when arriving from different devices [ Upstream commit e6175a2ed1f18bf2f649625bf725e07adcfa6a28 ] In IPv4 setting the "disable_policy" flag on a device means no policy should be enforced for traffic originating from the device. This was implemented by seting the DST_NOPOLICY flag in the dst based on the originating device. However, dsts are cached in nexthops regardless of the originating devices, in which case, the DST_NOPOLICY flag value may be incorrect. Consider the following setup: +------------------------------+ | ROUTER | +-------------+ | +-----------------+ | | ipsec src |----|-|ipsec0 | | +-------------+ | |disable_policy=0 | +----+ | | +-----------------+ |eth1|-|----- +-------------+ | +-----------------+ +----+ | | noipsec src |----|-|eth0 | | +-------------+ | |disable_policy=1 | | | +-----------------+ | +------------------------------+ Where ROUTER has a default route towards eth1. dst entries for traffic arriving from eth0 would have DST_NOPOLICY and would be cached and therefore can be reused by traffic originating from ipsec0, skipping policy check. 
Fix by setting a IPSKB_NOPOLICY flag in IPCB and observing it instead of the DST in IN/FWD IPv4 policy checks. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: Shmulik Ladkani Signed-off-by: Eyal Birger Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- include/net/ip.h | 1 + include/net/xfrm.h | 14 +++++++++++++- net/ipv4/route.c | 23 ++++++++++++++++++----- 3 files changed, 32 insertions(+), 6 deletions(-) diff --git a/include/net/ip.h b/include/net/ip.h index de2dc22a78f9..76aaa7eb5b82 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -55,6 +55,7 @@ struct inet_skb_parm { #define IPSKB_DOREDIRECT BIT(5) #define IPSKB_FRAG_PMTU BIT(6) #define IPSKB_L3SLAVE BIT(7) +#define IPSKB_NOPOLICY BIT(8) u16 frag_max_size; }; diff --git a/include/net/xfrm.h b/include/net/xfrm.h index 86e5d1aa9628..8a9943d935f1 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1100,6 +1100,18 @@ static inline bool __xfrm_check_nopolicy(struct net *net, struct sk_buff *skb, return false; } +static inline bool __xfrm_check_dev_nopolicy(struct sk_buff *skb, + int dir, unsigned short family) +{ + if (dir != XFRM_POLICY_OUT && family == AF_INET) { + /* same dst may be used for traffic originating from + * devices with different policy settings. + */ + return IPCB(skb)->flags & IPSKB_NOPOLICY; + } + return skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY); +} + static inline int __xfrm_policy_check2(struct sock *sk, int dir, struct sk_buff *skb, unsigned int family, int reverse) @@ -1111,7 +1123,7 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir, return __xfrm_policy_check(sk, ndir, skb, family); return __xfrm_check_nopolicy(net, skb, dir) || - (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || + __xfrm_check_dev_nopolicy(skb, dir, family) || __xfrm_policy_check(sk, ndir, skb, family); } diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 9bd3cd2177f4..aab8ac383d5d 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -1765,6 +1765,7 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, struct in_device *in_dev = __in_dev_get_rcu(dev); unsigned int flags = RTCF_MULTICAST; struct rtable *rth; + bool no_policy; u32 itag = 0; int err; @@ -1775,8 +1776,12 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, if (our) flags |= RTCF_LOCAL; + no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY); + if (no_policy) + IPCB(skb)->flags |= IPSKB_NOPOLICY; + rth = rt_dst_alloc(dev_net(dev)->loopback_dev, flags, RTN_MULTICAST, - IN_DEV_ORCONF(in_dev, NOPOLICY), false); + no_policy, false); if (!rth) return -ENOBUFS; @@ -1835,7 +1840,7 @@ static int __mkroute_input(struct sk_buff *skb, struct rtable *rth; int err; struct in_device *out_dev; - bool do_cache; + bool do_cache, no_policy; u32 itag = 0; /* get a working reference to the output device */ @@ -1880,6 +1885,10 @@ static int __mkroute_input(struct sk_buff *skb, } } + no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY); + if (no_policy) + IPCB(skb)->flags |= IPSKB_NOPOLICY; + fnhe = find_exception(nhc, daddr); if (do_cache) { if (fnhe) @@ -1892,8 +1901,7 @@ static int __mkroute_input(struct sk_buff *skb, } } - rth = rt_dst_alloc(out_dev->dev, 0, res->type, - IN_DEV_ORCONF(in_dev, NOPOLICY), + rth = rt_dst_alloc(out_dev->dev, 0, res->type, no_policy, IN_DEV_ORCONF(out_dev, NOXFRM)); if (!rth) { err = -ENOBUFS; @@ -2145,6 +2153,7 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr, struct rtable *rth; struct flowi4 fl4; bool do_cache = true; + bool 
no_policy; /* IP on this device is disabled. */ @@ -2262,6 +2271,10 @@ out: return err; RT_CACHE_STAT_INC(in_brd); local_input: + no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY); + if (no_policy) + IPCB(skb)->flags |= IPSKB_NOPOLICY; + do_cache &= res->fi && !itag; if (do_cache) { struct fib_nh_common *nhc = FIB_RES_NHC(*res); @@ -2276,7 +2289,7 @@ out: return err; rth = rt_dst_alloc(ip_rt_get_dev(net, res), flags | RTCF_LOCAL, res->type, - IN_DEV_ORCONF(in_dev, NOPOLICY), false); + no_policy, false); if (!rth) goto e_nobufs; From 9bfe898e2b767d68da4a8500efa9ef56f8d62426 Mon Sep 17 00:00:00 2001 From: Paolo Abeni Date: Fri, 13 May 2022 11:27:06 +0200 Subject: [PATCH 0060/1842] net/sched: act_pedit: sanitize shift argument before usage [ Upstream commit 4d42d54a7d6aa6d29221d3fd4f2ae9503e94f011 ] syzbot was able to trigger an Out-of-Bound on the pedit action: UBSAN: shift-out-of-bounds in net/sched/act_pedit.c:238:43 shift exponent 1400735974 is too large for 32-bit type 'unsigned int' CPU: 0 PID: 3606 Comm: syz-executor151 Not tainted 5.18.0-rc5-syzkaller-00165-g810c2f0a3f86 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106 ubsan_epilogue+0xb/0x50 lib/ubsan.c:151 __ubsan_handle_shift_out_of_bounds.cold+0xb1/0x187 lib/ubsan.c:322 tcf_pedit_init.cold+0x1a/0x1f net/sched/act_pedit.c:238 tcf_action_init_1+0x414/0x690 net/sched/act_api.c:1367 tcf_action_init+0x530/0x8d0 net/sched/act_api.c:1432 tcf_action_add+0xf9/0x480 net/sched/act_api.c:1956 tc_ctl_action+0x346/0x470 net/sched/act_api.c:2015 rtnetlink_rcv_msg+0x413/0xb80 net/core/rtnetlink.c:5993 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2502 netlink_unicast_kernel net/netlink/af_netlink.c:1319 [inline] netlink_unicast+0x543/0x7f0 net/netlink/af_netlink.c:1345 netlink_sendmsg+0x904/0xe00 net/netlink/af_netlink.c:1921 sock_sendmsg_nosec net/socket.c:705 [inline] sock_sendmsg+0xcf/0x120 net/socket.c:725 ____sys_sendmsg+0x6e2/0x800 net/socket.c:2413 ___sys_sendmsg+0xf3/0x170 net/socket.c:2467 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2496 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x44/0xae RIP: 0033:0x7fe36e9e1b59 Code: 28 c3 e8 2a 14 00 00 66 2e 0f 1f 84 00 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007ffef796fe88 EFLAGS: 00000246 ORIG_RAX: 000000000000002e RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fe36e9e1b59 RDX: 0000000000000000 RSI: 0000000020000300 RDI: 0000000000000003 RBP: 00007fe36e9a5d00 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 00007fe36e9a5d90 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 The 'shift' field is not validated, and any value above 31 will trigger out-of-bounds. The issue predates the git history, but syzbot was able to trigger it only after the commit mentioned in the fixes tag, and this change only applies on top of such commit. Address the issue bounding the 'shift' value to the maximum allowed by the relevant operator. Reported-and-tested-by: syzbot+8ed8fc4c57e9dcf23ca6@syzkaller.appspotmail.com Fixes: 8b796475fd78 ("net/sched: act_pedit: really ensure the skb is writable") Signed-off-by: Paolo Abeni Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/sched/act_pedit.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c index 90510298b32a..0d5463ddfd62 100644 --- a/net/sched/act_pedit.c +++ b/net/sched/act_pedit.c @@ -232,6 +232,10 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla, for (i = 0; i < p->tcfp_nkeys; ++i) { u32 cur = p->tcfp_keys[i].off; + /* sanitize the shift value for any later use */ + p->tcfp_keys[i].shift = min_t(size_t, BITS_PER_TYPE(int) - 1, + p->tcfp_keys[i].shift); + /* The AT option can read a single byte, we can bound the actual * value with uchar max. */ From 201e5b5c27996f38e90a54aac866bccca65f8877 Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Sun, 15 May 2022 19:01:56 +0200 Subject: [PATCH 0061/1842] net: systemport: Fix an error handling path in bcm_sysport_probe() [ Upstream commit ef6b1cd11962aec21c58d137006ab122dbc8d6fd ] if devm_clk_get_optional() fails, we still need to go through the error handling path. Add the missing goto. Fixes: 6328a126896ea ("net: systemport: Manage Wake-on-LAN clock") Signed-off-by: Christophe JAILLET Acked-by: Florian Fainelli Link: https://lore.kernel.org/r/99d70634a81c229885ae9e4ee69b2035749f7edc.1652634040.git.christophe.jaillet@wanadoo.fr Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/ethernet/broadcom/bcmsysport.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c index 1a703b95208b..82d369d9f7a5 100644 --- a/drivers/net/ethernet/broadcom/bcmsysport.c +++ b/drivers/net/ethernet/broadcom/bcmsysport.c @@ -2592,8 +2592,10 @@ static int bcm_sysport_probe(struct platform_device *pdev) device_set_wakeup_capable(&pdev->dev, 1); priv->wol_clk = devm_clk_get_optional(&pdev->dev, "sw_sysportwol"); - if (IS_ERR(priv->wol_clk)) - return PTR_ERR(priv->wol_clk); + if (IS_ERR(priv->wol_clk)) { + ret = PTR_ERR(priv->wol_clk); + goto err_deregister_fixed_link; + } /* Set the needed headroom once and for all */ BUILD_BUG_ON(sizeof(struct bcm_tsb) != 8); From a54d86cf418427584e0a3cd1e89f757c92df5e89 Mon Sep 17 00:00:00 2001 From: Zixuan Fu Date: Sat, 14 May 2022 13:06:56 +0800 Subject: [PATCH 0062/1842] net: vmxnet3: fix possible use-after-free bugs in vmxnet3_rq_alloc_rx_buf() [ Upstream commit 9e7fef9521e73ca8afd7da9e58c14654b02dfad8 ] In vmxnet3_rq_alloc_rx_buf(), when dma_map_single() fails, rbi->skb is freed immediately. Similarly, in another branch, when dma_map_page() fails, rbi->page is also freed. In the two cases, vmxnet3_rq_alloc_rx_buf() returns an error to its callers vmxnet3_rq_init() -> vmxnet3_rq_init_all() -> vmxnet3_activate_dev(). Then vmxnet3_activate_dev() calls vmxnet3_rq_cleanup_all() in error handling code, and rbi->skb or rbi->page are freed again in vmxnet3_rq_cleanup_all(), causing use-after-free bugs. To fix these possible bugs, rbi->skb and rbi->page should be cleared after they are freed. The error log in our fault-injection testing is shown as follows: [ 14.319016] BUG: KASAN: use-after-free in consume_skb+0x2f/0x150 ... [ 14.321586] Call Trace: ... [ 14.325357] consume_skb+0x2f/0x150 [ 14.325671] vmxnet3_rq_cleanup_all+0x33a/0x4e0 [vmxnet3] [ 14.326150] vmxnet3_activate_dev+0xb9d/0x2ca0 [vmxnet3] [ 14.326616] vmxnet3_open+0x387/0x470 [vmxnet3] ... [ 14.361675] Allocated by task 351: ... 
[ 14.362688] __netdev_alloc_skb+0x1b3/0x6f0 [ 14.362960] vmxnet3_rq_alloc_rx_buf+0x1b0/0x8d0 [vmxnet3] [ 14.363317] vmxnet3_activate_dev+0x3e3/0x2ca0 [vmxnet3] [ 14.363661] vmxnet3_open+0x387/0x470 [vmxnet3] ... [ 14.367309] [ 14.367412] Freed by task 351: ... [ 14.368932] __dev_kfree_skb_any+0xd2/0xe0 [ 14.369193] vmxnet3_rq_alloc_rx_buf+0x71e/0x8d0 [vmxnet3] [ 14.369544] vmxnet3_activate_dev+0x3e3/0x2ca0 [vmxnet3] [ 14.369883] vmxnet3_open+0x387/0x470 [vmxnet3] [ 14.370174] __dev_open+0x28a/0x420 [ 14.370399] __dev_change_flags+0x192/0x590 [ 14.370667] dev_change_flags+0x7a/0x180 [ 14.370919] do_setlink+0xb28/0x3570 [ 14.371150] rtnl_newlink+0x1160/0x1740 [ 14.371399] rtnetlink_rcv_msg+0x5bf/0xa50 [ 14.371661] netlink_rcv_skb+0x1cd/0x3e0 [ 14.371913] netlink_unicast+0x5dc/0x840 [ 14.372169] netlink_sendmsg+0x856/0xc40 [ 14.372420] ____sys_sendmsg+0x8a7/0x8d0 [ 14.372673] __sys_sendmsg+0x1c2/0x270 [ 14.372914] do_syscall_64+0x41/0x90 [ 14.373145] entry_SYSCALL_64_after_hwframe+0x44/0xae ... Fixes: 5738a09d58d5a ("vmxnet3: fix checks for dma mapping errors") Reported-by: TOTE Robot Signed-off-by: Zixuan Fu Link: https://lore.kernel.org/r/20220514050656.2636588-1-r33s3n6@gmail.com Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/vmxnet3/vmxnet3_drv.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 932a39945cc6..530d555988ae 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -595,6 +595,7 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx, if (dma_mapping_error(&adapter->pdev->dev, rbi->dma_addr)) { dev_kfree_skb_any(rbi->skb); + rbi->skb = NULL; rq->stats.rx_buf_alloc_failure++; break; } @@ -619,6 +620,7 @@ vmxnet3_rq_alloc_rx_buf(struct vmxnet3_rx_queue *rq, u32 ring_idx, if (dma_mapping_error(&adapter->pdev->dev, rbi->dma_addr)) { put_page(rbi->page); + rbi->page = NULL; rq->stats.rx_buf_alloc_failure++; break; } From 6e2caee5cddc3d9e0ad0484c9c21b9f10676c044 Mon Sep 17 00:00:00 2001 From: Zixuan Fu Date: Sat, 14 May 2022 13:07:11 +0800 Subject: [PATCH 0063/1842] net: vmxnet3: fix possible NULL pointer dereference in vmxnet3_rq_cleanup() [ Upstream commit edf410cb74dc612fd47ef5be319c5a0bcd6e6ccd ] In vmxnet3_rq_create(), when dma_alloc_coherent() fails, vmxnet3_rq_destroy() is called. It sets rq->rx_ring[i].base to NULL. Then vmxnet3_rq_create() returns an error to its callers mxnet3_rq_create_all() -> vmxnet3_change_mtu(). Then vmxnet3_change_mtu() calls vmxnet3_force_close() -> dev_close() in error handling code. And the driver calls vmxnet3_close() -> vmxnet3_quiesce_dev() -> vmxnet3_rq_cleanup_all() -> vmxnet3_rq_cleanup(). In vmxnet3_rq_cleanup(), rq->rx_ring[ring_idx].base is accessed, but this variable is NULL, causing a NULL pointer dereference. To fix this possible bug, an if statement is added to check whether rq->rx_ring[0].base is NULL in vmxnet3_rq_cleanup() and exit early if so. The error log in our fault-injection testing is shown as follows: [ 65.220135] BUG: kernel NULL pointer dereference, address: 0000000000000008 ... [ 65.222633] RIP: 0010:vmxnet3_rq_cleanup_all+0x396/0x4e0 [vmxnet3] ... [ 65.227977] Call Trace: ... 
[ 65.228262] vmxnet3_quiesce_dev+0x80f/0x8a0 [vmxnet3] [ 65.228580] vmxnet3_close+0x2c4/0x3f0 [vmxnet3] [ 65.228866] __dev_close_many+0x288/0x350 [ 65.229607] dev_close_many+0xa4/0x480 [ 65.231124] dev_close+0x138/0x230 [ 65.231933] vmxnet3_force_close+0x1f0/0x240 [vmxnet3] [ 65.232248] vmxnet3_change_mtu+0x75d/0x920 [vmxnet3] ... Fixes: d1a890fa37f27 ("net: VMware virtual Ethernet NIC driver: vmxnet3") Reported-by: TOTE Robot Signed-off-by: Zixuan Fu Link: https://lore.kernel.org/r/20220514050711.2636709-1-r33s3n6@gmail.com Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/vmxnet3/vmxnet3_drv.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/net/vmxnet3/vmxnet3_drv.c b/drivers/net/vmxnet3/vmxnet3_drv.c index 530d555988ae..6678a734cc4d 100644 --- a/drivers/net/vmxnet3/vmxnet3_drv.c +++ b/drivers/net/vmxnet3/vmxnet3_drv.c @@ -1656,6 +1656,10 @@ vmxnet3_rq_cleanup(struct vmxnet3_rx_queue *rq, u32 i, ring_idx; struct Vmxnet3_RxDesc *rxd; + /* ring has already been cleaned up */ + if (!rq->rx_ring[0].base) + return; + for (ring_idx = 0; ring_idx < 2; ring_idx++) { for (i = 0; i < rq->rx_ring[ring_idx].size; i++) { #ifdef __BIG_ENDIAN_BITFIELD From 9e0e75a5e753d52aa4ff6ac3efca5225fb6a08d7 Mon Sep 17 00:00:00 2001 From: Paul Greenwalt Date: Thu, 28 Apr 2022 14:11:42 -0700 Subject: [PATCH 0064/1842] ice: fix possible under reporting of ethtool Tx and Rx statistics [ Upstream commit 31b6298fd8e29effe9ed6b77351ac5969be56ce0 ] The hardware statistics counters are not cleared during resets so the drivers first access is to initialize the baseline and then subsequent reads are for reporting the counters. The statistics counters are read during the watchdog subtask when the interface is up. If the baseline is not initialized before the interface is up, then there can be a brief window in which some traffic can be transmitted/received before the initial baseline reading takes place. Directly initialize ethtool statistics in driver open so the baseline will be initialized when the interface is up, and any dropped packets incremented before the interface is up won't be reported. Fixes: 28dc1b86f8ea9 ("ice: ignore dropped packets during init") Signed-off-by: Paul Greenwalt Tested-by: Gurucharan (A Contingent worker at Intel) Signed-off-by: Tony Nguyen Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/ice/ice_main.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index eb0625b52e45..aae79fdd5172 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -5271,9 +5271,10 @@ static int ice_up_complete(struct ice_vsi *vsi) netif_carrier_on(vsi->netdev); } - /* clear this now, and the first stats read will be used as baseline */ - vsi->stat_offsets_loaded = false; - + /* Perform an initial read of the statistics registers now to + * set the baseline so counters are ready when interface is up + */ + ice_update_eth_stats(vsi); ice_service_task_schedule(pf); return 0; From d35bf8d766b11e99784cc74be5b7810f2f39240e Mon Sep 17 00:00:00 2001 From: Codrin Ciubotariu Date: Wed, 13 Apr 2022 10:13:18 +0300 Subject: [PATCH 0065/1842] clk: at91: generated: consider range when calculating best rate [ Upstream commit d0031e6fbed955ff8d5f5bbc8fe7382482559cec ] clk_generated_best_diff() helps in finding the parent and the divisor to compute a rate closest to the required one. 
However, it doesn't take into account the request's range for the new rate. Make sure the new rate is within the required range. Fixes: 8a8f4bf0c480 ("clk: at91: clk-generated: create function to find best_diff") Signed-off-by: Codrin Ciubotariu Link: https://lore.kernel.org/r/20220413071318.244912-1-codrin.ciubotariu@microchip.com Reviewed-by: Claudiu Beznea Signed-off-by: Stephen Boyd Signed-off-by: Sasha Levin --- drivers/clk/at91/clk-generated.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/clk/at91/clk-generated.c b/drivers/clk/at91/clk-generated.c index b656d25a9767..fe772baeb15f 100644 --- a/drivers/clk/at91/clk-generated.c +++ b/drivers/clk/at91/clk-generated.c @@ -106,6 +106,10 @@ static void clk_generated_best_diff(struct clk_rate_request *req, tmp_rate = parent_rate; else tmp_rate = parent_rate / div; + + if (tmp_rate < req->min_rate || tmp_rate > req->max_rate) + return; + tmp_diff = abs(req->rate - tmp_rate); if (*best_diff < 0 || *best_diff >= tmp_diff) { From 42d4287cc1e4887b75fe4036b71c399628354669 Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Sun, 15 May 2022 20:07:02 +0200 Subject: [PATCH 0066/1842] net/qla3xxx: Fix a test in ql_reset_work() [ Upstream commit 5361448e45fac6fb96738df748229432a62d78b6 ] test_bit() tests if one bit is set or not. Here the logic seems to check of bit QL_RESET_PER_SCSI (i.e. 4) OR bit QL_RESET_START (i.e. 3) is set. In fact, it checks if bit 7 (4 | 3 = 7) is set, that is to say QL_ADAPTER_UP. This looks harmless, because this bit is likely be set, and when the ql_reset_work() delayed work is scheduled in ql3xxx_isr() (the only place that schedule this work), QL_RESET_START or QL_RESET_PER_SCSI is set. This has been spotted by smatch. Fixes: 5a4faa873782 ("[PATCH] qla3xxx NIC driver") Signed-off-by: Christophe JAILLET Link: https://lore.kernel.org/r/80e73e33f390001d9c0140ffa9baddf6466a41a2.1652637337.git.christophe.jaillet@wanadoo.fr Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/qlogic/qla3xxx.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c index c9f32fc50254..2219e4c59ae6 100644 --- a/drivers/net/ethernet/qlogic/qla3xxx.c +++ b/drivers/net/ethernet/qlogic/qla3xxx.c @@ -3628,7 +3628,8 @@ static void ql_reset_work(struct work_struct *work) qdev->mem_map_registers; unsigned long hw_flags; - if (test_bit((QL_RESET_PER_SCSI | QL_RESET_START), &qdev->flags)) { + if (test_bit(QL_RESET_PER_SCSI, &qdev->flags) || + test_bit(QL_RESET_START, &qdev->flags)) { clear_bit(QL_LINK_MASTER, &qdev->flags); /* From b503d0228c9246f090c49c9b2ad54b42164abf46 Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Tue, 17 May 2022 09:25:30 +0800 Subject: [PATCH 0067/1842] NFC: nci: fix sleep in atomic context bugs caused by nci_skb_alloc [ Upstream commit 23dd4581350d4ffa23d58976ec46408f8f4c1e16 ] There are sleep in atomic context bugs when the request to secure element of st-nci is timeout. The root cause is that nci_skb_alloc with GFP_KERNEL parameter is called in st_nci_se_wt_timeout which is a timer handler. 
The call paths that could trigger bugs are shown below:

(interrupt context 1)
st_nci_se_wt_timeout
  nci_hci_send_event
    nci_hci_send_data
      nci_skb_alloc(..., GFP_KERNEL) //may sleep

(interrupt context 2)
st_nci_se_wt_timeout
  nci_hci_send_event
    nci_hci_send_data
      nci_send_data
        nci_queue_tx_data_frags
          nci_skb_alloc(..., GFP_KERNEL) //may sleep

This patch changes the allocation mode of nci_skb_alloc from GFP_KERNEL to GFP_ATOMIC in order to prevent sleeping in atomic context. The GFP_ATOMIC flag allows the allocation to be performed from atomic context. Fixes: ed06aeefdac3 ("nfc: st-nci: Rename st21nfcb to st-nci") Signed-off-by: Duoming Zhou Reviewed-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220517012530.75714-1-duoming@zju.edu.cn Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/nfc/nci/data.c | 2 +- net/nfc/nci/hci.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/net/nfc/nci/data.c b/net/nfc/nci/data.c index ce3382be937f..b002e18f38c8 100644 --- a/net/nfc/nci/data.c +++ b/net/nfc/nci/data.c @@ -118,7 +118,7 @@ static int nci_queue_tx_data_frags(struct nci_dev *ndev, skb_frag = nci_skb_alloc(ndev, (NCI_DATA_HDR_SIZE + frag_len), - GFP_KERNEL); + GFP_ATOMIC); if (skb_frag == NULL) { rc = -ENOMEM; goto free_exit; diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c index 04e55ccb3383..4fe336ff2bfa 100644 --- a/net/nfc/nci/hci.c +++ b/net/nfc/nci/hci.c @@ -153,7 +153,7 @@ static int nci_hci_send_data(struct nci_dev *ndev, u8 pipe, i = 0; skb = nci_skb_alloc(ndev, conn_info->max_pkt_payload_len + - NCI_DATA_HDR_SIZE, GFP_KERNEL); + NCI_DATA_HDR_SIZE, GFP_ATOMIC); if (!skb) return -ENOMEM; @@ -186,7 +186,7 @@ static int nci_hci_send_data(struct nci_dev *ndev, u8 pipe, if (i < data_len) { skb = nci_skb_alloc(ndev, conn_info->max_pkt_payload_len + - NCI_DATA_HDR_SIZE, GFP_KERNEL); + NCI_DATA_HDR_SIZE, GFP_ATOMIC); if (!skb) return -ENOMEM; From 697f3219ee2f1ad3b3be1172164e393f18337a77 Mon Sep 17 00:00:00 2001 From: Maxim Mikityanskiy Date: Tue, 12 Apr 2022 18:37:03 +0300 Subject: [PATCH 0068/1842] net/mlx5e: Properly block LRO when XDP is enabled [ Upstream commit cf6e34c8c22fba66bd21244b95ea47e235f68974 ] LRO is incompatible and mutually exclusive with XDP. However, the needed checks are only made when enabling XDP. If LRO is enabled when XDP is already active, the command will succeed, and XDP will be skipped in the data path, although still enabled. This commit fixes the bug by checking the XDP status in mlx5e_fix_features and disabling LRO if XDP is enabled.
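For context, a hedged generic sketch of the ndo_fix_features pattern this fix relies on; it is not the mlx5e code, and example_priv with its xdp_prog pointer is invented for illustration:

struct example_priv {
	struct bpf_prog *xdp_prog;
};

static netdev_features_t example_fix_features(struct net_device *netdev,
					       netdev_features_t features)
{
	struct example_priv *priv = netdev_priv(netdev);

	/* LRO and XDP are mutually exclusive: if an XDP program is already
	 * attached, drop the LRO bit (with a warning) instead of accepting it.
	 */
	if (priv->xdp_prog && (features & NETIF_F_LRO)) {
		netdev_warn(netdev, "LRO is incompatible with XDP, disabling\n");
		features &= ~NETIF_F_LRO;
	}

	return features;
}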
Fixes: 86994156c736 ("net/mlx5e: XDP fast RX drop bpf programs support") Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 16e98ac47624..d9cc0ed6c5f7 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -4009,6 +4009,13 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev, } } + if (params->xdp_prog) { + if (features & NETIF_F_LRO) { + netdev_warn(netdev, "LRO is incompatible with XDP\n"); + features &= ~NETIF_F_LRO; + } + } + if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) { features &= ~NETIF_F_RXHASH; if (netdev->features & NETIF_F_RXHASH) From 7b676abe328a476ca1f718a6effff3b1e44cde3d Mon Sep 17 00:00:00 2001 From: Jiasheng Jiang Date: Tue, 17 May 2022 17:42:31 +0800 Subject: [PATCH 0069/1842] net: af_key: add check for pfkey_broadcast in function pfkey_process [ Upstream commit 4dc2a5a8f6754492180741facf2a8787f2c415d7 ] If skb_clone() returns null pointer, pfkey_broadcast() will return error. Therefore, it should be better to check the return value of pfkey_broadcast() and return error if fails. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Jiasheng Jiang Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- net/key/af_key.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/net/key/af_key.c b/net/key/af_key.c index bd9b5c573b5a..61505b0df57d 100644 --- a/net/key/af_key.c +++ b/net/key/af_key.c @@ -2830,8 +2830,10 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb void *ext_hdrs[SADB_EXT_MAX]; int err; - pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, - BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); + err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, + BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); + if (err) + return err; memset(ext_hdrs, 0, sizeof(ext_hdrs)); err = parse_exthdrs(skb, hdr, ext_hdrs); From 1756b45d8d83d2cdbc0eff3296cdb1aa80aef55b Mon Sep 17 00:00:00 2001 From: Ard Biesheuvel Date: Wed, 20 Apr 2022 09:44:51 +0100 Subject: [PATCH 0070/1842] ARM: 9196/1: spectre-bhb: enable for Cortex-A15 [ Upstream commit 0dc14aa94ccd8ba35eb17a0f9b123d1566efd39e ] The Spectre-BHB mitigations were inadvertently left disabled for Cortex-A15, due to the fact that cpu_v7_bugs_init() is not called in that case. So fix that. Fixes: b9baf5c8c5c3 ("ARM: Spectre-BHB workaround") Signed-off-by: Ard Biesheuvel Signed-off-by: Russell King (Oracle) Signed-off-by: Sasha Levin --- arch/arm/mm/proc-v7-bugs.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c index 06dbfb968182..fb9f3eb6bf48 100644 --- a/arch/arm/mm/proc-v7-bugs.c +++ b/arch/arm/mm/proc-v7-bugs.c @@ -288,6 +288,7 @@ void cpu_v7_ca15_ibe(void) { if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0))) cpu_v7_spectre_v2_init(); + cpu_v7_spectre_bhb_init(); } void cpu_v7_bugs_init(void) From a89888648e0cbc7dd1a46bac3955810fa32b04ad Mon Sep 17 00:00:00 2001 From: Ard Biesheuvel Date: Wed, 20 Apr 2022 09:46:17 +0100 Subject: [PATCH 0071/1842] ARM: 9197/1: spectre-bhb: fix loop8 sequence for Thumb2 [ Upstream commit 3cfb3019979666bdf33a1010147363cf05e0f17b ] In Thumb2, 'b . 
+ 4' produces a branch instruction that uses a narrow encoding, and so it does not jump to the following instruction as expected. So use W(b) instead. Fixes: 6c7cb60bff7a ("ARM: fix Thumb2 regression with Spectre BHB") Signed-off-by: Ard Biesheuvel Signed-off-by: Russell King (Oracle) Signed-off-by: Sasha Levin --- arch/arm/kernel/entry-armv.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index c3ebe3584103..030351d169aa 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -1043,7 +1043,7 @@ vector_bhb_loop8_\name: @ bhb workaround mov r0, #8 -3: b . + 4 +3: W(b) . + 4 subs r0, r0, #1 bne 3b dsb From 579061f39143aa38922af164745712fc71166a98 Mon Sep 17 00:00:00 2001 From: Kevin Mitchell Date: Tue, 17 May 2022 11:01:05 -0700 Subject: [PATCH 0072/1842] igb: skip phy status check where unavailable [ Upstream commit 942d2ad5d2e0df758a645ddfadffde2795322728 ] igb_read_phy_reg() will silently return, leaving phy_data untouched, if hw->ops.read_reg isn't set. Depending on the uninitialized value of phy_data, this led to the phy status check either succeeding immediately or looping continuously for 2 seconds before emitting a noisy err-level timeout. This message went out to the console even though there was no actual problem. Instead, first check if there is read_reg function pointer. If not, proceed without trying to check the phy status register. Fixes: b72f3f72005d ("igb: When GbE link up, wait for Remote receiver status condition") Signed-off-by: Kevin Mitchell Tested-by: Gurucharan (A Contingent worker at Intel) Signed-off-by: Tony Nguyen Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/igb/igb_main.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index f854d41c6c94..5e67c9c119d2 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -5499,7 +5499,8 @@ static void igb_watchdog_task(struct work_struct *work) break; } - if (adapter->link_speed != SPEED_1000) + if (adapter->link_speed != SPEED_1000 || + !hw->phy.ops.read_reg) goto no_wait; /* wait for Remote receiver status OK */ From dfd1f0cb628bf8a212e5155f53b5c7648a105939 Mon Sep 17 00:00:00 2001 From: Andrew Lunn Date: Wed, 18 May 2022 02:58:40 +0200 Subject: [PATCH 0073/1842] net: bridge: Clear offload_fwd_mark when passing frame up bridge interface. [ Upstream commit fbb3abdf2223cd0dfc07de85fe5a43ba7f435bdf ] It is possible to stack bridges on top of each other. Consider the following which makes use of an Ethernet switch: br1 / \ / \ / \ br0.11 wlan0 | br0 / | \ p1 p2 p3 br0 is offloaded to the switch. Above br0 is a vlan interface, for vlan 11. This vlan interface is then a slave of br1. br1 also has a wireless interface as a slave. This setup trunks wireless lan traffic over the copper network inside a VLAN. A frame received on p1 which is passed up to the bridge has the skb->offload_fwd_mark flag set to true, indicating that the switch has dealt with forwarding the frame out ports p2 and p3 as needed. This flag instructs the software bridge it does not need to pass the frame back down again. However, the flag is not getting reset when the frame is passed upwards. As a result br1 sees the flag, wrongly interprets it, and fails to forward the frame to wlan0. When passing a frame upwards, clear the flag. 
This is the Rx equivalent of br_switchdev_frame_unmark() in br_dev_xmit(). Fixes: f1c2eddf4cb6 ("bridge: switchdev: Use an helper to clear forward mark") Signed-off-by: Andrew Lunn Reviewed-by: Ido Schimmel Tested-by: Ido Schimmel Acked-by: Nikolay Aleksandrov Link: https://lore.kernel.org/r/20220518005840.771575-1-andrew@lunn.ch Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- net/bridge/br_input.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c index 59a318b9f646..bf5bf148091f 100644 --- a/net/bridge/br_input.c +++ b/net/bridge/br_input.c @@ -43,6 +43,13 @@ static int br_pass_frame_up(struct sk_buff *skb) u64_stats_update_end(&brstats->syncp); vg = br_vlan_group_rcu(br); + + /* Reset the offload_fwd_mark because there could be a stacked + * bridge above, and it should not think this bridge it doing + * that bridge's work forwarding out its ports. + */ + br_switchdev_frame_unmark(skb); + /* Bridge is just like any other port. Make sure the * packet is allowed except in promisc modue when someone * may be running packet capture. From ea8a9cb4a7797e4a37c7f46130462562bb45dc7b Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Thu, 7 Apr 2022 21:38:56 +0200 Subject: [PATCH 0074/1842] riscv: dts: sifive: fu540-c000: align dma node name with dtschema [ Upstream commit b17410182b6f98191fbf7f42d3b4a78512769d29 ] Fixes dtbs_check warnings like: dma@3000000: $nodename:0: 'dma@3000000' does not match '^dma-controller(@.*)?$' Signed-off-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220407193856.18223-1-krzysztof.kozlowski@linaro.org Fixes: c5ab54e9945b ("riscv: dts: add support for PDMA device of HiFive Unleashed Rev A00") Signed-off-by: Palmer Dabbelt Signed-off-by: Sasha Levin --- arch/riscv/boot/dts/sifive/fu540-c000.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/riscv/boot/dts/sifive/fu540-c000.dtsi b/arch/riscv/boot/dts/sifive/fu540-c000.dtsi index 7db861053483..64c06c9b41dc 100644 --- a/arch/riscv/boot/dts/sifive/fu540-c000.dtsi +++ b/arch/riscv/boot/dts/sifive/fu540-c000.dtsi @@ -166,7 +166,7 @@ uart0: serial@10010000 { clocks = <&prci PRCI_CLK_TLCLK>; status = "disabled"; }; - dma: dma@3000000 { + dma: dma-controller@3000000 { compatible = "sifive,fu540-c000-pdma"; reg = <0x0 0x3000000 0x0 0x8000>; interrupt-parent = <&plic0>; From 30d4721feced120f885e64872f3af0ade0c674c1 Mon Sep 17 00:00:00 2001 From: Haibo Chen Date: Wed, 11 May 2022 10:15:04 +0800 Subject: [PATCH 0075/1842] gpio: gpio-vf610: do not touch other bits when set the target bit [ Upstream commit 9bf3ac466faa83d51a8fe9212131701e58fdef74 ] For gpio controller contain register PDDR, when set one target bit, current logic will clear all other bits, this is wrong. Use operator '|=' to fix it. 
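For illustration only (plain userspace C, not driver code), the difference between the old blind write and the fixed read-modify-write of a shared direction register:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t pddr = 0x0000000a;	/* pretend bits 1 and 3 are already outputs */
	uint32_t mask = 1u << 5;	/* now configure "gpio 5" as an output */

	uint32_t blind_write = mask;			/* old: bits 1 and 3 are lost */
	uint32_t read_modify_write = pddr | mask;	/* fixed: all bits kept */

	printf("blind write: %#010x, read-modify-write: %#010x\n",
	       blind_write, read_modify_write);
	return 0;
}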
Fixes: 659d8a62311f ("gpio: vf610: add imx7ulp support") Reviewed-by: Peng Fan Signed-off-by: Haibo Chen Signed-off-by: Bartosz Golaszewski Signed-off-by: Sasha Levin --- drivers/gpio/gpio-vf610.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c index 58776f2d69ff..1ae612c796ee 100644 --- a/drivers/gpio/gpio-vf610.c +++ b/drivers/gpio/gpio-vf610.c @@ -125,9 +125,13 @@ static int vf610_gpio_direction_output(struct gpio_chip *chip, unsigned gpio, { struct vf610_gpio_port *port = gpiochip_get_data(chip); unsigned long mask = BIT(gpio); + u32 val; - if (port->sdata && port->sdata->have_paddr) - vf610_gpio_writel(mask, port->gpio_base + GPIO_PDDR); + if (port->sdata && port->sdata->have_paddr) { + val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR); + val |= mask; + vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR); + } vf610_gpio_set(chip, gpio, value); From 210ea7da5c1f72b8ef07b3b5f450db9d3e92a764 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Uwe=20Kleine-K=C3=B6nig?= Date: Wed, 11 May 2022 09:58:56 +0200 Subject: [PATCH 0076/1842] gpio: mvebu/pwm: Refuse requests with inverted polarity MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 3ecb10175b1f776f076553c24e2689e42953fef5 ] The driver doesn't take struct pwm_state::polarity into account when configuring the hardware, so refuse requests for inverted polarity. Fixes: 757642f9a584 ("gpio: mvebu: Add limited PWM support") Signed-off-by: Uwe Kleine-König Signed-off-by: Bartosz Golaszewski Signed-off-by: Sasha Levin --- drivers/gpio/gpio-mvebu.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c index ed7c5fc47f52..2ab34a8e6273 100644 --- a/drivers/gpio/gpio-mvebu.c +++ b/drivers/gpio/gpio-mvebu.c @@ -700,6 +700,9 @@ static int mvebu_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, unsigned long flags; unsigned int on, off; + if (state->polarity != PWM_POLARITY_NORMAL) + return -EINVAL; + val = (unsigned long long) mvpwm->clk_rate * state->duty_cycle; do_div(val, NSEC_PER_SEC); if (val > UINT_MAX) From c5af34174733c700bbfb1dde243576c60a2762d5 Mon Sep 17 00:00:00 2001 From: Thomas Richter Date: Fri, 20 May 2022 10:11:58 +0200 Subject: [PATCH 0077/1842] perf bench numa: Address compiler error on s390 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit f8ac1c478424a9a14669b8cef7389b1e14e5229d ] The compilation on s390 results in this error: # make DEBUG=y bench/numa.o ... bench/numa.c: In function ‘__bench_numa’: bench/numa.c:1749:81: error: ‘%d’ directive output may be truncated writing between 1 and 11 bytes into a region of size between 10 and 20 [-Werror=format-truncation=] 1749 | snprintf(tname, sizeof(tname), "process%d:thread%d", p, t); ^~ ... bench/numa.c:1749:64: note: directive argument in the range [-2147483647, 2147483646] ... # The maximum length of the %d replacement is 11 characters because of the negative sign. Therefore extend the array by two more characters. 
Output after: # make DEBUG=y bench/numa.o > /dev/null 2>&1; ll bench/numa.o -rw-r--r-- 1 root root 418320 May 19 09:11 bench/numa.o # Fixes: 3aff8ba0a4c9c919 ("perf bench numa: Avoid possible truncation when using snprintf()") Suggested-by: Namhyung Kim Signed-off-by: Thomas Richter Cc: Heiko Carstens Cc: Sumanth Korikkar Cc: Sven Schnelle Cc: Vasily Gorbik Link: https://lore.kernel.org/r/20220520081158.2990006-1-tmricht@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin --- tools/perf/bench/numa.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c index 11726ec6285f..88c11305bdd5 100644 --- a/tools/perf/bench/numa.c +++ b/tools/perf/bench/numa.c @@ -1656,7 +1656,7 @@ static int __bench_numa(const char *name) "GB/sec,", "total-speed", "GB/sec total speed"); if (g->p.show_details >= 2) { - char tname[14 + 2 * 10 + 1]; + char tname[14 + 2 * 11 + 1]; struct thread_data *td; for (p = 0; p < g->p.nr_proc; p++) { for (t = 0; t < g->p.nr_threads; t++) { From e21d734fd05cd5b27f2e689777790db9122da39e Mon Sep 17 00:00:00 2001 From: Gleb Chesnokov Date: Fri, 15 Apr 2022 12:42:29 +0000 Subject: [PATCH 0078/1842] scsi: qla2xxx: Fix missed DMA unmap for aborted commands [ Upstream commit 26f9ce53817a8fd84b69a73473a7de852a24c897 ] Aborting commands that have already been sent to the firmware can cause BUG in qlt_free_cmd(): BUG_ON(cmd->sg_mapped) For instance: - Command passes rdx_to_xfer state, maps sgl, sends to the firmware - Reset occurs, qla2xxx performs ISP error recovery, aborts the command - Target stack calls qlt_abort_cmd() and then qlt_free_cmd() - BUG_ON(cmd->sg_mapped) in qlt_free_cmd() occurs because sgl was not unmapped Thus, unmap sgl in qlt_abort_cmd() for commands with the aborted flag set. Link: https://lore.kernel.org/r/AS8PR10MB4952D545F84B6B1DFD39EC1E9DEE9@AS8PR10MB4952.EURPRD10.PROD.OUTLOOK.COM Reviewed-by: Himanshu Madhani Signed-off-by: Gleb Chesnokov Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin --- drivers/scsi/qla2xxx/qla_target.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c index cf9ae0ab489a..ba823e8eb902 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -3773,6 +3773,9 @@ int qlt_abort_cmd(struct qla_tgt_cmd *cmd) spin_lock_irqsave(&cmd->cmd_lock, flags); if (cmd->aborted) { + if (cmd->sg_mapped) + qlt_unmap_sg(vha, cmd); + spin_unlock_irqrestore(&cmd->cmd_lock, flags); /* * It's normal to see 2 calls in this path: From a0f5ff20496bea104bc0d3d791fdb3813fcee989 Mon Sep 17 00:00:00 2001 From: Felix Fietkau Date: Wed, 20 Apr 2022 12:50:38 +0200 Subject: [PATCH 0079/1842] mac80211: fix rx reordering with non explicit / psmp ack policy [ Upstream commit 5e469ed9764d4722c59562da13120bd2dc6834c5 ] When the QoS ack policy was set to non explicit / psmp ack, frames are treated as not being part of a BA session, which causes extra latency on reordering. 
Fix this by only bypassing reordering for packets with no-ack policy Signed-off-by: Felix Fietkau Link: https://lore.kernel.org/r/20220420105038.36443-1-nbd@nbd.name Signed-off-by: Johannes Berg Signed-off-by: Sasha Levin --- net/mac80211/rx.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c index 1e7614abd947..e991abb45f68 100644 --- a/net/mac80211/rx.c +++ b/net/mac80211/rx.c @@ -1387,8 +1387,7 @@ static void ieee80211_rx_reorder_ampdu(struct ieee80211_rx_data *rx, goto dont_reorder; /* not part of a BA session */ - if (ack_policy != IEEE80211_QOS_CTL_ACK_POLICY_BLOCKACK && - ack_policy != IEEE80211_QOS_CTL_ACK_POLICY_NORMAL) + if (ack_policy == IEEE80211_QOS_CTL_ACK_POLICY_NOACK) goto dont_reorder; /* new, potentially un-ordered, ampdu frame - process it */ From 1cfbf6d3a7f6f844ec722bbaedce062cd757801c Mon Sep 17 00:00:00 2001 From: Kieran Frewen Date: Wed, 20 Apr 2022 04:13:21 +0000 Subject: [PATCH 0080/1842] nl80211: validate S1G channel width [ Upstream commit 5d087aa759eb82b8208411913f6c2158bd85abc0 ] Validate the S1G channel width input by user to ensure it matches that of the requested channel Signed-off-by: Kieran Frewen Signed-off-by: Bassem Dawood Link: https://lore.kernel.org/r/20220420041321.3788789-2-kieran.frewen@morsemicro.com Signed-off-by: Johannes Berg Signed-off-by: Sasha Levin --- net/wireless/nl80211.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index 12f44ad4e0d8..283447df5fc6 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -2955,6 +2955,15 @@ int nl80211_parse_chandef(struct cfg80211_registered_device *rdev, } else if (attrs[NL80211_ATTR_CHANNEL_WIDTH]) { chandef->width = nla_get_u32(attrs[NL80211_ATTR_CHANNEL_WIDTH]); + if (chandef->chan->band == NL80211_BAND_S1GHZ) { + /* User input error for channel width doesn't match channel */ + if (chandef->width != ieee80211_s1g_channel_width(chandef->chan)) { + NL_SET_ERR_MSG_ATTR(extack, + attrs[NL80211_ATTR_CHANNEL_WIDTH], + "bad channel width"); + return -EINVAL; + } + } if (attrs[NL80211_ATTR_CENTER_FREQ1]) { chandef->center_freq1 = nla_get_u32(attrs[NL80211_ATTR_CENTER_FREQ1]); From efe580c436f9102b3142de8ba381b7b280cd1912 Mon Sep 17 00:00:00 2001 From: Nicolas Dichtel Date: Wed, 4 May 2022 11:07:39 +0200 Subject: [PATCH 0081/1842] selftests: add ping test with ping_group_range tuned [ Upstream commit e71b7f1f44d3d88c677769c85ef0171caf9fc89f ] The 'ping' utility is able to manage two kind of sockets (raw or icmp), depending on the sysctl ping_group_range. By default, ping_group_range is set to '1 0', which forces ping to use an ip raw socket. Let's replay the ping tests by allowing 'ping' to use the ip icmp socket. After the previous patch, ipv4 tests results are the same with both kinds of socket. For ipv6, there are a lot a new failures (the previous patch fixes only two cases). 
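For illustration only (not part of the selftest), the kind of socket gated by the ping_group_range sysctl; ping opens it when the caller's group is inside the range and falls back to a raw socket otherwise:

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
	/* succeeds only if the caller's group is within net.ipv4.ping_group_range */
	int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);

	if (fd < 0) {
		printf("unprivileged ICMP socket refused: %s\n", strerror(errno));
		return 1;
	}
	printf("unprivileged ICMP socket created, fd=%d\n", fd);
	close(fd);
	return 0;
}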
Signed-off-by: Nicolas Dichtel Reviewed-by: David Ahern Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- tools/testing/selftests/net/fcnal-test.sh | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh index ace976d89125..4a11ea2261cb 100755 --- a/tools/testing/selftests/net/fcnal-test.sh +++ b/tools/testing/selftests/net/fcnal-test.sh @@ -794,10 +794,16 @@ ipv4_ping() setup set_sysctl net.ipv4.raw_l3mdev_accept=1 2>/dev/null ipv4_ping_novrf + setup + set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null + ipv4_ping_novrf log_subsection "With VRF" setup "yes" ipv4_ping_vrf + setup "yes" + set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null + ipv4_ping_vrf } ################################################################################ @@ -2261,10 +2267,16 @@ ipv6_ping() log_subsection "No VRF" setup ipv6_ping_novrf + setup + set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null + ipv6_ping_novrf log_subsection "With VRF" setup "yes" ipv6_ping_vrf + setup "yes" + set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null + ipv6_ping_vrf } ################################################################################ From 3a6dee284fa03fcfe45b869e7a29e46541bde42a Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Fri, 6 May 2022 10:21:38 +0200 Subject: [PATCH 0082/1842] nl80211: fix locking in nl80211_set_tx_bitrate_mask() [ Upstream commit f971e1887fdb3ab500c9bebf4b98f62d49a20655 ] This accesses the wdev's chandef etc., so cannot safely be used without holding the lock. Signed-off-by: Johannes Berg Link: https://lore.kernel.org/r/20220506102136.06b7205419e6.I2a87c05fbd8bc5e565e84d190d4cfd2e92695a90@changeid Signed-off-by: Johannes Berg Signed-off-by: Sasha Levin --- net/wireless/nl80211.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index 283447df5fc6..f8d5f35cfc66 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -11095,18 +11095,23 @@ static int nl80211_set_tx_bitrate_mask(struct sk_buff *skb, struct cfg80211_bitrate_mask mask; struct cfg80211_registered_device *rdev = info->user_ptr[0]; struct net_device *dev = info->user_ptr[1]; + struct wireless_dev *wdev = dev->ieee80211_ptr; int err; if (!rdev->ops->set_bitrate_mask) return -EOPNOTSUPP; + wdev_lock(wdev); err = nl80211_parse_tx_bitrate_mask(info, info->attrs, NL80211_ATTR_TX_RATES, &mask, dev); if (err) - return err; + goto out; - return rdev_set_bitrate_mask(rdev, dev, NULL, &mask); + err = rdev_set_bitrate_mask(rdev, dev, NULL, &mask); +out: + wdev_unlock(wdev); + return err; } static int nl80211_register_mgmt(struct sk_buff *skb, struct genl_info *info) From d4c6e5cebcf5b5d4f90d61074fdbb1cdcb7cdc60 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Fri, 6 May 2022 17:42:50 +0800 Subject: [PATCH 0083/1842] ethernet: tulip: fix missing pci_disable_device() on error in tulip_init_one() [ Upstream commit 51ca86b4c9c7c75f5630fa0dbe5f8f0bd98e3c3e ] Fix the missing pci_disable_device() before return from tulip_init_one() in the error handling case. 
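For context, a hedged sketch of the probe error-unwind pattern this fix restores; example_probe and example_priv are invented names, and the managed pcim_enable_device() used by the following stmmac patch is an alternative that makes the disable automatic:

struct example_priv {
	void __iomem *ioaddr;	/* placeholder private state */
};

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	struct net_device *dev;
	int err;

	err = pci_enable_device(pdev);
	if (err)
		return err;

	dev = alloc_etherdev(sizeof(struct example_priv));
	if (!dev) {
		err = -ENOMEM;
		goto err_disable;	/* every later failure must reach this */
	}

	err = register_netdev(dev);
	if (err)
		goto err_free;

	return 0;

err_free:
	free_netdev(dev);
err_disable:
	pci_disable_device(pdev);
	return err;
}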
Reported-by: Hulk Robot Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220506094250.3630615-1-yangyingliang@huawei.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/dec/tulip/tulip_core.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/dec/tulip/tulip_core.c b/drivers/net/ethernet/dec/tulip/tulip_core.c index e7b0d7de40fd..c22d945a79fd 100644 --- a/drivers/net/ethernet/dec/tulip/tulip_core.c +++ b/drivers/net/ethernet/dec/tulip/tulip_core.c @@ -1396,8 +1396,10 @@ static int tulip_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) /* alloc_etherdev ensures aligned and zeroed private structures */ dev = alloc_etherdev (sizeof (*tp)); - if (!dev) + if (!dev) { + pci_disable_device(pdev); return -ENOMEM; + } SET_NETDEV_DEV(dev, &pdev->dev); if (pci_resource_len (pdev, 0) < tulip_tbl[chip_idx].io_size) { @@ -1774,6 +1776,7 @@ static int tulip_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) err_out_free_netdev: free_netdev (dev); + pci_disable_device(pdev); return -ENODEV; } From 0ae23a1d472ac5c977cdab2f18a82da2ee0e3f4a Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Tue, 10 May 2022 11:13:16 +0800 Subject: [PATCH 0084/1842] net: stmmac: fix missing pci_disable_device() on error in stmmac_pci_probe() [ Upstream commit 0807ce0b010418a191e0e4009803b2d74c3245d5 ] Switch to using pcim_enable_device() to avoid missing pci_disable_device(). Reported-by: Hulk Robot Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220510031316.1780409-1-yangyingliang@huawei.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c index 272cb47af9f2..a7a1227c9b92 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c @@ -175,7 +175,7 @@ static int stmmac_pci_probe(struct pci_dev *pdev, return -ENOMEM; /* Enable pci device */ - ret = pci_enable_device(pdev); + ret = pcim_enable_device(pdev); if (ret) { dev_err(&pdev->dev, "%s: ERROR: failed to enable device\n", __func__); @@ -227,8 +227,6 @@ static void stmmac_pci_remove(struct pci_dev *pdev) pcim_iounmap_regions(pdev, BIT(i)); break; } - - pci_disable_device(pdev); } static int __maybe_unused stmmac_pci_suspend(struct device *dev) From 9b84e83a92cdacd1a768c4de04ec3d9a81b26c12 Mon Sep 17 00:00:00 2001 From: Grant Grundler Date: Mon, 9 May 2022 19:28:23 -0700 Subject: [PATCH 0085/1842] net: atlantic: fix "frag[0] not initialized" [ Upstream commit 62e0ae0f4020250f961cf8d0103a4621be74e077 ] In aq_ring_rx_clean(), if buff->is_eop is not set AND buff->len < AQ_CFG_RX_HDR_SIZE, then hdr_len remains equal to buff->len and skb_add_rx_frag(xxx, *0*, ...) is not called. The loop following this code starts calling skb_add_rx_frag() starting with i=1 and thus frag[0] is never initialized. Since i is initialized to zero at the top of the primary loop, we can just reference and post-increment i instead of hardcoding the 0 when calling skb_add_rx_frag() the first time. Reported-by: Aashay Shringarpure Reported-by: Yi Chou Reported-by: Shervin Oloumi Signed-off-by: Grant Grundler Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/aquantia/atlantic/aq_ring.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c index 72f8751784c3..7cf5a48e9a7d 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c @@ -445,7 +445,7 @@ int aq_ring_rx_clean(struct aq_ring_s *self, ALIGN(hdr_len, sizeof(long))); if (buff->len - hdr_len > 0) { - skb_add_rx_frag(skb, 0, buff->rxdata.page, + skb_add_rx_frag(skb, i++, buff->rxdata.page, buff->rxdata.pg_off + hdr_len, buff->len - hdr_len, AQ_CFG_RX_FRAME_MAX); @@ -454,7 +454,6 @@ int aq_ring_rx_clean(struct aq_ring_s *self, if (!buff->is_eop) { buff_ = buff; - i = 1U; do { next_ = buff_->next; buff_ = &self->buff_ring[next_]; From 9bee8b4275ece3472b2492892708ac6c1efe4fbf Mon Sep 17 00:00:00 2001 From: Grant Grundler Date: Mon, 9 May 2022 19:28:24 -0700 Subject: [PATCH 0086/1842] net: atlantic: reduce scope of is_rsc_complete [ Upstream commit 79784d77ebbd3ec516b7a5ce555d979fb7946202 ] Don't defer handling the err case outside the loop. That's pointless. And since is_rsc_complete is only used inside this loop, declare it inside the loop to reduce it's scope. Signed-off-by: Grant Grundler Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/aquantia/atlantic/aq_ring.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c index 7cf5a48e9a7d..339efdfb1d49 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c @@ -345,7 +345,6 @@ int aq_ring_rx_clean(struct aq_ring_s *self, int budget) { struct net_device *ndev = aq_nic_get_ndev(self->aq_nic); - bool is_rsc_completed = true; int err = 0; for (; (self->sw_head != self->hw_head) && budget; @@ -365,6 +364,8 @@ int aq_ring_rx_clean(struct aq_ring_s *self, if (!buff->is_eop) { buff_ = buff; do { + bool is_rsc_completed = true; + if (buff_->next >= self->size) { err = -EIO; goto err_exit; @@ -376,18 +377,16 @@ int aq_ring_rx_clean(struct aq_ring_s *self, next_, self->hw_head); - if (unlikely(!is_rsc_completed)) - break; + if (unlikely(!is_rsc_completed)) { + err = 0; + goto err_exit; + } buff->is_error |= buff_->is_error; buff->is_cso_err |= buff_->is_cso_err; } while (!buff_->is_eop); - if (!is_rsc_completed) { - err = 0; - goto err_exit; - } if (buff->is_error || (buff->is_lro && buff->is_cso_err)) { buff_ = buff; From cd66ab20a8f84474564a68fffffd37d998f6c340 Mon Sep 17 00:00:00 2001 From: Grant Grundler Date: Mon, 9 May 2022 19:28:25 -0700 Subject: [PATCH 0087/1842] net: atlantic: add check for MAX_SKB_FRAGS [ Upstream commit 6aecbba12b5c90b26dc062af3b9de8c4b3a2f19f ] Enforce that the CPU can not get stuck in an infinite loop. Reported-by: Aashay Shringarpure Reported-by: Yi Chou Reported-by: Shervin Oloumi Signed-off-by: Grant Grundler Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/aquantia/atlantic/aq_ring.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c index 339efdfb1d49..e9c6f1fa0b1a 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c @@ -362,6 +362,7 @@ int aq_ring_rx_clean(struct aq_ring_s *self, continue; if (!buff->is_eop) { + unsigned int frag_cnt = 0U; buff_ = buff; do { bool is_rsc_completed = true; @@ -370,6 +371,8 @@ int aq_ring_rx_clean(struct aq_ring_s *self, err = -EIO; goto err_exit; } + + frag_cnt++; next_ = buff_->next, buff_ = &self->buff_ring[next_]; is_rsc_completed = @@ -377,7 +380,8 @@ int aq_ring_rx_clean(struct aq_ring_s *self, next_, self->hw_head); - if (unlikely(!is_rsc_completed)) { + if (unlikely(!is_rsc_completed) || + frag_cnt > MAX_SKB_FRAGS) { err = 0; goto err_exit; } From 696292b9b5f66c8d3134c514af2c9387e5157d5f Mon Sep 17 00:00:00 2001 From: Grant Grundler Date: Mon, 9 May 2022 19:28:26 -0700 Subject: [PATCH 0088/1842] net: atlantic: verify hw_head_ lies within TX buffer ring [ Upstream commit 2120b7f4d128433ad8c5f503a9584deba0684901 ] Bounds check hw_head index provided by NIC to verify it lies within the TX buffer ring. Reported-by: Aashay Shringarpure Reported-by: Yi Chou Reported-by: Shervin Oloumi Signed-off-by: Grant Grundler Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c index 9f1b15077e7d..45c17c585d74 100644 --- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c +++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c @@ -889,6 +889,13 @@ int hw_atl_b0_hw_ring_tx_head_update(struct aq_hw_s *self, err = -ENXIO; goto err_exit; } + + /* Validate that the new hw_head_ is reasonable. */ + if (hw_head_ >= ring->size) { + err = -ENXIO; + goto err_exit; + } + ring->hw_head = hw_head_; err = aq_hw_err_from_flags(self); From a698bf1f728c95087a60e0b71cd5cc225ec3332f Mon Sep 17 00:00:00 2001 From: Shreyas K K Date: Thu, 12 May 2022 16:31:34 +0530 Subject: [PATCH 0089/1842] arm64: Enable repeat tlbi workaround on KRYO4XX gold CPUs [ Upstream commit 51f559d66527e238f9a5f82027bff499784d4eac ] Add KRYO4XX gold/big cores to the list of CPUs that need the repeat TLBI workaround. Apply this to the affected KRYO4XX cores (rcpe to rfpe). The variant and revision bits are implementation defined and are different from the their Cortex CPU counterparts on which they are based on, i.e., (r0p0 to r3p0) is equivalent to (rcpe to rfpe). Signed-off-by: Shreyas K K Reviewed-by: Sai Prakash Ranjan Link: https://lore.kernel.org/r/20220512110134.12179-1-quic_shrekk@quicinc.com Signed-off-by: Will Deacon Signed-off-by: Sasha Levin --- Documentation/arm64/silicon-errata.rst | 3 +++ arch/arm64/kernel/cpu_errata.c | 2 ++ 2 files changed, 5 insertions(+) diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst index 719510247292..f01eed0ee23a 100644 --- a/Documentation/arm64/silicon-errata.rst +++ b/Documentation/arm64/silicon-errata.rst @@ -160,6 +160,9 @@ stable kernels. +----------------+-----------------+-----------------+-----------------------------+ | Qualcomm Tech. 
| Kryo4xx Silver | N/A | ARM64_ERRATUM_1024718 | +----------------+-----------------+-----------------+-----------------------------+ +| Qualcomm Tech. | Kryo4xx Gold | N/A | ARM64_ERRATUM_1286807 | ++----------------+-----------------+-----------------+-----------------------------+ + +----------------+-----------------+-----------------+-----------------------------+ | Fujitsu | A64FX | E#010001 | FUJITSU_ERRATUM_010001 | +----------------+-----------------+-----------------+-----------------------------+ diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c index 533559c7d2b3..ca42d58e8c82 100644 --- a/arch/arm64/kernel/cpu_errata.c +++ b/arch/arm64/kernel/cpu_errata.c @@ -220,6 +220,8 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = { #ifdef CONFIG_ARM64_ERRATUM_1286807 { ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0), + /* Kryo4xx Gold (rcpe to rfpe) => (r0p0 to r3p0) */ + ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe), }, #endif {}, From d30fdf7d1343da4fc585308eb44cfd2a4276956a Mon Sep 17 00:00:00 2001 From: Marek Vasut Date: Wed, 18 May 2022 14:28:32 -0700 Subject: [PATCH 0090/1842] Input: ili210x - fix reset timing commit e4920d42ce0e9c8aafb7f64b6d9d4ae02161e51e upstream. According to Ilitek "231x & ILI251x Programming Guide" Version: 2.30 "2.1. Power Sequence", "T4 Chip Reset and discharge time" is minimum 10ms and "T2 Chip initial time" is maximum 150ms. Adjust the reset timings such that T4 is 12ms and T2 is 160ms to fit those figures. This prevents sporadic touch controller start up failures when some systems with at least ILI251x controller boot, without this patch the systems sometimes fail to communicate with the touch controller. Fixes: 201f3c803544c ("Input: ili210x - add reset GPIO support") Signed-off-by: Marek Vasut Link: https://lore.kernel.org/r/20220518204901.93534-1-marex@denx.de Cc: stable@vger.kernel.org Signed-off-by: Dmitry Torokhov Signed-off-by: Greg Kroah-Hartman --- drivers/input/touchscreen/ili210x.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c index 30576a5f2f04..f437eefec94a 100644 --- a/drivers/input/touchscreen/ili210x.c +++ b/drivers/input/touchscreen/ili210x.c @@ -420,9 +420,9 @@ static int ili210x_i2c_probe(struct i2c_client *client, if (error) return error; - usleep_range(50, 100); + usleep_range(12000, 15000); gpiod_set_value_cansleep(reset_gpio, 0); - msleep(100); + msleep(160); } priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); From 355141fdbfef75b91973d2d34049accd1c06fd9e Mon Sep 17 00:00:00 2001 From: Jae Hyun Yoo Date: Tue, 29 Mar 2022 10:39:28 -0700 Subject: [PATCH 0091/1842] dt-bindings: pinctrl: aspeed-g6: remove FWQSPID group commit a29c96a4053dc3c1d39353b61089882f81c6b23d upstream. FWQSPID is not a group of FWSPID so remove it. 
Fixes: 7488838f2315 ("dt-bindings: pinctrl: aspeed: Document AST2600 pinmux") Signed-off-by: Jae Hyun Yoo Acked-by: Rob Herring Reviewed-by: Andrew Jeffery Link: https://lore.kernel.org/r/20220329173932.2588289-4-quic_jaehyoo@quicinc.com Signed-off-by: Joel Stanley Signed-off-by: Greg Kroah-Hartman --- .../devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml b/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml index c78ab7e2eee7..fa83c69820f8 100644 --- a/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml +++ b/Documentation/devicetree/bindings/pinctrl/aspeed,ast2600-pinctrl.yaml @@ -58,7 +58,7 @@ patternProperties: $ref: "/schemas/types.yaml#/definitions/string" enum: [ ADC0, ADC1, ADC10, ADC11, ADC12, ADC13, ADC14, ADC15, ADC2, ADC3, ADC4, ADC5, ADC6, ADC7, ADC8, ADC9, BMCINT, EMMCG1, EMMCG4, - EMMCG8, ESPI, ESPIALT, FSI1, FSI2, FWSPIABR, FWSPID, FWQSPID, FWSPIWP, + EMMCG8, ESPI, ESPIALT, FSI1, FSI2, FWSPIABR, FWSPID, FWSPIWP, GPIT0, GPIT1, GPIT2, GPIT3, GPIT4, GPIT5, GPIT6, GPIT7, GPIU0, GPIU1, GPIU2, GPIU3, GPIU4, GPIU5, GPIU6, GPIU7, HVI3C3, HVI3C4, I2C1, I2C10, I2C11, I2C12, I2C13, I2C14, I2C15, I2C16, I2C2, I2C3, I2C4, I2C5, From 030de84d453a8ec3d99294246f8c07d845bb3828 Mon Sep 17 00:00:00 2001 From: Jessica Yu Date: Tue, 23 Mar 2021 13:15:41 +0100 Subject: [PATCH 0092/1842] module: treat exit sections the same as init sections when !CONFIG_MODULE_UNLOAD commit 33121347fb1c359bd6e3e680b9f2c6ced5734a81 upstream. Dynamic code patching (alternatives, jump_label and static_call) can have sites in __exit code, even it __exit is never executed. Therefore __exit must be present at runtime, at least for as long as __init code is. Additionally, for jump_label and static_call, the __exit sites must also identify as within_module_init(), such that the infrastructure is aware to never touch them after module init -- alternatives are only ran once at init and hence don't have this particular constraint. By making __exit identify as __init for MODULE_UNLOAD, the above is satisfied. So, when !CONFIG_MODULE_UNLOAD, the section ordering should look like the following, with the .exit sections moved to the init region of the module. Core section allocation order: .text .rodata __ksymtab_gpl __ksymtab_strings .note.* sections .bss .data .gnu.linkonce.this_module Init section allocation order: .init.text .exit.text .symtab .strtab [jeyu: thanks to Peter Zijlstra for most of changelog] Link: https://lore.kernel.org/lkml/YFiuphGw0RKehWsQ@gunter/ Link: https://lore.kernel.org/r/20210323142756.11443-1-jeyu@kernel.org Acked-by: Peter Zijlstra (Intel) Signed-off-by: Jessica Yu Cc: Joerg Vehlow Signed-off-by: Greg Kroah-Hartman --- kernel/module.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/kernel/module.c b/kernel/module.c index 5f4403198f04..569ad650e8b8 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -2861,7 +2861,11 @@ void * __weak module_alloc(unsigned long size) bool __weak module_init_section(const char *name) { +#ifndef CONFIG_MODULE_UNLOAD + return strstarts(name, ".init") || module_exit_section(name); +#else return strstarts(name, ".init"); +#endif } bool __weak module_exit_section(const char *name) @@ -3171,11 +3175,6 @@ static int rewrite_section_headers(struct load_info *info, int flags) temporary image. 
*/ shdr->sh_addr = (size_t)info->hdr + shdr->sh_offset; -#ifndef CONFIG_MODULE_UNLOAD - /* Don't load .exit sections */ - if (module_exit_section(info->secstrings+shdr->sh_name)) - shdr->sh_flags &= ~(unsigned long)SHF_ALLOC; -#endif } /* Track but don't keep modinfo and version sections. */ From 606011cb6a69b3f211c366aa10594845e6c5ad8c Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Sat, 14 May 2022 10:31:47 +0800 Subject: [PATCH 0093/1842] i2c: mt7621: fix missing clk_disable_unprepare() on error in mtk_i2c_probe() [ Upstream commit a2537c98a8a3b57002e54a262d180b9490bc7190 ] Fix the missing clk_disable_unprepare() before return from mtk_i2c_probe() in the error handling case. Fixes: d04913ec5f89 ("i2c: mt7621: Add MediaTek MT7621/7628/7688 I2C driver") Signed-off-by: Yang Yingliang Reviewed-by: Stefan Roese Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-mt7621.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/drivers/i2c/busses/i2c-mt7621.c b/drivers/i2c/busses/i2c-mt7621.c index 45fe4a7fe0c0..901f0fb04fee 100644 --- a/drivers/i2c/busses/i2c-mt7621.c +++ b/drivers/i2c/busses/i2c-mt7621.c @@ -304,7 +304,8 @@ static int mtk_i2c_probe(struct platform_device *pdev) if (i2c->bus_freq == 0) { dev_warn(i2c->dev, "clock-frequency 0 not supported\n"); - return -EINVAL; + ret = -EINVAL; + goto err_disable_clk; } adap = &i2c->adap; @@ -322,10 +323,15 @@ static int mtk_i2c_probe(struct platform_device *pdev) ret = i2c_add_adapter(adap); if (ret < 0) - return ret; + goto err_disable_clk; dev_info(&pdev->dev, "clock %u kHz\n", i2c->bus_freq / 1000); + return 0; + +err_disable_clk: + clk_disable_unprepare(i2c->clk); + return ret; } From 61a4cc41e5c1b77d05a12798f8032050aa75f3c8 Mon Sep 17 00:00:00 2001 From: David Howells Date: Sat, 21 May 2022 08:18:28 +0100 Subject: [PATCH 0094/1842] afs: Fix afs_getattr() to refetch file status if callback break occurred [ Upstream commit 2aeb8c86d49967552394d5e723f87454cb53f501 ] If a callback break occurs (change notification), afs_getattr() needs to issue an FS.FetchStatus RPC operation to update the status of the file being examined by the stat-family of system calls. Fix afs_getattr() to do this if AFS_VNODE_CB_PROMISED has been cleared on a vnode by a callback break. Skip this if AT_STATX_DONT_SYNC is set. This can be tested by appending to a file on one AFS client and then using "stat -L" to examine its length on a machine running kafs. This can also be watched through tracing on the kafs machine. The callback break is seen: kworker/1:1-46 [001] ..... 978.910812: afs_cb_call: c=0000005f YFSCB.CallBack kworker/1:1-46 [001] ...1. 978.910829: afs_cb_break: 100058:23b4c:242d2c2 b=2 s=1 break-cb kworker/1:1-46 [001] ..... 978.911062: afs_call_done: c=0000005f ret=0 ab=0 [0000000082994ead] And then the stat command generated no traffic if unpatched, but with this change a call to fetch the status can be observed: stat-4471 [000] ..... 986.744122: afs_make_fs_call: c=000000ab 100058:023b4c:242d2c2 YFS.FetchStatus stat-4471 [000] ..... 
986.745578: afs_call_done: c=000000ab ret=0 ab=0 [0000000087fc8c84] Fixes: 08e0e7c82eea ("[AF_RXRPC]: Make the in-kernel AFS filesystem use AF_RXRPC.") Reported-by: Markus Suvanto Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org Tested-by: Markus Suvanto Tested-by: kafs-testing+fedora34_64checkkafs-build-496@auristor.com Link: https://bugzilla.kernel.org/show_bug.cgi?id=216010 Link: https://lore.kernel.org/r/165308359800.162686.14122417881564420962.stgit@warthog.procyon.org.uk/ # v1 Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- fs/afs/inode.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/fs/afs/inode.c b/fs/afs/inode.c index f81a972bdd29..7e7a9454bcb9 100644 --- a/fs/afs/inode.c +++ b/fs/afs/inode.c @@ -729,10 +729,22 @@ int afs_getattr(const struct path *path, struct kstat *stat, { struct inode *inode = d_inode(path->dentry); struct afs_vnode *vnode = AFS_FS_I(inode); - int seq = 0; + struct key *key; + int ret, seq = 0; _enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation); + if (!(query_flags & AT_STATX_DONT_SYNC) && + !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) { + key = afs_request_key(vnode->volume->cell); + if (IS_ERR(key)) + return PTR_ERR(key); + ret = afs_validate(vnode, key); + key_put(key); + if (ret < 0) + return ret; + } + do { read_seqbegin_or_lock(&vnode->cb_lock, &seq); generic_fillattr(inode, stat); From 633be494c3ca66a6c36e61100ffa0bd3d2923954 Mon Sep 17 00:00:00 2001 From: Eugene Syromiatnikov Date: Sun, 12 Sep 2021 14:22:34 +0200 Subject: [PATCH 0095/1842] include/uapi/linux/xfrm.h: Fix XFRM_MSG_MAPPING ABI breakage commit 844f7eaaed9267ae17d33778efe65548cc940205 upstream. Commit 2d151d39073a ("xfrm: Add possibility to set the default to block if we have no policy") broke ABI by changing the value of the XFRM_MSG_MAPPING enum item, thus also evading the build-time check in security/selinux/nlmsgtab.c:selinux_nlmsg_lookup for presence of proper security permission checks in nlmsg_xfrm_perms. Fix it by placing XFRM_MSG_SETDEFAULT/XFRM_MSG_GETDEFAULT to the end of the enum, right before __XFRM_MSG_MAX, and updating the nlmsg_xfrm_perms accordingly. 
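A simplified sketch of the numbering problem (the enums and values below are made up and much smaller than the real XFRM_MSG_* range) shows why new uapi message types may only be appended just before the __*_MAX sentinel:

/* Hypothetical enums, for illustration only. */
enum old_abi {
        OLD_MSG_GETSPDINFO,     /* 0 */
        OLD_MSG_MAPPING,        /* 1 -- value userspace was compiled against */
        __OLD_MSG_MAX
};

enum broken_abi {               /* new entries inserted in the middle */
        BRK_MSG_GETSPDINFO,     /* 0 */
        BRK_MSG_SETDEFAULT,     /* 1 -- silently takes MAPPING's old value */
        BRK_MSG_GETDEFAULT,     /* 2 */
        BRK_MSG_MAPPING,        /* 3 -- old binaries still send/expect 1   */
        __BRK_MSG_MAX
};

enum fixed_abi {                /* new entries appended at the end */
        FIX_MSG_GETSPDINFO,     /* 0 */
        FIX_MSG_MAPPING,        /* 1 -- unchanged                          */
        FIX_MSG_SETDEFAULT,     /* 2 */
        FIX_MSG_GETDEFAULT,     /* 3 */
        __FIX_MSG_MAX
};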
Fixes: 2d151d39073a ("xfrm: Add possibility to set the default to block if we have no policy") References: https://lore.kernel.org/netdev/20210901151402.GA2557@altlinux.org/ Signed-off-by: Eugene Syromiatnikov Acked-by: Antony Antony Acked-by: Nicolas Dichtel Signed-off-by: Steffen Klassert Signed-off-by: Greg Kroah-Hartman --- include/uapi/linux/xfrm.h | 6 +++--- security/selinux/nlmsgtab.c | 4 +++- 2 files changed, 6 insertions(+), 4 deletions(-) diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index 6bae68645148..65e13a099b1a 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -213,13 +213,13 @@ enum { XFRM_MSG_GETSPDINFO, #define XFRM_MSG_GETSPDINFO XFRM_MSG_GETSPDINFO + XFRM_MSG_MAPPING, +#define XFRM_MSG_MAPPING XFRM_MSG_MAPPING + XFRM_MSG_SETDEFAULT, #define XFRM_MSG_SETDEFAULT XFRM_MSG_SETDEFAULT XFRM_MSG_GETDEFAULT, #define XFRM_MSG_GETDEFAULT XFRM_MSG_GETDEFAULT - - XFRM_MSG_MAPPING, -#define XFRM_MSG_MAPPING XFRM_MSG_MAPPING __XFRM_MSG_MAX }; #define XFRM_MSG_MAX (__XFRM_MSG_MAX - 1) diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c index b69231918686..c4fb57e90b6a 100644 --- a/security/selinux/nlmsgtab.c +++ b/security/selinux/nlmsgtab.c @@ -123,6 +123,8 @@ static const struct nlmsg_perm nlmsg_xfrm_perms[] = { XFRM_MSG_NEWSPDINFO, NETLINK_XFRM_SOCKET__NLMSG_WRITE }, { XFRM_MSG_GETSPDINFO, NETLINK_XFRM_SOCKET__NLMSG_READ }, { XFRM_MSG_MAPPING, NETLINK_XFRM_SOCKET__NLMSG_READ }, + { XFRM_MSG_SETDEFAULT, NETLINK_XFRM_SOCKET__NLMSG_WRITE }, + { XFRM_MSG_GETDEFAULT, NETLINK_XFRM_SOCKET__NLMSG_READ }, }; static const struct nlmsg_perm nlmsg_audit_perms[] = @@ -186,7 +188,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm) * structures at the top of this file with the new mappings * before updating the BUILD_BUG_ON() macro! */ - BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING); + BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_GETDEFAULT); err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms, sizeof(nlmsg_xfrm_perms)); break; From 56642f6af2ab3f9a4c77cc5e97ba61b9b46aa92d Mon Sep 17 00:00:00 2001 From: Jessica Yu Date: Wed, 12 May 2021 15:45:46 +0200 Subject: [PATCH 0096/1842] module: check for exit sections in layout_sections() instead of module_init_section() commit 055f23b74b20f2824ce33047b4cf2e2aa856bf3b upstream. Previously, when CONFIG_MODULE_UNLOAD=n, the module loader just does not attempt to load exit sections since it never expects that any code in those sections will ever execute. However, dynamic code patching (alternatives, jump_label and static_call) can have sites in __exit code, even if __exit is never executed. Therefore __exit must be present at runtime, at least for as long as __init code is. Commit 33121347fb1c ("module: treat exit sections the same as init sections when !CONFIG_MODULE_UNLOAD") solves the requirements of jump_labels and static_calls by putting the exit sections in the init region of the module so that they are at least present at init, and discarded afterwards. It does this by including a check for exit sections in module_init_section(), so that it also returns true for exit sections, and the module loader will automatically sort them in the init region of the module. However, the solution there was not completely arch-independent. ARM is a special case where it supplies its own module_{init, exit}_section() functions. 
Instead of pushing the exit section checks into module_init_section(), just implement the exit section check in layout_sections(), so that we don't have to touch arch-dependent code. Fixes: 33121347fb1c ("module: treat exit sections the same as init sections when !CONFIG_MODULE_UNLOAD") Reviewed-by: Russell King (Oracle) Signed-off-by: Jessica Yu Signed-off-by: Greg Kroah-Hartman --- kernel/module.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/kernel/module.c b/kernel/module.c index 569ad650e8b8..6a0fd245c048 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -2280,6 +2280,15 @@ void *__symbol_get(const char *symbol) } EXPORT_SYMBOL_GPL(__symbol_get); +static bool module_init_layout_section(const char *sname) +{ +#ifndef CONFIG_MODULE_UNLOAD + if (module_exit_section(sname)) + return true; +#endif + return module_init_section(sname); +} + /* * Ensure that an exported symbol [global namespace] does not already exist * in the kernel or in some other module's exported symbol table. @@ -2489,7 +2498,7 @@ static void layout_sections(struct module *mod, struct load_info *info) if ((s->sh_flags & masks[m][0]) != masks[m][0] || (s->sh_flags & masks[m][1]) || s->sh_entsize != ~0UL - || module_init_section(sname)) + || module_init_layout_section(sname)) continue; s->sh_entsize = get_offset(mod, &mod->core_layout.size, s, i); pr_debug("\t%s\n", sname); @@ -2522,7 +2531,7 @@ static void layout_sections(struct module *mod, struct load_info *info) if ((s->sh_flags & masks[m][0]) != masks[m][0] || (s->sh_flags & masks[m][1]) || s->sh_entsize != ~0UL - || !module_init_section(sname)) + || !module_init_layout_section(sname)) continue; s->sh_entsize = (get_offset(mod, &mod->init_layout.size, s, i) | INIT_OFFSET_MASK); @@ -2861,11 +2870,7 @@ void * __weak module_alloc(unsigned long size) bool __weak module_init_section(const char *name) { -#ifndef CONFIG_MODULE_UNLOAD - return strstarts(name, ".init") || module_exit_section(name); -#else return strstarts(name, ".init"); -#endif } bool __weak module_exit_section(const char *name) From c204ee3350ebbc4e2ab108cbce7afc0cac1c407d Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Wed, 25 May 2022 09:18:02 +0200 Subject: [PATCH 0097/1842] Linux 5.10.118 Link: https://lore.kernel.org/r/20220523165812.244140613@linuxfoundation.org Tested-by: Florian Fainelli Tested-by: Shuah Khan Tested-by: Fox Chen Tested-by: Salvatore Bonaccorso Tested-by: Sudip Mukherjee Tested-by: Pavel Machek (CIP) Tested-by: Hulk Robot Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 8f611d79d5e1..f9210e43121d 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 117 +SUBLEVEL = 118 EXTRAVERSION = NAME = Dare mighty things From a8f4d63142f947cd22fa615b8b3b8921cdaf4991 Mon Sep 17 00:00:00 2001 From: Daniel Thompson Date: Mon, 23 May 2022 19:11:02 +0100 Subject: [PATCH 0098/1842] lockdown: also lock down previous kgdb use commit eadb2f47a3ced5c64b23b90fd2a3463f63726066 upstream. KGDB and KDB allow read and write access to kernel memory, and thus should be restricted during lockdown. An attacker with access to a serial port (for example, via a hypervisor console, which some cloud vendors provide over the network) could trigger the debugger so it is important that the debugger respect the lockdown mode when/if it is triggered. 
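For context, consulting the lockdown LSM from debugger code looks roughly like the sketch below; the helper name is a placeholder rather than anything in the kernel, and the actual patch folds this decision into kdb's existing permission flags once per debugger entry instead of checking on every access.

#include <linux/errno.h>
#include <linux/security.h>
#include <linux/types.h>

/* Sketch only: gate a debugger write on the lockdown state. */
static int dbg_write_kernel_mem(void *dst, const void *src, size_t len)
{
        int ret = security_locked_down(LOCKDOWN_DBG_WRITE_KERNEL);

        if (ret)
                return ret;     /* integrity lockdown is active */

        /* ...perform the fault-tolerant copy here (e.g. via the
         * kernel's nofault copy helpers)...
         */
        return 0;
}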
Fix this by integrating lockdown into kdb's existing permissions mechanism. Unfortunately kgdb does not have any permissions mechanism (although it certainly could be added later) so, for now, kgdb is simply and brutally disabled by immediately exiting the gdb stub without taking any action. For lockdowns established early in the boot (e.g. the normal case) then this should be fine but on systems where kgdb has set breakpoints before the lockdown is enacted than "bad things" will happen. CVE: CVE-2022-21499 Co-developed-by: Stephen Brennan Signed-off-by: Stephen Brennan Reviewed-by: Douglas Anderson Signed-off-by: Daniel Thompson Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- include/linux/security.h | 2 ++ kernel/debug/debug_core.c | 24 ++++++++++++++ kernel/debug/kdb/kdb_main.c | 62 +++++++++++++++++++++++++++++++++++-- security/security.c | 2 ++ 4 files changed, 87 insertions(+), 3 deletions(-) diff --git a/include/linux/security.h b/include/linux/security.h index 35355429648e..330029ef7e89 100644 --- a/include/linux/security.h +++ b/include/linux/security.h @@ -121,10 +121,12 @@ enum lockdown_reason { LOCKDOWN_DEBUGFS, LOCKDOWN_XMON_WR, LOCKDOWN_BPF_WRITE_USER, + LOCKDOWN_DBG_WRITE_KERNEL, LOCKDOWN_INTEGRITY_MAX, LOCKDOWN_KCORE, LOCKDOWN_KPROBES, LOCKDOWN_BPF_READ, + LOCKDOWN_DBG_READ_KERNEL, LOCKDOWN_PERF, LOCKDOWN_TRACEFS, LOCKDOWN_XMON_RW, diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c index 8661eb2b1771..0f31b22abe8d 100644 --- a/kernel/debug/debug_core.c +++ b/kernel/debug/debug_core.c @@ -56,6 +56,7 @@ #include #include #include +#include #include #include @@ -756,6 +757,29 @@ static int kgdb_cpu_enter(struct kgdb_state *ks, struct pt_regs *regs, continue; kgdb_connected = 0; } else { + /* + * This is a brutal way to interfere with the debugger + * and prevent gdb being used to poke at kernel memory. + * This could cause trouble if lockdown is applied when + * there is already an active gdb session. For now the + * answer is simply "don't do that". Typically lockdown + * *will* be applied before the debug core gets started + * so only developers using kgdb for fairly advanced + * early kernel debug can be biten by this. Hopefully + * they are sophisticated enough to take care of + * themselves, especially with help from the lockdown + * message printed on the console! + */ + if (security_locked_down(LOCKDOWN_DBG_WRITE_KERNEL)) { + if (IS_ENABLED(CONFIG_KGDB_KDB)) { + /* Switch back to kdb if possible... */ + dbg_kdb_mode = 1; + continue; + } else { + /* ... otherwise just bail */ + break; + } + } error = gdb_serial_stub(ks); } diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c index 930ac1b25ec7..4e09fab52faf 100644 --- a/kernel/debug/kdb/kdb_main.c +++ b/kernel/debug/kdb/kdb_main.c @@ -45,6 +45,7 @@ #include #include #include +#include #include "kdb_private.h" #undef MODULE_PARAM_PREFIX @@ -197,10 +198,62 @@ struct task_struct *kdb_curr_task(int cpu) } /* - * Check whether the flags of the current command and the permissions - * of the kdb console has allow a command to be run. + * Update the permissions flags (kdb_cmd_enabled) to match the + * current lockdown state. + * + * Within this function the calls to security_locked_down() are "lazy". We + * avoid calling them if the current value of kdb_cmd_enabled already excludes + * flags that might be subject to lockdown. 
Additionally we deliberately check + * the lockdown flags independently (even though read lockdown implies write + * lockdown) since that results in both simpler code and clearer messages to + * the user on first-time debugger entry. + * + * The permission masks during a read+write lockdown permits the following + * flags: INSPECT, SIGNAL, REBOOT (and ALWAYS_SAFE). + * + * The INSPECT commands are not blocked during lockdown because they are + * not arbitrary memory reads. INSPECT covers the backtrace family (sometimes + * forcing them to have no arguments) and lsmod. These commands do expose + * some kernel state but do not allow the developer seated at the console to + * choose what state is reported. SIGNAL and REBOOT should not be controversial, + * given these are allowed for root during lockdown already. */ -static inline bool kdb_check_flags(kdb_cmdflags_t flags, int permissions, +static void kdb_check_for_lockdown(void) +{ + const int write_flags = KDB_ENABLE_MEM_WRITE | + KDB_ENABLE_REG_WRITE | + KDB_ENABLE_FLOW_CTRL; + const int read_flags = KDB_ENABLE_MEM_READ | + KDB_ENABLE_REG_READ; + + bool need_to_lockdown_write = false; + bool need_to_lockdown_read = false; + + if (kdb_cmd_enabled & (KDB_ENABLE_ALL | write_flags)) + need_to_lockdown_write = + security_locked_down(LOCKDOWN_DBG_WRITE_KERNEL); + + if (kdb_cmd_enabled & (KDB_ENABLE_ALL | read_flags)) + need_to_lockdown_read = + security_locked_down(LOCKDOWN_DBG_READ_KERNEL); + + /* De-compose KDB_ENABLE_ALL if required */ + if (need_to_lockdown_write || need_to_lockdown_read) + if (kdb_cmd_enabled & KDB_ENABLE_ALL) + kdb_cmd_enabled = KDB_ENABLE_MASK & ~KDB_ENABLE_ALL; + + if (need_to_lockdown_write) + kdb_cmd_enabled &= ~write_flags; + + if (need_to_lockdown_read) + kdb_cmd_enabled &= ~read_flags; +} + +/* + * Check whether the flags of the current command, the permissions of the kdb + * console and the lockdown state allow a command to be run. 
+ */ +static bool kdb_check_flags(kdb_cmdflags_t flags, int permissions, bool no_args) { /* permissions comes from userspace so needs massaging slightly */ @@ -1194,6 +1247,9 @@ static int kdb_local(kdb_reason_t reason, int error, struct pt_regs *regs, kdb_curr_task(raw_smp_processor_id()); KDB_DEBUG_STATE("kdb_local 1", reason); + + kdb_check_for_lockdown(); + kdb_go_count = 0; if (reason == KDB_REASON_DEBUG) { /* special case below */ diff --git a/security/security.c b/security/security.c index d9d42d64f89f..360706cdabab 100644 --- a/security/security.c +++ b/security/security.c @@ -59,10 +59,12 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX+1] = { [LOCKDOWN_DEBUGFS] = "debugfs access", [LOCKDOWN_XMON_WR] = "xmon write access", [LOCKDOWN_BPF_WRITE_USER] = "use of bpf to write user RAM", + [LOCKDOWN_DBG_WRITE_KERNEL] = "use of kgdb/kdb to write kernel RAM", [LOCKDOWN_INTEGRITY_MAX] = "integrity", [LOCKDOWN_KCORE] = "/proc/kcore access", [LOCKDOWN_KPROBES] = "use of kprobes", [LOCKDOWN_BPF_READ] = "use of bpf to read kernel RAM", + [LOCKDOWN_DBG_READ_KERNEL] = "use of kgdb/kdb to read kernel RAM", [LOCKDOWN_PERF] = "unsafe use of perf", [LOCKDOWN_TRACEFS] = "use of tracefs", [LOCKDOWN_XMON_RW] = "xmon read and write access", From c06e5f751a08a05d8e7e14f77a3d1ba783641b26 Mon Sep 17 00:00:00 2001 From: "Denis Efremov (Oracle)" Date: Fri, 20 May 2022 07:57:30 +0400 Subject: [PATCH 0099/1842] staging: rtl8723bs: prevent ->Ssid overflow in rtw_wx_set_scan() This code has a check to prevent read overflow but it needs another check to prevent writing beyond the end of the ->Ssid[] array. Fixes: 554c0a3abf21 ("staging: Add rtl8723bs sdio wifi driver") Cc: stable Signed-off-by: Denis Efremov (Oracle) Signed-off-by: Greg Kroah-Hartman --- drivers/staging/rtl8723bs/os_dep/ioctl_linux.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c index 902ac8169948..083ff72976cf 100644 --- a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c +++ b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c @@ -1351,9 +1351,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a, sec_len = *(pos++); len -= 1; - if (sec_len > 0 && sec_len <= len) { + if (sec_len > 0 && + sec_len <= len && + sec_len <= 32) { ssid[ssid_index].SsidLength = sec_len; - memcpy(ssid[ssid_index].Ssid, pos, ssid[ssid_index].SsidLength); + memcpy(ssid[ssid_index].Ssid, pos, sec_len); /* DBG_871X("%s COMBO_SCAN with specific ssid:%s, %d\n", __func__ */ /* , ssid[ssid_index].Ssid, ssid[ssid_index].SsidLength); */ ssid_index++; From 74c6e5d584354c6126f1231667f9d8e85d7f536f Mon Sep 17 00:00:00 2001 From: Vitaly Kuznetsov Date: Thu, 22 Apr 2021 11:29:48 +0200 Subject: [PATCH 0100/1842] KVM: x86: Properly handle APF vs disabled LAPIC situation commit 2f15d027c05fac406decdb5eceb9ec0902b68f53 upstream. Async PF 'page ready' event may happen when LAPIC is (temporary) disabled. In particular, Sebastien reports that when Linux kernel is directly booted by Cloud Hypervisor, LAPIC is 'software disabled' when APF mechanism is initialized. On initialization KVM tries to inject 'wakeup all' event and puts the corresponding token to the slot. It is, however, failing to inject an interrupt (kvm_apic_set_irq() -> __apic_accept_irq() -> !apic_enabled()) so the guest never gets notified and the whole APF mechanism gets stuck. 
The same issue is likely to happen if the guest temporary disables LAPIC and a previously unavailable page becomes available. Do two things to resolve the issue: - Avoid dequeuing 'page ready' events from APF queue when LAPIC is disabled. - Trigger an attempt to deliver pending 'page ready' events when LAPIC becomes enabled (SPIV or MSR_IA32_APICBASE). Reported-by: Sebastien Boeuf Signed-off-by: Vitaly Kuznetsov Message-Id: <20210422092948.568327-1-vkuznets@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini [Guoqing: backport to 5.10-stable ] Signed-off-by: Guoqing Jiang Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/lapic.c | 6 ++++++ arch/x86/kvm/x86.c | 2 +- 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c index a3ef793fce5f..6ed6b090be94 100644 --- a/arch/x86/kvm/lapic.c +++ b/arch/x86/kvm/lapic.c @@ -297,6 +297,10 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val) atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); } + + /* Check if there are APF page ready requests pending */ + if (enabled) + kvm_make_request(KVM_REQ_APF_READY, apic->vcpu); } static inline void kvm_apic_set_xapic_id(struct kvm_lapic *apic, u8 id) @@ -2260,6 +2264,8 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value) if (value & MSR_IA32_APICBASE_ENABLE) { kvm_apic_set_xapic_id(apic, vcpu->vcpu_id); static_key_slow_dec_deferred(&apic_hw_disabled); + /* Check if there are APF page ready requests pending */ + kvm_make_request(KVM_REQ_APF_READY, vcpu); } else { static_key_slow_inc(&apic_hw_disabled.key); atomic_set_release(&apic->vcpu->kvm->arch.apic_map_dirty, DIRTY); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 4588f73bf59a..ae18062c26a6 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -11146,7 +11146,7 @@ bool kvm_arch_can_dequeue_async_page_present(struct kvm_vcpu *vcpu) if (!kvm_pv_async_pf_enabled(vcpu)) return true; else - return apf_pageready_slot_free(vcpu); + return kvm_lapic_enabled(vcpu) && apf_pageready_slot_free(vcpu); } void kvm_arch_start_assignment(struct kvm *kvm) From 9b4aa0d80b18b9d19e62dd47d22e274ce92cdc95 Mon Sep 17 00:00:00 2001 From: Paolo Bonzini Date: Fri, 20 May 2022 13:48:11 -0400 Subject: [PATCH 0101/1842] KVM: x86/mmu: fix NULL pointer dereference on guest INVPCID commit 9f46c187e2e680ecd9de7983e4d081c3391acc76 upstream. With shadow paging enabled, the INVPCID instruction results in a call to kvm_mmu_invpcid_gva. If INVPCID is executed with CR0.PG=0, the invlpg callback is not set and the result is a NULL pointer dereference. Fix it trivially by checking for mmu->invlpg before every call. There are other possibilities: - check for CR0.PG, because KVM (like all Intel processors after P5) flushes guest TLB on CR0.PG changes so that INVPCID/INVLPG are a nop with paging disabled - check for EFER.LMA, because KVM syncs and flushes when switching MMU contexts outside of 64-bit mode All of these are tricky, go for the simple solution. This is CVE-2022-1789. 
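The defensive pattern the fix applies is the usual one for optional ops callbacks; a generic sketch (the types are illustrative, not KVM's real structures) is:

/* Illustrative only: an optional callback must be checked before each
 * indirect call, because some configurations leave it NULL (here,
 * shadow paging with CR0.PG=0 never installs ->invlpg).
 */
struct demo_mmu_ops {
        void (*invlpg)(unsigned long gva);      /* may be NULL */
};

static void demo_flush_gva(const struct demo_mmu_ops *ops, unsigned long gva)
{
        if (ops->invlpg)
                ops->invlpg(gva);
        /* the caller still requests a TLB flush unconditionally */
}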
Reported-by: Yongkang Jia Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini [fix conflict due to missing b9e5603c2a3accbadfec570ac501a54431a6bdba] Signed-off-by: Vegard Nossum Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/mmu/mmu.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 306268f90455..6096d0f1a62a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -5178,14 +5178,16 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu *vcpu, gva_t gva, unsigned long pcid) uint i; if (pcid == kvm_get_active_pcid(vcpu)) { - mmu->invlpg(vcpu, gva, mmu->root_hpa); + if (mmu->invlpg) + mmu->invlpg(vcpu, gva, mmu->root_hpa); tlb_flush = true; } for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) { if (VALID_PAGE(mmu->prev_roots[i].hpa) && pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd)) { - mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa); + if (mmu->invlpg) + mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa); tlb_flush = true; } } From 33f1b4a27abced7ae0f740d2ec3040debf7c4b3c Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Tue, 9 Feb 2021 11:20:27 -0800 Subject: [PATCH 0102/1842] tcp: change source port randomizarion at connect() time commit 190cc82489f46f9d88e73c81a47e14f80a791e1a upstream. RFC 6056 (Recommendations for Transport-Protocol Port Randomization) provides good summary of why source selection needs extra care. David Dworken reminded us that linux implements Algorithm 3 as described in RFC 6056 3.3.3 Quoting David : In the context of the web, this creates an interesting info leak where websites can count how many TCP connections a user's computer is establishing over time. For example, this allows a website to count exactly how many subresources a third party website loaded. This also allows: - Distinguishing between different users behind a VPN based on distinct source port ranges. - Tracking users over time across multiple networks. - Covert communication channels between different browsers/browser profiles running on the same computer - Tracking what applications are running on a computer based on the pattern of how fast source ports are getting incremented. Section 3.3.4 describes an enhancement, that reduces attackers ability to use the basic information currently stored into the shared 'u32 hint'. This change also decreases collision rate when multiple applications need to connect() to different destinations. Signed-off-by: Eric Dumazet Reported-by: David Dworken Cc: Willem de Bruijn Signed-off-by: David S. Miller Signed-off-by: Stefan Ghinea Signed-off-by: Greg Kroah-Hartman --- net/ipv4/inet_hashtables.c | 20 +++++++++++++++++--- 1 file changed, 17 insertions(+), 3 deletions(-) diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 915b8e1bd9ef..3beaf9e84cf2 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -722,6 +722,17 @@ void inet_unhash(struct sock *sk) } EXPORT_SYMBOL_GPL(inet_unhash); +/* RFC 6056 3.3.4. Algorithm 4: Double-Hash Port Selection Algorithm + * Note that we use 32bit integers (vs RFC 'short integers') + * because 2^16 is not a multiple of num_ephemeral and this + * property might be used by clever attacker. + * RFC claims using TABLE_LENGTH=10 buckets gives an improvement, + * we use 256 instead to really give more isolation and + * privacy, this only consumes 1 KB of kernel memory. 
+ */ +#define INET_TABLE_PERTURB_SHIFT 8 +static u32 table_perturb[1 << INET_TABLE_PERTURB_SHIFT]; + int __inet_hash_connect(struct inet_timewait_death_row *death_row, struct sock *sk, u32 port_offset, int (*check_established)(struct inet_timewait_death_row *, @@ -735,8 +746,8 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, struct inet_bind_bucket *tb; u32 remaining, offset; int ret, i, low, high; - static u32 hint; int l3mdev; + u32 index; if (port) { head = &hinfo->bhash[inet_bhashfn(net, port, @@ -763,7 +774,10 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, if (likely(remaining > 1)) remaining &= ~1U; - offset = (hint + port_offset) % remaining; + net_get_random_once(table_perturb, sizeof(table_perturb)); + index = hash_32(port_offset, INET_TABLE_PERTURB_SHIFT); + + offset = (READ_ONCE(table_perturb[index]) + port_offset) % remaining; /* In first pass we try ports of @low parity. * inet_csk_get_port() does the opposite choice. */ @@ -817,7 +831,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, return -EADDRNOTAVAIL; ok: - hint += i + 2; + WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2); /* Head lock still held and bh's disabled */ inet_bind_hash(sk, tb, port); From a5c68f457fbf52c5564ca4eea03f84776ef14e41 Mon Sep 17 00:00:00 2001 From: Willy Tarreau Date: Mon, 2 May 2022 10:46:08 +0200 Subject: [PATCH 0103/1842] secure_seq: use the 64 bits of the siphash for port offset calculation commit b2d057560b8107c633b39aabe517ff9d93f285e3 upstream. SipHash replaced MD5 in secure_ipv{4,6}_port_ephemeral() via commit 7cd23e5300c1 ("secure_seq: use SipHash in place of MD5"), but the output remained truncated to 32-bit only. In order to exploit more bits from the hash, let's make the functions return the full 64-bit of siphash_3u32(). We also make sure the port offset calculation in __inet_hash_connect() remains done on 32-bit to avoid the need for div_u64_rem() and an extra cost on 32-bit systems. Cc: Jason A. 
Donenfeld Cc: Moshe Kol Cc: Yossi Gilad Cc: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski [SG: Adjusted context] Signed-off-by: Stefan Ghinea Signed-off-by: Greg Kroah-Hartman --- include/net/inet_hashtables.h | 2 +- include/net/secure_seq.h | 4 ++-- net/core/secure_seq.c | 4 ++-- net/ipv4/inet_hashtables.c | 10 ++++++---- net/ipv6/inet6_hashtables.c | 4 ++-- 5 files changed, 13 insertions(+), 11 deletions(-) diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h index ca6a3ea9057e..d4d611064a76 100644 --- a/include/net/inet_hashtables.h +++ b/include/net/inet_hashtables.h @@ -419,7 +419,7 @@ static inline void sk_rcv_saddr_set(struct sock *sk, __be32 addr) } int __inet_hash_connect(struct inet_timewait_death_row *death_row, - struct sock *sk, u32 port_offset, + struct sock *sk, u64 port_offset, int (*check_established)(struct inet_timewait_death_row *, struct sock *, __u16, struct inet_timewait_sock **)); diff --git a/include/net/secure_seq.h b/include/net/secure_seq.h index d7d2495f83c2..dac91aa38c5a 100644 --- a/include/net/secure_seq.h +++ b/include/net/secure_seq.h @@ -4,8 +4,8 @@ #include -u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport); -u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr, +u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport); +u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr, __be16 dport); u32 secure_tcp_seq(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport); diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c index b8a33c841846..7131cd1fb2ad 100644 --- a/net/core/secure_seq.c +++ b/net/core/secure_seq.c @@ -96,7 +96,7 @@ u32 secure_tcpv6_seq(const __be32 *saddr, const __be32 *daddr, } EXPORT_SYMBOL(secure_tcpv6_seq); -u32 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr, +u64 secure_ipv6_port_ephemeral(const __be32 *saddr, const __be32 *daddr, __be16 dport) { const struct { @@ -146,7 +146,7 @@ u32 secure_tcp_seq(__be32 saddr, __be32 daddr, } EXPORT_SYMBOL_GPL(secure_tcp_seq); -u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport) +u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport) { net_secret_init(); return siphash_4u32((__force u32)saddr, (__force u32)daddr, diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 3beaf9e84cf2..44b524136f95 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -504,7 +504,7 @@ static int __inet_check_established(struct inet_timewait_death_row *death_row, return -EADDRNOTAVAIL; } -static u32 inet_sk_port_offset(const struct sock *sk) +static u64 inet_sk_port_offset(const struct sock *sk) { const struct inet_sock *inet = inet_sk(sk); @@ -734,7 +734,7 @@ EXPORT_SYMBOL_GPL(inet_unhash); static u32 table_perturb[1 << INET_TABLE_PERTURB_SHIFT]; int __inet_hash_connect(struct inet_timewait_death_row *death_row, - struct sock *sk, u32 port_offset, + struct sock *sk, u64 port_offset, int (*check_established)(struct inet_timewait_death_row *, struct sock *, __u16, struct inet_timewait_sock **)) { @@ -777,7 +777,9 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, net_get_random_once(table_perturb, sizeof(table_perturb)); index = hash_32(port_offset, INET_TABLE_PERTURB_SHIFT); - offset = (READ_ONCE(table_perturb[index]) + port_offset) % remaining; + offset = READ_ONCE(table_perturb[index]) + port_offset; + offset %= remaining; + /* In first pass we try 
ports of @low parity. * inet_csk_get_port() does the opposite choice. */ @@ -854,7 +856,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, int inet_hash_connect(struct inet_timewait_death_row *death_row, struct sock *sk) { - u32 port_offset = 0; + u64 port_offset = 0; if (!inet_sk(sk)->inet_num) port_offset = inet_sk_port_offset(sk); diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c index 0a2e7f228391..40203255ed88 100644 --- a/net/ipv6/inet6_hashtables.c +++ b/net/ipv6/inet6_hashtables.c @@ -308,7 +308,7 @@ static int __inet6_check_established(struct inet_timewait_death_row *death_row, return -EADDRNOTAVAIL; } -static u32 inet6_sk_port_offset(const struct sock *sk) +static u64 inet6_sk_port_offset(const struct sock *sk) { const struct inet_sock *inet = inet_sk(sk); @@ -320,7 +320,7 @@ static u32 inet6_sk_port_offset(const struct sock *sk) int inet6_hash_connect(struct inet_timewait_death_row *death_row, struct sock *sk) { - u32 port_offset = 0; + u64 port_offset = 0; if (!inet_sk(sk)->inet_num) port_offset = inet6_sk_port_offset(sk); From ed0e71cc3f1e57b71a004446cd804b366826dbeb Mon Sep 17 00:00:00 2001 From: Sakari Ailus Date: Tue, 10 Nov 2020 00:07:22 +0100 Subject: [PATCH 0104/1842] media: vim2m: Register video device after setting up internals commit cf7f34777a5b4100a3a44ff95f3d949c62892bdd upstream. Prevent NULL (or close to NULL) pointer dereference in various places by registering the video device only when the V4L2 m2m framework has been set up. Fixes: commit 96d8eab5d0a1 ("V4L/DVB: [v5,2/2] v4l: Add a mem-to-mem videobuf framework test device") Signed-off-by: Sakari Ailus Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Mark-PK Tsai Signed-off-by: Greg Kroah-Hartman --- drivers/media/test-drivers/vim2m.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/drivers/media/test-drivers/vim2m.c b/drivers/media/test-drivers/vim2m.c index a776bb8e0e09..331a9053a0ed 100644 --- a/drivers/media/test-drivers/vim2m.c +++ b/drivers/media/test-drivers/vim2m.c @@ -1325,12 +1325,6 @@ static int vim2m_probe(struct platform_device *pdev) vfd->lock = &dev->dev_mutex; vfd->v4l2_dev = &dev->v4l2_dev; - ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0); - if (ret) { - v4l2_err(&dev->v4l2_dev, "Failed to register video device\n"); - goto error_v4l2; - } - video_set_drvdata(vfd, dev); v4l2_info(&dev->v4l2_dev, "Device registered as /dev/video%d\n", vfd->num); @@ -1345,6 +1339,12 @@ static int vim2m_probe(struct platform_device *pdev) goto error_dev; } + ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0); + if (ret) { + v4l2_err(&dev->v4l2_dev, "Failed to register video device\n"); + goto error_m2m; + } + #ifdef CONFIG_MEDIA_CONTROLLER dev->mdev.dev = &pdev->dev; strscpy(dev->mdev.model, "vim2m", sizeof(dev->mdev.model)); @@ -1358,7 +1358,7 @@ static int vim2m_probe(struct platform_device *pdev) MEDIA_ENT_F_PROC_VIDEO_SCALER); if (ret) { v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n"); - goto error_dev; + goto error_v4l2; } ret = media_device_register(&dev->mdev); @@ -1373,11 +1373,13 @@ static int vim2m_probe(struct platform_device *pdev) error_m2m_mc: v4l2_m2m_unregister_media_controller(dev->m2m_dev); #endif -error_dev: +error_v4l2: video_unregister_device(&dev->vfd); /* vim2m_device_release called by video_unregister_device to release various objects */ return ret; -error_v4l2: +error_m2m: + v4l2_m2m_release(dev->m2m_dev); +error_dev: 
v4l2_device_unregister(&dev->v4l2_dev); error_free: kfree(dev); From 0debc69f003bfedab65a08ebd29566e861cdd3cb Mon Sep 17 00:00:00 2001 From: Hans Verkuil Date: Tue, 2 Feb 2021 15:49:23 +0100 Subject: [PATCH 0105/1842] media: vim2m: initialize the media device earlier commit 1a28dce222a6ece725689ad58c0cf4a1b48894f4 upstream. Before the video device node is registered, the v4l2_dev.mdev pointer must be set in order to correctly associate the video device with the media device. Move the initialization of the media device up. Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Mark-PK Tsai Signed-off-by: Greg Kroah-Hartman --- drivers/media/test-drivers/vim2m.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/drivers/media/test-drivers/vim2m.c b/drivers/media/test-drivers/vim2m.c index 331a9053a0ed..a24624353f9e 100644 --- a/drivers/media/test-drivers/vim2m.c +++ b/drivers/media/test-drivers/vim2m.c @@ -1339,12 +1339,6 @@ static int vim2m_probe(struct platform_device *pdev) goto error_dev; } - ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0); - if (ret) { - v4l2_err(&dev->v4l2_dev, "Failed to register video device\n"); - goto error_m2m; - } - #ifdef CONFIG_MEDIA_CONTROLLER dev->mdev.dev = &pdev->dev; strscpy(dev->mdev.model, "vim2m", sizeof(dev->mdev.model)); @@ -1353,7 +1347,15 @@ static int vim2m_probe(struct platform_device *pdev) media_device_init(&dev->mdev); dev->mdev.ops = &m2m_media_ops; dev->v4l2_dev.mdev = &dev->mdev; +#endif + ret = video_register_device(vfd, VFL_TYPE_VIDEO, 0); + if (ret) { + v4l2_err(&dev->v4l2_dev, "Failed to register video device\n"); + goto error_m2m; + } + +#ifdef CONFIG_MEDIA_CONTROLLER ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd, MEDIA_ENT_F_PROC_VIDEO_SCALER); if (ret) { From 14fa2769ea6c8a76c76a092d4a0773f44df3e402 Mon Sep 17 00:00:00 2001 From: Andy Shevchenko Date: Wed, 16 Jun 2021 20:03:32 +0300 Subject: [PATCH 0106/1842] ACPI: sysfs: Make sparse happy about address space in use commit bdd56d7d8931e842775d2e5b93d426a8d1940e33 upstream. Sparse is not happy about address space in use in acpi_data_show(): drivers/acpi/sysfs.c:428:14: warning: incorrect type in assignment (different address spaces) drivers/acpi/sysfs.c:428:14: expected void [noderef] __iomem *base drivers/acpi/sysfs.c:428:14: got void * drivers/acpi/sysfs.c:431:59: warning: incorrect type in argument 4 (different address spaces) drivers/acpi/sysfs.c:431:59: expected void const *from drivers/acpi/sysfs.c:431:59: got void [noderef] __iomem *base drivers/acpi/sysfs.c:433:30: warning: incorrect type in argument 1 (different address spaces) drivers/acpi/sysfs.c:433:30: expected void *logical_address drivers/acpi/sysfs.c:433:30: got void [noderef] __iomem *base Indeed, acpi_os_map_memory() returns a void pointer with dropped specific address space. Hence, we don't need to carry out __iomem in acpi_data_show(). Signed-off-by: Andy Shevchenko Signed-off-by: Rafael J. 
Wysocki Cc: dann frazier Signed-off-by: Greg Kroah-Hartman --- drivers/acpi/sysfs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c index a5cc4f3bb1e3..28425368fc6f 100644 --- a/drivers/acpi/sysfs.c +++ b/drivers/acpi/sysfs.c @@ -438,7 +438,7 @@ static ssize_t acpi_data_show(struct file *filp, struct kobject *kobj, loff_t offset, size_t count) { struct acpi_data_attr *data_attr; - void __iomem *base; + void *base; ssize_t rc; data_attr = container_of(bin_attr, struct acpi_data_attr, attr); From 257fbea15ab1f9efd2185d469649cdf346a38c8d Mon Sep 17 00:00:00 2001 From: Lorenzo Pieralisi Date: Thu, 7 Apr 2022 11:51:20 +0100 Subject: [PATCH 0107/1842] ACPI: sysfs: Fix BERT error region memory mapping commit 1bbc21785b7336619fb6a67f1fff5afdaf229acc upstream. Currently the sysfs interface maps the BERT error region as "memory" (through acpi_os_map_memory()) in order to copy the error records into memory buffers through memory operations (eg memory_read_from_buffer()). The OS system cannot detect whether the BERT error region is part of system RAM or it is "device memory" (eg BMC memory) and therefore it cannot detect which memory attributes the bus to memory support (and corresponding kernel mapping, unless firmware provides the required information). The acpi_os_map_memory() arch backend implementation determines the mapping attributes. On arm64, if the BERT error region is not present in the EFI memory map, the error region is mapped as device-nGnRnE; this triggers alignment faults since memcpy unaligned accesses are not allowed in device-nGnRnE regions. The ACPI sysfs code cannot therefore map by default the BERT error region with memory semantics but should use a safer default. Change the sysfs code to map the BERT error region as MMIO (through acpi_os_map_iomem()) and use the memcpy_fromio() interface to read the error region into the kernel buffer. Link: https://lore.kernel.org/linux-arm-kernel/31ffe8fc-f5ee-2858-26c5-0fd8bdd68702@arm.com Link: https://lore.kernel.org/linux-acpi/CAJZ5v0g+OVbhuUUDrLUCfX_mVqY_e8ubgLTU98=jfjTeb4t+Pw@mail.gmail.com Signed-off-by: Lorenzo Pieralisi Tested-by: Veronika Kabatova Tested-by: Aristeu Rozanski Acked-by: Ard Biesheuvel Signed-off-by: Rafael J. 
Wysocki Cc: dann frazier Signed-off-by: Greg Kroah-Hartman --- drivers/acpi/sysfs.c | 25 ++++++++++++++++++------- 1 file changed, 18 insertions(+), 7 deletions(-) diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c index 28425368fc6f..1d94c4625f36 100644 --- a/drivers/acpi/sysfs.c +++ b/drivers/acpi/sysfs.c @@ -438,19 +438,30 @@ static ssize_t acpi_data_show(struct file *filp, struct kobject *kobj, loff_t offset, size_t count) { struct acpi_data_attr *data_attr; - void *base; - ssize_t rc; + void __iomem *base; + ssize_t size; data_attr = container_of(bin_attr, struct acpi_data_attr, attr); + size = data_attr->attr.size; - base = acpi_os_map_memory(data_attr->addr, data_attr->attr.size); + if (offset < 0) + return -EINVAL; + + if (offset >= size) + return 0; + + if (count > size - offset) + count = size - offset; + + base = acpi_os_map_iomem(data_attr->addr, size); if (!base) return -ENOMEM; - rc = memory_read_from_buffer(buf, count, &offset, base, - data_attr->attr.size); - acpi_os_unmap_memory(base, data_attr->attr.size); - return rc; + memcpy_fromio(buf, base + offset, count); + + acpi_os_unmap_iomem(base, size); + + return count; } static int acpi_bert_data_init(void *th, struct acpi_data_attr *data_attr) From 6227458fef95fda06e8d16381462a90fd6e03bec Mon Sep 17 00:00:00 2001 From: Ard Biesheuvel Date: Thu, 5 Nov 2020 16:29:44 +0100 Subject: [PATCH 0108/1842] random: avoid arch_get_random_seed_long() when collecting IRQ randomness commit 390596c9959c2a4f5b456df339f0604df3d55fe0 upstream. When reseeding the CRNG periodically, arch_get_random_seed_long() is called to obtain entropy from an architecture specific source if one is implemented. In most cases, these are special instructions, but in some cases, such as on ARM, we may want to back this using firmware calls, which are considerably more expensive. Another call to arch_get_random_seed_long() exists in the CRNG driver, in add_interrupt_randomness(), which collects entropy by capturing inter-interrupt timing and relying on interrupt jitter to provide random bits. This is done by keeping a per-CPU state, and mixing in the IRQ number, the cycle counter and the return address every time an interrupt is taken, and mixing this per-CPU state into the entropy pool every 64 invocations, or at least once per second. The entropy that is gathered this way is credited as 1 bit of entropy. Every time this happens, arch_get_random_seed_long() is invoked, and the result is mixed in as well, and also credited with 1 bit of entropy. This means that arch_get_random_seed_long() is called at least once per second on every CPU, which seems excessive, and doesn't really scale, especially in a virtualization scenario where CPUs may be oversubscribed: in cases where arch_get_random_seed_long() is backed by an instruction that actually goes back to a shared hardware entropy source (such as RNDRRS on ARM), we will end up hitting it hundreds of times per second. So let's drop the call to arch_get_random_seed_long() from add_interrupt_randomness(), and instead, rely on crng_reseed() to call the arch hook to get random seed material from the platform. Signed-off-by: Ard Biesheuvel Reviewed-by: Andre Przywara Tested-by: Andre Przywara Reviewed-by: Eric Biggers Acked-by: Marc Zyngier Reviewed-by: Jason A. Donenfeld Link: https://lore.kernel.org/r/20201105152944.16953-1-ardb@kernel.org Signed-off-by: Will Deacon Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 15 +-------------- 1 file changed, 1 insertion(+), 14 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 5f541c946559..e3743bb464f1 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1281,8 +1281,6 @@ void add_interrupt_randomness(int irq, int irq_flags) cycles_t cycles = random_get_entropy(); __u32 c_high, j_high; __u64 ip; - unsigned long seed; - int credit = 0; if (cycles == 0) cycles = get_reg(fast_pool, regs); @@ -1318,23 +1316,12 @@ void add_interrupt_randomness(int irq, int irq_flags) fast_pool->last = now; __mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool)); - - /* - * If we have architectural seed generator, produce a seed and - * add it to the pool. For the sake of paranoia don't let the - * architectural seed generator dominate the input from the - * interrupt noise. - */ - if (arch_get_random_seed_long(&seed)) { - __mix_pool_bytes(r, &seed, sizeof(seed)); - credit = 1; - } spin_unlock(&r->lock); fast_pool->count = 0; /* award one bit for the contents of the fast pool */ - credit_entropy_bits(r, credit + 1); + credit_entropy_bits(r, 1); } EXPORT_SYMBOL_GPL(add_interrupt_randomness); From acb198c4d11f4ffb538037c9057aa3d3731d21b8 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Sun, 21 Mar 2021 22:14:00 -0700 Subject: [PATCH 0109/1842] random: remove dead code left over from blocking pool commit 118a4417e14348b2e46f5e467da8444ec4757a45 upstream. Remove some dead code that was left over following commit 90ea1c6436d2 ("random: remove the blocking pool"). Cc: linux-crypto@vger.kernel.org Cc: Andy Lutomirski Cc: Jann Horn Cc: Theodore Ts'o Reviewed-by: Andy Lutomirski Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 17 ++----- include/trace/events/random.h | 83 ----------------------------------- 2 files changed, 3 insertions(+), 97 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index e3743bb464f1..1ff55a3f03c3 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -501,7 +501,6 @@ struct entropy_store { unsigned short add_ptr; unsigned short input_rotate; int entropy_count; - unsigned int initialized:1; unsigned int last_data_init:1; __u8 last_data[EXTRACT_SIZE]; }; @@ -661,7 +660,7 @@ static void process_random_ready_list(void) */ static void credit_entropy_bits(struct entropy_store *r, int nbits) { - int entropy_count, orig, has_initialized = 0; + int entropy_count, orig; const int pool_size = r->poolinfo->poolfracbits; int nfrac = nbits << ENTROPY_SHIFT; @@ -718,23 +717,14 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits) if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) goto retry; - if (has_initialized) { - r->initialized = 1; - kill_fasync(&fasync, SIGIO, POLL_IN); - } - trace_credit_entropy_bits(r->name, nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_); if (r == &input_pool) { int entropy_bits = entropy_count >> ENTROPY_SHIFT; - if (crng_init < 2) { - if (entropy_bits < 128) - return; + if (crng_init < 2 && entropy_bits >= 128) crng_reseed(&primary_crng, r); - entropy_bits = ENTROPY_BITS(r); - } } } @@ -1392,8 +1382,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min, } /* - * This function does the actual extraction for extract_entropy and - * extract_entropy_user. + * This function does the actual extraction for extract_entropy. 
* * Note: we assume that .poolwords is a multiple of 16 words. */ diff --git a/include/trace/events/random.h b/include/trace/events/random.h index 9570a10cb949..3d7b432ca5f3 100644 --- a/include/trace/events/random.h +++ b/include/trace/events/random.h @@ -85,28 +85,6 @@ TRACE_EVENT(credit_entropy_bits, __entry->entropy_count, (void *)__entry->IP) ); -TRACE_EVENT(push_to_pool, - TP_PROTO(const char *pool_name, int pool_bits, int input_bits), - - TP_ARGS(pool_name, pool_bits, input_bits), - - TP_STRUCT__entry( - __field( const char *, pool_name ) - __field( int, pool_bits ) - __field( int, input_bits ) - ), - - TP_fast_assign( - __entry->pool_name = pool_name; - __entry->pool_bits = pool_bits; - __entry->input_bits = input_bits; - ), - - TP_printk("%s: pool_bits %d input_pool_bits %d", - __entry->pool_name, __entry->pool_bits, - __entry->input_bits) -); - TRACE_EVENT(debit_entropy, TP_PROTO(const char *pool_name, int debit_bits), @@ -161,35 +139,6 @@ TRACE_EVENT(add_disk_randomness, MINOR(__entry->dev), __entry->input_bits) ); -TRACE_EVENT(xfer_secondary_pool, - TP_PROTO(const char *pool_name, int xfer_bits, int request_bits, - int pool_entropy, int input_entropy), - - TP_ARGS(pool_name, xfer_bits, request_bits, pool_entropy, - input_entropy), - - TP_STRUCT__entry( - __field( const char *, pool_name ) - __field( int, xfer_bits ) - __field( int, request_bits ) - __field( int, pool_entropy ) - __field( int, input_entropy ) - ), - - TP_fast_assign( - __entry->pool_name = pool_name; - __entry->xfer_bits = xfer_bits; - __entry->request_bits = request_bits; - __entry->pool_entropy = pool_entropy; - __entry->input_entropy = input_entropy; - ), - - TP_printk("pool %s xfer_bits %d request_bits %d pool_entropy %d " - "input_entropy %d", __entry->pool_name, __entry->xfer_bits, - __entry->request_bits, __entry->pool_entropy, - __entry->input_entropy) -); - DECLARE_EVENT_CLASS(random__get_random_bytes, TP_PROTO(int nbytes, unsigned long IP), @@ -253,38 +202,6 @@ DEFINE_EVENT(random__extract_entropy, extract_entropy, TP_ARGS(pool_name, nbytes, entropy_count, IP) ); -DEFINE_EVENT(random__extract_entropy, extract_entropy_user, - TP_PROTO(const char *pool_name, int nbytes, int entropy_count, - unsigned long IP), - - TP_ARGS(pool_name, nbytes, entropy_count, IP) -); - -TRACE_EVENT(random_read, - TP_PROTO(int got_bits, int need_bits, int pool_left, int input_left), - - TP_ARGS(got_bits, need_bits, pool_left, input_left), - - TP_STRUCT__entry( - __field( int, got_bits ) - __field( int, need_bits ) - __field( int, pool_left ) - __field( int, input_left ) - ), - - TP_fast_assign( - __entry->got_bits = got_bits; - __entry->need_bits = need_bits; - __entry->pool_left = pool_left; - __entry->input_left = input_left; - ), - - TP_printk("got_bits %d still_needed_bits %d " - "blocking_pool_entropy_left %d input_entropy_left %d", - __entry->got_bits, __entry->got_bits, __entry->pool_left, - __entry->input_left) -); - TRACE_EVENT(urandom_read, TP_PROTO(int got_bits, int pool_left, int input_left), From c4882c6e1ec915493fc224a6352b5fb11b30dcd7 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 30 Nov 2021 13:43:15 -0500 Subject: [PATCH 0110/1842] MAINTAINERS: co-maintain random.c commit 58e1100fdc5990b0cc0d4beaf2562a92e621ac7d upstream. random.c is a bit understaffed, and folks want more prompt reviews. I've got the crypto background and the interest to do these reviews, and have authored parts of the file already. Cc: Theodore Ts'o Cc: Greg Kroah-Hartman Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Linus Torvalds Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) diff --git a/MAINTAINERS b/MAINTAINERS index c64c9354c287..9231589a6ffa 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -14671,6 +14671,7 @@ F: arch/mips/generic/board-ranchu.c RANDOM NUMBER DRIVER M: "Theodore Ts'o" +M: Jason A. Donenfeld S: Maintained F: drivers/char/random.c From c3a4645d803e924c1a45e7df3cb57d76368d9240 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 25 Dec 2021 01:50:07 +0100 Subject: [PATCH 0111/1842] MAINTAINERS: add git tree for random.c commit 9bafaa9375cbf892033f188d8cb624ae328754b5 upstream. This is handy not just for humans, but also so that the 0-day bot can automatically test posted mailing list patches against the right tree. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- MAINTAINERS | 1 + 1 file changed, 1 insertion(+) diff --git a/MAINTAINERS b/MAINTAINERS index 9231589a6ffa..7c118b507912 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -14672,6 +14672,7 @@ F: arch/mips/generic/board-ranchu.c RANDOM NUMBER DRIVER M: "Theodore Ts'o" M: Jason A. Donenfeld +T: git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git S: Maintained F: drivers/char/random.c From 0f8fcf5b6ed7ab1351064c87b1a5c47f9634f3c1 Mon Sep 17 00:00:00 2001 From: Herbert Xu Date: Fri, 27 Nov 2020 16:43:18 +1100 Subject: [PATCH 0112/1842] crypto: lib/blake2s - Move selftest prototype into header file commit ce0d5d63e897cc7c3a8fd043c7942fc6a78ec6f4 upstream. This patch fixes a missing prototype warning on blake2s_selftest. Reported-by: kernel test robot Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/crypto/internal/blake2s.h | 2 ++ lib/crypto/blake2s-selftest.c | 2 +- lib/crypto/blake2s.c | 2 -- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h index 74ff77032e52..6e376ae6b6b5 100644 --- a/include/crypto/internal/blake2s.h +++ b/include/crypto/internal/blake2s.h @@ -16,6 +16,8 @@ void blake2s_compress_generic(struct blake2s_state *state,const u8 *block, void blake2s_compress_arch(struct blake2s_state *state,const u8 *block, size_t nblocks, const u32 inc); +bool blake2s_selftest(void); + static inline void blake2s_set_lastblock(struct blake2s_state *state) { state->f[0] = -1; diff --git a/lib/crypto/blake2s-selftest.c b/lib/crypto/blake2s-selftest.c index 79ef404a990d..5d9ea53be973 100644 --- a/lib/crypto/blake2s-selftest.c +++ b/lib/crypto/blake2s-selftest.c @@ -3,7 +3,7 @@ * Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. */ -#include +#include #include /* diff --git a/lib/crypto/blake2s.c b/lib/crypto/blake2s.c index 41025a30c524..6a4b6b78d630 100644 --- a/lib/crypto/blake2s.c +++ b/lib/crypto/blake2s.c @@ -17,8 +17,6 @@ #include #include -bool blake2s_selftest(void); - void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen) { const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen; From 89f9ee998e36681013ffd71cbe6a24cc44ec1037 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:50 -0800 Subject: [PATCH 0113/1842] crypto: blake2s - define shash_alg structs using macros commit 0d396058f92ae7e5ac62839fed54bc2bba630ab5 upstream. The shash_alg structs for the four variants of BLAKE2s are identical except for the algorithm name, driver name, and digest size. 
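(Illustrative aside, not part of this patch: the four algorithms defined this way are consumed through the regular shash API, so from a caller's point of view only the registered name and digest size differ. Below is a minimal sketch of an unkeyed digest computation, assuming CONFIG_CRYPTO_HASH is enabled; the helper name is invented for illustration and error handling is trimmed.)

#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/types.h>

/* Hash 'len' bytes from 'data' into a 32-byte blake2s-256 digest. */
static int blake2s256_shash_example(const u8 *data, unsigned int len,
				    u8 out[32])
{
	struct crypto_shash *tfm;
	int err;

	/* Look up the generic or arch-accelerated blake2s-256 shash. */
	tfm = crypto_alloc_shash("blake2s-256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	{
		/* One-shot digest over a stack-allocated descriptor. */
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		err = crypto_shash_digest(desc, data, len, out);
		shash_desc_zero(desc);
	}

	crypto_free_shash(tfm);
	return err;
}
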
So, avoid code duplication by using a macro to define these structs. Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- crypto/blake2s_generic.c | 86 ++++++++++++---------------------------- 1 file changed, 26 insertions(+), 60 deletions(-) diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c index 005783ff45ad..e3aa6e7ff3d8 100644 --- a/crypto/blake2s_generic.c +++ b/crypto/blake2s_generic.c @@ -83,67 +83,33 @@ static int crypto_blake2s_final(struct shash_desc *desc, u8 *out) return 0; } -static struct shash_alg blake2s_algs[] = {{ - .base.cra_name = "blake2s-128", - .base.cra_driver_name = "blake2s-128-generic", - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), - .base.cra_priority = 200, - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, +#define BLAKE2S_ALG(name, driver_name, digest_size) \ + { \ + .base.cra_name = name, \ + .base.cra_driver_name = driver_name, \ + .base.cra_priority = 100, \ + .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \ + .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \ + .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \ + .base.cra_module = THIS_MODULE, \ + .digestsize = digest_size, \ + .setkey = crypto_blake2s_setkey, \ + .init = crypto_blake2s_init, \ + .update = crypto_blake2s_update, \ + .final = crypto_blake2s_final, \ + .descsize = sizeof(struct blake2s_state), \ + } - .digestsize = BLAKE2S_128_HASH_SIZE, - .setkey = crypto_blake2s_setkey, - .init = crypto_blake2s_init, - .update = crypto_blake2s_update, - .final = crypto_blake2s_final, - .descsize = sizeof(struct blake2s_state), -}, { - .base.cra_name = "blake2s-160", - .base.cra_driver_name = "blake2s-160-generic", - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), - .base.cra_priority = 200, - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, - - .digestsize = BLAKE2S_160_HASH_SIZE, - .setkey = crypto_blake2s_setkey, - .init = crypto_blake2s_init, - .update = crypto_blake2s_update, - .final = crypto_blake2s_final, - .descsize = sizeof(struct blake2s_state), -}, { - .base.cra_name = "blake2s-224", - .base.cra_driver_name = "blake2s-224-generic", - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), - .base.cra_priority = 200, - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, - - .digestsize = BLAKE2S_224_HASH_SIZE, - .setkey = crypto_blake2s_setkey, - .init = crypto_blake2s_init, - .update = crypto_blake2s_update, - .final = crypto_blake2s_final, - .descsize = sizeof(struct blake2s_state), -}, { - .base.cra_name = "blake2s-256", - .base.cra_driver_name = "blake2s-256-generic", - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), - .base.cra_priority = 200, - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, - - .digestsize = BLAKE2S_256_HASH_SIZE, - .setkey = crypto_blake2s_setkey, - .init = crypto_blake2s_init, - .update = crypto_blake2s_update, - .final = crypto_blake2s_final, - .descsize = sizeof(struct blake2s_state), -}}; +static struct shash_alg blake2s_algs[] = { + BLAKE2S_ALG("blake2s-128", "blake2s-128-generic", + BLAKE2S_128_HASH_SIZE), + BLAKE2S_ALG("blake2s-160", "blake2s-160-generic", + BLAKE2S_160_HASH_SIZE), + BLAKE2S_ALG("blake2s-224", "blake2s-224-generic", + BLAKE2S_224_HASH_SIZE), + 
BLAKE2S_ALG("blake2s-256", "blake2s-256-generic", + BLAKE2S_256_HASH_SIZE), +}; static int __init blake2s_mod_init(void) { From 198a19d7ee95186fd2cfd69326c3b5ff6e63f13c Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:51 -0800 Subject: [PATCH 0114/1842] crypto: x86/blake2s - define shash_alg structs using macros commit 1aa90f4cf034ed4f016a02330820ac0551a6c13c upstream. The shash_alg structs for the four variants of BLAKE2s are identical except for the algorithm name, driver name, and digest size. So, avoid code duplication by using a macro to define these structs. Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/x86/crypto/blake2s-glue.c | 82 +++++++++------------------------- 1 file changed, 22 insertions(+), 60 deletions(-) diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c index c025a01cf708..4dcb2ee89efc 100644 --- a/arch/x86/crypto/blake2s-glue.c +++ b/arch/x86/crypto/blake2s-glue.c @@ -129,67 +129,29 @@ static int crypto_blake2s_final(struct shash_desc *desc, u8 *out) return 0; } -static struct shash_alg blake2s_algs[] = {{ - .base.cra_name = "blake2s-128", - .base.cra_driver_name = "blake2s-128-x86", - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), - .base.cra_priority = 200, - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, +#define BLAKE2S_ALG(name, driver_name, digest_size) \ + { \ + .base.cra_name = name, \ + .base.cra_driver_name = driver_name, \ + .base.cra_priority = 200, \ + .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \ + .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \ + .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \ + .base.cra_module = THIS_MODULE, \ + .digestsize = digest_size, \ + .setkey = crypto_blake2s_setkey, \ + .init = crypto_blake2s_init, \ + .update = crypto_blake2s_update, \ + .final = crypto_blake2s_final, \ + .descsize = sizeof(struct blake2s_state), \ + } - .digestsize = BLAKE2S_128_HASH_SIZE, - .setkey = crypto_blake2s_setkey, - .init = crypto_blake2s_init, - .update = crypto_blake2s_update, - .final = crypto_blake2s_final, - .descsize = sizeof(struct blake2s_state), -}, { - .base.cra_name = "blake2s-160", - .base.cra_driver_name = "blake2s-160-x86", - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), - .base.cra_priority = 200, - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, - - .digestsize = BLAKE2S_160_HASH_SIZE, - .setkey = crypto_blake2s_setkey, - .init = crypto_blake2s_init, - .update = crypto_blake2s_update, - .final = crypto_blake2s_final, - .descsize = sizeof(struct blake2s_state), -}, { - .base.cra_name = "blake2s-224", - .base.cra_driver_name = "blake2s-224-x86", - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), - .base.cra_priority = 200, - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, - - .digestsize = BLAKE2S_224_HASH_SIZE, - .setkey = crypto_blake2s_setkey, - .init = crypto_blake2s_init, - .update = crypto_blake2s_update, - .final = crypto_blake2s_final, - .descsize = sizeof(struct blake2s_state), -}, { - .base.cra_name = "blake2s-256", - .base.cra_driver_name = "blake2s-256-x86", - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), - .base.cra_priority = 200, - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, - 
.base.cra_module = THIS_MODULE, - - .digestsize = BLAKE2S_256_HASH_SIZE, - .setkey = crypto_blake2s_setkey, - .init = crypto_blake2s_init, - .update = crypto_blake2s_update, - .final = crypto_blake2s_final, - .descsize = sizeof(struct blake2s_state), -}}; +static struct shash_alg blake2s_algs[] = { + BLAKE2S_ALG("blake2s-128", "blake2s-128-x86", BLAKE2S_128_HASH_SIZE), + BLAKE2S_ALG("blake2s-160", "blake2s-160-x86", BLAKE2S_160_HASH_SIZE), + BLAKE2S_ALG("blake2s-224", "blake2s-224-x86", BLAKE2S_224_HASH_SIZE), + BLAKE2S_ALG("blake2s-256", "blake2s-256-x86", BLAKE2S_256_HASH_SIZE), +}; static int __init blake2s_mod_init(void) { From e467a55bd006be15200f8f533929bc491ccb28f0 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:52 -0800 Subject: [PATCH 0115/1842] crypto: blake2s - remove unneeded includes commit df412e7efda1e2c5b5fcb06701bba77434cbd1e8 upstream. It doesn't make sense for the generic implementation of BLAKE2s to include and , as these are things that would only be useful in an architecture-specific implementation. Remove these unnecessary includes. Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- crypto/blake2s_generic.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c index e3aa6e7ff3d8..b89536c3671c 100644 --- a/crypto/blake2s_generic.c +++ b/crypto/blake2s_generic.c @@ -4,11 +4,9 @@ */ #include -#include #include #include -#include #include #include From 72e5b68f33a12a99d6fdf702ca739fc5250a9fc0 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:53 -0800 Subject: [PATCH 0116/1842] crypto: blake2s - move update and final logic to internal/blake2s.h commit 057edc9c8bb2d5ff5b058b521792c392428a0714 upstream. Move most of blake2s_update() and blake2s_final() into new inline functions __blake2s_update() and __blake2s_final() in include/crypto/internal/blake2s.h so that this logic can be shared by the shash helper functions. This will avoid duplicating this logic between the library and shash implementations. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel Signed-off-by: Herbert Xu Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/crypto/internal/blake2s.h | 41 ++++++++++++++++++++++++++ lib/crypto/blake2s.c | 48 ++++++------------------------- 2 files changed, 49 insertions(+), 40 deletions(-) diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h index 6e376ae6b6b5..42deba4b8cee 100644 --- a/include/crypto/internal/blake2s.h +++ b/include/crypto/internal/blake2s.h @@ -4,6 +4,7 @@ #define BLAKE2S_INTERNAL_H #include +#include struct blake2s_tfm_ctx { u8 key[BLAKE2S_KEY_SIZE]; @@ -23,4 +24,44 @@ static inline void blake2s_set_lastblock(struct blake2s_state *state) state->f[0] = -1; } +typedef void (*blake2s_compress_t)(struct blake2s_state *state, + const u8 *block, size_t nblocks, u32 inc); + +static inline void __blake2s_update(struct blake2s_state *state, + const u8 *in, size_t inlen, + blake2s_compress_t compress) +{ + const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen; + + if (unlikely(!inlen)) + return; + if (inlen > fill) { + memcpy(state->buf + state->buflen, in, fill); + (*compress)(state, state->buf, 1, BLAKE2S_BLOCK_SIZE); + state->buflen = 0; + in += fill; + inlen -= fill; + } + if (inlen > BLAKE2S_BLOCK_SIZE) { + const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE); + /* Hash one less (full) block than strictly possible */ + (*compress)(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE); + in += BLAKE2S_BLOCK_SIZE * (nblocks - 1); + inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1); + } + memcpy(state->buf + state->buflen, in, inlen); + state->buflen += inlen; +} + +static inline void __blake2s_final(struct blake2s_state *state, u8 *out, + blake2s_compress_t compress) +{ + blake2s_set_lastblock(state); + memset(state->buf + state->buflen, 0, + BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */ + (*compress)(state, state->buf, 1, state->buflen); + cpu_to_le32_array(state->h, ARRAY_SIZE(state->h)); + memcpy(out, state->h, state->outlen); +} + #endif /* BLAKE2S_INTERNAL_H */ diff --git a/lib/crypto/blake2s.c b/lib/crypto/blake2s.c index 6a4b6b78d630..c64ac8bfb6a9 100644 --- a/lib/crypto/blake2s.c +++ b/lib/crypto/blake2s.c @@ -15,55 +15,23 @@ #include #include #include -#include + +#if IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S) +# define blake2s_compress blake2s_compress_arch +#else +# define blake2s_compress blake2s_compress_generic +#endif void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen) { - const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen; - - if (unlikely(!inlen)) - return; - if (inlen > fill) { - memcpy(state->buf + state->buflen, in, fill); - if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S)) - blake2s_compress_arch(state, state->buf, 1, - BLAKE2S_BLOCK_SIZE); - else - blake2s_compress_generic(state, state->buf, 1, - BLAKE2S_BLOCK_SIZE); - state->buflen = 0; - in += fill; - inlen -= fill; - } - if (inlen > BLAKE2S_BLOCK_SIZE) { - const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE); - /* Hash one less (full) block than strictly possible */ - if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S)) - blake2s_compress_arch(state, in, nblocks - 1, - BLAKE2S_BLOCK_SIZE); - else - blake2s_compress_generic(state, in, nblocks - 1, - BLAKE2S_BLOCK_SIZE); - in += BLAKE2S_BLOCK_SIZE * (nblocks - 1); - inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1); - } - memcpy(state->buf + state->buflen, in, inlen); - state->buflen += inlen; + __blake2s_update(state, in, inlen, blake2s_compress); } EXPORT_SYMBOL(blake2s_update); void blake2s_final(struct blake2s_state *state, u8 *out) { 
WARN_ON(IS_ENABLED(DEBUG) && !out); - blake2s_set_lastblock(state); - memset(state->buf + state->buflen, 0, - BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */ - if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S)) - blake2s_compress_arch(state, state->buf, 1, state->buflen); - else - blake2s_compress_generic(state, state->buf, 1, state->buflen); - cpu_to_le32_array(state->h, ARRAY_SIZE(state->h)); - memcpy(out, state->h, state->outlen); + __blake2s_final(state, out, blake2s_compress); memzero_explicit(state, sizeof(*state)); } EXPORT_SYMBOL(blake2s_final); From 6c362b7c7764770926cf04b9ca776c0f28df0e6a Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:54 -0800 Subject: [PATCH 0117/1842] crypto: blake2s - share the "shash" API boilerplate code commit 8c4a93a1270ddffc7660ae43fa8030ecfe9c06d9 upstream. Add helper functions for shash implementations of BLAKE2s to include/crypto/internal/blake2s.h, taking advantage of __blake2s_update() and __blake2s_final() that were added by the previous patch to share more code between the library and shash implementations. crypto_blake2s_setkey() and crypto_blake2s_init() are usable as shash_alg::setkey and shash_alg::init directly, while crypto_blake2s_update() and crypto_blake2s_final() take an extra 'blake2s_compress_t' function pointer parameter. This allows the implementation of the compression function to be overridden, which is the only part that optimized implementations really care about. The new functions are inline functions (similar to those in sha1_base.h, sha256_base.h, and sm3_base.h) because this avoids needing to add a new module blake2s_helpers.ko, they aren't *too* long, and this avoids indirect calls which are expensive these days. Note that they can't go in blake2s_generic.ko, as that would require selecting CRYPTO_BLAKE2S from CRYPTO_BLAKE2S_X86, which would cause a recursive dependency. Finally, use these new helper functions in the x86 implementation of BLAKE2s. (This part should be a separate patch, but unfortunately the x86 implementation used the exact same function names like "crypto_blake2s_update()", so it had to be updated at the same time.) Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel Signed-off-by: Herbert Xu Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/x86/crypto/blake2s-glue.c | 74 +++--------------------------- crypto/blake2s_generic.c | 76 ++++--------------------------- include/crypto/internal/blake2s.h | 65 ++++++++++++++++++++++++-- 3 files changed, 76 insertions(+), 139 deletions(-) diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c index 4dcb2ee89efc..a40365ab301e 100644 --- a/arch/x86/crypto/blake2s-glue.c +++ b/arch/x86/crypto/blake2s-glue.c @@ -58,75 +58,15 @@ void blake2s_compress_arch(struct blake2s_state *state, } EXPORT_SYMBOL(blake2s_compress_arch); -static int crypto_blake2s_setkey(struct crypto_shash *tfm, const u8 *key, - unsigned int keylen) +static int crypto_blake2s_update_x86(struct shash_desc *desc, + const u8 *in, unsigned int inlen) { - struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm); - - if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE) - return -EINVAL; - - memcpy(tctx->key, key, keylen); - tctx->keylen = keylen; - - return 0; + return crypto_blake2s_update(desc, in, inlen, blake2s_compress_arch); } -static int crypto_blake2s_init(struct shash_desc *desc) +static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out) { - struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); - struct blake2s_state *state = shash_desc_ctx(desc); - const int outlen = crypto_shash_digestsize(desc->tfm); - - if (tctx->keylen) - blake2s_init_key(state, outlen, tctx->key, tctx->keylen); - else - blake2s_init(state, outlen); - - return 0; -} - -static int crypto_blake2s_update(struct shash_desc *desc, const u8 *in, - unsigned int inlen) -{ - struct blake2s_state *state = shash_desc_ctx(desc); - const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen; - - if (unlikely(!inlen)) - return 0; - if (inlen > fill) { - memcpy(state->buf + state->buflen, in, fill); - blake2s_compress_arch(state, state->buf, 1, BLAKE2S_BLOCK_SIZE); - state->buflen = 0; - in += fill; - inlen -= fill; - } - if (inlen > BLAKE2S_BLOCK_SIZE) { - const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE); - /* Hash one less (full) block than strictly possible */ - blake2s_compress_arch(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE); - in += BLAKE2S_BLOCK_SIZE * (nblocks - 1); - inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1); - } - memcpy(state->buf + state->buflen, in, inlen); - state->buflen += inlen; - - return 0; -} - -static int crypto_blake2s_final(struct shash_desc *desc, u8 *out) -{ - struct blake2s_state *state = shash_desc_ctx(desc); - - blake2s_set_lastblock(state); - memset(state->buf + state->buflen, 0, - BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */ - blake2s_compress_arch(state, state->buf, 1, state->buflen); - cpu_to_le32_array(state->h, ARRAY_SIZE(state->h)); - memcpy(out, state->h, state->outlen); - memzero_explicit(state, sizeof(*state)); - - return 0; + return crypto_blake2s_final(desc, out, blake2s_compress_arch); } #define BLAKE2S_ALG(name, driver_name, digest_size) \ @@ -141,8 +81,8 @@ static int crypto_blake2s_final(struct shash_desc *desc, u8 *out) .digestsize = digest_size, \ .setkey = crypto_blake2s_setkey, \ .init = crypto_blake2s_init, \ - .update = crypto_blake2s_update, \ - .final = crypto_blake2s_final, \ + .update = crypto_blake2s_update_x86, \ + .final = crypto_blake2s_final_x86, \ .descsize = sizeof(struct blake2s_state), \ } diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c index b89536c3671c..72fe480f9bd6 100644 --- a/crypto/blake2s_generic.c +++ b/crypto/blake2s_generic.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: 
GPL-2.0 OR MIT /* + * shash interface to the generic implementation of BLAKE2s + * * Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. */ @@ -10,75 +12,15 @@ #include #include -static int crypto_blake2s_setkey(struct crypto_shash *tfm, const u8 *key, - unsigned int keylen) +static int crypto_blake2s_update_generic(struct shash_desc *desc, + const u8 *in, unsigned int inlen) { - struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm); - - if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE) - return -EINVAL; - - memcpy(tctx->key, key, keylen); - tctx->keylen = keylen; - - return 0; + return crypto_blake2s_update(desc, in, inlen, blake2s_compress_generic); } -static int crypto_blake2s_init(struct shash_desc *desc) +static int crypto_blake2s_final_generic(struct shash_desc *desc, u8 *out) { - struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); - struct blake2s_state *state = shash_desc_ctx(desc); - const int outlen = crypto_shash_digestsize(desc->tfm); - - if (tctx->keylen) - blake2s_init_key(state, outlen, tctx->key, tctx->keylen); - else - blake2s_init(state, outlen); - - return 0; -} - -static int crypto_blake2s_update(struct shash_desc *desc, const u8 *in, - unsigned int inlen) -{ - struct blake2s_state *state = shash_desc_ctx(desc); - const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen; - - if (unlikely(!inlen)) - return 0; - if (inlen > fill) { - memcpy(state->buf + state->buflen, in, fill); - blake2s_compress_generic(state, state->buf, 1, BLAKE2S_BLOCK_SIZE); - state->buflen = 0; - in += fill; - inlen -= fill; - } - if (inlen > BLAKE2S_BLOCK_SIZE) { - const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE); - /* Hash one less (full) block than strictly possible */ - blake2s_compress_generic(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE); - in += BLAKE2S_BLOCK_SIZE * (nblocks - 1); - inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1); - } - memcpy(state->buf + state->buflen, in, inlen); - state->buflen += inlen; - - return 0; -} - -static int crypto_blake2s_final(struct shash_desc *desc, u8 *out) -{ - struct blake2s_state *state = shash_desc_ctx(desc); - - blake2s_set_lastblock(state); - memset(state->buf + state->buflen, 0, - BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */ - blake2s_compress_generic(state, state->buf, 1, state->buflen); - cpu_to_le32_array(state->h, ARRAY_SIZE(state->h)); - memcpy(out, state->h, state->outlen); - memzero_explicit(state, sizeof(*state)); - - return 0; + return crypto_blake2s_final(desc, out, blake2s_compress_generic); } #define BLAKE2S_ALG(name, driver_name, digest_size) \ @@ -93,8 +35,8 @@ static int crypto_blake2s_final(struct shash_desc *desc, u8 *out) .digestsize = digest_size, \ .setkey = crypto_blake2s_setkey, \ .init = crypto_blake2s_init, \ - .update = crypto_blake2s_update, \ - .final = crypto_blake2s_final, \ + .update = crypto_blake2s_update_generic, \ + .final = crypto_blake2s_final_generic, \ .descsize = sizeof(struct blake2s_state), \ } diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h index 42deba4b8cee..2ea0a8f5e7f4 100644 --- a/include/crypto/internal/blake2s.h +++ b/include/crypto/internal/blake2s.h @@ -1,16 +1,16 @@ /* SPDX-License-Identifier: GPL-2.0 OR MIT */ +/* + * Helper functions for BLAKE2s implementations. + * Keep this in sync with the corresponding BLAKE2b header. 
+ */ #ifndef BLAKE2S_INTERNAL_H #define BLAKE2S_INTERNAL_H #include +#include #include -struct blake2s_tfm_ctx { - u8 key[BLAKE2S_KEY_SIZE]; - unsigned int keylen; -}; - void blake2s_compress_generic(struct blake2s_state *state,const u8 *block, size_t nblocks, const u32 inc); @@ -27,6 +27,8 @@ static inline void blake2s_set_lastblock(struct blake2s_state *state) typedef void (*blake2s_compress_t)(struct blake2s_state *state, const u8 *block, size_t nblocks, u32 inc); +/* Helper functions for BLAKE2s shared by the library and shash APIs */ + static inline void __blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen, blake2s_compress_t compress) @@ -64,4 +66,57 @@ static inline void __blake2s_final(struct blake2s_state *state, u8 *out, memcpy(out, state->h, state->outlen); } +/* Helper functions for shash implementations of BLAKE2s */ + +struct blake2s_tfm_ctx { + u8 key[BLAKE2S_KEY_SIZE]; + unsigned int keylen; +}; + +static inline int crypto_blake2s_setkey(struct crypto_shash *tfm, + const u8 *key, unsigned int keylen) +{ + struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(tfm); + + if (keylen == 0 || keylen > BLAKE2S_KEY_SIZE) + return -EINVAL; + + memcpy(tctx->key, key, keylen); + tctx->keylen = keylen; + + return 0; +} + +static inline int crypto_blake2s_init(struct shash_desc *desc) +{ + const struct blake2s_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm); + struct blake2s_state *state = shash_desc_ctx(desc); + unsigned int outlen = crypto_shash_digestsize(desc->tfm); + + if (tctx->keylen) + blake2s_init_key(state, outlen, tctx->key, tctx->keylen); + else + blake2s_init(state, outlen); + return 0; +} + +static inline int crypto_blake2s_update(struct shash_desc *desc, + const u8 *in, unsigned int inlen, + blake2s_compress_t compress) +{ + struct blake2s_state *state = shash_desc_ctx(desc); + + __blake2s_update(state, in, inlen, compress); + return 0; +} + +static inline int crypto_blake2s_final(struct shash_desc *desc, u8 *out, + blake2s_compress_t compress) +{ + struct blake2s_state *state = shash_desc_ctx(desc); + + __blake2s_final(state, out, compress); + return 0; +} + #endif /* BLAKE2S_INTERNAL_H */ From d45ae768b71b96869643fcea7c2d629120d01ba2 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:55 -0800 Subject: [PATCH 0118/1842] crypto: blake2s - optimize blake2s initialization commit 42ad8cf821f0d8564c393e9ad7d00a1a271d18ae upstream. If no key was provided, then don't waste time initializing the block buffer, as its initial contents won't be used. Also, make crypto_blake2s_init() and blake2s() call a single internal function __blake2s_init() which treats the key as optional, rather than conditionally calling blake2s_init() or blake2s_init_key(). This reduces the compiled code size, as previously both blake2s_init() and blake2s_init_key() were being inlined into these two callers, except when the key size passed to blake2s() was a compile-time constant. These optimizations aren't that significant for BLAKE2s. However, the equivalent optimizations will be more significant for BLAKE2b, as everything is twice as big in BLAKE2b. And it's good to keep things consistent rather than making optimizations for BLAKE2b but not BLAKE2s. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel Signed-off-by: Herbert Xu Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/crypto/blake2s.h | 53 ++++++++++++++++--------------- include/crypto/internal/blake2s.h | 5 +-- 2 files changed, 28 insertions(+), 30 deletions(-) diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h index b471deac28ff..734ed22b7a6a 100644 --- a/include/crypto/blake2s.h +++ b/include/crypto/blake2s.h @@ -43,29 +43,34 @@ enum blake2s_iv { BLAKE2S_IV7 = 0x5BE0CD19UL, }; -void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen); -void blake2s_final(struct blake2s_state *state, u8 *out); - -static inline void blake2s_init_param(struct blake2s_state *state, - const u32 param) +static inline void __blake2s_init(struct blake2s_state *state, size_t outlen, + const void *key, size_t keylen) { - *state = (struct blake2s_state){{ - BLAKE2S_IV0 ^ param, - BLAKE2S_IV1, - BLAKE2S_IV2, - BLAKE2S_IV3, - BLAKE2S_IV4, - BLAKE2S_IV5, - BLAKE2S_IV6, - BLAKE2S_IV7, - }}; + state->h[0] = BLAKE2S_IV0 ^ (0x01010000 | keylen << 8 | outlen); + state->h[1] = BLAKE2S_IV1; + state->h[2] = BLAKE2S_IV2; + state->h[3] = BLAKE2S_IV3; + state->h[4] = BLAKE2S_IV4; + state->h[5] = BLAKE2S_IV5; + state->h[6] = BLAKE2S_IV6; + state->h[7] = BLAKE2S_IV7; + state->t[0] = 0; + state->t[1] = 0; + state->f[0] = 0; + state->f[1] = 0; + state->buflen = 0; + state->outlen = outlen; + if (keylen) { + memcpy(state->buf, key, keylen); + memset(&state->buf[keylen], 0, BLAKE2S_BLOCK_SIZE - keylen); + state->buflen = BLAKE2S_BLOCK_SIZE; + } } static inline void blake2s_init(struct blake2s_state *state, const size_t outlen) { - blake2s_init_param(state, 0x01010000 | outlen); - state->outlen = outlen; + __blake2s_init(state, outlen, NULL, 0); } static inline void blake2s_init_key(struct blake2s_state *state, @@ -75,12 +80,12 @@ static inline void blake2s_init_key(struct blake2s_state *state, WARN_ON(IS_ENABLED(DEBUG) && (!outlen || outlen > BLAKE2S_HASH_SIZE || !key || !keylen || keylen > BLAKE2S_KEY_SIZE)); - blake2s_init_param(state, 0x01010000 | keylen << 8 | outlen); - memcpy(state->buf, key, keylen); - state->buflen = BLAKE2S_BLOCK_SIZE; - state->outlen = outlen; + __blake2s_init(state, outlen, key, keylen); } +void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen); +void blake2s_final(struct blake2s_state *state, u8 *out); + static inline void blake2s(u8 *out, const u8 *in, const u8 *key, const size_t outlen, const size_t inlen, const size_t keylen) @@ -91,11 +96,7 @@ static inline void blake2s(u8 *out, const u8 *in, const u8 *key, outlen > BLAKE2S_HASH_SIZE || keylen > BLAKE2S_KEY_SIZE || (!key && keylen))); - if (keylen) - blake2s_init_key(&state, outlen, key, keylen); - else - blake2s_init(&state, outlen); - + __blake2s_init(&state, outlen, key, keylen); blake2s_update(&state, in, inlen); blake2s_final(&state, out); } diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h index 2ea0a8f5e7f4..867ef3753f5c 100644 --- a/include/crypto/internal/blake2s.h +++ b/include/crypto/internal/blake2s.h @@ -93,10 +93,7 @@ static inline int crypto_blake2s_init(struct shash_desc *desc) struct blake2s_state *state = shash_desc_ctx(desc); unsigned int outlen = crypto_shash_digestsize(desc->tfm); - if (tctx->keylen) - blake2s_init_key(state, outlen, tctx->key, tctx->keylen); - else - blake2s_init(state, outlen); + __blake2s_init(state, outlen, tctx->key, tctx->keylen); return 0; } From fea91e907076163ce319d41e0ea5862c9a0c10c0 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:56 -0800 Subject: 
[PATCH 0119/1842] crypto: blake2s - add comment for blake2s_state fields commit 7d87131fadd53a0401b5c078dd64e58c3ea6994c upstream. The first three fields of 'struct blake2s_state' are used in assembly code, which isn't immediately obvious, so add a comment to this effect. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/crypto/blake2s.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h index 734ed22b7a6a..f1c8330a61a9 100644 --- a/include/crypto/blake2s.h +++ b/include/crypto/blake2s.h @@ -24,6 +24,7 @@ enum blake2s_lengths { }; struct blake2s_state { + /* 'h', 't', and 'f' are used in assembly code, so keep them as-is. */ u32 h[8]; u32 t[2]; u32 f[2]; From 030d3443aa61153a5354b716a668ae501ead228b Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:57 -0800 Subject: [PATCH 0120/1842] crypto: blake2s - adjust include guard naming commit 8786841bc2020f7f2513a6c74e64912f07b9c0dc upstream. Use the full path in the include guards for the BLAKE2s headers to avoid ambiguity and to match the convention for most files in include/crypto/. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/crypto/blake2s.h | 6 +++--- include/crypto/internal/blake2s.h | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h index f1c8330a61a9..3f06183c2d80 100644 --- a/include/crypto/blake2s.h +++ b/include/crypto/blake2s.h @@ -3,8 +3,8 @@ * Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. */ -#ifndef BLAKE2S_H -#define BLAKE2S_H +#ifndef _CRYPTO_BLAKE2S_H +#define _CRYPTO_BLAKE2S_H #include #include @@ -105,4 +105,4 @@ static inline void blake2s(u8 *out, const u8 *in, const u8 *key, void blake2s256_hmac(u8 *out, const u8 *in, const u8 *key, const size_t inlen, const size_t keylen); -#endif /* BLAKE2S_H */ +#endif /* _CRYPTO_BLAKE2S_H */ diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h index 867ef3753f5c..8e50d487500f 100644 --- a/include/crypto/internal/blake2s.h +++ b/include/crypto/internal/blake2s.h @@ -4,8 +4,8 @@ * Keep this in sync with the corresponding BLAKE2b header. */ -#ifndef BLAKE2S_INTERNAL_H -#define BLAKE2S_INTERNAL_H +#ifndef _CRYPTO_INTERNAL_BLAKE2S_H +#define _CRYPTO_INTERNAL_BLAKE2S_H #include #include @@ -116,4 +116,4 @@ static inline int crypto_blake2s_final(struct shash_desc *desc, u8 *out, return 0; } -#endif /* BLAKE2S_INTERNAL_H */ +#endif /* _CRYPTO_INTERNAL_BLAKE2S_H */ From aec0878b1d13642033e9da6a05e530f2b8908446 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Wed, 23 Dec 2020 00:09:58 -0800 Subject: [PATCH 0121/1842] crypto: blake2s - include instead of commit bbda6e0f1303953c855ee3669655a81b69fbe899 upstream. Address the following checkpatch warning: WARNING: Use #include instead of Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel Signed-off-by: Herbert Xu Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/crypto/blake2s.h | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h index 3f06183c2d80..bc3fb59442ce 100644 --- a/include/crypto/blake2s.h +++ b/include/crypto/blake2s.h @@ -6,12 +6,11 @@ #ifndef _CRYPTO_BLAKE2S_H #define _CRYPTO_BLAKE2S_H +#include #include #include #include -#include - enum blake2s_lengths { BLAKE2S_BLOCK_SIZE = 64, BLAKE2S_HASH_SIZE = 32, From 62531d446a985fd87fd1320beafec74455cc2bb5 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 22 Dec 2021 14:56:58 +0100 Subject: [PATCH 0122/1842] lib/crypto: blake2s: include as built-in commit 6048fdcc5f269c7f31d774c295ce59081b36e6f9 upstream. In preparation for using blake2s in the RNG, we change the way that it is wired-in to the build system. Instead of using ifdefs to select the right symbol, we use weak symbols. And because ARM doesn't need the generic implementation, we make the generic one default only if an arch library doesn't need it already, and then have arch libraries that do need it opt-in. So that the arch libraries can remain tristate rather than bool, we then split the shash part from the glue code. Acked-by: Herbert Xu Acked-by: Ard Biesheuvel Acked-by: Greg Kroah-Hartman Cc: Masahiro Yamada Cc: linux-kbuild@vger.kernel.org Cc: linux-crypto@vger.kernel.org Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/x86/crypto/Makefile | 4 +- arch/x86/crypto/blake2s-glue.c | 68 +++------------------------ arch/x86/crypto/blake2s-shash.c | 77 +++++++++++++++++++++++++++++++ crypto/Kconfig | 3 +- drivers/net/Kconfig | 1 - include/crypto/internal/blake2s.h | 6 +-- lib/crypto/Kconfig | 23 +++------ lib/crypto/Makefile | 9 ++-- lib/crypto/blake2s-generic.c | 6 ++- lib/crypto/blake2s.c | 6 --- 10 files changed, 106 insertions(+), 97 deletions(-) create mode 100644 arch/x86/crypto/blake2s-shash.c diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index a31de0c6ccde..66f25c2938bf 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -66,7 +66,9 @@ obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o obj-$(CONFIG_CRYPTO_BLAKE2S_X86) += blake2s-x86_64.o -blake2s-x86_64-y := blake2s-core.o blake2s-glue.o +blake2s-x86_64-y := blake2s-shash.o +obj-$(if $(CONFIG_CRYPTO_BLAKE2S_X86),y) += libblake2s-x86_64.o +libblake2s-x86_64-y := blake2s-core.o blake2s-glue.o obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c index a40365ab301e..69853c13e8fb 100644 --- a/arch/x86/crypto/blake2s-glue.c +++ b/arch/x86/crypto/blake2s-glue.c @@ -5,7 +5,6 @@ #include #include -#include #include #include @@ -28,9 +27,8 @@ asmlinkage void blake2s_compress_avx512(struct blake2s_state *state, static __ro_after_init DEFINE_STATIC_KEY_FALSE(blake2s_use_ssse3); static __ro_after_init DEFINE_STATIC_KEY_FALSE(blake2s_use_avx512); -void blake2s_compress_arch(struct blake2s_state *state, - const u8 *block, size_t nblocks, - const u32 inc) +void blake2s_compress(struct blake2s_state *state, const u8 *block, + size_t nblocks, const u32 inc) { /* SIMD disables preemption, so relax after processing each page. 
*/ BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8); @@ -56,49 +54,12 @@ void blake2s_compress_arch(struct blake2s_state *state, block += blocks * BLAKE2S_BLOCK_SIZE; } while (nblocks); } -EXPORT_SYMBOL(blake2s_compress_arch); - -static int crypto_blake2s_update_x86(struct shash_desc *desc, - const u8 *in, unsigned int inlen) -{ - return crypto_blake2s_update(desc, in, inlen, blake2s_compress_arch); -} - -static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out) -{ - return crypto_blake2s_final(desc, out, blake2s_compress_arch); -} - -#define BLAKE2S_ALG(name, driver_name, digest_size) \ - { \ - .base.cra_name = name, \ - .base.cra_driver_name = driver_name, \ - .base.cra_priority = 200, \ - .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \ - .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \ - .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \ - .base.cra_module = THIS_MODULE, \ - .digestsize = digest_size, \ - .setkey = crypto_blake2s_setkey, \ - .init = crypto_blake2s_init, \ - .update = crypto_blake2s_update_x86, \ - .final = crypto_blake2s_final_x86, \ - .descsize = sizeof(struct blake2s_state), \ - } - -static struct shash_alg blake2s_algs[] = { - BLAKE2S_ALG("blake2s-128", "blake2s-128-x86", BLAKE2S_128_HASH_SIZE), - BLAKE2S_ALG("blake2s-160", "blake2s-160-x86", BLAKE2S_160_HASH_SIZE), - BLAKE2S_ALG("blake2s-224", "blake2s-224-x86", BLAKE2S_224_HASH_SIZE), - BLAKE2S_ALG("blake2s-256", "blake2s-256-x86", BLAKE2S_256_HASH_SIZE), -}; +EXPORT_SYMBOL(blake2s_compress); static int __init blake2s_mod_init(void) { - if (!boot_cpu_has(X86_FEATURE_SSSE3)) - return 0; - - static_branch_enable(&blake2s_use_ssse3); + if (boot_cpu_has(X86_FEATURE_SSSE3)) + static_branch_enable(&blake2s_use_ssse3); if (IS_ENABLED(CONFIG_AS_AVX512) && boot_cpu_has(X86_FEATURE_AVX) && @@ -109,26 +70,9 @@ static int __init blake2s_mod_init(void) XFEATURE_MASK_AVX512, NULL)) static_branch_enable(&blake2s_use_avx512); - return IS_REACHABLE(CONFIG_CRYPTO_HASH) ? - crypto_register_shashes(blake2s_algs, - ARRAY_SIZE(blake2s_algs)) : 0; -} - -static void __exit blake2s_mod_exit(void) -{ - if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3)) - crypto_unregister_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs)); + return 0; } module_init(blake2s_mod_init); -module_exit(blake2s_mod_exit); -MODULE_ALIAS_CRYPTO("blake2s-128"); -MODULE_ALIAS_CRYPTO("blake2s-128-x86"); -MODULE_ALIAS_CRYPTO("blake2s-160"); -MODULE_ALIAS_CRYPTO("blake2s-160-x86"); -MODULE_ALIAS_CRYPTO("blake2s-224"); -MODULE_ALIAS_CRYPTO("blake2s-224-x86"); -MODULE_ALIAS_CRYPTO("blake2s-256"); -MODULE_ALIAS_CRYPTO("blake2s-256-x86"); MODULE_LICENSE("GPL v2"); diff --git a/arch/x86/crypto/blake2s-shash.c b/arch/x86/crypto/blake2s-shash.c new file mode 100644 index 000000000000..f9e2fecdb761 --- /dev/null +++ b/arch/x86/crypto/blake2s-shash.c @@ -0,0 +1,77 @@ +// SPDX-License-Identifier: GPL-2.0 OR MIT +/* + * Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
+ */ + +#include +#include +#include + +#include +#include +#include +#include + +#include +#include + +static int crypto_blake2s_update_x86(struct shash_desc *desc, + const u8 *in, unsigned int inlen) +{ + return crypto_blake2s_update(desc, in, inlen, blake2s_compress); +} + +static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out) +{ + return crypto_blake2s_final(desc, out, blake2s_compress); +} + +#define BLAKE2S_ALG(name, driver_name, digest_size) \ + { \ + .base.cra_name = name, \ + .base.cra_driver_name = driver_name, \ + .base.cra_priority = 200, \ + .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY, \ + .base.cra_blocksize = BLAKE2S_BLOCK_SIZE, \ + .base.cra_ctxsize = sizeof(struct blake2s_tfm_ctx), \ + .base.cra_module = THIS_MODULE, \ + .digestsize = digest_size, \ + .setkey = crypto_blake2s_setkey, \ + .init = crypto_blake2s_init, \ + .update = crypto_blake2s_update_x86, \ + .final = crypto_blake2s_final_x86, \ + .descsize = sizeof(struct blake2s_state), \ + } + +static struct shash_alg blake2s_algs[] = { + BLAKE2S_ALG("blake2s-128", "blake2s-128-x86", BLAKE2S_128_HASH_SIZE), + BLAKE2S_ALG("blake2s-160", "blake2s-160-x86", BLAKE2S_160_HASH_SIZE), + BLAKE2S_ALG("blake2s-224", "blake2s-224-x86", BLAKE2S_224_HASH_SIZE), + BLAKE2S_ALG("blake2s-256", "blake2s-256-x86", BLAKE2S_256_HASH_SIZE), +}; + +static int __init blake2s_mod_init(void) +{ + if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3)) + return crypto_register_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs)); + return 0; +} + +static void __exit blake2s_mod_exit(void) +{ + if (IS_REACHABLE(CONFIG_CRYPTO_HASH) && boot_cpu_has(X86_FEATURE_SSSE3)) + crypto_unregister_shashes(blake2s_algs, ARRAY_SIZE(blake2s_algs)); +} + +module_init(blake2s_mod_init); +module_exit(blake2s_mod_exit); + +MODULE_ALIAS_CRYPTO("blake2s-128"); +MODULE_ALIAS_CRYPTO("blake2s-128-x86"); +MODULE_ALIAS_CRYPTO("blake2s-160"); +MODULE_ALIAS_CRYPTO("blake2s-160-x86"); +MODULE_ALIAS_CRYPTO("blake2s-224"); +MODULE_ALIAS_CRYPTO("blake2s-224-x86"); +MODULE_ALIAS_CRYPTO("blake2s-256"); +MODULE_ALIAS_CRYPTO("blake2s-256-x86"); +MODULE_LICENSE("GPL v2"); diff --git a/crypto/Kconfig b/crypto/Kconfig index 1157f82dc9cf..0dee9242491c 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1936,9 +1936,10 @@ config CRYPTO_STATS config CRYPTO_HASH_INFO bool -source "lib/crypto/Kconfig" source "drivers/crypto/Kconfig" source "crypto/asymmetric_keys/Kconfig" source "certs/Kconfig" endif # if CRYPTO + +source "lib/crypto/Kconfig" diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index cb865b7ec375..f20808024305 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -80,7 +80,6 @@ config WIREGUARD select CRYPTO select CRYPTO_LIB_CURVE25519 select CRYPTO_LIB_CHACHA20POLY1305 - select CRYPTO_LIB_BLAKE2S select CRYPTO_CHACHA20_X86_64 if X86 && 64BIT select CRYPTO_POLY1305_X86_64 if X86 && 64BIT select CRYPTO_BLAKE2S_X86 if X86 && 64BIT diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h index 8e50d487500f..d39cfa0d333e 100644 --- a/include/crypto/internal/blake2s.h +++ b/include/crypto/internal/blake2s.h @@ -11,11 +11,11 @@ #include #include -void blake2s_compress_generic(struct blake2s_state *state,const u8 *block, +void blake2s_compress_generic(struct blake2s_state *state, const u8 *block, size_t nblocks, const u32 inc); -void blake2s_compress_arch(struct blake2s_state *state,const u8 *block, - size_t nblocks, const u32 inc); +void blake2s_compress(struct blake2s_state *state, const u8 *block, + size_t 
nblocks, const u32 inc); bool blake2s_selftest(void); diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 14c032de276e..af3da5a8bde8 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -1,7 +1,5 @@ # SPDX-License-Identifier: GPL-2.0 -comment "Crypto library routines" - config CRYPTO_LIB_AES tristate @@ -9,14 +7,14 @@ config CRYPTO_LIB_ARC4 tristate config CRYPTO_ARCH_HAVE_LIB_BLAKE2S - tristate + bool help Declares whether the architecture provides an arch-specific accelerated implementation of the Blake2s library interface, either builtin or as a module. config CRYPTO_LIB_BLAKE2S_GENERIC - tristate + def_bool !CRYPTO_ARCH_HAVE_LIB_BLAKE2S help This symbol can be depended upon by arch implementations of the Blake2s library interface that require the generic code as a @@ -24,15 +22,6 @@ config CRYPTO_LIB_BLAKE2S_GENERIC implementation is enabled, this implementation serves the users of CRYPTO_LIB_BLAKE2S. -config CRYPTO_LIB_BLAKE2S - tristate "BLAKE2s hash function library" - depends on CRYPTO_ARCH_HAVE_LIB_BLAKE2S || !CRYPTO_ARCH_HAVE_LIB_BLAKE2S - select CRYPTO_LIB_BLAKE2S_GENERIC if CRYPTO_ARCH_HAVE_LIB_BLAKE2S=n - help - Enable the Blake2s library interface. This interface may be fulfilled - by either the generic implementation or an arch-specific one, if one - is available and enabled. - config CRYPTO_ARCH_HAVE_LIB_CHACHA tristate help @@ -51,7 +40,7 @@ config CRYPTO_LIB_CHACHA_GENERIC of CRYPTO_LIB_CHACHA. config CRYPTO_LIB_CHACHA - tristate "ChaCha library interface" + tristate depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA select CRYPTO_LIB_CHACHA_GENERIC if CRYPTO_ARCH_HAVE_LIB_CHACHA=n help @@ -76,7 +65,7 @@ config CRYPTO_LIB_CURVE25519_GENERIC of CRYPTO_LIB_CURVE25519. config CRYPTO_LIB_CURVE25519 - tristate "Curve25519 scalar multiplication library" + tristate depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519 select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n help @@ -111,7 +100,7 @@ config CRYPTO_LIB_POLY1305_GENERIC of CRYPTO_LIB_POLY1305. config CRYPTO_LIB_POLY1305 - tristate "Poly1305 library interface" + tristate depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305 select CRYPTO_LIB_POLY1305_GENERIC if CRYPTO_ARCH_HAVE_LIB_POLY1305=n help @@ -120,7 +109,7 @@ config CRYPTO_LIB_POLY1305 is available and enabled. 
config CRYPTO_LIB_CHACHA20POLY1305 - tristate "ChaCha20-Poly1305 AEAD support (8-byte nonce library version)" + tristate depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305 select CRYPTO_LIB_CHACHA diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 3a435629d9ce..26be2bbe09c5 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -10,11 +10,10 @@ libaes-y := aes.o obj-$(CONFIG_CRYPTO_LIB_ARC4) += libarc4.o libarc4-y := arc4.o -obj-$(CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC) += libblake2s-generic.o -libblake2s-generic-y += blake2s-generic.o - -obj-$(CONFIG_CRYPTO_LIB_BLAKE2S) += libblake2s.o -libblake2s-y += blake2s.o +# blake2s is used by the /dev/random driver which is always builtin +obj-y += libblake2s.o +libblake2s-y := blake2s.o +libblake2s-$(CONFIG_CRYPTO_LIB_BLAKE2S_GENERIC) += blake2s-generic.o obj-$(CONFIG_CRYPTO_LIB_CHACHA20POLY1305) += libchacha20poly1305.o libchacha20poly1305-y += chacha20poly1305.o diff --git a/lib/crypto/blake2s-generic.c b/lib/crypto/blake2s-generic.c index 04ff8df24513..75ccb3e633e6 100644 --- a/lib/crypto/blake2s-generic.c +++ b/lib/crypto/blake2s-generic.c @@ -37,7 +37,11 @@ static inline void blake2s_increment_counter(struct blake2s_state *state, state->t[1] += (state->t[0] < inc); } -void blake2s_compress_generic(struct blake2s_state *state,const u8 *block, +void blake2s_compress(struct blake2s_state *state, const u8 *block, + size_t nblocks, const u32 inc) + __weak __alias(blake2s_compress_generic); + +void blake2s_compress_generic(struct blake2s_state *state, const u8 *block, size_t nblocks, const u32 inc) { u32 m[16]; diff --git a/lib/crypto/blake2s.c b/lib/crypto/blake2s.c index c64ac8bfb6a9..416a076964a2 100644 --- a/lib/crypto/blake2s.c +++ b/lib/crypto/blake2s.c @@ -16,12 +16,6 @@ #include #include -#if IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_BLAKE2S) -# define blake2s_compress blake2s_compress_arch -#else -# define blake2s_compress blake2s_compress_generic -#endif - void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen) { __blake2s_update(state, in, inlen, blake2s_compress); From 5fb6a3ba3af6aff7cdc53d319fc4cc6f79555ca1 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 11 Jan 2022 14:37:41 +0100 Subject: [PATCH 0123/1842] lib/crypto: blake2s: move hmac construction into wireguard commit d8d83d8ab0a453e17e68b3a3bed1f940c34b8646 upstream. Basically nobody should use blake2s in an HMAC construction; it already has a keyed variant. But unfortunately for historical reasons, Noise, used by WireGuard, uses HKDF quite strictly, which means we have to use this. Because this really shouldn't be used by others, this commit moves it into wireguard's noise.c locally, so that kernels that aren't using WireGuard don't get this superfluous code baked in. On m68k systems, this shaves off ~314 bytes. Cc: Herbert Xu Tested-by: Geert Uytterhoeven Acked-by: Ard Biesheuvel Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/net/wireguard/noise.c | 45 ++++++++++++++++++++++++++++++----- include/crypto/blake2s.h | 3 --- lib/crypto/blake2s-selftest.c | 31 ------------------------ lib/crypto/blake2s.c | 37 ---------------------------- 4 files changed, 39 insertions(+), 77 deletions(-) diff --git a/drivers/net/wireguard/noise.c b/drivers/net/wireguard/noise.c index c0cfd9b36c0b..720952b92e78 100644 --- a/drivers/net/wireguard/noise.c +++ b/drivers/net/wireguard/noise.c @@ -302,6 +302,41 @@ void wg_noise_set_static_identity_private_key( static_identity->static_public, private_key); } +static void hmac(u8 *out, const u8 *in, const u8 *key, const size_t inlen, const size_t keylen) +{ + struct blake2s_state state; + u8 x_key[BLAKE2S_BLOCK_SIZE] __aligned(__alignof__(u32)) = { 0 }; + u8 i_hash[BLAKE2S_HASH_SIZE] __aligned(__alignof__(u32)); + int i; + + if (keylen > BLAKE2S_BLOCK_SIZE) { + blake2s_init(&state, BLAKE2S_HASH_SIZE); + blake2s_update(&state, key, keylen); + blake2s_final(&state, x_key); + } else + memcpy(x_key, key, keylen); + + for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i) + x_key[i] ^= 0x36; + + blake2s_init(&state, BLAKE2S_HASH_SIZE); + blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE); + blake2s_update(&state, in, inlen); + blake2s_final(&state, i_hash); + + for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i) + x_key[i] ^= 0x5c ^ 0x36; + + blake2s_init(&state, BLAKE2S_HASH_SIZE); + blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE); + blake2s_update(&state, i_hash, BLAKE2S_HASH_SIZE); + blake2s_final(&state, i_hash); + + memcpy(out, i_hash, BLAKE2S_HASH_SIZE); + memzero_explicit(x_key, BLAKE2S_BLOCK_SIZE); + memzero_explicit(i_hash, BLAKE2S_HASH_SIZE); +} + /* This is Hugo Krawczyk's HKDF: * - https://eprint.iacr.org/2010/264.pdf * - https://tools.ietf.org/html/rfc5869 @@ -322,14 +357,14 @@ static void kdf(u8 *first_dst, u8 *second_dst, u8 *third_dst, const u8 *data, ((third_len || third_dst) && (!second_len || !second_dst)))); /* Extract entropy from data into secret */ - blake2s256_hmac(secret, data, chaining_key, data_len, NOISE_HASH_LEN); + hmac(secret, data, chaining_key, data_len, NOISE_HASH_LEN); if (!first_dst || !first_len) goto out; /* Expand first key: key = secret, data = 0x1 */ output[0] = 1; - blake2s256_hmac(output, output, secret, 1, BLAKE2S_HASH_SIZE); + hmac(output, output, secret, 1, BLAKE2S_HASH_SIZE); memcpy(first_dst, output, first_len); if (!second_dst || !second_len) @@ -337,8 +372,7 @@ static void kdf(u8 *first_dst, u8 *second_dst, u8 *third_dst, const u8 *data, /* Expand second key: key = secret, data = first-key || 0x2 */ output[BLAKE2S_HASH_SIZE] = 2; - blake2s256_hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1, - BLAKE2S_HASH_SIZE); + hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1, BLAKE2S_HASH_SIZE); memcpy(second_dst, output, second_len); if (!third_dst || !third_len) @@ -346,8 +380,7 @@ static void kdf(u8 *first_dst, u8 *second_dst, u8 *third_dst, const u8 *data, /* Expand third key: key = secret, data = second-key || 0x3 */ output[BLAKE2S_HASH_SIZE] = 3; - blake2s256_hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1, - BLAKE2S_HASH_SIZE); + hmac(output, output, secret, BLAKE2S_HASH_SIZE + 1, BLAKE2S_HASH_SIZE); memcpy(third_dst, output, third_len); out: diff --git a/include/crypto/blake2s.h b/include/crypto/blake2s.h index bc3fb59442ce..4e30e1799e61 100644 --- a/include/crypto/blake2s.h +++ b/include/crypto/blake2s.h @@ -101,7 +101,4 @@ static inline void blake2s(u8 *out, const u8 *in, const u8 *key, blake2s_final(&state, 
out); } -void blake2s256_hmac(u8 *out, const u8 *in, const u8 *key, const size_t inlen, - const size_t keylen); - #endif /* _CRYPTO_BLAKE2S_H */ diff --git a/lib/crypto/blake2s-selftest.c b/lib/crypto/blake2s-selftest.c index 5d9ea53be973..409e4b728770 100644 --- a/lib/crypto/blake2s-selftest.c +++ b/lib/crypto/blake2s-selftest.c @@ -15,7 +15,6 @@ * #include * * #include - * #include * * #define BLAKE2S_TESTVEC_COUNT 256 * @@ -58,16 +57,6 @@ * } * printf("};\n\n"); * - * printf("static const u8 blake2s_hmac_testvecs[][BLAKE2S_HASH_SIZE] __initconst = {\n"); - * - * HMAC(EVP_blake2s256(), key, sizeof(key), buf, sizeof(buf), hash, NULL); - * print_vec(hash, BLAKE2S_OUTBYTES); - * - * HMAC(EVP_blake2s256(), buf, sizeof(buf), key, sizeof(key), hash, NULL); - * print_vec(hash, BLAKE2S_OUTBYTES); - * - * printf("};\n"); - * * return 0; *} */ @@ -554,15 +543,6 @@ static const u8 blake2s_testvecs[][BLAKE2S_HASH_SIZE] __initconst = { 0xd6, 0x98, 0x6b, 0x07, 0x10, 0x65, 0x52, 0x65, }, }; -static const u8 blake2s_hmac_testvecs[][BLAKE2S_HASH_SIZE] __initconst = { - { 0xce, 0xe1, 0x57, 0x69, 0x82, 0xdc, 0xbf, 0x43, 0xad, 0x56, 0x4c, 0x70, - 0xed, 0x68, 0x16, 0x96, 0xcf, 0xa4, 0x73, 0xe8, 0xe8, 0xfc, 0x32, 0x79, - 0x08, 0x0a, 0x75, 0x82, 0xda, 0x3f, 0x05, 0x11, }, - { 0x77, 0x2f, 0x0c, 0x71, 0x41, 0xf4, 0x4b, 0x2b, 0xb3, 0xc6, 0xb6, 0xf9, - 0x60, 0xde, 0xe4, 0x52, 0x38, 0x66, 0xe8, 0xbf, 0x9b, 0x96, 0xc4, 0x9f, - 0x60, 0xd9, 0x24, 0x37, 0x99, 0xd6, 0xec, 0x31, }, -}; - bool __init blake2s_selftest(void) { u8 key[BLAKE2S_KEY_SIZE]; @@ -607,16 +587,5 @@ bool __init blake2s_selftest(void) } } - if (success) { - blake2s256_hmac(hash, buf, key, sizeof(buf), sizeof(key)); - success &= !memcmp(hash, blake2s_hmac_testvecs[0], BLAKE2S_HASH_SIZE); - - blake2s256_hmac(hash, key, buf, sizeof(key), sizeof(buf)); - success &= !memcmp(hash, blake2s_hmac_testvecs[1], BLAKE2S_HASH_SIZE); - - if (!success) - pr_err("blake2s256_hmac self-test: FAIL\n"); - } - return success; } diff --git a/lib/crypto/blake2s.c b/lib/crypto/blake2s.c index 416a076964a2..bbe387245a01 100644 --- a/lib/crypto/blake2s.c +++ b/lib/crypto/blake2s.c @@ -30,43 +30,6 @@ void blake2s_final(struct blake2s_state *state, u8 *out) } EXPORT_SYMBOL(blake2s_final); -void blake2s256_hmac(u8 *out, const u8 *in, const u8 *key, const size_t inlen, - const size_t keylen) -{ - struct blake2s_state state; - u8 x_key[BLAKE2S_BLOCK_SIZE] __aligned(__alignof__(u32)) = { 0 }; - u8 i_hash[BLAKE2S_HASH_SIZE] __aligned(__alignof__(u32)); - int i; - - if (keylen > BLAKE2S_BLOCK_SIZE) { - blake2s_init(&state, BLAKE2S_HASH_SIZE); - blake2s_update(&state, key, keylen); - blake2s_final(&state, x_key); - } else - memcpy(x_key, key, keylen); - - for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i) - x_key[i] ^= 0x36; - - blake2s_init(&state, BLAKE2S_HASH_SIZE); - blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE); - blake2s_update(&state, in, inlen); - blake2s_final(&state, i_hash); - - for (i = 0; i < BLAKE2S_BLOCK_SIZE; ++i) - x_key[i] ^= 0x5c ^ 0x36; - - blake2s_init(&state, BLAKE2S_HASH_SIZE); - blake2s_update(&state, x_key, BLAKE2S_BLOCK_SIZE); - blake2s_update(&state, i_hash, BLAKE2S_HASH_SIZE); - blake2s_final(&state, i_hash); - - memcpy(out, i_hash, BLAKE2S_HASH_SIZE); - memzero_explicit(x_key, BLAKE2S_BLOCK_SIZE); - memzero_explicit(i_hash, BLAKE2S_HASH_SIZE); -} -EXPORT_SYMBOL(blake2s256_hmac); - static int __init mod_init(void) { if (!IS_ENABLED(CONFIG_CRYPTO_MANAGER_DISABLE_TESTS) && From 07918ddba3ab7549ea6f69156accbdb6c467832a Mon Sep 17 00:00:00 2001 From: "Jason A. 
Donenfeld" Date: Tue, 11 Jan 2022 18:58:43 +0100 Subject: [PATCH 0124/1842] lib/crypto: sha1: re-roll loops to reduce code size commit 9a1536b093bb5bf60689021275fd24d513bb8db0 upstream. With SHA-1 no longer being used for anything performance oriented, and also soon to be phased out entirely, we can make up for the space added by unrolled BLAKE2s by simply re-rolling SHA-1. Since SHA-1 is so much more complex, re-rolling it more or less takes care of the code size added by BLAKE2s. And eventually, hopefully we'll see SHA-1 removed entirely from most small kernel builds. Cc: Herbert Xu Cc: Ard Biesheuvel Tested-by: Geert Uytterhoeven Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- lib/sha1.c | 95 ++++++++---------------------------------------------- 1 file changed, 14 insertions(+), 81 deletions(-) diff --git a/lib/sha1.c b/lib/sha1.c index 49257a915bb6..5ad4e4948272 100644 --- a/lib/sha1.c +++ b/lib/sha1.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include @@ -55,7 +56,8 @@ #define SHA_ROUND(t, input, fn, constant, A, B, C, D, E) do { \ __u32 TEMP = input(t); setW(t, TEMP); \ E += TEMP + rol32(A,5) + (fn) + (constant); \ - B = ror32(B, 2); } while (0) + B = ror32(B, 2); \ + TEMP = E; E = D; D = C; C = B; B = A; A = TEMP; } while (0) #define T_0_15(t, A, B, C, D, E) SHA_ROUND(t, SHA_SRC, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E ) #define T_16_19(t, A, B, C, D, E) SHA_ROUND(t, SHA_MIX, (((C^D)&B)^D) , 0x5a827999, A, B, C, D, E ) @@ -84,6 +86,7 @@ void sha1_transform(__u32 *digest, const char *data, __u32 *array) { __u32 A, B, C, D, E; + unsigned int i = 0; A = digest[0]; B = digest[1]; @@ -92,94 +95,24 @@ void sha1_transform(__u32 *digest, const char *data, __u32 *array) E = digest[4]; /* Round 1 - iterations 0-16 take their input from 'data' */ - T_0_15( 0, A, B, C, D, E); - T_0_15( 1, E, A, B, C, D); - T_0_15( 2, D, E, A, B, C); - T_0_15( 3, C, D, E, A, B); - T_0_15( 4, B, C, D, E, A); - T_0_15( 5, A, B, C, D, E); - T_0_15( 6, E, A, B, C, D); - T_0_15( 7, D, E, A, B, C); - T_0_15( 8, C, D, E, A, B); - T_0_15( 9, B, C, D, E, A); - T_0_15(10, A, B, C, D, E); - T_0_15(11, E, A, B, C, D); - T_0_15(12, D, E, A, B, C); - T_0_15(13, C, D, E, A, B); - T_0_15(14, B, C, D, E, A); - T_0_15(15, A, B, C, D, E); + for (; i < 16; ++i) + T_0_15(i, A, B, C, D, E); /* Round 1 - tail. 
Input from 512-bit mixing array */ - T_16_19(16, E, A, B, C, D); - T_16_19(17, D, E, A, B, C); - T_16_19(18, C, D, E, A, B); - T_16_19(19, B, C, D, E, A); + for (; i < 20; ++i) + T_16_19(i, A, B, C, D, E); /* Round 2 */ - T_20_39(20, A, B, C, D, E); - T_20_39(21, E, A, B, C, D); - T_20_39(22, D, E, A, B, C); - T_20_39(23, C, D, E, A, B); - T_20_39(24, B, C, D, E, A); - T_20_39(25, A, B, C, D, E); - T_20_39(26, E, A, B, C, D); - T_20_39(27, D, E, A, B, C); - T_20_39(28, C, D, E, A, B); - T_20_39(29, B, C, D, E, A); - T_20_39(30, A, B, C, D, E); - T_20_39(31, E, A, B, C, D); - T_20_39(32, D, E, A, B, C); - T_20_39(33, C, D, E, A, B); - T_20_39(34, B, C, D, E, A); - T_20_39(35, A, B, C, D, E); - T_20_39(36, E, A, B, C, D); - T_20_39(37, D, E, A, B, C); - T_20_39(38, C, D, E, A, B); - T_20_39(39, B, C, D, E, A); + for (; i < 40; ++i) + T_20_39(i, A, B, C, D, E); /* Round 3 */ - T_40_59(40, A, B, C, D, E); - T_40_59(41, E, A, B, C, D); - T_40_59(42, D, E, A, B, C); - T_40_59(43, C, D, E, A, B); - T_40_59(44, B, C, D, E, A); - T_40_59(45, A, B, C, D, E); - T_40_59(46, E, A, B, C, D); - T_40_59(47, D, E, A, B, C); - T_40_59(48, C, D, E, A, B); - T_40_59(49, B, C, D, E, A); - T_40_59(50, A, B, C, D, E); - T_40_59(51, E, A, B, C, D); - T_40_59(52, D, E, A, B, C); - T_40_59(53, C, D, E, A, B); - T_40_59(54, B, C, D, E, A); - T_40_59(55, A, B, C, D, E); - T_40_59(56, E, A, B, C, D); - T_40_59(57, D, E, A, B, C); - T_40_59(58, C, D, E, A, B); - T_40_59(59, B, C, D, E, A); + for (; i < 60; ++i) + T_40_59(i, A, B, C, D, E); /* Round 4 */ - T_60_79(60, A, B, C, D, E); - T_60_79(61, E, A, B, C, D); - T_60_79(62, D, E, A, B, C); - T_60_79(63, C, D, E, A, B); - T_60_79(64, B, C, D, E, A); - T_60_79(65, A, B, C, D, E); - T_60_79(66, E, A, B, C, D); - T_60_79(67, D, E, A, B, C); - T_60_79(68, C, D, E, A, B); - T_60_79(69, B, C, D, E, A); - T_60_79(70, A, B, C, D, E); - T_60_79(71, E, A, B, C, D); - T_60_79(72, D, E, A, B, C); - T_60_79(73, C, D, E, A, B); - T_60_79(74, B, C, D, E, A); - T_60_79(75, A, B, C, D, E); - T_60_79(76, E, A, B, C, D); - T_60_79(77, D, E, A, B, C); - T_60_79(78, C, D, E, A, B); - T_60_79(79, B, C, D, E, A); + for (; i < 80; ++i) + T_60_79(i, A, B, C, D, E); digest[0] += A; digest[1] += B; From ae33c501e059b7d43ce11937354605d5e96ba2a6 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 19 Jan 2022 14:35:06 +0100 Subject: [PATCH 0125/1842] lib/crypto: blake2s: avoid indirect calls to compression function for Clang CFI commit d2a02e3c8bb6b347818518edff5a4b40ff52d6d8 upstream. blake2s_compress_generic is weakly aliased by blake2s_compress. The current harness for function selection uses a function pointer, which is ordinarily inlined and resolved at compile time. But when Clang's CFI is enabled, CFI still triggers when making an indirect call via a weak symbol. This seems like a bug in Clang's CFI, as though it's bucketing weak symbols and strong symbols differently. It also only seems to trigger when "full LTO" mode is used, rather than "thin LTO". 
[ 0.000000][ T0] Kernel panic - not syncing: CFI failure (target: blake2s_compress_generic+0x0/0x1444) [ 0.000000][ T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.16.0-mainline-06981-g076c855b846e #1 [ 0.000000][ T0] Hardware name: MT6873 (DT) [ 0.000000][ T0] Call trace: [ 0.000000][ T0] dump_backtrace+0xfc/0x1dc [ 0.000000][ T0] dump_stack_lvl+0xa8/0x11c [ 0.000000][ T0] panic+0x194/0x464 [ 0.000000][ T0] __cfi_check_fail+0x54/0x58 [ 0.000000][ T0] __cfi_slowpath_diag+0x354/0x4b0 [ 0.000000][ T0] blake2s_update+0x14c/0x178 [ 0.000000][ T0] _extract_entropy+0xf4/0x29c [ 0.000000][ T0] crng_initialize_primary+0x24/0x94 [ 0.000000][ T0] rand_initialize+0x2c/0x6c [ 0.000000][ T0] start_kernel+0x2f8/0x65c [ 0.000000][ T0] __primary_switched+0xc4/0x7be4 [ 0.000000][ T0] Rebooting in 5 seconds.. Nonetheless, the function pointer method isn't so terrific anyway, so this patch replaces it with a simple boolean, which also gets inlined away. This successfully works around the Clang bug. In general, I'm not too keen on all of the indirection involved here; it clearly does more harm than good. Hopefully the whole thing can get cleaned up down the road when lib/crypto is overhauled more comprehensively. But for now, we go with a simple bandaid. Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in") Link: https://github.com/ClangBuiltLinux/linux/issues/1567 Reported-by: Miles Chen Tested-by: Miles Chen Tested-by: Nathan Chancellor Tested-by: John Stultz Acked-by: Nick Desaulniers Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/x86/crypto/blake2s-shash.c | 4 ++-- crypto/blake2s_generic.c | 4 ++-- include/crypto/internal/blake2s.h | 40 +++++++++++++++++++------------ lib/crypto/blake2s.c | 4 ++-- 4 files changed, 31 insertions(+), 21 deletions(-) diff --git a/arch/x86/crypto/blake2s-shash.c b/arch/x86/crypto/blake2s-shash.c index f9e2fecdb761..59ae28abe35c 100644 --- a/arch/x86/crypto/blake2s-shash.c +++ b/arch/x86/crypto/blake2s-shash.c @@ -18,12 +18,12 @@ static int crypto_blake2s_update_x86(struct shash_desc *desc, const u8 *in, unsigned int inlen) { - return crypto_blake2s_update(desc, in, inlen, blake2s_compress); + return crypto_blake2s_update(desc, in, inlen, false); } static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out) { - return crypto_blake2s_final(desc, out, blake2s_compress); + return crypto_blake2s_final(desc, out, false); } #define BLAKE2S_ALG(name, driver_name, digest_size) \ diff --git a/crypto/blake2s_generic.c b/crypto/blake2s_generic.c index 72fe480f9bd6..5f96a21f8788 100644 --- a/crypto/blake2s_generic.c +++ b/crypto/blake2s_generic.c @@ -15,12 +15,12 @@ static int crypto_blake2s_update_generic(struct shash_desc *desc, const u8 *in, unsigned int inlen) { - return crypto_blake2s_update(desc, in, inlen, blake2s_compress_generic); + return crypto_blake2s_update(desc, in, inlen, true); } static int crypto_blake2s_final_generic(struct shash_desc *desc, u8 *out) { - return crypto_blake2s_final(desc, out, blake2s_compress_generic); + return crypto_blake2s_final(desc, out, true); } #define BLAKE2S_ALG(name, driver_name, digest_size) \ diff --git a/include/crypto/internal/blake2s.h b/include/crypto/internal/blake2s.h index d39cfa0d333e..52363eee2b20 100644 --- a/include/crypto/internal/blake2s.h +++ b/include/crypto/internal/blake2s.h @@ -24,14 +24,11 @@ static inline void blake2s_set_lastblock(struct blake2s_state *state) state->f[0] = -1; } -typedef void (*blake2s_compress_t)(struct blake2s_state *state, - 
const u8 *block, size_t nblocks, u32 inc); - /* Helper functions for BLAKE2s shared by the library and shash APIs */ -static inline void __blake2s_update(struct blake2s_state *state, - const u8 *in, size_t inlen, - blake2s_compress_t compress) +static __always_inline void +__blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen, + bool force_generic) { const size_t fill = BLAKE2S_BLOCK_SIZE - state->buflen; @@ -39,7 +36,12 @@ static inline void __blake2s_update(struct blake2s_state *state, return; if (inlen > fill) { memcpy(state->buf + state->buflen, in, fill); - (*compress)(state, state->buf, 1, BLAKE2S_BLOCK_SIZE); + if (force_generic) + blake2s_compress_generic(state, state->buf, 1, + BLAKE2S_BLOCK_SIZE); + else + blake2s_compress(state, state->buf, 1, + BLAKE2S_BLOCK_SIZE); state->buflen = 0; in += fill; inlen -= fill; @@ -47,7 +49,12 @@ static inline void __blake2s_update(struct blake2s_state *state, if (inlen > BLAKE2S_BLOCK_SIZE) { const size_t nblocks = DIV_ROUND_UP(inlen, BLAKE2S_BLOCK_SIZE); /* Hash one less (full) block than strictly possible */ - (*compress)(state, in, nblocks - 1, BLAKE2S_BLOCK_SIZE); + if (force_generic) + blake2s_compress_generic(state, in, nblocks - 1, + BLAKE2S_BLOCK_SIZE); + else + blake2s_compress(state, in, nblocks - 1, + BLAKE2S_BLOCK_SIZE); in += BLAKE2S_BLOCK_SIZE * (nblocks - 1); inlen -= BLAKE2S_BLOCK_SIZE * (nblocks - 1); } @@ -55,13 +62,16 @@ static inline void __blake2s_update(struct blake2s_state *state, state->buflen += inlen; } -static inline void __blake2s_final(struct blake2s_state *state, u8 *out, - blake2s_compress_t compress) +static __always_inline void +__blake2s_final(struct blake2s_state *state, u8 *out, bool force_generic) { blake2s_set_lastblock(state); memset(state->buf + state->buflen, 0, BLAKE2S_BLOCK_SIZE - state->buflen); /* Padding */ - (*compress)(state, state->buf, 1, state->buflen); + if (force_generic) + blake2s_compress_generic(state, state->buf, 1, state->buflen); + else + blake2s_compress(state, state->buf, 1, state->buflen); cpu_to_le32_array(state->h, ARRAY_SIZE(state->h)); memcpy(out, state->h, state->outlen); } @@ -99,20 +109,20 @@ static inline int crypto_blake2s_init(struct shash_desc *desc) static inline int crypto_blake2s_update(struct shash_desc *desc, const u8 *in, unsigned int inlen, - blake2s_compress_t compress) + bool force_generic) { struct blake2s_state *state = shash_desc_ctx(desc); - __blake2s_update(state, in, inlen, compress); + __blake2s_update(state, in, inlen, force_generic); return 0; } static inline int crypto_blake2s_final(struct shash_desc *desc, u8 *out, - blake2s_compress_t compress) + bool force_generic) { struct blake2s_state *state = shash_desc_ctx(desc); - __blake2s_final(state, out, compress); + __blake2s_final(state, out, force_generic); return 0; } diff --git a/lib/crypto/blake2s.c b/lib/crypto/blake2s.c index bbe387245a01..80b194f9a0a0 100644 --- a/lib/crypto/blake2s.c +++ b/lib/crypto/blake2s.c @@ -18,14 +18,14 @@ void blake2s_update(struct blake2s_state *state, const u8 *in, size_t inlen) { - __blake2s_update(state, in, inlen, blake2s_compress); + __blake2s_update(state, in, inlen, false); } EXPORT_SYMBOL(blake2s_update); void blake2s_final(struct blake2s_state *state, u8 *out) { WARN_ON(IS_ENABLED(DEBUG) && !out); - __blake2s_final(state, out, blake2s_compress); + __blake2s_final(state, out, false); memzero_explicit(state, sizeof(*state)); } EXPORT_SYMBOL(blake2s_final); From b57a888740886462697453febcad22cdaa392218 Mon Sep 17 00:00:00 2001 From: Mark Brown 
Date: Wed, 1 Dec 2021 17:44:49 +0000 Subject: [PATCH 0126/1842] random: document add_hwgenerator_randomness() with other input functions commit 2b6c6e3d9ce3aa0e547ac25d60e06fe035cd9f79 upstream. The section at the top of random.c which documents the input functions available does not document add_hwgenerator_randomness() which might lead a reader to overlook it. Add a brief note about it. Signed-off-by: Mark Brown [Jason: reorganize position of function in doc comment and also document add_bootloader_randomness() while we're at it.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index 1ff55a3f03c3..04b39fd1a8ea 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -202,6 +202,9 @@ * unsigned int value); * void add_interrupt_randomness(int irq, int irq_flags); * void add_disk_randomness(struct gendisk *disk); + * void add_hwgenerator_randomness(const char *buffer, size_t count, + * size_t entropy); + * void add_bootloader_randomness(const void *buf, unsigned int size); * * add_device_randomness() is for adding data to the random pool that * is likely to differ between two devices (or possibly even per boot). @@ -228,6 +231,14 @@ * particular randomness source. They do this by keeping track of the * first and second order deltas of the event timings. * + * add_hwgenerator_randomness() is for true hardware RNGs, and will credit + * entropy as specified by the caller. If the entropy pool is full it will + * block until more entropy is needed. + * + * add_bootloader_randomness() is the same as add_hwgenerator_randomness() or + * add_device_randomness(), depending on whether or not the configuration + * option CONFIG_RANDOM_TRUST_BOOTLOADER is set. + * * Ensuring unpredictability at system startup * ============================================ * From 33c30bfe4fb4332ffbd896e5551cbcdf5ad360b7 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Tue, 7 Dec 2021 13:17:33 +0100 Subject: [PATCH 0127/1842] random: remove unused irq_flags argument from add_interrupt_randomness() commit 703f7066f40599c290babdb79dd61319264987e9 upstream. Since commit ee3e00e9e7101 ("random: use registers from interrupted code for CPU's w/o a cycle counter") the irq_flags argument is no longer used. Remove unused irq_flags. Cc: Borislav Petkov Cc: Dave Hansen Cc: Dexuan Cui Cc: H. Peter Anvin Cc: Haiyang Zhang Cc: Ingo Molnar Cc: K. Y. Srinivasan Cc: Stephen Hemminger Cc: Thomas Gleixner Cc: Wei Liu Cc: linux-hyperv@vger.kernel.org Cc: x86@kernel.org Signed-off-by: Sebastian Andrzej Siewior Acked-by: Wei Liu Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/mshyperv.c | 2 +- drivers/char/random.c | 4 ++-- drivers/hv/vmbus_drv.c | 2 +- include/linux/random.h | 2 +- kernel/irq/handle.c | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c index 65d11711cd7b..021cd067733e 100644 --- a/arch/x86/kernel/cpu/mshyperv.c +++ b/arch/x86/kernel/cpu/mshyperv.c @@ -84,7 +84,7 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_stimer0) inc_irq_stat(hyperv_stimer0_count); if (hv_stimer0_handler) hv_stimer0_handler(); - add_interrupt_randomness(HYPERV_STIMER0_VECTOR, 0); + add_interrupt_randomness(HYPERV_STIMER0_VECTOR); ack_APIC_irq(); set_irq_regs(old_regs); diff --git a/drivers/char/random.c b/drivers/char/random.c index 04b39fd1a8ea..d3e83b05860b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -200,7 +200,7 @@ * void add_device_randomness(const void *buf, unsigned int size); * void add_input_randomness(unsigned int type, unsigned int code, * unsigned int value); - * void add_interrupt_randomness(int irq, int irq_flags); + * void add_interrupt_randomness(int irq); * void add_disk_randomness(struct gendisk *disk); * void add_hwgenerator_randomness(const char *buffer, size_t count, * size_t entropy); @@ -1273,7 +1273,7 @@ static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs) return *ptr; } -void add_interrupt_randomness(int irq, int irq_flags) +void add_interrupt_randomness(int irq) { struct entropy_store *r; struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c index b9ac357e465d..5d820037e291 100644 --- a/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c @@ -1351,7 +1351,7 @@ static void vmbus_isr(void) tasklet_schedule(&hv_cpu->msg_dpc); } - add_interrupt_randomness(hv_get_vector(), 0); + add_interrupt_randomness(hv_get_vector()); } /* diff --git a/include/linux/random.h b/include/linux/random.h index f45b8be3e3c4..c45b2693e51f 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -35,7 +35,7 @@ static inline void add_latent_entropy(void) {} extern void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; -extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy; +extern void add_interrupt_randomness(int irq) __latent_entropy; extern void get_random_bytes(void *buf, int nbytes); extern int wait_for_random_bytes(void); diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c index 762a928e18f9..8806444a6855 100644 --- a/kernel/irq/handle.c +++ b/kernel/irq/handle.c @@ -195,7 +195,7 @@ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc) retval = __handle_irq_event_percpu(desc, &flags); - add_interrupt_randomness(desc->irq_data.irq, flags); + add_interrupt_randomness(desc->irq_data.irq); if (!noirqdebug) note_interrupt(desc, retval); From 685200b076ff2d2b8a13aa1c03da48e373d205b2 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 21 Dec 2021 16:31:27 +0100 Subject: [PATCH 0128/1842] random: use BLAKE2s instead of SHA1 in extraction commit 9f9eff85a008b095eafc5f4ecbaf5aca689271c1 upstream. This commit addresses one of the lower hanging fruits of the RNG: its usage of SHA1. BLAKE2s is generally faster, and certainly more secure, than SHA1, which has [1] been [2] really [3] very [4] broken [5]. 
Additionally, the current construction in the RNG doesn't use the full SHA1 function, as specified, and allows overwriting the IV with RDRAND output in an undocumented way, even in the case when RDRAND isn't set to "trusted", which means potential malicious IV choices. And its short length means that keeping only half of it secret when feeding back into the mixer gives us only 2^80 bits of forward secrecy. In other words, not only is the choice of hash function dated, but the use of it isn't really great either. This commit aims to fix both of these issues while also keeping the general structure and semantics as close to the original as possible. Specifically: a) Rather than overwriting the hash IV with RDRAND, we put it into BLAKE2's documented "salt" and "personal" fields, which were specifically created for this type of usage. b) Since this function feeds the full hash result back into the entropy collector, we only return from it half the length of the hash, just as it was done before. This increases the construction's forward secrecy from 2^80 to a much more comfortable 2^128. c) Rather than using the raw "sha1_transform" function alone, we instead use the full proper BLAKE2s function, with finalization. This also has the advantage of supplying 16 bytes at a time rather than SHA1's 10 bytes, which, in addition to having a faster compression function to begin with, means faster extraction in general. On an Intel i7-11850H, this commit makes initial seeding around 131% faster. BLAKE2s itself has the nice property of internally being based on the ChaCha permutation, which the RNG is already using for expansion, so there shouldn't be any issue with newness, funkiness, or surprising CPU behavior, since it's based on something already in use. [1] https://eprint.iacr.org/2005/010.pdf [2] https://www.iacr.org/archive/crypto2005/36210017/36210017.pdf [3] https://eprint.iacr.org/2015/967.pdf [4] https://shattered.io/static/shattered.pdf [5] https://www.usenix.org/system/files/sec20-leurent.pdf Reviewed-by: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Greg Kroah-Hartman Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 71 ++++++++++++++++++------------------------- 1 file changed, 30 insertions(+), 41 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index d3e83b05860b..ca773f100fd6 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1,8 +1,7 @@ /* * random.c -- A strong random number generator * - * Copyright (C) 2017 Jason A. Donenfeld . All - * Rights Reserved. + * Copyright (C) 2017-2022 Jason A. Donenfeld . All Rights Reserved. * * Copyright Matt Mackall , 2003, 2004, 2005 * @@ -78,12 +77,12 @@ * an *estimate* of how many bits of randomness have been stored into * the random number generator's internal state. * - * When random bytes are desired, they are obtained by taking the SHA - * hash of the contents of the "entropy pool". The SHA hash avoids + * When random bytes are desired, they are obtained by taking the BLAKE2s + * hash of the contents of the "entropy pool". The BLAKE2s hash avoids * exposing the internal state of the entropy pool. It is believed to * be computationally infeasible to derive any useful information - * about the input of SHA from its output. Even if it is possible to - * analyze SHA in some clever way, as long as the amount of data + * about the input of BLAKE2s from its output. 
Even if it is possible to + * analyze BLAKE2s in some clever way, as long as the amount of data * returned from the generator is less than the inherent entropy in * the pool, the output data is totally unpredictable. For this * reason, the routine decreases its internal estimate of how many @@ -93,7 +92,7 @@ * If this estimate goes to zero, the routine can still generate * random numbers; however, an attacker may (at least in theory) be * able to infer the future output of the generator from prior - * outputs. This requires successful cryptanalysis of SHA, which is + * outputs. This requires successful cryptanalysis of BLAKE2s, which is * not believed to be feasible, but there is a remote possibility. * Nonetheless, these numbers should be useful for the vast majority * of purposes. @@ -347,7 +346,7 @@ #include #include #include -#include +#include #include #include @@ -367,10 +366,7 @@ #define INPUT_POOL_WORDS (1 << (INPUT_POOL_SHIFT-5)) #define OUTPUT_POOL_SHIFT 10 #define OUTPUT_POOL_WORDS (1 << (OUTPUT_POOL_SHIFT-5)) -#define EXTRACT_SIZE 10 - - -#define LONGS(x) (((x) + sizeof(unsigned long) - 1)/sizeof(unsigned long)) +#define EXTRACT_SIZE (BLAKE2S_HASH_SIZE / 2) /* * To allow fractional bits to be tracked, the entropy_count field is @@ -406,7 +402,7 @@ static int random_write_wakeup_bits = 28 * OUTPUT_POOL_WORDS; * Thanks to Colin Plumb for suggesting this. * * The mixing operation is much less sensitive than the output hash, - * where we use SHA-1. All that we want of mixing operation is that + * where we use BLAKE2s. All that we want of mixing operation is that * it be a good non-cryptographic hash; i.e. it not produce collisions * when fed "random" data of the sort we expect to see. As long as * the pool state differs for different inputs, we have preserved the @@ -1399,56 +1395,49 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min, */ static void extract_buf(struct entropy_store *r, __u8 *out) { - int i; - union { - __u32 w[5]; - unsigned long l[LONGS(20)]; - } hash; - __u32 workspace[SHA1_WORKSPACE_WORDS]; + struct blake2s_state state __aligned(__alignof__(unsigned long)); + u8 hash[BLAKE2S_HASH_SIZE]; + unsigned long *salt; unsigned long flags; + blake2s_init(&state, sizeof(hash)); + /* * If we have an architectural hardware random number - * generator, use it for SHA's initial vector + * generator, use it for BLAKE2's salt & personal fields. */ - sha1_init(hash.w); - for (i = 0; i < LONGS(20); i++) { + for (salt = (unsigned long *)&state.h[4]; + salt < (unsigned long *)&state.h[8]; ++salt) { unsigned long v; if (!arch_get_random_long(&v)) break; - hash.l[i] = v; + *salt ^= v; } - /* Generate a hash across the pool, 16 words (512 bits) at a time */ + /* Generate a hash across the pool */ spin_lock_irqsave(&r->lock, flags); - for (i = 0; i < r->poolinfo->poolwords; i += 16) - sha1_transform(hash.w, (__u8 *)(r->pool + i), workspace); + blake2s_update(&state, (const u8 *)r->pool, + r->poolinfo->poolwords * sizeof(*r->pool)); + blake2s_final(&state, hash); /* final zeros out state */ /* * We mix the hash back into the pool to prevent backtracking * attacks (where the attacker knows the state of the pool * plus the current outputs, and attempts to find previous - * ouputs), unless the hash function can be inverted. By - * mixing at least a SHA1 worth of hash data back, we make + * outputs), unless the hash function can be inverted. By + * mixing at least a hash worth of hash data back, we make * brute-forcing the feedback as hard as brute-forcing the * hash. 
*/ - __mix_pool_bytes(r, hash.w, sizeof(hash.w)); + __mix_pool_bytes(r, hash, sizeof(hash)); spin_unlock_irqrestore(&r->lock, flags); - memzero_explicit(workspace, sizeof(workspace)); - - /* - * In case the hash function has some recognizable output - * pattern, we fold it in half. Thus, we always feed back - * twice as much data as we output. + /* Note that EXTRACT_SIZE is half of hash size here, because above + * we've dumped the full length back into mixer. By reducing the + * amount that we emit, we retain a level of forward secrecy. */ - hash.w[0] ^= hash.w[3]; - hash.w[1] ^= hash.w[4]; - hash.w[2] ^= rol32(hash.w[2], 16); - - memcpy(out, &hash, EXTRACT_SIZE); - memzero_explicit(&hash, sizeof(hash)); + memcpy(out, hash, EXTRACT_SIZE); + memzero_explicit(hash, sizeof(hash)); } static ssize_t _extract_entropy(struct entropy_store *r, void *buf, From 2bfdf588a8119ce4fc0ff2e12cc783abd620dfb8 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 24 Dec 2021 19:17:58 +0100 Subject: [PATCH 0129/1842] random: do not sign extend bytes for rotation when mixing commit 0d9488ffbf2faddebc6bac055bfa6c93b94056a3 upstream. By using `char` instead of `unsigned char`, certain platforms will sign extend the byte when `w = rol32(*bytes++, input_rotate)` is called, meaning that bit 7 is overrepresented when mixing. This isn't a real problem (unless the mixer itself is already broken) since it's still invertible, but it's not quite correct either. Fix this by using an explicit unsigned type. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index ca773f100fd6..8eac76539fdd 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -547,7 +547,7 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in, unsigned long i, tap1, tap2, tap3, tap4, tap5; int input_rotate; int wordmask = r->poolinfo->poolwords - 1; - const char *bytes = in; + const unsigned char *bytes = in; __u32 w; tap1 = r->poolinfo->tap1; From 542d8ebedb4d506f3da2c077845eb9909ec647eb Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 29 Dec 2021 22:10:04 +0100 Subject: [PATCH 0130/1842] random: do not re-init if crng_reseed completes before primary init commit 9c3ddde3f811aabbb83778a2a615bf141b4909ef upstream. If the bootloader supplies sufficient material and crng_reseed() is called very early on, but not too early that wqs aren't available yet, then we might transition to crng_init==2 before rand_initialize()'s call to crng_initialize_primary() made. Then, when crng_initialize_primary() is called, if we're trusting the CPU's RDRAND instructions, we'll needlessly reinitialize the RNG and emit a message about it. This is mostly harmless, as numa_crng_init() will allocate and then free what it just allocated, and excessive calls to invalidate_batched_entropy() aren't so harmful. But it is funky and the extra message is confusing, so avoid the re-initialization all together by checking for crng_init < 2 in crng_initialize_primary(), just as we already do in crng_reseed(). Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 8eac76539fdd..de9ca41ea870 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -827,7 +827,7 @@ static void __init crng_initialize_primary(struct crng_state *crng) { chacha_init_consts(crng->state); _extract_entropy(&input_pool, &crng->state[4], sizeof(__u32) * 12, 0); - if (crng_init_try_arch_early(crng) && trust_cpu) { + if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); numa_crng_init(); crng_init = 2; From ca57d51126e4b49ff6c1c42963b72337a885dea7 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 29 Dec 2021 22:10:06 +0100 Subject: [PATCH 0131/1842] random: mix bootloader randomness into pool commit 57826feeedb63b091f807ba8325d736775d39afd upstream. If we're trusting bootloader randomness, crng_fast_load() is called by add_hwgenerator_randomness(), which sets us to crng_init==1. However, usually it is only called once for an initial 64-byte push, so bootloader entropy will not mix any bytes into the input pool. So it's conceivable that crng_init==1 when crng_initialize_primary() is called later, but then the input pool is empty. When that happens, the crng state key will be overwritten with extracted output from the empty input pool. That's bad. In contrast, if we're not trusting bootloader randomness, we call crng_slow_load() *and* we call mix_pool_bytes(), so that later crng_initialize_primary() isn't drawing on nothing. In order to prevent crng_initialize_primary() from extracting an empty pool, have the trusted bootloader case mirror that of the untrusted bootloader case, mixing the input into the pool. [linux@dominikbrodowski.net: rewrite commit message] Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index de9ca41ea870..60dc2cd87d2b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -2301,6 +2301,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, if (unlikely(crng_init == 0)) { size_t ret = crng_fast_load(buffer, count); + mix_pool_bytes(poolp, buffer, ret); count -= ret; buffer += ret; if (!count || crng_init == 0) From 644320410266e11f13b2f003025b2555c2d36383 Mon Sep 17 00:00:00 2001 From: Dominik Brodowski Date: Wed, 29 Dec 2021 22:10:07 +0100 Subject: [PATCH 0132/1842] random: harmonize "crng init done" messages commit 161212c7fd1d9069b232785c75492e50941e2ea8 upstream. We print out "crng init done" for !TRUST_CPU, so we should also print out the same for TRUST_CPU. Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 60dc2cd87d2b..26939a300059 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -831,7 +831,7 @@ static void __init crng_initialize_primary(struct crng_state *crng) invalidate_batched_entropy(); numa_crng_init(); crng_init = 2; - pr_notice("crng done (trusting CPU's manufacturer)\n"); + pr_notice("crng init done (trusting CPU's manufacturer)\n"); } crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; } From efaddd56bc546fc4f2daf2210d8decf8934c1ec9 Mon Sep 17 00:00:00 2001 From: "Jason A. 
Donenfeld" Date: Thu, 30 Dec 2021 15:59:26 +0100 Subject: [PATCH 0133/1842] random: use IS_ENABLED(CONFIG_NUMA) instead of ifdefs commit 7b87324112df2e1f9b395217361626362dcfb9fb upstream. Rather than an awkward combination of ifdefs and __maybe_unused, we can ensure more source gets parsed, regardless of the configuration, by using IS_ENABLED for the CONFIG_NUMA conditional code. This makes things cleaner and easier to follow. I've confirmed that on !CONFIG_NUMA, we don't wind up with excess code by accident; the generated object file is the same. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 30 +++++++++++------------------- 1 file changed, 11 insertions(+), 19 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 26939a300059..55b73e454a12 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -759,7 +759,6 @@ static int credit_entropy_bits_safe(struct entropy_store *r, int nbits) static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); -#ifdef CONFIG_NUMA /* * Hack to deal with crazy userspace progams when they are all trying * to access /dev/urandom in parallel. The programs are almost @@ -767,7 +766,6 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); * their brain damage. */ static struct crng_state **crng_node_pool __read_mostly; -#endif static void invalidate_batched_entropy(void); static void numa_crng_init(void); @@ -815,7 +813,7 @@ static bool __init crng_init_try_arch_early(struct crng_state *crng) return arch_init; } -static void __maybe_unused crng_initialize_secondary(struct crng_state *crng) +static void crng_initialize_secondary(struct crng_state *crng) { chacha_init_consts(crng->state); _get_random_bytes(&crng->state[4], sizeof(__u32) * 12); @@ -866,7 +864,6 @@ static void crng_finalize_init(struct crng_state *crng) } } -#ifdef CONFIG_NUMA static void do_numa_crng_init(struct work_struct *work) { int i; @@ -893,29 +890,24 @@ static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init); static void numa_crng_init(void) { - schedule_work(&numa_crng_init_work); + if (IS_ENABLED(CONFIG_NUMA)) + schedule_work(&numa_crng_init_work); } static struct crng_state *select_crng(void) { - struct crng_state **pool; - int nid = numa_node_id(); + if (IS_ENABLED(CONFIG_NUMA)) { + struct crng_state **pool; + int nid = numa_node_id(); - /* pairs with cmpxchg_release() in do_numa_crng_init() */ - pool = READ_ONCE(crng_node_pool); - if (pool && pool[nid]) - return pool[nid]; + /* pairs with cmpxchg_release() in do_numa_crng_init() */ + pool = READ_ONCE(crng_node_pool); + if (pool && pool[nid]) + return pool[nid]; + } return &primary_crng; } -#else -static void numa_crng_init(void) {} - -static struct crng_state *select_crng(void) -{ - return &primary_crng; -} -#endif /* * crng_fast_load() can be called by code in the interrupt service From c245231aecd3836f1b76f5bea8302394732b174d Mon Sep 17 00:00:00 2001 From: Dominik Brodowski Date: Fri, 31 Dec 2021 09:26:08 +0100 Subject: [PATCH 0134/1842] random: early initialization of ChaCha constants commit 96562f286884e2db89c74215b199a1084b5fb7f7 upstream. Previously, the ChaCha constants for the primary pool were only initialized in crng_initialize_primary(), called by rand_initialize(). However, some randomness is actually extracted from the primary pool beforehand, e.g. by kmem_cache_create(). Therefore, statically initialize the ChaCha constants for the primary pool. Cc: Herbert Xu Cc: "David S. 
Miller" Cc: Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 5 ++++- include/crypto/chacha.h | 15 +++++++++++---- 2 files changed, 15 insertions(+), 5 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 55b73e454a12..2ecfcf924890 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -457,6 +457,10 @@ struct crng_state { static struct crng_state primary_crng = { .lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock), + .state[0] = CHACHA_CONSTANT_EXPA, + .state[1] = CHACHA_CONSTANT_ND_3, + .state[2] = CHACHA_CONSTANT_2_BY, + .state[3] = CHACHA_CONSTANT_TE_K, }; /* @@ -823,7 +827,6 @@ static void crng_initialize_secondary(struct crng_state *crng) static void __init crng_initialize_primary(struct crng_state *crng) { - chacha_init_consts(crng->state); _extract_entropy(&input_pool, &crng->state[4], sizeof(__u32) * 12, 0); if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); diff --git a/include/crypto/chacha.h b/include/crypto/chacha.h index dabaee698718..b3ea73b81944 100644 --- a/include/crypto/chacha.h +++ b/include/crypto/chacha.h @@ -47,12 +47,19 @@ static inline void hchacha_block(const u32 *state, u32 *out, int nrounds) hchacha_block_generic(state, out, nrounds); } +enum chacha_constants { /* expand 32-byte k */ + CHACHA_CONSTANT_EXPA = 0x61707865U, + CHACHA_CONSTANT_ND_3 = 0x3320646eU, + CHACHA_CONSTANT_2_BY = 0x79622d32U, + CHACHA_CONSTANT_TE_K = 0x6b206574U +}; + static inline void chacha_init_consts(u32 *state) { - state[0] = 0x61707865; /* "expa" */ - state[1] = 0x3320646e; /* "nd 3" */ - state[2] = 0x79622d32; /* "2-by" */ - state[3] = 0x6b206574; /* "te k" */ + state[0] = CHACHA_CONSTANT_EXPA; + state[1] = CHACHA_CONSTANT_ND_3; + state[2] = CHACHA_CONSTANT_2_BY; + state[3] = CHACHA_CONSTANT_TE_K; } void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv); From 17420c77f04cc3c0d68a4aff87d686a7ffa6561b Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 30 Dec 2021 17:50:52 +0100 Subject: [PATCH 0135/1842] random: avoid superfluous call to RDRAND in CRNG extraction commit 2ee25b6968b1b3c66ffa408de23d023c1bce81cf upstream. RDRAND is not fast. RDRAND is actually quite slow. We've known this for a while, which is why functions like get_random_u{32,64} were converted to use batching of our ChaCha-based CRNG instead. Yet CRNG extraction still includes a call to RDRAND, in the hot path of every call to get_random_bytes(), /dev/urandom, and getrandom(2). This call to RDRAND here seems quite superfluous. CRNG is already extracting things based on a 256-bit key, based on good entropy, which is then reseeded periodically, updated, backtrack-mutated, and so forth. The CRNG extraction construction is something that we're already relying on to be secure and solid. If it's not, that's a serious problem, and it's unlikely that mixing in a measly 32 bits from RDRAND is going to alleviate things. And in the case where the CRNG doesn't have enough entropy yet, we're already initializing the ChaCha key row with RDRAND in crng_init_try_arch_early(). Removing the call to RDRAND improves performance on an i7-11850H by 370%. In other words, the vast majority of the work done by extract_crng() prior to this commit was devoted to fetching 32 bits of RDRAND. Reviewed-by: Theodore Ts'o Acked-by: Ard Biesheuvel Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 2ecfcf924890..8f5e040b9e3b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1023,7 +1023,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) static void _extract_crng(struct crng_state *crng, __u8 out[CHACHA_BLOCK_SIZE]) { - unsigned long v, flags, init_time; + unsigned long flags, init_time; if (crng_ready()) { init_time = READ_ONCE(crng->init_time); @@ -1033,8 +1033,6 @@ static void _extract_crng(struct crng_state *crng, &input_pool : NULL); } spin_lock_irqsave(&crng->lock, flags); - if (arch_get_random_long(&v)) - crng->state[14] ^= v; chacha20_block(&crng->state[0], out); if (crng->state[12] == 0) crng->state[13]++; From 0ad5d6384d25b88ce6ff8aa6d7550b1c932ccea9 Mon Sep 17 00:00:00 2001 From: Jann Horn Date: Mon, 3 Jan 2022 16:59:31 +0100 Subject: [PATCH 0136/1842] random: don't reset crng_init_cnt on urandom_read() commit 6c8e11e08a5b74bb8a5cdd5cbc1e5143df0fba72 upstream. At the moment, urandom_read() (used for /dev/urandom) resets crng_init_cnt to zero when it is called at crng_init<2. This is inconsistent: We do it for /dev/urandom reads, but not for the equivalent getrandom(GRND_INSECURE). (And worse, as Jason pointed out, we're only doing this as long as maxwarn>0.) crng_init_cnt is only read in crng_fast_load(); it is relevant at crng_init==0 for determining when to switch to crng_init==1 (and where in the RNG state array to write). As far as I understand: - crng_init==0 means "we have nothing, we might just be returning the same exact numbers on every boot on every machine, we don't even have non-cryptographic randomness; we should shove every bit of entropy we can get into the RNG immediately" - crng_init==1 means "well we have something, it might not be cryptographic, but at least we're not gonna return the same data every time or whatever, it's probably good enough for TCP and ASLR and stuff; we now have time to build up actual cryptographic entropy in the input pool" - crng_init==2 means "this is supposed to be cryptographically secure now, but we'll keep adding more entropy just to be sure". The current code means that if someone is pulling data from /dev/urandom fast enough at crng_init==0, we'll keep resetting crng_init_cnt, and we'll never make forward progress to crng_init==1. It seems to be intended to prevent an attacker from bruteforcing the contents of small individual RNG inputs on the way from crng_init==0 to crng_init==1, but that's misguided; crng_init==1 isn't supposed to provide proper cryptographic security anyway, RNG users who care about getting secure RNG output have to wait until crng_init==2. This code was inconsistent, and it probably made things worse - just get rid of it. Signed-off-by: Jann Horn Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 4 ---- 1 file changed, 4 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 8f5e040b9e3b..08072d46ac8b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1831,7 +1831,6 @@ urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes, static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) { - unsigned long flags; static int maxwarn = 10; if (!crng_ready() && maxwarn > 0) { @@ -1839,9 +1838,6 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) if (__ratelimit(&urandom_warning)) pr_notice("%s: uninitialized urandom read (%zd bytes read)\n", current->comm, nbytes); - spin_lock_irqsave(&primary_crng.lock, flags); - crng_init_cnt = 0; - spin_unlock_irqrestore(&primary_crng.lock, flags); } return urandom_read_nowarn(file, buf, nbytes, ppos); From 8a3b78f9177c7d0f061e05b317f812507f3046a2 Mon Sep 17 00:00:00 2001 From: Schspa Shi Date: Fri, 14 Jan 2022 16:12:16 +0800 Subject: [PATCH 0137/1842] random: fix typo in comments commit c0a8a61e7abbf66729687ee63659ee25983fbb1e upstream. s/or/for Signed-off-by: Schspa Shi Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 08072d46ac8b..c6d798b9f0f0 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -101,7 +101,7 @@ * =============================== * * There are four exported interfaces; two for use within the kernel, - * and two or use from userspace. + * and two for use from userspace. * * Exported interfaces ---- userspace output * ----------------------------------------- From c9e108e36dc83c3c4fb4ec1cf53e76ecca3528a9 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 9 Jan 2022 17:32:02 +0100 Subject: [PATCH 0138/1842] random: cleanup poolinfo abstraction commit 91ec0fe138f107232cb36bc6112211db37cb5306 upstream. Now that we're only using one polynomial, we can cleanup its representation into constants, instead of passing around pointers dynamically to select different polynomials. This improves the codegen and makes the code a bit more straightforward. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 67 +++++++++++++++++++------------------------ 1 file changed, 30 insertions(+), 37 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index c6d798b9f0f0..55d70a3981d3 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -430,14 +430,20 @@ static int random_write_wakeup_bits = 28 * OUTPUT_POOL_WORDS; * polynomial which improves the resulting TGFSR polynomial to be * irreducible, which we have made here. 
*/ -static const struct poolinfo { - int poolbitshift, poolwords, poolbytes, poolfracbits; -#define S(x) ilog2(x)+5, (x), (x)*4, (x) << (ENTROPY_SHIFT+5) - int tap1, tap2, tap3, tap4, tap5; -} poolinfo_table[] = { - /* was: x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 */ +enum poolinfo { + POOL_WORDS = 128, + POOL_WORDMASK = POOL_WORDS - 1, + POOL_BYTES = POOL_WORDS * sizeof(u32), + POOL_BITS = POOL_BYTES * 8, + POOL_BITSHIFT = ilog2(POOL_WORDS) + 5, + POOL_FRACBITS = POOL_WORDS << (ENTROPY_SHIFT + 5), + /* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */ - { S(128), 104, 76, 51, 25, 1 }, + POOL_TAP1 = 104, + POOL_TAP2 = 76, + POOL_TAP3 = 51, + POOL_TAP4 = 25, + POOL_TAP5 = 1 }; /* @@ -503,7 +509,6 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); struct entropy_store; struct entropy_store { /* read-only data: */ - const struct poolinfo *poolinfo; __u32 *pool; const char *name; @@ -525,7 +530,6 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r); static __u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy; static struct entropy_store input_pool = { - .poolinfo = &poolinfo_table[0], .name = "input", .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), .pool = input_pool_data @@ -548,33 +552,26 @@ static __u32 const twist_table[8] = { static void _mix_pool_bytes(struct entropy_store *r, const void *in, int nbytes) { - unsigned long i, tap1, tap2, tap3, tap4, tap5; + unsigned long i; int input_rotate; - int wordmask = r->poolinfo->poolwords - 1; const unsigned char *bytes = in; __u32 w; - tap1 = r->poolinfo->tap1; - tap2 = r->poolinfo->tap2; - tap3 = r->poolinfo->tap3; - tap4 = r->poolinfo->tap4; - tap5 = r->poolinfo->tap5; - input_rotate = r->input_rotate; i = r->add_ptr; /* mix one byte at a time to simplify size handling and churn faster */ while (nbytes--) { w = rol32(*bytes++, input_rotate); - i = (i - 1) & wordmask; + i = (i - 1) & POOL_WORDMASK; /* XOR in the various taps */ w ^= r->pool[i]; - w ^= r->pool[(i + tap1) & wordmask]; - w ^= r->pool[(i + tap2) & wordmask]; - w ^= r->pool[(i + tap3) & wordmask]; - w ^= r->pool[(i + tap4) & wordmask]; - w ^= r->pool[(i + tap5) & wordmask]; + w ^= r->pool[(i + POOL_TAP1) & POOL_WORDMASK]; + w ^= r->pool[(i + POOL_TAP2) & POOL_WORDMASK]; + w ^= r->pool[(i + POOL_TAP3) & POOL_WORDMASK]; + w ^= r->pool[(i + POOL_TAP4) & POOL_WORDMASK]; + w ^= r->pool[(i + POOL_TAP5) & POOL_WORDMASK]; /* Mix the result back in with a twist */ r->pool[i] = (w >> 3) ^ twist_table[w & 7]; @@ -672,7 +669,6 @@ static void process_random_ready_list(void) static void credit_entropy_bits(struct entropy_store *r, int nbits) { int entropy_count, orig; - const int pool_size = r->poolinfo->poolfracbits; int nfrac = nbits << ENTROPY_SHIFT; if (!nbits) @@ -706,25 +702,25 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits) * turns no matter how large nbits is. 
*/ int pnfrac = nfrac; - const int s = r->poolinfo->poolbitshift + ENTROPY_SHIFT + 2; + const int s = POOL_BITSHIFT + ENTROPY_SHIFT + 2; /* The +2 corresponds to the /4 in the denominator */ do { - unsigned int anfrac = min(pnfrac, pool_size/2); + unsigned int anfrac = min(pnfrac, POOL_FRACBITS/2); unsigned int add = - ((pool_size - entropy_count)*anfrac*3) >> s; + ((POOL_FRACBITS - entropy_count)*anfrac*3) >> s; entropy_count += add; pnfrac -= anfrac; - } while (unlikely(entropy_count < pool_size-2 && pnfrac)); + } while (unlikely(entropy_count < POOL_FRACBITS-2 && pnfrac)); } if (WARN_ON(entropy_count < 0)) { pr_warn("negative entropy/overflow: pool %s count %d\n", r->name, entropy_count); entropy_count = 0; - } else if (entropy_count > pool_size) - entropy_count = pool_size; + } else if (entropy_count > POOL_FRACBITS) + entropy_count = POOL_FRACBITS; if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) goto retry; @@ -741,13 +737,11 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits) static int credit_entropy_bits_safe(struct entropy_store *r, int nbits) { - const int nbits_max = r->poolinfo->poolwords * 32; - if (nbits < 0) return -EINVAL; /* Cap the value to avoid overflows */ - nbits = min(nbits, nbits_max); + nbits = min(nbits, POOL_BITS); credit_entropy_bits(r, nbits); return 0; @@ -1343,7 +1337,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min, int entropy_count, orig, have_bytes; size_t ibytes, nfrac; - BUG_ON(r->entropy_count > r->poolinfo->poolfracbits); + BUG_ON(r->entropy_count > POOL_FRACBITS); /* Can we pull enough? */ retry: @@ -1409,8 +1403,7 @@ static void extract_buf(struct entropy_store *r, __u8 *out) /* Generate a hash across the pool */ spin_lock_irqsave(&r->lock, flags); - blake2s_update(&state, (const u8 *)r->pool, - r->poolinfo->poolwords * sizeof(*r->pool)); + blake2s_update(&state, (const u8 *)r->pool, POOL_BYTES); blake2s_final(&state, hash); /* final zeros out state */ /* @@ -1766,7 +1759,7 @@ static void __init init_std_data(struct entropy_store *r) unsigned long rv; mix_pool_bytes(r, &now, sizeof(now)); - for (i = r->poolinfo->poolbytes; i > 0; i -= sizeof(rv)) { + for (i = POOL_BYTES; i > 0; i -= sizeof(rv)) { if (!arch_get_random_seed_long(&rv) && !arch_get_random_long(&rv)) rv = random_get_entropy(); From 7abbc9809fa085393d90da01f9da3f25451168f1 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 9 Jan 2022 17:48:58 +0100 Subject: [PATCH 0139/1842] random: cleanup integer types commit d38bb0853589c939573ea50e9cb64f733e0e273d upstream. Rather than using the userspace type, __uXX, switch to using uXX. And rather than using variously chosen `char *` or `unsigned char *`, use `u8 *` uniformly for things that aren't strings, in the case where we are doing byte-by-byte traversal. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 105 +++++++++++++++++++++--------------------- 1 file changed, 52 insertions(+), 53 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 55d70a3981d3..6f260780d663 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -456,7 +456,7 @@ static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); struct crng_state { - __u32 state[16]; + u32 state[16]; unsigned long init_time; spinlock_t lock; }; @@ -483,9 +483,9 @@ static bool crng_need_final_init = false; static int crng_init_cnt = 0; static unsigned long crng_global_init_time = 0; #define CRNG_INIT_CNT_THRESH (2*CHACHA_KEY_SIZE) -static void _extract_crng(struct crng_state *crng, __u8 out[CHACHA_BLOCK_SIZE]); +static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]); static void _crng_backtrack_protect(struct crng_state *crng, - __u8 tmp[CHACHA_BLOCK_SIZE], int used); + u8 tmp[CHACHA_BLOCK_SIZE], int used); static void process_random_ready_list(void); static void _get_random_bytes(void *buf, int nbytes); @@ -509,16 +509,16 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); struct entropy_store; struct entropy_store { /* read-only data: */ - __u32 *pool; + u32 *pool; const char *name; /* read-write data: */ spinlock_t lock; - unsigned short add_ptr; - unsigned short input_rotate; + u16 add_ptr; + u16 input_rotate; int entropy_count; unsigned int last_data_init:1; - __u8 last_data[EXTRACT_SIZE]; + u8 last_data[EXTRACT_SIZE]; }; static ssize_t extract_entropy(struct entropy_store *r, void *buf, @@ -527,7 +527,7 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf, size_t nbytes, int fips); static void crng_reseed(struct crng_state *crng, struct entropy_store *r); -static __u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy; +static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy; static struct entropy_store input_pool = { .name = "input", @@ -535,7 +535,7 @@ static struct entropy_store input_pool = { .pool = input_pool_data }; -static __u32 const twist_table[8] = { +static u32 const twist_table[8] = { 0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158, 0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 }; @@ -554,8 +554,8 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in, { unsigned long i; int input_rotate; - const unsigned char *bytes = in; - __u32 w; + const u8 *bytes = in; + u32 w; input_rotate = r->input_rotate; i = r->add_ptr; @@ -608,10 +608,10 @@ static void mix_pool_bytes(struct entropy_store *r, const void *in, } struct fast_pool { - __u32 pool[4]; + u32 pool[4]; unsigned long last; - unsigned short reg_idx; - unsigned char count; + u16 reg_idx; + u8 count; }; /* @@ -621,8 +621,8 @@ struct fast_pool { */ static void fast_mix(struct fast_pool *f) { - __u32 a = f->pool[0], b = f->pool[1]; - __u32 c = f->pool[2], d = f->pool[3]; + u32 a = f->pool[0], b = f->pool[1]; + u32 c = f->pool[2], d = f->pool[3]; a += b; c += d; b = rol32(b, 6); d = rol32(d, 27); @@ -814,14 +814,14 @@ static bool __init crng_init_try_arch_early(struct crng_state *crng) static void crng_initialize_secondary(struct crng_state *crng) { chacha_init_consts(crng->state); - _get_random_bytes(&crng->state[4], sizeof(__u32) * 12); + _get_random_bytes(&crng->state[4], sizeof(u32) * 12); crng_init_try_arch(crng); crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; } static void __init crng_initialize_primary(struct crng_state *crng) { - 
_extract_entropy(&input_pool, &crng->state[4], sizeof(__u32) * 12, 0); + _extract_entropy(&input_pool, &crng->state[4], sizeof(u32) * 12, 0); if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); numa_crng_init(); @@ -911,10 +911,10 @@ static struct crng_state *select_crng(void) * path. So we can't afford to dilly-dally. Returns the number of * bytes processed from cp. */ -static size_t crng_fast_load(const char *cp, size_t len) +static size_t crng_fast_load(const u8 *cp, size_t len) { unsigned long flags; - char *p; + u8 *p; size_t ret = 0; if (!spin_trylock_irqsave(&primary_crng.lock, flags)) @@ -923,7 +923,7 @@ static size_t crng_fast_load(const char *cp, size_t len) spin_unlock_irqrestore(&primary_crng.lock, flags); return 0; } - p = (unsigned char *) &primary_crng.state[4]; + p = (u8 *) &primary_crng.state[4]; while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp; cp++; crng_init_cnt++; len--; ret++; @@ -951,14 +951,14 @@ static size_t crng_fast_load(const char *cp, size_t len) * like a fixed DMI table (for example), which might very well be * unique to the machine, but is otherwise unvarying. */ -static int crng_slow_load(const char *cp, size_t len) +static int crng_slow_load(const u8 *cp, size_t len) { unsigned long flags; - static unsigned char lfsr = 1; - unsigned char tmp; - unsigned i, max = CHACHA_KEY_SIZE; - const char * src_buf = cp; - char * dest_buf = (char *) &primary_crng.state[4]; + static u8 lfsr = 1; + u8 tmp; + unsigned int i, max = CHACHA_KEY_SIZE; + const u8 * src_buf = cp; + u8 * dest_buf = (u8 *) &primary_crng.state[4]; if (!spin_trylock_irqsave(&primary_crng.lock, flags)) return 0; @@ -987,8 +987,8 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) unsigned long flags; int i, num; union { - __u8 block[CHACHA_BLOCK_SIZE]; - __u32 key[8]; + u8 block[CHACHA_BLOCK_SIZE]; + u32 key[8]; } buf; if (r) { @@ -1015,7 +1015,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) } static void _extract_crng(struct crng_state *crng, - __u8 out[CHACHA_BLOCK_SIZE]) + u8 out[CHACHA_BLOCK_SIZE]) { unsigned long flags, init_time; @@ -1033,7 +1033,7 @@ static void _extract_crng(struct crng_state *crng, spin_unlock_irqrestore(&crng->lock, flags); } -static void extract_crng(__u8 out[CHACHA_BLOCK_SIZE]) +static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]) { _extract_crng(select_crng(), out); } @@ -1043,26 +1043,26 @@ static void extract_crng(__u8 out[CHACHA_BLOCK_SIZE]) * enough) to mutate the CRNG key to provide backtracking protection. 
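 *
 * Editorial note (not part of the upstream patch), as a concrete example:
 * a ChaCha block is 64 bytes and the key occupies state[4..11] (32 bytes).
 * If, say, only 16 bytes of a block were handed out, the unused bytes
 * 16..47 are XORed word-by-word over state[4..11]; if fewer than 32 unused
 * bytes remain, a fresh block is generated solely to rekey.  Either way
 * the key that produced the delivered output no longer exists, so a later
 * compromise of the state cannot be wound back to reproduce that output.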
*/ static void _crng_backtrack_protect(struct crng_state *crng, - __u8 tmp[CHACHA_BLOCK_SIZE], int used) + u8 tmp[CHACHA_BLOCK_SIZE], int used) { unsigned long flags; - __u32 *s, *d; + u32 *s, *d; int i; - used = round_up(used, sizeof(__u32)); + used = round_up(used, sizeof(u32)); if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) { extract_crng(tmp); used = 0; } spin_lock_irqsave(&crng->lock, flags); - s = (__u32 *) &tmp[used]; + s = (u32 *) &tmp[used]; d = &crng->state[4]; for (i=0; i < 8; i++) *d++ ^= *s++; spin_unlock_irqrestore(&crng->lock, flags); } -static void crng_backtrack_protect(__u8 tmp[CHACHA_BLOCK_SIZE], int used) +static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used) { _crng_backtrack_protect(select_crng(), tmp, used); } @@ -1070,7 +1070,7 @@ static void crng_backtrack_protect(__u8 tmp[CHACHA_BLOCK_SIZE], int used) static ssize_t extract_crng_user(void __user *buf, size_t nbytes) { ssize_t ret = 0, i = CHACHA_BLOCK_SIZE; - __u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); + u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); int large_request = (nbytes > 256); while (nbytes) { @@ -1158,8 +1158,8 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) struct entropy_store *r; struct { long jiffies; - unsigned cycles; - unsigned num; + unsigned int cycles; + unsigned int num; } sample; long delta, delta2, delta3; @@ -1241,15 +1241,15 @@ static void add_interrupt_bench(cycles_t start) #define add_interrupt_bench(x) #endif -static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs) +static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) { - __u32 *ptr = (__u32 *) regs; + u32 *ptr = (u32 *) regs; unsigned int idx; if (regs == NULL) return 0; idx = READ_ONCE(f->reg_idx); - if (idx >= sizeof(struct pt_regs) / sizeof(__u32)) + if (idx >= sizeof(struct pt_regs) / sizeof(u32)) idx = 0; ptr += idx++; WRITE_ONCE(f->reg_idx, idx); @@ -1263,8 +1263,8 @@ void add_interrupt_randomness(int irq) struct pt_regs *regs = get_irq_regs(); unsigned long now = jiffies; cycles_t cycles = random_get_entropy(); - __u32 c_high, j_high; - __u64 ip; + u32 c_high, j_high; + u64 ip; if (cycles == 0) cycles = get_reg(fast_pool, regs); @@ -1282,8 +1282,7 @@ void add_interrupt_randomness(int irq) if (unlikely(crng_init == 0)) { if ((fast_pool->count >= 64) && - crng_fast_load((char *) fast_pool->pool, - sizeof(fast_pool->pool)) > 0) { + crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) { fast_pool->count = 0; fast_pool->last = now; } @@ -1380,7 +1379,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min, * * Note: we assume that .poolwords is a multiple of 16 words. 
*/ -static void extract_buf(struct entropy_store *r, __u8 *out) +static void extract_buf(struct entropy_store *r, u8 *out) { struct blake2s_state state __aligned(__alignof__(unsigned long)); u8 hash[BLAKE2S_HASH_SIZE]; @@ -1430,7 +1429,7 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf, size_t nbytes, int fips) { ssize_t ret = 0, i; - __u8 tmp[EXTRACT_SIZE]; + u8 tmp[EXTRACT_SIZE]; unsigned long flags; while (nbytes) { @@ -1468,7 +1467,7 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf, static ssize_t extract_entropy(struct entropy_store *r, void *buf, size_t nbytes, int min, int reserved) { - __u8 tmp[EXTRACT_SIZE]; + u8 tmp[EXTRACT_SIZE]; unsigned long flags; /* if last_data isn't primed, we need EXTRACT_SIZE extra bytes */ @@ -1530,7 +1529,7 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, */ static void _get_random_bytes(void *buf, int nbytes) { - __u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); + u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); trace_get_random_bytes(nbytes, _RET_IP_); @@ -1724,7 +1723,7 @@ EXPORT_SYMBOL(del_random_ready_callback); int __must_check get_random_bytes_arch(void *buf, int nbytes) { int left = nbytes; - char *p = buf; + u8 *p = buf; trace_get_random_bytes_arch(left, _RET_IP_); while (left) { @@ -1866,7 +1865,7 @@ static int write_pool(struct entropy_store *r, const char __user *buffer, size_t count) { size_t bytes; - __u32 t, buf[16]; + u32 t, buf[16]; const char __user *p = buffer; while (count > 0) { @@ -1876,7 +1875,7 @@ write_pool(struct entropy_store *r, const char __user *buffer, size_t count) if (copy_from_user(&buf, p, bytes)) return -EFAULT; - for (b = bytes ; b > 0 ; b -= sizeof(__u32), i++) { + for (b = bytes; b > 0; b -= sizeof(u32), i++) { if (!arch_get_random_int(&t)) break; buf[i] ^= t; From ae093ca1256e286616ddb06bf7bc0c9cc4a2ec5d Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 12 Jan 2022 15:22:30 +0100 Subject: [PATCH 0140/1842] random: remove incomplete last_data logic commit a4bfa9b31802c14ff5847123c12b98d5e36b3985 upstream. There were a few things added under the "if (fips_enabled)" banner, which never really got completed, and the FIPS people anyway are choosing a different direction. Rather than keep around this halfbaked code, get rid of it so that we can focus on a single design of the RNG rather than two designs. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 39 ++++----------------------------------- 1 file changed, 4 insertions(+), 35 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 6f260780d663..e821efa0d161 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -337,7 +337,6 @@ #include #include #include -#include #include #include #include @@ -517,14 +516,12 @@ struct entropy_store { u16 add_ptr; u16 input_rotate; int entropy_count; - unsigned int last_data_init:1; - u8 last_data[EXTRACT_SIZE]; }; static ssize_t extract_entropy(struct entropy_store *r, void *buf, size_t nbytes, int min, int rsvd); static ssize_t _extract_entropy(struct entropy_store *r, void *buf, - size_t nbytes, int fips); + size_t nbytes); static void crng_reseed(struct crng_state *crng, struct entropy_store *r); static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy; @@ -821,7 +818,7 @@ static void crng_initialize_secondary(struct crng_state *crng) static void __init crng_initialize_primary(struct crng_state *crng) { - _extract_entropy(&input_pool, &crng->state[4], sizeof(u32) * 12, 0); + _extract_entropy(&input_pool, &crng->state[4], sizeof(u32) * 12); if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); numa_crng_init(); @@ -1426,22 +1423,13 @@ static void extract_buf(struct entropy_store *r, u8 *out) } static ssize_t _extract_entropy(struct entropy_store *r, void *buf, - size_t nbytes, int fips) + size_t nbytes) { ssize_t ret = 0, i; u8 tmp[EXTRACT_SIZE]; - unsigned long flags; while (nbytes) { extract_buf(r, tmp); - - if (fips) { - spin_lock_irqsave(&r->lock, flags); - if (!memcmp(tmp, r->last_data, EXTRACT_SIZE)) - panic("Hardware RNG duplicated output!\n"); - memcpy(r->last_data, tmp, EXTRACT_SIZE); - spin_unlock_irqrestore(&r->lock, flags); - } i = min_t(int, nbytes, EXTRACT_SIZE); memcpy(buf, tmp, i); nbytes -= i; @@ -1467,28 +1455,9 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf, static ssize_t extract_entropy(struct entropy_store *r, void *buf, size_t nbytes, int min, int reserved) { - u8 tmp[EXTRACT_SIZE]; - unsigned long flags; - - /* if last_data isn't primed, we need EXTRACT_SIZE extra bytes */ - if (fips_enabled) { - spin_lock_irqsave(&r->lock, flags); - if (!r->last_data_init) { - r->last_data_init = 1; - spin_unlock_irqrestore(&r->lock, flags); - trace_extract_entropy(r->name, EXTRACT_SIZE, - ENTROPY_BITS(r), _RET_IP_); - extract_buf(r, tmp); - spin_lock_irqsave(&r->lock, flags); - memcpy(r->last_data, tmp, EXTRACT_SIZE); - } - spin_unlock_irqrestore(&r->lock, flags); - } - trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_); nbytes = account(r, nbytes, min, reserved); - - return _extract_entropy(r, buf, nbytes, fips_enabled); + return _extract_entropy(r, buf, nbytes); } #define warn_unseeded_randomness(previous) \ From 5897e06ac15a26d6dd408d927e2f53f3d060582a Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 12 Jan 2022 15:28:21 +0100 Subject: [PATCH 0141/1842] random: remove unused extract_entropy() reserved argument commit 8b2d953b91e7f60200c24067ab17b77cc7bfd0d4 upstream. This argument is always set to zero, as a result of us not caring about keeping a certain amount reserved in the pool these days. So just remove it and cleanup the function signatures. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index e821efa0d161..eefd335b6982 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -519,7 +519,7 @@ struct entropy_store { }; static ssize_t extract_entropy(struct entropy_store *r, void *buf, - size_t nbytes, int min, int rsvd); + size_t nbytes, int min); static ssize_t _extract_entropy(struct entropy_store *r, void *buf, size_t nbytes); @@ -989,7 +989,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) } buf; if (r) { - num = extract_entropy(r, &buf, 32, 16, 0); + num = extract_entropy(r, &buf, 32, 16); if (num == 0) return; } else { @@ -1327,8 +1327,7 @@ EXPORT_SYMBOL_GPL(add_disk_randomness); * This function decides how many bytes to actually take from the * given pool, and also debits the entropy count accordingly. */ -static size_t account(struct entropy_store *r, size_t nbytes, int min, - int reserved) +static size_t account(struct entropy_store *r, size_t nbytes, int min) { int entropy_count, orig, have_bytes; size_t ibytes, nfrac; @@ -1342,7 +1341,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min, /* never pull more than available */ have_bytes = entropy_count >> (ENTROPY_SHIFT + 3); - if ((have_bytes -= reserved) < 0) + if (have_bytes < 0) have_bytes = 0; ibytes = min_t(size_t, ibytes, have_bytes); if (ibytes < min) @@ -1448,15 +1447,13 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf, * returns it in a buffer. * * The min parameter specifies the minimum amount we can pull before - * failing to avoid races that defeat catastrophic reseeding while the - * reserved parameter indicates how much entropy we must leave in the - * pool after each pull to avoid starving other readers. + * failing to avoid races that defeat catastrophic reseeding. */ static ssize_t extract_entropy(struct entropy_store *r, void *buf, - size_t nbytes, int min, int reserved) + size_t nbytes, int min) { trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_); - nbytes = account(r, nbytes, min, reserved); + nbytes = account(r, nbytes, min); return _extract_entropy(r, buf, nbytes); } From 8cc5260c19da9ff46cd34e951d0cdbbc3733e418 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 12 Jan 2022 17:18:08 +0100 Subject: [PATCH 0142/1842] random: rather than entropy_store abstraction, use global commit 90ed1e67e896cc8040a523f8428fc02f9b164394 upstream. Originally, the RNG used several pools, so having things abstracted out over a generic entropy_store object made sense. These days, there's only one input pool, and then an uneven mix of usage via the abstraction and usage via &input_pool. Rather than this uneasy mixture, just get rid of the abstraction entirely and have things always use the global. This simplifies the code and makes reading it a bit easier. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 219 +++++++++++++++------------------- include/trace/events/random.h | 56 ++++----- 2 files changed, 117 insertions(+), 158 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index eefd335b6982..be103e3dfe24 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -375,7 +375,7 @@ * credit_entropy_bits() needs to be 64 bits wide. 
*/ #define ENTROPY_SHIFT 3 -#define ENTROPY_BITS(r) ((r)->entropy_count >> ENTROPY_SHIFT) +#define ENTROPY_BITS() (input_pool.entropy_count >> ENTROPY_SHIFT) /* * If the entropy count falls under this number of bits, then we @@ -505,33 +505,27 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); * **********************************************************************/ -struct entropy_store; -struct entropy_store { +static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy; + +static struct { /* read-only data: */ u32 *pool; - const char *name; /* read-write data: */ spinlock_t lock; u16 add_ptr; u16 input_rotate; int entropy_count; -}; - -static ssize_t extract_entropy(struct entropy_store *r, void *buf, - size_t nbytes, int min); -static ssize_t _extract_entropy(struct entropy_store *r, void *buf, - size_t nbytes); - -static void crng_reseed(struct crng_state *crng, struct entropy_store *r); -static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy; - -static struct entropy_store input_pool = { - .name = "input", +} input_pool = { .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), .pool = input_pool_data }; +static ssize_t extract_entropy(void *buf, size_t nbytes, int min); +static ssize_t _extract_entropy(void *buf, size_t nbytes); + +static void crng_reseed(struct crng_state *crng, bool use_input_pool); + static u32 const twist_table[8] = { 0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158, 0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 }; @@ -546,16 +540,15 @@ static u32 const twist_table[8] = { * it's cheap to do so and helps slightly in the expected case where * the entropy is concentrated in the low-order bits. */ -static void _mix_pool_bytes(struct entropy_store *r, const void *in, - int nbytes) +static void _mix_pool_bytes(const void *in, int nbytes) { unsigned long i; int input_rotate; const u8 *bytes = in; u32 w; - input_rotate = r->input_rotate; - i = r->add_ptr; + input_rotate = input_pool.input_rotate; + i = input_pool.add_ptr; /* mix one byte at a time to simplify size handling and churn faster */ while (nbytes--) { @@ -563,15 +556,15 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in, i = (i - 1) & POOL_WORDMASK; /* XOR in the various taps */ - w ^= r->pool[i]; - w ^= r->pool[(i + POOL_TAP1) & POOL_WORDMASK]; - w ^= r->pool[(i + POOL_TAP2) & POOL_WORDMASK]; - w ^= r->pool[(i + POOL_TAP3) & POOL_WORDMASK]; - w ^= r->pool[(i + POOL_TAP4) & POOL_WORDMASK]; - w ^= r->pool[(i + POOL_TAP5) & POOL_WORDMASK]; + w ^= input_pool.pool[i]; + w ^= input_pool.pool[(i + POOL_TAP1) & POOL_WORDMASK]; + w ^= input_pool.pool[(i + POOL_TAP2) & POOL_WORDMASK]; + w ^= input_pool.pool[(i + POOL_TAP3) & POOL_WORDMASK]; + w ^= input_pool.pool[(i + POOL_TAP4) & POOL_WORDMASK]; + w ^= input_pool.pool[(i + POOL_TAP5) & POOL_WORDMASK]; /* Mix the result back in with a twist */ - r->pool[i] = (w >> 3) ^ twist_table[w & 7]; + input_pool.pool[i] = (w >> 3) ^ twist_table[w & 7]; /* * Normally, we add 7 bits of rotation to the pool. @@ -582,26 +575,24 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in, input_rotate = (input_rotate + (i ? 
7 : 14)) & 31; } - r->input_rotate = input_rotate; - r->add_ptr = i; + input_pool.input_rotate = input_rotate; + input_pool.add_ptr = i; } -static void __mix_pool_bytes(struct entropy_store *r, const void *in, - int nbytes) +static void __mix_pool_bytes(const void *in, int nbytes) { - trace_mix_pool_bytes_nolock(r->name, nbytes, _RET_IP_); - _mix_pool_bytes(r, in, nbytes); + trace_mix_pool_bytes_nolock(nbytes, _RET_IP_); + _mix_pool_bytes(in, nbytes); } -static void mix_pool_bytes(struct entropy_store *r, const void *in, - int nbytes) +static void mix_pool_bytes(const void *in, int nbytes) { unsigned long flags; - trace_mix_pool_bytes(r->name, nbytes, _RET_IP_); - spin_lock_irqsave(&r->lock, flags); - _mix_pool_bytes(r, in, nbytes); - spin_unlock_irqrestore(&r->lock, flags); + trace_mix_pool_bytes(nbytes, _RET_IP_); + spin_lock_irqsave(&input_pool.lock, flags); + _mix_pool_bytes(in, nbytes); + spin_unlock_irqrestore(&input_pool.lock, flags); } struct fast_pool { @@ -663,16 +654,16 @@ static void process_random_ready_list(void) * Use credit_entropy_bits_safe() if the value comes from userspace * or otherwise should be checked for extreme values. */ -static void credit_entropy_bits(struct entropy_store *r, int nbits) +static void credit_entropy_bits(int nbits) { - int entropy_count, orig; + int entropy_count, entropy_bits, orig; int nfrac = nbits << ENTROPY_SHIFT; if (!nbits) return; retry: - entropy_count = orig = READ_ONCE(r->entropy_count); + entropy_count = orig = READ_ONCE(input_pool.entropy_count); if (nfrac < 0) { /* Debit */ entropy_count += nfrac; @@ -713,26 +704,21 @@ static void credit_entropy_bits(struct entropy_store *r, int nbits) } if (WARN_ON(entropy_count < 0)) { - pr_warn("negative entropy/overflow: pool %s count %d\n", - r->name, entropy_count); + pr_warn("negative entropy/overflow: count %d\n", entropy_count); entropy_count = 0; } else if (entropy_count > POOL_FRACBITS) entropy_count = POOL_FRACBITS; - if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) + if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig) goto retry; - trace_credit_entropy_bits(r->name, nbits, - entropy_count >> ENTROPY_SHIFT, _RET_IP_); + trace_credit_entropy_bits(nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_); - if (r == &input_pool) { - int entropy_bits = entropy_count >> ENTROPY_SHIFT; - - if (crng_init < 2 && entropy_bits >= 128) - crng_reseed(&primary_crng, r); - } + entropy_bits = entropy_count >> ENTROPY_SHIFT; + if (crng_init < 2 && entropy_bits >= 128) + crng_reseed(&primary_crng, true); } -static int credit_entropy_bits_safe(struct entropy_store *r, int nbits) +static int credit_entropy_bits_safe(int nbits) { if (nbits < 0) return -EINVAL; @@ -740,7 +726,7 @@ static int credit_entropy_bits_safe(struct entropy_store *r, int nbits) /* Cap the value to avoid overflows */ nbits = min(nbits, POOL_BITS); - credit_entropy_bits(r, nbits); + credit_entropy_bits(nbits); return 0; } @@ -818,7 +804,7 @@ static void crng_initialize_secondary(struct crng_state *crng) static void __init crng_initialize_primary(struct crng_state *crng) { - _extract_entropy(&input_pool, &crng->state[4], sizeof(u32) * 12); + _extract_entropy(&crng->state[4], sizeof(u32) * 12); if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); numa_crng_init(); @@ -979,7 +965,7 @@ static int crng_slow_load(const u8 *cp, size_t len) return 1; } -static void crng_reseed(struct crng_state *crng, struct entropy_store *r) +static void crng_reseed(struct crng_state *crng, bool 
use_input_pool) { unsigned long flags; int i, num; @@ -988,8 +974,8 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r) u32 key[8]; } buf; - if (r) { - num = extract_entropy(r, &buf, 32, 16); + if (use_input_pool) { + num = extract_entropy(&buf, 32, 16); if (num == 0) return; } else { @@ -1020,8 +1006,7 @@ static void _extract_crng(struct crng_state *crng, init_time = READ_ONCE(crng->init_time); if (time_after(READ_ONCE(crng_global_init_time), init_time) || time_after(jiffies, init_time + CRNG_RESEED_INTERVAL)) - crng_reseed(crng, crng == &primary_crng ? - &input_pool : NULL); + crng_reseed(crng, crng == &primary_crng); } spin_lock_irqsave(&crng->lock, flags); chacha20_block(&crng->state[0], out); @@ -1132,8 +1117,8 @@ void add_device_randomness(const void *buf, unsigned int size) trace_add_device_randomness(size, _RET_IP_); spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(&input_pool, buf, size); - _mix_pool_bytes(&input_pool, &time, sizeof(time)); + _mix_pool_bytes(buf, size); + _mix_pool_bytes(&time, sizeof(time)); spin_unlock_irqrestore(&input_pool.lock, flags); } EXPORT_SYMBOL(add_device_randomness); @@ -1152,7 +1137,6 @@ static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE; */ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) { - struct entropy_store *r; struct { long jiffies; unsigned int cycles; @@ -1163,8 +1147,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) sample.jiffies = jiffies; sample.cycles = random_get_entropy(); sample.num = num; - r = &input_pool; - mix_pool_bytes(r, &sample, sizeof(sample)); + mix_pool_bytes(&sample, sizeof(sample)); /* * Calculate number of bits of randomness we probably added. @@ -1196,7 +1179,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) * Round down by 1 bit on general principles, * and limit entropy estimate to 12 bits. 
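 *
 * Editorial worked example (not part of the upstream patch): suppose an
 * event arrives 100 jiffies after the previous one (delta = 100), the
 * previous gap was 80 (so delta2 = 20) and the previous second-order
 * difference was -16 (so delta3 = 36).  The smallest magnitude, 20, is
 * taken as the conservative jitter estimate, and fls(20 >> 1) = fls(10)
 * = 4, so 4 bits are credited; the min_t() cap below keeps the credit
 * at most 11 bits.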
*/ - credit_entropy_bits(r, min_t(int, fls(delta>>1), 11)); + credit_entropy_bits(min_t(int, fls(delta>>1), 11)); } void add_input_randomness(unsigned int type, unsigned int code, @@ -1211,7 +1194,7 @@ void add_input_randomness(unsigned int type, unsigned int code, last_value = value; add_timer_randomness(&input_timer_state, (type << 4) ^ code ^ (code >> 4) ^ value); - trace_add_input_randomness(ENTROPY_BITS(&input_pool)); + trace_add_input_randomness(ENTROPY_BITS()); } EXPORT_SYMBOL_GPL(add_input_randomness); @@ -1255,7 +1238,6 @@ static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) void add_interrupt_randomness(int irq) { - struct entropy_store *r; struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); struct pt_regs *regs = get_irq_regs(); unsigned long now = jiffies; @@ -1290,18 +1272,17 @@ void add_interrupt_randomness(int irq) !time_after(now, fast_pool->last + HZ)) return; - r = &input_pool; - if (!spin_trylock(&r->lock)) + if (!spin_trylock(&input_pool.lock)) return; fast_pool->last = now; - __mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool)); - spin_unlock(&r->lock); + __mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + spin_unlock(&input_pool.lock); fast_pool->count = 0; /* award one bit for the contents of the fast pool */ - credit_entropy_bits(r, 1); + credit_entropy_bits(1); } EXPORT_SYMBOL_GPL(add_interrupt_randomness); @@ -1312,7 +1293,7 @@ void add_disk_randomness(struct gendisk *disk) return; /* first major is 1, so we get >= 0x200 here */ add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); - trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS(&input_pool)); + trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS()); } EXPORT_SYMBOL_GPL(add_disk_randomness); #endif @@ -1327,16 +1308,16 @@ EXPORT_SYMBOL_GPL(add_disk_randomness); * This function decides how many bytes to actually take from the * given pool, and also debits the entropy count accordingly. */ -static size_t account(struct entropy_store *r, size_t nbytes, int min) +static size_t account(size_t nbytes, int min) { int entropy_count, orig, have_bytes; size_t ibytes, nfrac; - BUG_ON(r->entropy_count > POOL_FRACBITS); + BUG_ON(input_pool.entropy_count > POOL_FRACBITS); /* Can we pull enough? */ retry: - entropy_count = orig = READ_ONCE(r->entropy_count); + entropy_count = orig = READ_ONCE(input_pool.entropy_count); ibytes = nbytes; /* never pull more than available */ have_bytes = entropy_count >> (ENTROPY_SHIFT + 3); @@ -1348,8 +1329,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min) ibytes = 0; if (WARN_ON(entropy_count < 0)) { - pr_warn("negative entropy count: pool %s count %d\n", - r->name, entropy_count); + pr_warn("negative entropy count: count %d\n", entropy_count); entropy_count = 0; } nfrac = ibytes << (ENTROPY_SHIFT + 3); @@ -1358,11 +1338,11 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min) else entropy_count = 0; - if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig) + if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig) goto retry; - trace_debit_entropy(r->name, 8 * ibytes); - if (ibytes && ENTROPY_BITS(r) < random_write_wakeup_bits) { + trace_debit_entropy(8 * ibytes); + if (ibytes && ENTROPY_BITS() < random_write_wakeup_bits) { wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); } @@ -1375,7 +1355,7 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min) * * Note: we assume that .poolwords is a multiple of 16 words. 
*/ -static void extract_buf(struct entropy_store *r, u8 *out) +static void extract_buf(u8 *out) { struct blake2s_state state __aligned(__alignof__(unsigned long)); u8 hash[BLAKE2S_HASH_SIZE]; @@ -1397,8 +1377,8 @@ static void extract_buf(struct entropy_store *r, u8 *out) } /* Generate a hash across the pool */ - spin_lock_irqsave(&r->lock, flags); - blake2s_update(&state, (const u8 *)r->pool, POOL_BYTES); + spin_lock_irqsave(&input_pool.lock, flags); + blake2s_update(&state, (const u8 *)input_pool.pool, POOL_BYTES); blake2s_final(&state, hash); /* final zeros out state */ /* @@ -1410,8 +1390,8 @@ static void extract_buf(struct entropy_store *r, u8 *out) * brute-forcing the feedback as hard as brute-forcing the * hash. */ - __mix_pool_bytes(r, hash, sizeof(hash)); - spin_unlock_irqrestore(&r->lock, flags); + __mix_pool_bytes(hash, sizeof(hash)); + spin_unlock_irqrestore(&input_pool.lock, flags); /* Note that EXTRACT_SIZE is half of hash size here, because above * we've dumped the full length back into mixer. By reducing the @@ -1421,14 +1401,13 @@ static void extract_buf(struct entropy_store *r, u8 *out) memzero_explicit(hash, sizeof(hash)); } -static ssize_t _extract_entropy(struct entropy_store *r, void *buf, - size_t nbytes) +static ssize_t _extract_entropy(void *buf, size_t nbytes) { ssize_t ret = 0, i; u8 tmp[EXTRACT_SIZE]; while (nbytes) { - extract_buf(r, tmp); + extract_buf(tmp); i = min_t(int, nbytes, EXTRACT_SIZE); memcpy(buf, tmp, i); nbytes -= i; @@ -1449,12 +1428,11 @@ static ssize_t _extract_entropy(struct entropy_store *r, void *buf, * The min parameter specifies the minimum amount we can pull before * failing to avoid races that defeat catastrophic reseeding. */ -static ssize_t extract_entropy(struct entropy_store *r, void *buf, - size_t nbytes, int min) +static ssize_t extract_entropy(void *buf, size_t nbytes, int min) { - trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_); - nbytes = account(r, nbytes, min); - return _extract_entropy(r, buf, nbytes); + trace_extract_entropy(nbytes, ENTROPY_BITS(), _RET_IP_); + nbytes = account(nbytes, min); + return _extract_entropy(buf, nbytes); } #define warn_unseeded_randomness(previous) \ @@ -1539,7 +1517,7 @@ EXPORT_SYMBOL(get_random_bytes); */ static void entropy_timer(struct timer_list *t) { - credit_entropy_bits(&input_pool, 1); + credit_entropy_bits(1); } /* @@ -1563,14 +1541,14 @@ static void try_to_generate_entropy(void) while (!crng_ready()) { if (!timer_pending(&stack.timer)) mod_timer(&stack.timer, jiffies+1); - mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now)); + mix_pool_bytes(&stack.now, sizeof(stack.now)); schedule(); stack.now = random_get_entropy(); } del_timer_sync(&stack.timer); destroy_timer_on_stack(&stack.timer); - mix_pool_bytes(&input_pool, &stack.now, sizeof(stack.now)); + mix_pool_bytes(&stack.now, sizeof(stack.now)); } /* @@ -1711,26 +1689,24 @@ EXPORT_SYMBOL(get_random_bytes_arch); /* * init_std_data - initialize pool with system data * - * @r: pool to initialize - * * This function clears the pool's entropy count and mixes some system * data into the pool to prepare it for use. The pool is not cleared * as that can only decrease the entropy in the pool. 
*/ -static void __init init_std_data(struct entropy_store *r) +static void __init init_std_data(void) { int i; ktime_t now = ktime_get_real(); unsigned long rv; - mix_pool_bytes(r, &now, sizeof(now)); + mix_pool_bytes(&now, sizeof(now)); for (i = POOL_BYTES; i > 0; i -= sizeof(rv)) { if (!arch_get_random_seed_long(&rv) && !arch_get_random_long(&rv)) rv = random_get_entropy(); - mix_pool_bytes(r, &rv, sizeof(rv)); + mix_pool_bytes(&rv, sizeof(rv)); } - mix_pool_bytes(r, utsname(), sizeof(*(utsname()))); + mix_pool_bytes(utsname(), sizeof(*(utsname()))); } /* @@ -1745,7 +1721,7 @@ static void __init init_std_data(struct entropy_store *r) */ int __init rand_initialize(void) { - init_std_data(&input_pool); + init_std_data(); if (crng_need_final_init) crng_finalize_init(&primary_crng); crng_initialize_primary(&primary_crng); @@ -1782,7 +1758,7 @@ urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes, nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3)); ret = extract_crng_user(buf, nbytes); - trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS(&input_pool)); + trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS()); return ret; } @@ -1822,13 +1798,13 @@ random_poll(struct file *file, poll_table * wait) mask = 0; if (crng_ready()) mask |= EPOLLIN | EPOLLRDNORM; - if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits) + if (ENTROPY_BITS() < random_write_wakeup_bits) mask |= EPOLLOUT | EPOLLWRNORM; return mask; } static int -write_pool(struct entropy_store *r, const char __user *buffer, size_t count) +write_pool(const char __user *buffer, size_t count) { size_t bytes; u32 t, buf[16]; @@ -1850,7 +1826,7 @@ write_pool(struct entropy_store *r, const char __user *buffer, size_t count) count -= bytes; p += bytes; - mix_pool_bytes(r, buf, bytes); + mix_pool_bytes(buf, bytes); cond_resched(); } @@ -1862,7 +1838,7 @@ static ssize_t random_write(struct file *file, const char __user *buffer, { size_t ret; - ret = write_pool(&input_pool, buffer, count); + ret = write_pool(buffer, count); if (ret) return ret; @@ -1878,7 +1854,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) switch (cmd) { case RNDGETENTCNT: /* inherently racy, no point locking */ - ent_count = ENTROPY_BITS(&input_pool); + ent_count = ENTROPY_BITS(); if (put_user(ent_count, p)) return -EFAULT; return 0; @@ -1887,7 +1863,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EPERM; if (get_user(ent_count, p)) return -EFAULT; - return credit_entropy_bits_safe(&input_pool, ent_count); + return credit_entropy_bits_safe(ent_count); case RNDADDENTROPY: if (!capable(CAP_SYS_ADMIN)) return -EPERM; @@ -1897,11 +1873,10 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EINVAL; if (get_user(size, p++)) return -EFAULT; - retval = write_pool(&input_pool, (const char __user *)p, - size); + retval = write_pool((const char __user *)p, size); if (retval < 0) return retval; - return credit_entropy_bits_safe(&input_pool, ent_count); + return credit_entropy_bits_safe(ent_count); case RNDZAPENTCNT: case RNDCLEARPOOL: /* @@ -1920,7 +1895,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EPERM; if (crng_init < 2) return -ENODATA; - crng_reseed(&primary_crng, &input_pool); + crng_reseed(&primary_crng, true); WRITE_ONCE(crng_global_init_time, jiffies - 1); return 0; default: @@ -2244,11 +2219,9 @@ randomize_page(unsigned long start, unsigned long range) void add_hwgenerator_randomness(const char 
*buffer, size_t count, size_t entropy) { - struct entropy_store *poolp = &input_pool; - if (unlikely(crng_init == 0)) { size_t ret = crng_fast_load(buffer, count); - mix_pool_bytes(poolp, buffer, ret); + mix_pool_bytes(buffer, ret); count -= ret; buffer += ret; if (!count || crng_init == 0) @@ -2261,9 +2234,9 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, */ wait_event_interruptible(random_write_wait, !system_wq || kthread_should_stop() || - ENTROPY_BITS(&input_pool) <= random_write_wakeup_bits); - mix_pool_bytes(poolp, buffer, count); - credit_entropy_bits(poolp, entropy); + ENTROPY_BITS() <= random_write_wakeup_bits); + mix_pool_bytes(buffer, count); + credit_entropy_bits(entropy); } EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); diff --git a/include/trace/events/random.h b/include/trace/events/random.h index 3d7b432ca5f3..a2d9aa16a5d7 100644 --- a/include/trace/events/random.h +++ b/include/trace/events/random.h @@ -28,80 +28,71 @@ TRACE_EVENT(add_device_randomness, ); DECLARE_EVENT_CLASS(random__mix_pool_bytes, - TP_PROTO(const char *pool_name, int bytes, unsigned long IP), + TP_PROTO(int bytes, unsigned long IP), - TP_ARGS(pool_name, bytes, IP), + TP_ARGS(bytes, IP), TP_STRUCT__entry( - __field( const char *, pool_name ) __field( int, bytes ) __field(unsigned long, IP ) ), TP_fast_assign( - __entry->pool_name = pool_name; __entry->bytes = bytes; __entry->IP = IP; ), - TP_printk("%s pool: bytes %d caller %pS", - __entry->pool_name, __entry->bytes, (void *)__entry->IP) + TP_printk("input pool: bytes %d caller %pS", + __entry->bytes, (void *)__entry->IP) ); DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes, - TP_PROTO(const char *pool_name, int bytes, unsigned long IP), + TP_PROTO(int bytes, unsigned long IP), - TP_ARGS(pool_name, bytes, IP) + TP_ARGS(bytes, IP) ); DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock, - TP_PROTO(const char *pool_name, int bytes, unsigned long IP), + TP_PROTO(int bytes, unsigned long IP), - TP_ARGS(pool_name, bytes, IP) + TP_ARGS(bytes, IP) ); TRACE_EVENT(credit_entropy_bits, - TP_PROTO(const char *pool_name, int bits, int entropy_count, - unsigned long IP), + TP_PROTO(int bits, int entropy_count, unsigned long IP), - TP_ARGS(pool_name, bits, entropy_count, IP), + TP_ARGS(bits, entropy_count, IP), TP_STRUCT__entry( - __field( const char *, pool_name ) __field( int, bits ) __field( int, entropy_count ) __field(unsigned long, IP ) ), TP_fast_assign( - __entry->pool_name = pool_name; __entry->bits = bits; __entry->entropy_count = entropy_count; __entry->IP = IP; ), - TP_printk("%s pool: bits %d entropy_count %d caller %pS", - __entry->pool_name, __entry->bits, - __entry->entropy_count, (void *)__entry->IP) + TP_printk("input pool: bits %d entropy_count %d caller %pS", + __entry->bits, __entry->entropy_count, (void *)__entry->IP) ); TRACE_EVENT(debit_entropy, - TP_PROTO(const char *pool_name, int debit_bits), + TP_PROTO(int debit_bits), - TP_ARGS(pool_name, debit_bits), + TP_ARGS( debit_bits), TP_STRUCT__entry( - __field( const char *, pool_name ) __field( int, debit_bits ) ), TP_fast_assign( - __entry->pool_name = pool_name; __entry->debit_bits = debit_bits; ), - TP_printk("%s: debit_bits %d", __entry->pool_name, - __entry->debit_bits) + TP_printk("input pool: debit_bits %d", __entry->debit_bits) ); TRACE_EVENT(add_input_randomness, @@ -170,36 +161,31 @@ DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch, ); DECLARE_EVENT_CLASS(random__extract_entropy, - TP_PROTO(const char *pool_name, int nbytes, int entropy_count, - 
unsigned long IP), + TP_PROTO(int nbytes, int entropy_count, unsigned long IP), - TP_ARGS(pool_name, nbytes, entropy_count, IP), + TP_ARGS(nbytes, entropy_count, IP), TP_STRUCT__entry( - __field( const char *, pool_name ) __field( int, nbytes ) __field( int, entropy_count ) __field(unsigned long, IP ) ), TP_fast_assign( - __entry->pool_name = pool_name; __entry->nbytes = nbytes; __entry->entropy_count = entropy_count; __entry->IP = IP; ), - TP_printk("%s pool: nbytes %d entropy_count %d caller %pS", - __entry->pool_name, __entry->nbytes, __entry->entropy_count, - (void *)__entry->IP) + TP_printk("input pool: nbytes %d entropy_count %d caller %pS", + __entry->nbytes, __entry->entropy_count, (void *)__entry->IP) ); DEFINE_EVENT(random__extract_entropy, extract_entropy, - TP_PROTO(const char *pool_name, int nbytes, int entropy_count, - unsigned long IP), + TP_PROTO(int nbytes, int entropy_count, unsigned long IP), - TP_ARGS(pool_name, nbytes, entropy_count, IP) + TP_ARGS(nbytes, entropy_count, IP) ); TRACE_EVENT(urandom_read, From 09ae6b85197969ddf349346f626544e0e6d0b069 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 13 Jan 2022 15:51:06 +0100 Subject: [PATCH 0143/1842] random: remove unused OUTPUT_POOL constants commit 0f63702718c91d89c922081ac1e6baeddc2d8b1a upstream. We no longer have an output pool. Rather, we have just a wakeup bits threshold for /dev/random reads, presumably so that processes don't hang. This value, random_write_wakeup_bits, is configurable anyway. So all the no longer usefully named OUTPUT_POOL constants were doing was setting a reasonable default for random_write_wakeup_bits. This commit gets rid of the constants and just puts it all in the default value of random_write_wakeup_bits. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index be103e3dfe24..6aaacf6df7eb 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -363,8 +363,6 @@ */ #define INPUT_POOL_SHIFT 12 #define INPUT_POOL_WORDS (1 << (INPUT_POOL_SHIFT-5)) -#define OUTPUT_POOL_SHIFT 10 -#define OUTPUT_POOL_WORDS (1 << (OUTPUT_POOL_SHIFT-5)) #define EXTRACT_SIZE (BLAKE2S_HASH_SIZE / 2) /* @@ -382,7 +380,7 @@ * should wake up processes which are selecting or polling on write * access to /dev/random. */ -static int random_write_wakeup_bits = 28 * OUTPUT_POOL_WORDS; +static int random_write_wakeup_bits = 28 * (1 << 5); /* * Originally, we used a primitive polynomial of degree .poolwords From f87f50b843e4206168326eee055b2528021a16d3 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 13 Jan 2022 16:11:21 +0100 Subject: [PATCH 0144/1842] random: de-duplicate INPUT_POOL constants commit 5b87adf30f1464477169a1d653e9baf8c012bbfe upstream. We already had the POOL_* constants, so deduplicate the older INPUT_POOL ones. As well, fold EXTRACT_SIZE into the poolinfo enum, since it's related. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 6aaacf6df7eb..cd4935a9847a 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -358,13 +358,6 @@ /* #define ADD_INTERRUPT_BENCH */ -/* - * Configuration information - */ -#define INPUT_POOL_SHIFT 12 -#define INPUT_POOL_WORDS (1 << (INPUT_POOL_SHIFT-5)) -#define EXTRACT_SIZE (BLAKE2S_HASH_SIZE / 2) - /* * To allow fractional bits to be tracked, the entropy_count field is * denominated in units of 1/8th bits. @@ -440,7 +433,9 @@ enum poolinfo { POOL_TAP2 = 76, POOL_TAP3 = 51, POOL_TAP4 = 25, - POOL_TAP5 = 1 + POOL_TAP5 = 1, + + EXTRACT_SIZE = BLAKE2S_HASH_SIZE / 2 }; /* @@ -503,7 +498,7 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); * **********************************************************************/ -static u32 input_pool_data[INPUT_POOL_WORDS] __latent_entropy; +static u32 input_pool_data[POOL_WORDS] __latent_entropy; static struct { /* read-only data: */ @@ -1964,7 +1959,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, #include static int min_write_thresh; -static int max_write_thresh = INPUT_POOL_WORDS * 32; +static int max_write_thresh = POOL_BITS; static int random_min_urandom_seed = 60; static char sysctl_bootid[16]; @@ -2021,7 +2016,7 @@ static int proc_do_entropy(struct ctl_table *table, int write, return proc_dointvec(&fake_table, write, buffer, lenp, ppos); } -static int sysctl_poolsize = INPUT_POOL_WORDS * 32; +static int sysctl_poolsize = POOL_BITS; extern struct ctl_table random_table[]; struct ctl_table random_table[] = { { From edd294052e77568e5b44a2605e8ba7779ef09cef Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 14 Jan 2022 16:48:35 +0100 Subject: [PATCH 0145/1842] random: prepend remaining pool constants with POOL_ commit b3d51c1f542113342ddfbf6007e38a684b9dbec9 upstream. The other pool constants are prepended with POOL_, but not these last ones. Rename them. This will then let us move them into the enum in the following commit. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 40 ++++++++++++++++++++-------------------- 1 file changed, 20 insertions(+), 20 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index cd4935a9847a..844474b8319e 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -362,11 +362,11 @@ * To allow fractional bits to be tracked, the entropy_count field is * denominated in units of 1/8th bits. * - * 2*(ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in + * 2*(POOL_ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in * credit_entropy_bits() needs to be 64 bits wide. 
*/ -#define ENTROPY_SHIFT 3 -#define ENTROPY_BITS() (input_pool.entropy_count >> ENTROPY_SHIFT) +#define POOL_ENTROPY_SHIFT 3 +#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT) /* * If the entropy count falls under this number of bits, then we @@ -426,7 +426,7 @@ enum poolinfo { POOL_BYTES = POOL_WORDS * sizeof(u32), POOL_BITS = POOL_BYTES * 8, POOL_BITSHIFT = ilog2(POOL_WORDS) + 5, - POOL_FRACBITS = POOL_WORDS << (ENTROPY_SHIFT + 5), + POOL_FRACBITS = POOL_WORDS << (POOL_ENTROPY_SHIFT + 5), /* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */ POOL_TAP1 = 104, @@ -650,7 +650,7 @@ static void process_random_ready_list(void) static void credit_entropy_bits(int nbits) { int entropy_count, entropy_bits, orig; - int nfrac = nbits << ENTROPY_SHIFT; + int nfrac = nbits << POOL_ENTROPY_SHIFT; if (!nbits) return; @@ -683,7 +683,7 @@ static void credit_entropy_bits(int nbits) * turns no matter how large nbits is. */ int pnfrac = nfrac; - const int s = POOL_BITSHIFT + ENTROPY_SHIFT + 2; + const int s = POOL_BITSHIFT + POOL_ENTROPY_SHIFT + 2; /* The +2 corresponds to the /4 in the denominator */ do { @@ -704,9 +704,9 @@ static void credit_entropy_bits(int nbits) if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig) goto retry; - trace_credit_entropy_bits(nbits, entropy_count >> ENTROPY_SHIFT, _RET_IP_); + trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RET_IP_); - entropy_bits = entropy_count >> ENTROPY_SHIFT; + entropy_bits = entropy_count >> POOL_ENTROPY_SHIFT; if (crng_init < 2 && entropy_bits >= 128) crng_reseed(&primary_crng, true); } @@ -1187,7 +1187,7 @@ void add_input_randomness(unsigned int type, unsigned int code, last_value = value; add_timer_randomness(&input_timer_state, (type << 4) ^ code ^ (code >> 4) ^ value); - trace_add_input_randomness(ENTROPY_BITS()); + trace_add_input_randomness(POOL_ENTROPY_BITS()); } EXPORT_SYMBOL_GPL(add_input_randomness); @@ -1286,7 +1286,7 @@ void add_disk_randomness(struct gendisk *disk) return; /* first major is 1, so we get >= 0x200 here */ add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); - trace_add_disk_randomness(disk_devt(disk), ENTROPY_BITS()); + trace_add_disk_randomness(disk_devt(disk), POOL_ENTROPY_BITS()); } EXPORT_SYMBOL_GPL(add_disk_randomness); #endif @@ -1313,7 +1313,7 @@ static size_t account(size_t nbytes, int min) entropy_count = orig = READ_ONCE(input_pool.entropy_count); ibytes = nbytes; /* never pull more than available */ - have_bytes = entropy_count >> (ENTROPY_SHIFT + 3); + have_bytes = entropy_count >> (POOL_ENTROPY_SHIFT + 3); if (have_bytes < 0) have_bytes = 0; @@ -1325,7 +1325,7 @@ static size_t account(size_t nbytes, int min) pr_warn("negative entropy count: count %d\n", entropy_count); entropy_count = 0; } - nfrac = ibytes << (ENTROPY_SHIFT + 3); + nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3); if ((size_t) entropy_count > nfrac) entropy_count -= nfrac; else @@ -1335,7 +1335,7 @@ static size_t account(size_t nbytes, int min) goto retry; trace_debit_entropy(8 * ibytes); - if (ibytes && ENTROPY_BITS() < random_write_wakeup_bits) { + if (ibytes && POOL_ENTROPY_BITS() < random_write_wakeup_bits) { wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); } @@ -1423,7 +1423,7 @@ static ssize_t _extract_entropy(void *buf, size_t nbytes) */ static ssize_t extract_entropy(void *buf, size_t nbytes, int min) { - trace_extract_entropy(nbytes, ENTROPY_BITS(), _RET_IP_); + trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_); nbytes = 
account(nbytes, min); return _extract_entropy(buf, nbytes); } @@ -1749,9 +1749,9 @@ urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes, { int ret; - nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3)); + nbytes = min_t(size_t, nbytes, INT_MAX >> (POOL_ENTROPY_SHIFT + 3)); ret = extract_crng_user(buf, nbytes); - trace_urandom_read(8 * nbytes, 0, ENTROPY_BITS()); + trace_urandom_read(8 * nbytes, 0, POOL_ENTROPY_BITS()); return ret; } @@ -1791,7 +1791,7 @@ random_poll(struct file *file, poll_table * wait) mask = 0; if (crng_ready()) mask |= EPOLLIN | EPOLLRDNORM; - if (ENTROPY_BITS() < random_write_wakeup_bits) + if (POOL_ENTROPY_BITS() < random_write_wakeup_bits) mask |= EPOLLOUT | EPOLLWRNORM; return mask; } @@ -1847,7 +1847,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) switch (cmd) { case RNDGETENTCNT: /* inherently racy, no point locking */ - ent_count = ENTROPY_BITS(); + ent_count = POOL_ENTROPY_BITS(); if (put_user(ent_count, p)) return -EFAULT; return 0; @@ -2008,7 +2008,7 @@ static int proc_do_entropy(struct ctl_table *table, int write, struct ctl_table fake_table; int entropy_count; - entropy_count = *(int *)table->data >> ENTROPY_SHIFT; + entropy_count = *(int *)table->data >> POOL_ENTROPY_SHIFT; fake_table.data = &entropy_count; fake_table.maxlen = sizeof(entropy_count); @@ -2227,7 +2227,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, */ wait_event_interruptible(random_write_wait, !system_wq || kthread_should_stop() || - ENTROPY_BITS() <= random_write_wakeup_bits); + POOL_ENTROPY_BITS() <= random_write_wakeup_bits); mix_pool_bytes(buffer, count); credit_entropy_bits(entropy); } From a9db850c219fdaf144997c683ba35f293225dfdf Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 13 Jan 2022 18:18:48 +0100 Subject: [PATCH 0146/1842] random: cleanup fractional entropy shift constants commit 18263c4e8e62f7329f38f5eadc568751242ca89c upstream. The entropy estimator is calculated in terms of 1/8 bits, which means there are various constants where things are shifted by 3. Move these into our pool info enum with the other relevant constants. While we're at it, move an English assertion about sizes into a proper BUILD_BUG_ON so that the compiler can ensure this invariant. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 28 +++++++++++++--------------- 1 file changed, 13 insertions(+), 15 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 844474b8319e..402e3ac7f4a9 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -358,16 +358,6 @@ /* #define ADD_INTERRUPT_BENCH */ -/* - * To allow fractional bits to be tracked, the entropy_count field is - * denominated in units of 1/8th bits. - * - * 2*(POOL_ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in - * credit_entropy_bits() needs to be 64 bits wide. 
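 *
 * Editorial note (not part of the upstream patch): with POOL_ENTROPY_SHIFT
 * = 3 and POOL_BITSHIFT = 12, the worst case of that multiply is
 * (2^15) * (2^14) * 3 = 3 * 2^29, which still fits in a signed 32-bit int;
 * 2 * (3 + 12) = 30 <= 31 is exactly the condition that the BUILD_BUG_ON
 * added below now checks at compile time.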
- */ -#define POOL_ENTROPY_SHIFT 3 -#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT) - /* * If the entropy count falls under this number of bits, then we * should wake up processes which are selecting or polling on write @@ -425,8 +415,13 @@ enum poolinfo { POOL_WORDMASK = POOL_WORDS - 1, POOL_BYTES = POOL_WORDS * sizeof(u32), POOL_BITS = POOL_BYTES * 8, - POOL_BITSHIFT = ilog2(POOL_WORDS) + 5, - POOL_FRACBITS = POOL_WORDS << (POOL_ENTROPY_SHIFT + 5), + POOL_BITSHIFT = ilog2(POOL_BITS), + + /* To allow fractional bits to be tracked, the entropy_count field is + * denominated in units of 1/8th bits. */ + POOL_ENTROPY_SHIFT = 3, +#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT) + POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT, /* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */ POOL_TAP1 = 104, @@ -652,6 +647,9 @@ static void credit_entropy_bits(int nbits) int entropy_count, entropy_bits, orig; int nfrac = nbits << POOL_ENTROPY_SHIFT; + /* Ensure that the multiplication can avoid being 64 bits wide. */ + BUILD_BUG_ON(2 * (POOL_ENTROPY_SHIFT + POOL_BITSHIFT) > 31); + if (!nbits) return; @@ -687,13 +685,13 @@ static void credit_entropy_bits(int nbits) /* The +2 corresponds to the /4 in the denominator */ do { - unsigned int anfrac = min(pnfrac, POOL_FRACBITS/2); + unsigned int anfrac = min(pnfrac, POOL_FRACBITS / 2); unsigned int add = - ((POOL_FRACBITS - entropy_count)*anfrac*3) >> s; + ((POOL_FRACBITS - entropy_count) * anfrac * 3) >> s; entropy_count += add; pnfrac -= anfrac; - } while (unlikely(entropy_count < POOL_FRACBITS-2 && pnfrac)); + } while (unlikely(entropy_count < POOL_FRACBITS - 2 && pnfrac)); } if (WARN_ON(entropy_count < 0)) { From bccc8d92310d1060468ca15f1324ced0c1926e86 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 15 Jan 2022 14:40:04 +0100 Subject: [PATCH 0147/1842] random: access input_pool_data directly rather than through pointer commit 6c0eace6e1499712583b6ee62d95161e8b3449f5 upstream. This gets rid of another abstraction we no longer need. It would be nice if we could instead make pool an array rather than a pointer, but the latent entropy plugin won't be able to do its magic in that case. So instead we put all accesses to the input pool's actual data through the input_pool_data array directly. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 21 ++++++++------------- 1 file changed, 8 insertions(+), 13 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 402e3ac7f4a9..19a3d7983684 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -496,17 +496,12 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); static u32 input_pool_data[POOL_WORDS] __latent_entropy; static struct { - /* read-only data: */ - u32 *pool; - - /* read-write data: */ spinlock_t lock; u16 add_ptr; u16 input_rotate; int entropy_count; } input_pool = { .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), - .pool = input_pool_data }; static ssize_t extract_entropy(void *buf, size_t nbytes, int min); @@ -544,15 +539,15 @@ static void _mix_pool_bytes(const void *in, int nbytes) i = (i - 1) & POOL_WORDMASK; /* XOR in the various taps */ - w ^= input_pool.pool[i]; - w ^= input_pool.pool[(i + POOL_TAP1) & POOL_WORDMASK]; - w ^= input_pool.pool[(i + POOL_TAP2) & POOL_WORDMASK]; - w ^= input_pool.pool[(i + POOL_TAP3) & POOL_WORDMASK]; - w ^= input_pool.pool[(i + POOL_TAP4) & POOL_WORDMASK]; - w ^= input_pool.pool[(i + POOL_TAP5) & POOL_WORDMASK]; + w ^= input_pool_data[i]; + w ^= input_pool_data[(i + POOL_TAP1) & POOL_WORDMASK]; + w ^= input_pool_data[(i + POOL_TAP2) & POOL_WORDMASK]; + w ^= input_pool_data[(i + POOL_TAP3) & POOL_WORDMASK]; + w ^= input_pool_data[(i + POOL_TAP4) & POOL_WORDMASK]; + w ^= input_pool_data[(i + POOL_TAP5) & POOL_WORDMASK]; /* Mix the result back in with a twist */ - input_pool.pool[i] = (w >> 3) ^ twist_table[w & 7]; + input_pool_data[i] = (w >> 3) ^ twist_table[w & 7]; /* * Normally, we add 7 bits of rotation to the pool. @@ -1369,7 +1364,7 @@ static void extract_buf(u8 *out) /* Generate a hash across the pool */ spin_lock_irqsave(&input_pool.lock, flags); - blake2s_update(&state, (const u8 *)input_pool.pool, POOL_BYTES); + blake2s_update(&state, (const u8 *)input_pool_data, POOL_BYTES); blake2s_final(&state, hash); /* final zeros out state */ /* From a0653a9ec15e8312080d51f2684f1c41e41ff678 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 15 Jan 2022 14:57:22 +0100 Subject: [PATCH 0148/1842] random: selectively clang-format where it makes sense commit 248045b8dea5a32ddc0aa44193d6bc70c4b9cd8e upstream. This is an old driver that has seen a lot of different eras of kernel coding style. In an effort to make it easier to code for, unify the coding style around the current norm, by accepting some of -- but certainly not all of -- the suggestions from clang-format. This should remove ambiguity in coding style, especially with regards to spacing, when code is being changed or amended. Consequently it also makes code review easier on the eyes, following one uniform style rather than several. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 209 ++++++++++++++++++++---------------------- 1 file changed, 99 insertions(+), 110 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 19a3d7983684..1ad9d9993e7b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -124,7 +124,7 @@ * * The primary kernel interface is * - * void get_random_bytes(void *buf, int nbytes); + * void get_random_bytes(void *buf, int nbytes); * * This interface will return the requested number of random bytes, * and place it in the requested buffer. 
This is equivalent to a @@ -132,10 +132,10 @@ * * For less critical applications, there are the functions: * - * u32 get_random_u32() - * u64 get_random_u64() - * unsigned int get_random_int() - * unsigned long get_random_long() + * u32 get_random_u32() + * u64 get_random_u64() + * unsigned int get_random_int() + * unsigned long get_random_long() * * These are produced by a cryptographic RNG seeded from get_random_bytes, * and so do not deplete the entropy pool as much. These are recommended @@ -197,10 +197,10 @@ * from the devices are: * * void add_device_randomness(const void *buf, unsigned int size); - * void add_input_randomness(unsigned int type, unsigned int code, + * void add_input_randomness(unsigned int type, unsigned int code, * unsigned int value); * void add_interrupt_randomness(int irq); - * void add_disk_randomness(struct gendisk *disk); + * void add_disk_randomness(struct gendisk *disk); * void add_hwgenerator_randomness(const char *buffer, size_t count, * size_t entropy); * void add_bootloader_randomness(const void *buf, unsigned int size); @@ -296,8 +296,8 @@ * /dev/random and /dev/urandom created already, they can be created * by using the commands: * - * mknod /dev/random c 1 8 - * mknod /dev/urandom c 1 9 + * mknod /dev/random c 1 8 + * mknod /dev/urandom c 1 9 * * Acknowledgements: * ================= @@ -443,9 +443,9 @@ static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); struct crng_state { - u32 state[16]; - unsigned long init_time; - spinlock_t lock; + u32 state[16]; + unsigned long init_time; + spinlock_t lock; }; static struct crng_state primary_crng = { @@ -469,7 +469,7 @@ static bool crng_need_final_init = false; #define crng_ready() (likely(crng_init > 1)) static int crng_init_cnt = 0; static unsigned long crng_global_init_time = 0; -#define CRNG_INIT_CNT_THRESH (2*CHACHA_KEY_SIZE) +#define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE) static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]); static void _crng_backtrack_protect(struct crng_state *crng, u8 tmp[CHACHA_BLOCK_SIZE], int used); @@ -509,7 +509,7 @@ static ssize_t _extract_entropy(void *buf, size_t nbytes); static void crng_reseed(struct crng_state *crng, bool use_input_pool); -static u32 const twist_table[8] = { +static const u32 twist_table[8] = { 0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158, 0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 }; @@ -579,10 +579,10 @@ static void mix_pool_bytes(const void *in, int nbytes) } struct fast_pool { - u32 pool[4]; - unsigned long last; - u16 reg_idx; - u8 count; + u32 pool[4]; + unsigned long last; + u16 reg_idx; + u8 count; }; /* @@ -710,7 +710,7 @@ static int credit_entropy_bits_safe(int nbits) return -EINVAL; /* Cap the value to avoid overflows */ - nbits = min(nbits, POOL_BITS); + nbits = min(nbits, POOL_BITS); credit_entropy_bits(nbits); return 0; @@ -722,7 +722,7 @@ static int credit_entropy_bits_safe(int nbits) * *********************************************************************/ -#define CRNG_RESEED_INTERVAL (300*HZ) +#define CRNG_RESEED_INTERVAL (300 * HZ) static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); @@ -746,9 +746,9 @@ early_param("random.trust_cpu", parse_trust_cpu); static bool crng_init_try_arch(struct crng_state *crng) { - int i; - bool arch_init = true; - unsigned long rv; + int i; + bool arch_init = true; + unsigned long rv; for (i = 4; i < 16; i++) { if (!arch_get_random_seed_long(&rv) && @@ -764,9 +764,9 @@ static bool crng_init_try_arch(struct crng_state *crng) static bool __init 
crng_init_try_arch_early(struct crng_state *crng) { - int i; - bool arch_init = true; - unsigned long rv; + int i; + bool arch_init = true; + unsigned long rv; for (i = 4; i < 16; i++) { if (!arch_get_random_seed_long_early(&rv) && @@ -836,7 +836,7 @@ static void do_numa_crng_init(struct work_struct *work) struct crng_state *crng; struct crng_state **pool; - pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL|__GFP_NOFAIL); + pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL | __GFP_NOFAIL); for_each_online_node(i) { crng = kmalloc_node(sizeof(struct crng_state), GFP_KERNEL | __GFP_NOFAIL, i); @@ -892,7 +892,7 @@ static size_t crng_fast_load(const u8 *cp, size_t len) spin_unlock_irqrestore(&primary_crng.lock, flags); return 0; } - p = (u8 *) &primary_crng.state[4]; + p = (u8 *)&primary_crng.state[4]; while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp; cp++; crng_init_cnt++; len--; ret++; @@ -922,12 +922,12 @@ static size_t crng_fast_load(const u8 *cp, size_t len) */ static int crng_slow_load(const u8 *cp, size_t len) { - unsigned long flags; - static u8 lfsr = 1; - u8 tmp; - unsigned int i, max = CHACHA_KEY_SIZE; - const u8 * src_buf = cp; - u8 * dest_buf = (u8 *) &primary_crng.state[4]; + unsigned long flags; + static u8 lfsr = 1; + u8 tmp; + unsigned int i, max = CHACHA_KEY_SIZE; + const u8 *src_buf = cp; + u8 *dest_buf = (u8 *)&primary_crng.state[4]; if (!spin_trylock_irqsave(&primary_crng.lock, flags)) return 0; @@ -938,7 +938,7 @@ static int crng_slow_load(const u8 *cp, size_t len) if (len > max) max = len; - for (i = 0; i < max ; i++) { + for (i = 0; i < max; i++) { tmp = lfsr; lfsr >>= 1; if (tmp & 1) @@ -953,11 +953,11 @@ static int crng_slow_load(const u8 *cp, size_t len) static void crng_reseed(struct crng_state *crng, bool use_input_pool) { - unsigned long flags; - int i, num; + unsigned long flags; + int i, num; union { - u8 block[CHACHA_BLOCK_SIZE]; - u32 key[8]; + u8 block[CHACHA_BLOCK_SIZE]; + u32 key[8]; } buf; if (use_input_pool) { @@ -971,11 +971,11 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool) } spin_lock_irqsave(&crng->lock, flags); for (i = 0; i < 8; i++) { - unsigned long rv; + unsigned long rv; if (!arch_get_random_seed_long(&rv) && !arch_get_random_long(&rv)) rv = random_get_entropy(); - crng->state[i+4] ^= buf.key[i] ^ rv; + crng->state[i + 4] ^= buf.key[i] ^ rv; } memzero_explicit(&buf, sizeof(buf)); WRITE_ONCE(crng->init_time, jiffies); @@ -983,8 +983,7 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool) crng_finalize_init(crng); } -static void _extract_crng(struct crng_state *crng, - u8 out[CHACHA_BLOCK_SIZE]) +static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]) { unsigned long flags, init_time; @@ -1013,9 +1012,9 @@ static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]) static void _crng_backtrack_protect(struct crng_state *crng, u8 tmp[CHACHA_BLOCK_SIZE], int used) { - unsigned long flags; - u32 *s, *d; - int i; + unsigned long flags; + u32 *s, *d; + int i; used = round_up(used, sizeof(u32)); if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) { @@ -1023,9 +1022,9 @@ static void _crng_backtrack_protect(struct crng_state *crng, used = 0; } spin_lock_irqsave(&crng->lock, flags); - s = (u32 *) &tmp[used]; + s = (u32 *)&tmp[used]; d = &crng->state[4]; - for (i=0; i < 8; i++) + for (i = 0; i < 8; i++) *d++ ^= *s++; spin_unlock_irqrestore(&crng->lock, flags); } @@ -1070,7 +1069,6 @@ static ssize_t extract_crng_user(void __user *buf, size_t 
nbytes) return ret; } - /********************************************************************* * * Entropy input management @@ -1165,11 +1163,11 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) * Round down by 1 bit on general principles, * and limit entropy estimate to 12 bits. */ - credit_entropy_bits(min_t(int, fls(delta>>1), 11)); + credit_entropy_bits(min_t(int, fls(delta >> 1), 11)); } void add_input_randomness(unsigned int type, unsigned int code, - unsigned int value) + unsigned int value) { static unsigned char last_value; @@ -1189,19 +1187,19 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness); #ifdef ADD_INTERRUPT_BENCH static unsigned long avg_cycles, avg_deviation; -#define AVG_SHIFT 8 /* Exponential average factor k=1/256 */ -#define FIXED_1_2 (1 << (AVG_SHIFT-1)) +#define AVG_SHIFT 8 /* Exponential average factor k=1/256 */ +#define FIXED_1_2 (1 << (AVG_SHIFT - 1)) static void add_interrupt_bench(cycles_t start) { - long delta = random_get_entropy() - start; + long delta = random_get_entropy() - start; - /* Use a weighted moving average */ - delta = delta - ((avg_cycles + FIXED_1_2) >> AVG_SHIFT); - avg_cycles += delta; - /* And average deviation */ - delta = abs(delta) - ((avg_deviation + FIXED_1_2) >> AVG_SHIFT); - avg_deviation += delta; + /* Use a weighted moving average */ + delta = delta - ((avg_cycles + FIXED_1_2) >> AVG_SHIFT); + avg_cycles += delta; + /* And average deviation */ + delta = abs(delta) - ((avg_deviation + FIXED_1_2) >> AVG_SHIFT); + avg_deviation += delta; } #else #define add_interrupt_bench(x) @@ -1209,7 +1207,7 @@ static void add_interrupt_bench(cycles_t start) static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) { - u32 *ptr = (u32 *) regs; + u32 *ptr = (u32 *)regs; unsigned int idx; if (regs == NULL) @@ -1224,12 +1222,12 @@ static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) void add_interrupt_randomness(int irq) { - struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); - struct pt_regs *regs = get_irq_regs(); - unsigned long now = jiffies; - cycles_t cycles = random_get_entropy(); - u32 c_high, j_high; - u64 ip; + struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); + struct pt_regs *regs = get_irq_regs(); + unsigned long now = jiffies; + cycles_t cycles = random_get_entropy(); + u32 c_high, j_high; + u64 ip; if (cycles == 0) cycles = get_reg(fast_pool, regs); @@ -1239,8 +1237,8 @@ void add_interrupt_randomness(int irq) fast_pool->pool[1] ^= now ^ c_high; ip = regs ? instruction_pointer(regs) : _RET_IP_; fast_pool->pool[2] ^= ip; - fast_pool->pool[3] ^= (sizeof(ip) > 4) ? ip >> 32 : - get_reg(fast_pool, regs); + fast_pool->pool[3] ^= + (sizeof(ip) > 4) ? 
ip >> 32 : get_reg(fast_pool, regs); fast_mix(fast_pool); add_interrupt_bench(cycles); @@ -1254,8 +1252,7 @@ void add_interrupt_randomness(int irq) return; } - if ((fast_pool->count < 64) && - !time_after(now, fast_pool->last + HZ)) + if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ)) return; if (!spin_trylock(&input_pool.lock)) @@ -1319,7 +1316,7 @@ static size_t account(size_t nbytes, int min) entropy_count = 0; } nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3); - if ((size_t) entropy_count > nfrac) + if ((size_t)entropy_count > nfrac) entropy_count -= nfrac; else entropy_count = 0; @@ -1422,10 +1419,9 @@ static ssize_t extract_entropy(void *buf, size_t nbytes, int min) } #define warn_unseeded_randomness(previous) \ - _warn_unseeded_randomness(__func__, (void *) _RET_IP_, (previous)) + _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) -static void _warn_unseeded_randomness(const char *func_name, void *caller, - void **previous) +static void _warn_unseeded_randomness(const char *func_name, void *caller, void **previous) { #ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM const bool print_once = false; @@ -1433,8 +1429,7 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, static bool print_once __read_mostly; #endif - if (print_once || - crng_ready() || + if (print_once || crng_ready() || (previous && (caller == READ_ONCE(*previous)))) return; WRITE_ONCE(*previous, caller); @@ -1442,9 +1437,8 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, print_once = true; #endif if (__ratelimit(&unseeded_warning)) - printk_deferred(KERN_NOTICE "random: %s called from %pS " - "with crng_init=%d\n", func_name, caller, - crng_init); + printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", + func_name, caller, crng_init); } /* @@ -1487,7 +1481,6 @@ void get_random_bytes(void *buf, int nbytes) } EXPORT_SYMBOL(get_random_bytes); - /* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. 
Even if the timer is running on another @@ -1526,7 +1519,7 @@ static void try_to_generate_entropy(void) timer_setup_on_stack(&stack.timer, entropy_timer, 0); while (!crng_ready()) { if (!timer_pending(&stack.timer)) - mod_timer(&stack.timer, jiffies+1); + mod_timer(&stack.timer, jiffies + 1); mix_pool_bytes(&stack.now, sizeof(stack.now)); schedule(); stack.now = random_get_entropy(); @@ -1736,9 +1729,8 @@ void rand_initialize_disk(struct gendisk *disk) } #endif -static ssize_t -urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes, - loff_t *ppos) +static ssize_t urandom_read_nowarn(struct file *file, char __user *buf, + size_t nbytes, loff_t *ppos) { int ret; @@ -1748,8 +1740,8 @@ urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes, return ret; } -static ssize_t -urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) +static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, + loff_t *ppos) { static int maxwarn = 10; @@ -1763,8 +1755,8 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) return urandom_read_nowarn(file, buf, nbytes, ppos); } -static ssize_t -random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) +static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes, + loff_t *ppos) { int ret; @@ -1774,8 +1766,7 @@ random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) return urandom_read_nowarn(file, buf, nbytes, ppos); } -static __poll_t -random_poll(struct file *file, poll_table * wait) +static __poll_t random_poll(struct file *file, poll_table *wait) { __poll_t mask; @@ -1789,8 +1780,7 @@ random_poll(struct file *file, poll_table * wait) return mask; } -static int -write_pool(const char __user *buffer, size_t count) +static int write_pool(const char __user *buffer, size_t count) { size_t bytes; u32 t, buf[16]; @@ -1895,9 +1885,9 @@ static int random_fasync(int fd, struct file *filp, int on) } const struct file_operations random_fops = { - .read = random_read, + .read = random_read, .write = random_write, - .poll = random_poll, + .poll = random_poll, .unlocked_ioctl = random_ioctl, .compat_ioctl = compat_ptr_ioctl, .fasync = random_fasync, @@ -1905,7 +1895,7 @@ const struct file_operations random_fops = { }; const struct file_operations urandom_fops = { - .read = urandom_read, + .read = urandom_read, .write = random_write, .unlocked_ioctl = random_ioctl, .compat_ioctl = compat_ptr_ioctl, @@ -1913,19 +1903,19 @@ const struct file_operations urandom_fops = { .llseek = noop_llseek, }; -SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, - unsigned int, flags) +SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, + flags) { int ret; - if (flags & ~(GRND_NONBLOCK|GRND_RANDOM|GRND_INSECURE)) + if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE)) return -EINVAL; /* * Requesting insecure and blocking randomness at the same time makes * no sense. */ - if ((flags & (GRND_INSECURE|GRND_RANDOM)) == (GRND_INSECURE|GRND_RANDOM)) + if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM)) return -EINVAL; if (count > INT_MAX) @@ -1965,8 +1955,8 @@ static char sysctl_bootid[16]; * returned as an ASCII string in the standard UUID format; if via the * sysctl system call, as 16 bytes of binary data. 
*/ -static int proc_do_uuid(struct ctl_table *table, int write, - void *buffer, size_t *lenp, loff_t *ppos) +static int proc_do_uuid(struct ctl_table *table, int write, void *buffer, + size_t *lenp, loff_t *ppos) { struct ctl_table fake_table; unsigned char buf[64], tmp_uuid[16], *uuid; @@ -1995,8 +1985,8 @@ static int proc_do_uuid(struct ctl_table *table, int write, /* * Return entropy available scaled to integral bits */ -static int proc_do_entropy(struct ctl_table *table, int write, - void *buffer, size_t *lenp, loff_t *ppos) +static int proc_do_entropy(struct ctl_table *table, int write, void *buffer, + size_t *lenp, loff_t *ppos) { struct ctl_table fake_table; int entropy_count; @@ -2073,7 +2063,7 @@ struct ctl_table random_table[] = { #endif { } }; -#endif /* CONFIG_SYSCTL */ +#endif /* CONFIG_SYSCTL */ struct batched_entropy { union { @@ -2093,7 +2083,7 @@ struct batched_entropy { * point prior. */ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = { - .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock), + .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock), }; u64 get_random_u64(void) @@ -2118,7 +2108,7 @@ u64 get_random_u64(void) EXPORT_SYMBOL(get_random_u64); static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = { - .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock), + .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock), }; u32 get_random_u32(void) { @@ -2150,7 +2140,7 @@ static void invalidate_batched_entropy(void) int cpu; unsigned long flags; - for_each_possible_cpu (cpu) { + for_each_possible_cpu(cpu) { struct batched_entropy *batched_entropy; batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu); @@ -2179,8 +2169,7 @@ static void invalidate_batched_entropy(void) * Return: A page aligned address within [start, start + range). On error, * @start is returned. */ -unsigned long -randomize_page(unsigned long start, unsigned long range) +unsigned long randomize_page(unsigned long start, unsigned long range) { if (!PAGE_ALIGNED(start)) { range -= PAGE_ALIGN(start) - start; From 6d2d29f051be6533ad1e739b2fdaaea8f5ab90e1 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 17 Jan 2022 18:43:02 +0100 Subject: [PATCH 0149/1842] random: simplify arithmetic function flow in account() commit a254a0e4093fce8c832414a83940736067eed515 upstream. Now that have_bytes is never modified, we can simplify this function. First, we move the check for negative entropy_count to be first. That ensures that subsequent reads of this will be non-negative. Then, have_bytes and ibytes can be folded into their one use site in the min_t() function. Suggested-by: Dominik Brodowski Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 1ad9d9993e7b..40d6c51e5ef8 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1293,7 +1293,7 @@ EXPORT_SYMBOL_GPL(add_disk_randomness); */ static size_t account(size_t nbytes, int min) { - int entropy_count, orig, have_bytes; + int entropy_count, orig; size_t ibytes, nfrac; BUG_ON(input_pool.entropy_count > POOL_FRACBITS); @@ -1301,20 +1301,15 @@ static size_t account(size_t nbytes, int min) /* Can we pull enough? 
*/ retry: entropy_count = orig = READ_ONCE(input_pool.entropy_count); - ibytes = nbytes; - /* never pull more than available */ - have_bytes = entropy_count >> (POOL_ENTROPY_SHIFT + 3); - - if (have_bytes < 0) - have_bytes = 0; - ibytes = min_t(size_t, ibytes, have_bytes); - if (ibytes < min) - ibytes = 0; - if (WARN_ON(entropy_count < 0)) { pr_warn("negative entropy count: count %d\n", entropy_count); entropy_count = 0; } + + /* never pull more than available */ + ibytes = min_t(size_t, nbytes, entropy_count >> (POOL_ENTROPY_SHIFT + 3)); + if (ibytes < min) + ibytes = 0; nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3); if ((size_t)entropy_count > nfrac) entropy_count -= nfrac; From 0b9e36e895bbdfea615e7df713dcea65930e0f4d Mon Sep 17 00:00:00 2001 From: Dominik Brodowski Date: Tue, 25 Jan 2022 21:14:57 +0100 Subject: [PATCH 0150/1842] random: continually use hwgenerator randomness commit c321e907aa4803d562d6e70ebed9444ad082f953 upstream. The rngd kernel thread may sleep indefinitely if the entropy count is kept above random_write_wakeup_bits by other entropy sources. To make best use of multiple sources of randomness, mix entropy from hardware RNGs into the pool at least once within CRNG_RESEED_INTERVAL. Cc: Herbert Xu Cc: Jason A. Donenfeld Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 40d6c51e5ef8..d4111220bbb0 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -2198,13 +2198,15 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, return; } - /* Suspend writing if we're above the trickle threshold. + /* Throttle writing if we're above the trickle threshold. * We'll be woken up again once below random_write_wakeup_thresh, - * or when the calling thread is about to terminate. + * when the calling thread is about to terminate, or once + * CRNG_RESEED_INTERVAL has lapsed. */ - wait_event_interruptible(random_write_wait, + wait_event_interruptible_timeout(random_write_wait, !system_wq || kthread_should_stop() || - POOL_ENTROPY_BITS() <= random_write_wakeup_bits); + POOL_ENTROPY_BITS() <= random_write_wakeup_bits, + CRNG_RESEED_INTERVAL); mix_pool_bytes(buffer, count); credit_entropy_bits(entropy); } From 480fd91dcdc7074628d46b9ed1e597f328e027d1 Mon Sep 17 00:00:00 2001 From: Dominik Brodowski Date: Sun, 30 Jan 2022 22:03:19 +0100 Subject: [PATCH 0151/1842] random: access primary_pool directly rather than through pointer commit ebf7606388732ecf2821ca21087e9446cb4a5b57 upstream. Both crng_initialize_primary() and crng_init_try_arch_early() are only called for the primary_pool. Accessing it directly instead of through a function parameter simplifies the code. Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index d4111220bbb0..0685e64b5cb3 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -762,7 +762,7 @@ static bool crng_init_try_arch(struct crng_state *crng) return arch_init; } -static bool __init crng_init_try_arch_early(struct crng_state *crng) +static bool __init crng_init_try_arch_early(void) { int i; bool arch_init = true; @@ -774,7 +774,7 @@ static bool __init crng_init_try_arch_early(struct crng_state *crng) rv = random_get_entropy(); arch_init = false; } - crng->state[i] ^= rv; + primary_crng.state[i] ^= rv; } return arch_init; @@ -788,16 +788,16 @@ static void crng_initialize_secondary(struct crng_state *crng) crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; } -static void __init crng_initialize_primary(struct crng_state *crng) +static void __init crng_initialize_primary(void) { - _extract_entropy(&crng->state[4], sizeof(u32) * 12); - if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) { + _extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); + if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); numa_crng_init(); crng_init = 2; pr_notice("crng init done (trusting CPU's manufacturer)\n"); } - crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; + primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1; } static void crng_finalize_init(struct crng_state *crng) @@ -1698,7 +1698,7 @@ int __init rand_initialize(void) init_std_data(); if (crng_need_final_init) crng_finalize_init(&primary_crng); - crng_initialize_primary(&primary_crng); + crng_initialize_primary(); crng_global_init_time = jiffies; if (ratelimit_disable) { urandom_warning.interval = 0; From e0cc561e4758fbd6e371ff0ff2b9b00d97ec329a Mon Sep 17 00:00:00 2001 From: Dominik Brodowski Date: Sun, 30 Jan 2022 22:03:20 +0100 Subject: [PATCH 0152/1842] random: only call crng_finalize_init() for primary_crng commit 9d5505f1eebeca778074a0260ed077fd85f8792c upstream. crng_finalize_init() returns instantly if it is called for another pool than primary_crng. The test whether crng_finalize_init() is still required can be moved to the relevant caller in crng_reseed(), and crng_need_final_init can be reset to false if crng_finalize_init() is called with workqueues ready. Then, no previous callsite will call crng_finalize_init() unless it is needed, and we can get rid of the superfluous function parameter. Signed-off-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 0685e64b5cb3..6c3eee2a9004 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -800,10 +800,8 @@ static void __init crng_initialize_primary(void) primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1; } -static void crng_finalize_init(struct crng_state *crng) +static void crng_finalize_init(void) { - if (crng != &primary_crng || crng_init >= 2) - return; if (!system_wq) { /* We can't call numa_crng_init until we have workqueues, * so mark this for processing later. 
*/ @@ -814,6 +812,7 @@ static void crng_finalize_init(struct crng_state *crng) invalidate_batched_entropy(); numa_crng_init(); crng_init = 2; + crng_need_final_init = false; process_random_ready_list(); wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); @@ -980,7 +979,8 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool) memzero_explicit(&buf, sizeof(buf)); WRITE_ONCE(crng->init_time, jiffies); spin_unlock_irqrestore(&crng->lock, flags); - crng_finalize_init(crng); + if (crng == &primary_crng && crng_init < 2) + crng_finalize_init(); } static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]) @@ -1697,7 +1697,7 @@ int __init rand_initialize(void) { init_std_data(); if (crng_need_final_init) - crng_finalize_init(&primary_crng); + crng_finalize_init(); crng_initialize_primary(); crng_global_init_time = jiffies; if (ratelimit_disable) { From de0727c0c44884afde52f4bd097edfbb4f9a7327 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 16 Jan 2022 14:23:10 +0100 Subject: [PATCH 0153/1842] random: use computational hash for entropy extraction commit 6e8ec2552c7d13991148e551e3325a624d73fac6 upstream. The current 4096-bit LFSR used for entropy collection had a few desirable attributes for the context in which it was created. For example, the state was huge, which meant that /dev/random would be able to output quite a bit of accumulated entropy before blocking. It was also, in its time, quite fast at accumulating entropy byte-by-byte, which matters given the varying contexts in which mix_pool_bytes() is called. And its diffusion was relatively high, which meant that changes would ripple across several words of state rather quickly. However, it also suffers from a few security vulnerabilities. In particular, inputs learned by an attacker can be undone, but moreover, if the state of the pool leaks, its contents can be controlled and entirely zeroed out. I've demonstrated this attack with this SMT2 script, , which Boolector/CaDiCal solves in a matter of seconds on a single core of my laptop, resulting in little proof of concept C demonstrators such as . For basically all recent formal models of RNGs, these attacks represent a significant cryptographic flaw. But how does this manifest practically? If an attacker has access to the system to such a degree that he can learn the internal state of the RNG, arguably there are other lower hanging vulnerabilities -- side-channel, infoleak, or otherwise -- that might have higher priority. On the other hand, seed files are frequently used on systems that have a hard time generating much entropy on their own, and these seed files, being files, often leak or are duplicated and distributed accidentally, or are even seeded over the Internet intentionally, where their contents might be recorded or tampered with. Seen this way, an otherwise quasi-implausible vulnerability is a bit more practical than initially thought. Another aspect of the current mix_pool_bytes() function is that, while its performance was arguably competitive for the time in which it was created, it's no longer considered so. This patch improves performance significantly: on a high-end CPU, an i7-11850H, it improves performance of mix_pool_bytes() by 225%, and on a low-end CPU, a Cortex-A7, it improves performance by 103%. This commit replaces the LFSR of mix_pool_bytes() with a straight- forward cryptographic hash function, BLAKE2s, which is already in use for pool extraction. 
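As a rough illustration of that accumulate/extract shape (a minimal userspace sketch only; toy_prf() below is a stand-in for keyed BLAKE2s and is not cryptographic, and none of these names are the driver's -- the real calls are blake2s_update(), blake2s_final(), blake2s_init_key() and blake2s() as shown in the diff further below):

/*
 * Toy sketch: absorb inputs into a keyed state, finalize to a seed,
 * fold a fresh key back for further absorption, expand block-by-block.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define SEED_LEN 32

/* Stand-in PRF: out = PRF(key, in). Illustrative only, not a real hash. */
static void toy_prf(uint8_t out[SEED_LEN], const uint8_t key[SEED_LEN],
		    const uint8_t *in, size_t inlen)
{
	uint8_t acc = 0x55;
	size_t i;

	for (i = 0; i < inlen; i++)
		acc = (uint8_t)(acc * 31 + in[i]);
	for (i = 0; i < SEED_LEN; i++)
		out[i] = (uint8_t)(key[i] ^ acc ^ (uint8_t)(i * 131));
}

static uint8_t pool[SEED_LEN];	/* the keyed accumulator ("input pool") */

/* analogue of mix_pool_bytes(): just absorb; accounting happens elsewhere */
static void toy_mix(const void *in, size_t len)
{
	toy_prf(pool, pool, in, len);
}

/* analogue of extraction: seed from the pool, re-key it, expand output */
static void toy_extract(uint8_t *buf, size_t nbytes)
{
	uint8_t seed[SEED_LEN];
	uint64_t counter = 0;

	memcpy(seed, pool, sizeof(seed));	/* "finalize" the pool state */
	/* next key = PRF(seed, counter = 0), folded back into the pool */
	toy_prf(pool, seed, (const uint8_t *)&counter, sizeof(counter));

	while (nbytes) {
		uint8_t block[SEED_LEN];
		size_t n = nbytes < sizeof(block) ? nbytes : sizeof(block);

		counter++;	/* output block i = PRF(seed, ++counter) */
		toy_prf(block, seed, (const uint8_t *)&counter, sizeof(counter));
		memcpy(buf, block, n);
		buf += n;
		nbytes -= n;
	}
	memset(seed, 0, sizeof(seed));	/* wipe the seed when done */
}

int main(void)
{
	uint8_t out[48];

	toy_mix("interrupt timing", 16);
	toy_mix("disk timing", 11);
	toy_extract(out, sizeof(out));
	printf("first output byte: %02x\n", out[0]);
	return 0;
}

The sketch only shows the shape of the construction; the security argument rests entirely on the real hash function.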
Universal hashing with a secret seed was considered too, something along the lines of , but the requirement for a secret seed makes for a chicken & egg problem. Instead we go with a formally proven scheme using a computational hash function, described in sections 5.1, 6.4, and B.1.8 of . BLAKE2s outputs 256 bits, which should give us an appropriate amount of min-entropy accumulation, and a wide enough margin of collision resistance against active attacks. mix_pool_bytes() becomes a simple call to blake2s_update(), for accumulation, while the extraction step becomes a blake2s_final() to generate a seed, with which we can then do a HKDF-like or BLAKE2X-like expansion, the first part of which we fold back as an init key for subsequent blake2s_update()s, and the rest we produce to the caller. This then is provided to our CRNG like usual. In that expansion step, we make opportunistic use of 32 bytes of RDRAND output, just as before. We also always reseed the crng with 32 bytes, unconditionally, or not at all, rather than sometimes with 16 as before, as we don't win anything by limiting beyond the 16 byte threshold. Going for a hash function as an entropy collector is a conservative, proven approach. The result of all this is a much simpler and much less bespoke construction than what's there now, which not only plugs a vulnerability but also improves performance considerably. Cc: Theodore Ts'o Cc: Dominik Brodowski Reviewed-by: Eric Biggers Reviewed-by: Greg Kroah-Hartman Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 304 ++++++++---------------------------------- 1 file changed, 55 insertions(+), 249 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 6c3eee2a9004..b1d6a9e8bf7e 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -42,61 +42,6 @@ */ /* - * (now, with legal B.S. out of the way.....) - * - * This routine gathers environmental noise from device drivers, etc., - * and returns good random numbers, suitable for cryptographic use. - * Besides the obvious cryptographic uses, these numbers are also good - * for seeding TCP sequence numbers, and other places where it is - * desirable to have numbers which are not only random, but hard to - * predict by an attacker. - * - * Theory of operation - * =================== - * - * Computers are very predictable devices. Hence it is extremely hard - * to produce truly random numbers on a computer --- as opposed to - * pseudo-random numbers, which can easily generated by using a - * algorithm. Unfortunately, it is very easy for attackers to guess - * the sequence of pseudo-random number generators, and for some - * applications this is not acceptable. So instead, we must try to - * gather "environmental noise" from the computer's environment, which - * must be hard for outside attackers to observe, and use that to - * generate random numbers. In a Unix environment, this is best done - * from inside the kernel. - * - * Sources of randomness from the environment include inter-keyboard - * timings, inter-interrupt timings from some interrupts, and other - * events which are both (a) non-deterministic and (b) hard for an - * outside observer to measure. Randomness from these sources are - * added to an "entropy pool", which is mixed using a CRC-like function. 
- * This is not cryptographically strong, but it is adequate assuming - * the randomness is not chosen maliciously, and it is fast enough that - * the overhead of doing it on every interrupt is very reasonable. - * As random bytes are mixed into the entropy pool, the routines keep - * an *estimate* of how many bits of randomness have been stored into - * the random number generator's internal state. - * - * When random bytes are desired, they are obtained by taking the BLAKE2s - * hash of the contents of the "entropy pool". The BLAKE2s hash avoids - * exposing the internal state of the entropy pool. It is believed to - * be computationally infeasible to derive any useful information - * about the input of BLAKE2s from its output. Even if it is possible to - * analyze BLAKE2s in some clever way, as long as the amount of data - * returned from the generator is less than the inherent entropy in - * the pool, the output data is totally unpredictable. For this - * reason, the routine decreases its internal estimate of how many - * bits of "true randomness" are contained in the entropy pool as it - * outputs random numbers. - * - * If this estimate goes to zero, the routine can still generate - * random numbers; however, an attacker may (at least in theory) be - * able to infer the future output of the generator from prior - * outputs. This requires successful cryptanalysis of BLAKE2s, which is - * not believed to be feasible, but there is a remote possibility. - * Nonetheless, these numbers should be useful for the vast majority - * of purposes. - * * Exported interfaces ---- output * =============================== * @@ -298,23 +243,6 @@ * * mknod /dev/random c 1 8 * mknod /dev/urandom c 1 9 - * - * Acknowledgements: - * ================= - * - * Ideas for constructing this random number generator were derived - * from Pretty Good Privacy's random number generator, and from private - * discussions with Phil Karn. Colin Plumb provided a faster random - * number generator, which speed up the mixing function of the entropy - * pool, taken from PGPfone. Dale Worley has also contributed many - * useful ideas and suggestions to improve this driver. - * - * Any flaws in the design are solely my responsibility, and should - * not be attributed to the Phil, Colin, or any of authors of PGP. - * - * Further background information on this topic may be obtained from - * RFC 1750, "Randomness Recommendations for Security", by Donald - * Eastlake, Steve Crocker, and Jeff Schiller. */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt @@ -358,79 +286,15 @@ /* #define ADD_INTERRUPT_BENCH */ -/* - * If the entropy count falls under this number of bits, then we - * should wake up processes which are selecting or polling on write - * access to /dev/random. - */ -static int random_write_wakeup_bits = 28 * (1 << 5); - -/* - * Originally, we used a primitive polynomial of degree .poolwords - * over GF(2). The taps for various sizes are defined below. They - * were chosen to be evenly spaced except for the last tap, which is 1 - * to get the twisting happening as fast as possible. - * - * For the purposes of better mixing, we use the CRC-32 polynomial as - * well to make a (modified) twisted Generalized Feedback Shift - * Register. (See M. Matsumoto & Y. Kurita, 1992. Twisted GFSR - * generators. ACM Transactions on Modeling and Computer Simulation - * 2(3):179-194. Also see M. Matsumoto & Y. Kurita, 1994. Twisted - * GFSR generators II. 
ACM Transactions on Modeling and Computer - * Simulation 4:254-266) - * - * Thanks to Colin Plumb for suggesting this. - * - * The mixing operation is much less sensitive than the output hash, - * where we use BLAKE2s. All that we want of mixing operation is that - * it be a good non-cryptographic hash; i.e. it not produce collisions - * when fed "random" data of the sort we expect to see. As long as - * the pool state differs for different inputs, we have preserved the - * input entropy and done a good job. The fact that an intelligent - * attacker can construct inputs that will produce controlled - * alterations to the pool's state is not important because we don't - * consider such inputs to contribute any randomness. The only - * property we need with respect to them is that the attacker can't - * increase his/her knowledge of the pool's state. Since all - * additions are reversible (knowing the final state and the input, - * you can reconstruct the initial state), if an attacker has any - * uncertainty about the initial state, he/she can only shuffle that - * uncertainty about, but never cause any collisions (which would - * decrease the uncertainty). - * - * Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and - * Videau in their paper, "The Linux Pseudorandom Number Generator - * Revisited" (see: http://eprint.iacr.org/2012/251.pdf). In their - * paper, they point out that we are not using a true Twisted GFSR, - * since Matsumoto & Kurita used a trinomial feedback polynomial (that - * is, with only three taps, instead of the six that we are using). - * As a result, the resulting polynomial is neither primitive nor - * irreducible, and hence does not have a maximal period over - * GF(2**32). They suggest a slight change to the generator - * polynomial which improves the resulting TGFSR polynomial to be - * irreducible, which we have made here. - */ enum poolinfo { - POOL_WORDS = 128, - POOL_WORDMASK = POOL_WORDS - 1, - POOL_BYTES = POOL_WORDS * sizeof(u32), - POOL_BITS = POOL_BYTES * 8, + POOL_BITS = BLAKE2S_HASH_SIZE * 8, POOL_BITSHIFT = ilog2(POOL_BITS), /* To allow fractional bits to be tracked, the entropy_count field is * denominated in units of 1/8th bits. */ POOL_ENTROPY_SHIFT = 3, #define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT) - POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT, - - /* x^128 + x^104 + x^76 + x^51 +x^25 + x + 1 */ - POOL_TAP1 = 104, - POOL_TAP2 = 76, - POOL_TAP3 = 51, - POOL_TAP4 = 25, - POOL_TAP5 = 1, - - EXTRACT_SIZE = BLAKE2S_HASH_SIZE / 2 + POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT }; /* @@ -438,6 +302,12 @@ enum poolinfo { */ static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); static struct fasync_struct *fasync; +/* + * If the entropy count falls under this number of bits, then we + * should wake up processes which are selecting or polling on write + * access to /dev/random. 
+ */ +static int random_write_wakeup_bits = POOL_BITS * 3 / 4; static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); @@ -493,73 +363,31 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); * **********************************************************************/ -static u32 input_pool_data[POOL_WORDS] __latent_entropy; - static struct { + struct blake2s_state hash; spinlock_t lock; - u16 add_ptr; - u16 input_rotate; int entropy_count; } input_pool = { + .hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE), + BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4, + BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 }, + .hash.outlen = BLAKE2S_HASH_SIZE, .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), }; -static ssize_t extract_entropy(void *buf, size_t nbytes, int min); -static ssize_t _extract_entropy(void *buf, size_t nbytes); +static bool extract_entropy(void *buf, size_t nbytes, int min); +static void _extract_entropy(void *buf, size_t nbytes); static void crng_reseed(struct crng_state *crng, bool use_input_pool); -static const u32 twist_table[8] = { - 0x00000000, 0x3b6e20c8, 0x76dc4190, 0x4db26158, - 0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 }; - /* * This function adds bytes into the entropy "pool". It does not * update the entropy estimate. The caller should call * credit_entropy_bits if this is appropriate. - * - * The pool is stirred with a primitive polynomial of the appropriate - * degree, and then twisted. We twist by three bits at a time because - * it's cheap to do so and helps slightly in the expected case where - * the entropy is concentrated in the low-order bits. */ static void _mix_pool_bytes(const void *in, int nbytes) { - unsigned long i; - int input_rotate; - const u8 *bytes = in; - u32 w; - - input_rotate = input_pool.input_rotate; - i = input_pool.add_ptr; - - /* mix one byte at a time to simplify size handling and churn faster */ - while (nbytes--) { - w = rol32(*bytes++, input_rotate); - i = (i - 1) & POOL_WORDMASK; - - /* XOR in the various taps */ - w ^= input_pool_data[i]; - w ^= input_pool_data[(i + POOL_TAP1) & POOL_WORDMASK]; - w ^= input_pool_data[(i + POOL_TAP2) & POOL_WORDMASK]; - w ^= input_pool_data[(i + POOL_TAP3) & POOL_WORDMASK]; - w ^= input_pool_data[(i + POOL_TAP4) & POOL_WORDMASK]; - w ^= input_pool_data[(i + POOL_TAP5) & POOL_WORDMASK]; - - /* Mix the result back in with a twist */ - input_pool_data[i] = (w >> 3) ^ twist_table[w & 7]; - - /* - * Normally, we add 7 bits of rotation to the pool. - * At the beginning of the pool, add an extra 7 bits - * rotation, so that successive passes spread the - * input bits across the pool evenly. - */ - input_rotate = (input_rotate + (i ? 7 : 14)) & 31; - } - - input_pool.input_rotate = input_rotate; - input_pool.add_ptr = i; + blake2s_update(&input_pool.hash, in, nbytes); } static void __mix_pool_bytes(const void *in, int nbytes) @@ -953,15 +781,14 @@ static int crng_slow_load(const u8 *cp, size_t len) static void crng_reseed(struct crng_state *crng, bool use_input_pool) { unsigned long flags; - int i, num; + int i; union { u8 block[CHACHA_BLOCK_SIZE]; u32 key[8]; } buf; if (use_input_pool) { - num = extract_entropy(&buf, 32, 16); - if (num == 0) + if (!extract_entropy(&buf, 32, 16)) return; } else { _extract_crng(&primary_crng, buf.block); @@ -1329,74 +1156,48 @@ static size_t account(size_t nbytes, int min) } /* - * This function does the actual extraction for extract_entropy. - * - * Note: we assume that .poolwords is a multiple of 16 words. 
+ * This is an HKDF-like construction for using the hashed collected entropy + * as a PRF key, that's then expanded block-by-block. */ -static void extract_buf(u8 *out) +static void _extract_entropy(void *buf, size_t nbytes) { - struct blake2s_state state __aligned(__alignof__(unsigned long)); - u8 hash[BLAKE2S_HASH_SIZE]; - unsigned long *salt; unsigned long flags; + u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; + struct { + unsigned long rdrand[32 / sizeof(long)]; + size_t counter; + } block; + size_t i; - blake2s_init(&state, sizeof(hash)); - - /* - * If we have an architectural hardware random number - * generator, use it for BLAKE2's salt & personal fields. - */ - for (salt = (unsigned long *)&state.h[4]; - salt < (unsigned long *)&state.h[8]; ++salt) { - unsigned long v; - if (!arch_get_random_long(&v)) - break; - *salt ^= v; + for (i = 0; i < ARRAY_SIZE(block.rdrand); ++i) { + if (!arch_get_random_long(&block.rdrand[i])) + block.rdrand[i] = random_get_entropy(); } - /* Generate a hash across the pool */ spin_lock_irqsave(&input_pool.lock, flags); - blake2s_update(&state, (const u8 *)input_pool_data, POOL_BYTES); - blake2s_final(&state, hash); /* final zeros out state */ - /* - * We mix the hash back into the pool to prevent backtracking - * attacks (where the attacker knows the state of the pool - * plus the current outputs, and attempts to find previous - * outputs), unless the hash function can be inverted. By - * mixing at least a hash worth of hash data back, we make - * brute-forcing the feedback as hard as brute-forcing the - * hash. - */ - __mix_pool_bytes(hash, sizeof(hash)); + /* seed = HASHPRF(last_key, entropy_input) */ + blake2s_final(&input_pool.hash, seed); + + /* next_key = HASHPRF(seed, RDRAND || 0) */ + block.counter = 0; + blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed)); + blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key)); + spin_unlock_irqrestore(&input_pool.lock, flags); - - /* Note that EXTRACT_SIZE is half of hash size here, because above - * we've dumped the full length back into mixer. By reducing the - * amount that we emit, we retain a level of forward secrecy. - */ - memcpy(out, hash, EXTRACT_SIZE); - memzero_explicit(hash, sizeof(hash)); -} - -static ssize_t _extract_entropy(void *buf, size_t nbytes) -{ - ssize_t ret = 0, i; - u8 tmp[EXTRACT_SIZE]; + memzero_explicit(next_key, sizeof(next_key)); while (nbytes) { - extract_buf(tmp); - i = min_t(int, nbytes, EXTRACT_SIZE); - memcpy(buf, tmp, i); + i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE); + /* output = HASHPRF(seed, RDRAND || ++counter) */ + ++block.counter; + blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed)); nbytes -= i; buf += i; - ret += i; } - /* Wipe data just returned from memory */ - memzero_explicit(tmp, sizeof(tmp)); - - return ret; + memzero_explicit(seed, sizeof(seed)); + memzero_explicit(&block, sizeof(block)); } /* @@ -1404,13 +1205,18 @@ static ssize_t _extract_entropy(void *buf, size_t nbytes) * returns it in a buffer. * * The min parameter specifies the minimum amount we can pull before - * failing to avoid races that defeat catastrophic reseeding. + * failing to avoid races that defeat catastrophic reseeding. If we + * have less than min entropy available, we return false and buf is + * not filled. 
*/ -static ssize_t extract_entropy(void *buf, size_t nbytes, int min) +static bool extract_entropy(void *buf, size_t nbytes, int min) { trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_); - nbytes = account(nbytes, min); - return _extract_entropy(buf, nbytes); + if (account(nbytes, min)) { + _extract_entropy(buf, nbytes); + return true; + } + return false; } #define warn_unseeded_randomness(previous) \ @@ -1674,7 +1480,7 @@ static void __init init_std_data(void) unsigned long rv; mix_pool_bytes(&now, sizeof(now)); - for (i = POOL_BYTES; i > 0; i -= sizeof(rv)) { + for (i = BLAKE2S_BLOCK_SIZE; i > 0; i -= sizeof(rv)) { if (!arch_get_random_seed_long(&rv) && !arch_get_random_long(&rv)) rv = random_get_entropy(); From bb9c45cfb97e7ad9942f6e076ea68a0663759f56 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 2 Feb 2022 13:30:03 +0100 Subject: [PATCH 0154/1842] random: simplify entropy debiting commit 9c07f57869e90140080cfc282cc628d123e27704 upstream. Our pool is 256 bits, and we only ever use all of it or don't use it at all, which is decided by whether or not it has at least 128 bits in it. So we can drastically simplify the accounting and cmpxchg loop to do exactly this. While we're at it, we move the minimum bit size into a constant so it can be shared between the two places where it matters. The reason we want any of this is for the case in which an attacker has compromised the current state, and then bruteforces small amounts of entropy added to it. By demanding a particular minimum amount of entropy be present before reseeding, we make that bruteforcing difficult. Note that this rationale no longer includes anything about /dev/random blocking at the right moment, since /dev/random no longer blocks (except for at ~boot), but rather uses the crng. In a former life, /dev/random was different and therefore required a more nuanced account(), but this is no longer. Behaviorally, nothing changes here. This is just a simplification of the code. Cc: Theodore Ts'o Cc: Greg Kroah-Hartman Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 91 ++++++++--------------------------- include/trace/events/random.h | 30 +++--------- 2 files changed, 27 insertions(+), 94 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index b1d6a9e8bf7e..4e1fd614a214 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -289,12 +289,14 @@ enum poolinfo { POOL_BITS = BLAKE2S_HASH_SIZE * 8, POOL_BITSHIFT = ilog2(POOL_BITS), + POOL_MIN_BITS = POOL_BITS / 2, /* To allow fractional bits to be tracked, the entropy_count field is * denominated in units of 1/8th bits. 
*/ POOL_ENTROPY_SHIFT = 3, #define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT) - POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT + POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT, + POOL_MIN_FRACBITS = POOL_MIN_BITS << POOL_ENTROPY_SHIFT }; /* @@ -375,8 +377,7 @@ static struct { .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), }; -static bool extract_entropy(void *buf, size_t nbytes, int min); -static void _extract_entropy(void *buf, size_t nbytes); +static void extract_entropy(void *buf, size_t nbytes); static void crng_reseed(struct crng_state *crng, bool use_input_pool); @@ -467,7 +468,7 @@ static void process_random_ready_list(void) */ static void credit_entropy_bits(int nbits) { - int entropy_count, entropy_bits, orig; + int entropy_count, orig; int nfrac = nbits << POOL_ENTROPY_SHIFT; /* Ensure that the multiplication can avoid being 64 bits wide. */ @@ -527,8 +528,7 @@ static void credit_entropy_bits(int nbits) trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RET_IP_); - entropy_bits = entropy_count >> POOL_ENTROPY_SHIFT; - if (crng_init < 2 && entropy_bits >= 128) + if (crng_init < 2 && entropy_count >= POOL_MIN_FRACBITS) crng_reseed(&primary_crng, true); } @@ -618,7 +618,7 @@ static void crng_initialize_secondary(struct crng_state *crng) static void __init crng_initialize_primary(void) { - _extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); + extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); numa_crng_init(); @@ -788,8 +788,17 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool) } buf; if (use_input_pool) { - if (!extract_entropy(&buf, 32, 16)) - return; + int entropy_count; + do { + entropy_count = READ_ONCE(input_pool.entropy_count); + if (entropy_count < POOL_MIN_FRACBITS) + return; + } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); + extract_entropy(buf.key, sizeof(buf.key)); + if (random_write_wakeup_bits) { + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); + } } else { _extract_crng(&primary_crng, buf.block); _crng_backtrack_protect(&primary_crng, buf.block, @@ -1114,52 +1123,11 @@ EXPORT_SYMBOL_GPL(add_disk_randomness); * *********************************************************************/ -/* - * This function decides how many bytes to actually take from the - * given pool, and also debits the entropy count accordingly. - */ -static size_t account(size_t nbytes, int min) -{ - int entropy_count, orig; - size_t ibytes, nfrac; - - BUG_ON(input_pool.entropy_count > POOL_FRACBITS); - - /* Can we pull enough? 
*/ -retry: - entropy_count = orig = READ_ONCE(input_pool.entropy_count); - if (WARN_ON(entropy_count < 0)) { - pr_warn("negative entropy count: count %d\n", entropy_count); - entropy_count = 0; - } - - /* never pull more than available */ - ibytes = min_t(size_t, nbytes, entropy_count >> (POOL_ENTROPY_SHIFT + 3)); - if (ibytes < min) - ibytes = 0; - nfrac = ibytes << (POOL_ENTROPY_SHIFT + 3); - if ((size_t)entropy_count > nfrac) - entropy_count -= nfrac; - else - entropy_count = 0; - - if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig) - goto retry; - - trace_debit_entropy(8 * ibytes); - if (ibytes && POOL_ENTROPY_BITS() < random_write_wakeup_bits) { - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - } - - return ibytes; -} - /* * This is an HKDF-like construction for using the hashed collected entropy * as a PRF key, that's then expanded block-by-block. */ -static void _extract_entropy(void *buf, size_t nbytes) +static void extract_entropy(void *buf, size_t nbytes) { unsigned long flags; u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; @@ -1169,6 +1137,8 @@ static void _extract_entropy(void *buf, size_t nbytes) } block; size_t i; + trace_extract_entropy(nbytes, POOL_ENTROPY_BITS()); + for (i = 0; i < ARRAY_SIZE(block.rdrand); ++i) { if (!arch_get_random_long(&block.rdrand[i])) block.rdrand[i] = random_get_entropy(); @@ -1200,25 +1170,6 @@ static void _extract_entropy(void *buf, size_t nbytes) memzero_explicit(&block, sizeof(block)); } -/* - * This function extracts randomness from the "entropy pool", and - * returns it in a buffer. - * - * The min parameter specifies the minimum amount we can pull before - * failing to avoid races that defeat catastrophic reseeding. If we - * have less than min entropy available, we return false and buf is - * not filled. 
- */ -static bool extract_entropy(void *buf, size_t nbytes, int min) -{ - trace_extract_entropy(nbytes, POOL_ENTROPY_BITS(), _RET_IP_); - if (account(nbytes, min)) { - _extract_entropy(buf, nbytes); - return true; - } - return false; -} - #define warn_unseeded_randomness(previous) \ _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) diff --git a/include/trace/events/random.h b/include/trace/events/random.h index a2d9aa16a5d7..ad149aeaf42c 100644 --- a/include/trace/events/random.h +++ b/include/trace/events/random.h @@ -79,22 +79,6 @@ TRACE_EVENT(credit_entropy_bits, __entry->bits, __entry->entropy_count, (void *)__entry->IP) ); -TRACE_EVENT(debit_entropy, - TP_PROTO(int debit_bits), - - TP_ARGS( debit_bits), - - TP_STRUCT__entry( - __field( int, debit_bits ) - ), - - TP_fast_assign( - __entry->debit_bits = debit_bits; - ), - - TP_printk("input pool: debit_bits %d", __entry->debit_bits) -); - TRACE_EVENT(add_input_randomness, TP_PROTO(int input_bits), @@ -161,31 +145,29 @@ DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch, ); DECLARE_EVENT_CLASS(random__extract_entropy, - TP_PROTO(int nbytes, int entropy_count, unsigned long IP), + TP_PROTO(int nbytes, int entropy_count), - TP_ARGS(nbytes, entropy_count, IP), + TP_ARGS(nbytes, entropy_count), TP_STRUCT__entry( __field( int, nbytes ) __field( int, entropy_count ) - __field(unsigned long, IP ) ), TP_fast_assign( __entry->nbytes = nbytes; __entry->entropy_count = entropy_count; - __entry->IP = IP; ), - TP_printk("input pool: nbytes %d entropy_count %d caller %pS", - __entry->nbytes, __entry->entropy_count, (void *)__entry->IP) + TP_printk("input pool: nbytes %d entropy_count %d", + __entry->nbytes, __entry->entropy_count) ); DEFINE_EVENT(random__extract_entropy, extract_entropy, - TP_PROTO(int nbytes, int entropy_count, unsigned long IP), + TP_PROTO(int nbytes, int entropy_count), - TP_ARGS(nbytes, entropy_count, IP) + TP_ARGS(nbytes, entropy_count) ); TRACE_EVENT(urandom_read, From 985292206167ddd2d6db65c8911417c6e8e3688d Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 3 Feb 2022 13:28:06 +0100 Subject: [PATCH 0155/1842] random: use linear min-entropy accumulation crediting MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit c570449094844527577c5c914140222cb1893e3f upstream. 30e37ec516ae ("random: account for entropy loss due to overwrites") assumed that adding new entropy to the LFSR pool probabilistically cancelled out old entropy there, so entropy was credited asymptotically, approximating Shannon entropy of independent sources (rather than a stronger min-entropy notion) using 1/8th fractional bits and replacing a constant 2-2/√𝑒 term (~0.786938) with 3/4 (0.75) to slightly underestimate it. This wasn't superb, but it was perhaps better than nothing, so that's what was done. Which entropy specifically was being cancelled out and how much precisely each time is hard to tell, though as I showed with the attack code in my previous commit, a motivated adversary with sufficient information can actually cancel out everything. Since we're no longer using an LFSR for entropy accumulation, this probabilistic cancellation is no longer relevant. Rather, we're now using a computational hash function as the accumulator and we've switched to working in the random oracle model, from which we can now revisit the question of min-entropy accumulation, which is done in detail in . 
Consider a long input bit string that is built by concatenating various smaller independent input bit strings. Each one of these inputs has a designated min-entropy, which is what we're passing to credit_entropy_bits(h). When we pass the concatenation of these to a random oracle, it means that an adversary trying to receive back the same reply as us would need to become certain about each part of the concatenated bit string we passed in, which means becoming certain about all of those h values. That means we can estimate the accumulation by simply adding up the h values in calls to credit_entropy_bits(h); there's no probabilistic cancellation at play like there was said to be for the LFSR. Incidentally, this is also what other entropy accumulators based on computational hash functions do as well. So this commit replaces credit_entropy_bits(h) with essentially `total = min(POOL_BITS, total + h)`, done with a cmpxchg loop as before. What if we're wrong and the above is nonsense? It's not, but let's assume we don't want the actual _behavior_ of the code to change much. Currently that behavior is not extracting from the input pool until it has 128 bits of entropy in it. With the old algorithm, we'd hit that magic 128 number after roughly 256 calls to credit_entropy_bits(1). So, we can retain more or less the old behavior by waiting to extract from the input pool until it hits 256 bits of entropy using the new code. For people concerned about this change, it means that there's not that much practical behavioral change. And for folks actually trying to model the behavior rigorously, it means that we have an even higher margin against attacks. Cc: Theodore Ts'o Cc: Dominik Brodowski Cc: Greg Kroah-Hartman Reviewed-by: Eric Biggers Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 114 ++++++++---------------------------------- 1 file changed, 20 insertions(+), 94 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 4e1fd614a214..f678620a25f8 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -286,17 +286,9 @@ /* #define ADD_INTERRUPT_BENCH */ -enum poolinfo { +enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, - POOL_BITSHIFT = ilog2(POOL_BITS), - POOL_MIN_BITS = POOL_BITS / 2, - - /* To allow fractional bits to be tracked, the entropy_count field is - * denominated in units of 1/8th bits. */ - POOL_ENTROPY_SHIFT = 3, -#define POOL_ENTROPY_BITS() (input_pool.entropy_count >> POOL_ENTROPY_SHIFT) - POOL_FRACBITS = POOL_BITS << POOL_ENTROPY_SHIFT, - POOL_MIN_FRACBITS = POOL_MIN_BITS << POOL_ENTROPY_SHIFT + POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */ }; /* @@ -309,7 +301,7 @@ static struct fasync_struct *fasync; * should wake up processes which are selecting or polling on write * access to /dev/random. */ -static int random_write_wakeup_bits = POOL_BITS * 3 / 4; +static int random_write_wakeup_bits = POOL_MIN_BITS; static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); @@ -469,66 +461,18 @@ static void process_random_ready_list(void) static void credit_entropy_bits(int nbits) { int entropy_count, orig; - int nfrac = nbits << POOL_ENTROPY_SHIFT; - - /* Ensure that the multiplication can avoid being 64 bits wide. 
*/ - BUILD_BUG_ON(2 * (POOL_ENTROPY_SHIFT + POOL_BITSHIFT) > 31); if (!nbits) return; -retry: - entropy_count = orig = READ_ONCE(input_pool.entropy_count); - if (nfrac < 0) { - /* Debit */ - entropy_count += nfrac; - } else { - /* - * Credit: we have to account for the possibility of - * overwriting already present entropy. Even in the - * ideal case of pure Shannon entropy, new contributions - * approach the full value asymptotically: - * - * entropy <- entropy + (pool_size - entropy) * - * (1 - exp(-add_entropy/pool_size)) - * - * For add_entropy <= pool_size/2 then - * (1 - exp(-add_entropy/pool_size)) >= - * (add_entropy/pool_size)*0.7869... - * so we can approximate the exponential with - * 3/4*add_entropy/pool_size and still be on the - * safe side by adding at most pool_size/2 at a time. - * - * The use of pool_size-2 in the while statement is to - * prevent rounding artifacts from making the loop - * arbitrarily long; this limits the loop to log2(pool_size)*2 - * turns no matter how large nbits is. - */ - int pnfrac = nfrac; - const int s = POOL_BITSHIFT + POOL_ENTROPY_SHIFT + 2; - /* The +2 corresponds to the /4 in the denominator */ + do { + orig = READ_ONCE(input_pool.entropy_count); + entropy_count = min(POOL_BITS, orig + nbits); + } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); - do { - unsigned int anfrac = min(pnfrac, POOL_FRACBITS / 2); - unsigned int add = - ((POOL_FRACBITS - entropy_count) * anfrac * 3) >> s; + trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_); - entropy_count += add; - pnfrac -= anfrac; - } while (unlikely(entropy_count < POOL_FRACBITS - 2 && pnfrac)); - } - - if (WARN_ON(entropy_count < 0)) { - pr_warn("negative entropy/overflow: count %d\n", entropy_count); - entropy_count = 0; - } else if (entropy_count > POOL_FRACBITS) - entropy_count = POOL_FRACBITS; - if (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig) - goto retry; - - trace_credit_entropy_bits(nbits, entropy_count >> POOL_ENTROPY_SHIFT, _RET_IP_); - - if (crng_init < 2 && entropy_count >= POOL_MIN_FRACBITS) + if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) crng_reseed(&primary_crng, true); } @@ -791,7 +735,7 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool) int entropy_count; do { entropy_count = READ_ONCE(input_pool.entropy_count); - if (entropy_count < POOL_MIN_FRACBITS) + if (entropy_count < POOL_MIN_BITS) return; } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); extract_entropy(buf.key, sizeof(buf.key)); @@ -1014,7 +958,7 @@ void add_input_randomness(unsigned int type, unsigned int code, last_value = value; add_timer_randomness(&input_timer_state, (type << 4) ^ code ^ (code >> 4) ^ value); - trace_add_input_randomness(POOL_ENTROPY_BITS()); + trace_add_input_randomness(input_pool.entropy_count); } EXPORT_SYMBOL_GPL(add_input_randomness); @@ -1112,7 +1056,7 @@ void add_disk_randomness(struct gendisk *disk) return; /* first major is 1, so we get >= 0x200 here */ add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); - trace_add_disk_randomness(disk_devt(disk), POOL_ENTROPY_BITS()); + trace_add_disk_randomness(disk_devt(disk), input_pool.entropy_count); } EXPORT_SYMBOL_GPL(add_disk_randomness); #endif @@ -1137,7 +1081,7 @@ static void extract_entropy(void *buf, size_t nbytes) } block; size_t i; - trace_extract_entropy(nbytes, POOL_ENTROPY_BITS()); + trace_extract_entropy(nbytes, input_pool.entropy_count); for (i = 0; i < ARRAY_SIZE(block.rdrand); ++i) { if 
(!arch_get_random_long(&block.rdrand[i])) @@ -1486,9 +1430,9 @@ static ssize_t urandom_read_nowarn(struct file *file, char __user *buf, { int ret; - nbytes = min_t(size_t, nbytes, INT_MAX >> (POOL_ENTROPY_SHIFT + 3)); + nbytes = min_t(size_t, nbytes, INT_MAX >> 6); ret = extract_crng_user(buf, nbytes); - trace_urandom_read(8 * nbytes, 0, POOL_ENTROPY_BITS()); + trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count); return ret; } @@ -1527,7 +1471,7 @@ static __poll_t random_poll(struct file *file, poll_table *wait) mask = 0; if (crng_ready()) mask |= EPOLLIN | EPOLLRDNORM; - if (POOL_ENTROPY_BITS() < random_write_wakeup_bits) + if (input_pool.entropy_count < random_write_wakeup_bits) mask |= EPOLLOUT | EPOLLWRNORM; return mask; } @@ -1582,8 +1526,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) switch (cmd) { case RNDGETENTCNT: /* inherently racy, no point locking */ - ent_count = POOL_ENTROPY_BITS(); - if (put_user(ent_count, p)) + if (put_user(input_pool.entropy_count, p)) return -EFAULT; return 0; case RNDADDTOENTCNT: @@ -1734,23 +1677,6 @@ static int proc_do_uuid(struct ctl_table *table, int write, void *buffer, return proc_dostring(&fake_table, write, buffer, lenp, ppos); } -/* - * Return entropy available scaled to integral bits - */ -static int proc_do_entropy(struct ctl_table *table, int write, void *buffer, - size_t *lenp, loff_t *ppos) -{ - struct ctl_table fake_table; - int entropy_count; - - entropy_count = *(int *)table->data >> POOL_ENTROPY_SHIFT; - - fake_table.data = &entropy_count; - fake_table.maxlen = sizeof(entropy_count); - - return proc_dointvec(&fake_table, write, buffer, lenp, ppos); -} - static int sysctl_poolsize = POOL_BITS; extern struct ctl_table random_table[]; struct ctl_table random_table[] = { @@ -1763,10 +1689,10 @@ struct ctl_table random_table[] = { }, { .procname = "entropy_avail", + .data = &input_pool.entropy_count, .maxlen = sizeof(int), .mode = 0444, - .proc_handler = proc_do_entropy, - .data = &input_pool.entropy_count, + .proc_handler = proc_dointvec, }, { .procname = "write_wakeup_threshold", @@ -1962,7 +1888,7 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, */ wait_event_interruptible_timeout(random_write_wait, !system_wq || kthread_should_stop() || - POOL_ENTROPY_BITS() <= random_write_wakeup_bits, + input_pool.entropy_count <= random_write_wakeup_bits, CRNG_RESEED_INTERVAL); mix_pool_bytes(buffer, count); credit_entropy_bits(entropy); From 32d1d7ce3aada23f944e74c519687c45aa3d05b0 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 5 Feb 2022 14:00:58 +0100 Subject: [PATCH 0156/1842] random: always wake up entropy writers after extraction commit 489c7fc44b5740d377e8cfdbf0851036e493af00 upstream. Now that POOL_BITS == POOL_MIN_BITS, we must unconditionally wake up entropy writers after every extraction. Therefore there's no point of write_wakeup_threshold, so we can move it to the dustbin of unused compatibility sysctls. While we're at it, we can fix a small comparison where we were waking up after <= min rather than < min. Cc: Theodore Ts'o Suggested-by: Eric Biggers Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- Documentation/admin-guide/sysctl/kernel.rst | 7 +++-- drivers/char/random.c | 33 +++++++-------------- 2 files changed, 16 insertions(+), 24 deletions(-) diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst index e338306f4587..ffc392289c68 100644 --- a/Documentation/admin-guide/sysctl/kernel.rst +++ b/Documentation/admin-guide/sysctl/kernel.rst @@ -1011,14 +1011,17 @@ This is a directory, with the following entries: * ``poolsize``: the entropy pool size, in bits; * ``urandom_min_reseed_secs``: obsolete (used to determine the minimum - number of seconds between urandom pool reseeding). + number of seconds between urandom pool reseeding). This file is + writable for compatibility purposes, but writing to it has no effect + on any RNG behavior. * ``uuid``: a UUID generated every time this is retrieved (this can thus be used to generate UUIDs at will); * ``write_wakeup_threshold``: when the entropy count drops below this (as a number of bits), processes waiting to write to ``/dev/random`` - are woken up. + are woken up. This file is writable for compatibility purposes, but + writing to it has no effect on any RNG behavior. If ``drivers/char/random.c`` is built with ``ADD_INTERRUPT_BENCH`` defined, these additional entries are present: diff --git a/drivers/char/random.c b/drivers/char/random.c index f678620a25f8..6e8176f4a448 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -296,12 +296,6 @@ enum { */ static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); static struct fasync_struct *fasync; -/* - * If the entropy count falls under this number of bits, then we - * should wake up processes which are selecting or polling on write - * access to /dev/random. 
- */ -static int random_write_wakeup_bits = POOL_MIN_BITS; static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); @@ -739,10 +733,8 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool) return; } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); extract_entropy(buf.key, sizeof(buf.key)); - if (random_write_wakeup_bits) { - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - } + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); } else { _extract_crng(&primary_crng, buf.block); _crng_backtrack_protect(&primary_crng, buf.block, @@ -1471,7 +1463,7 @@ static __poll_t random_poll(struct file *file, poll_table *wait) mask = 0; if (crng_ready()) mask |= EPOLLIN | EPOLLRDNORM; - if (input_pool.entropy_count < random_write_wakeup_bits) + if (input_pool.entropy_count < POOL_MIN_BITS) mask |= EPOLLOUT | EPOLLWRNORM; return mask; } @@ -1556,7 +1548,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) */ if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (xchg(&input_pool.entropy_count, 0) && random_write_wakeup_bits) { + if (xchg(&input_pool.entropy_count, 0)) { wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); } @@ -1636,9 +1628,9 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, #include -static int min_write_thresh; -static int max_write_thresh = POOL_BITS; static int random_min_urandom_seed = 60; +static int random_write_wakeup_bits = POOL_MIN_BITS; +static int sysctl_poolsize = POOL_BITS; static char sysctl_bootid[16]; /* @@ -1677,7 +1669,6 @@ static int proc_do_uuid(struct ctl_table *table, int write, void *buffer, return proc_dostring(&fake_table, write, buffer, lenp, ppos); } -static int sysctl_poolsize = POOL_BITS; extern struct ctl_table random_table[]; struct ctl_table random_table[] = { { @@ -1699,9 +1690,7 @@ struct ctl_table random_table[] = { .data = &random_write_wakeup_bits, .maxlen = sizeof(int), .mode = 0644, - .proc_handler = proc_dointvec_minmax, - .extra1 = &min_write_thresh, - .extra2 = &max_write_thresh, + .proc_handler = proc_dointvec, }, { .procname = "urandom_min_reseed_secs", @@ -1882,13 +1871,13 @@ void add_hwgenerator_randomness(const char *buffer, size_t count, } /* Throttle writing if we're above the trickle threshold. - * We'll be woken up again once below random_write_wakeup_thresh, - * when the calling thread is about to terminate, or once - * CRNG_RESEED_INTERVAL has lapsed. + * We'll be woken up again once below POOL_MIN_BITS, when + * the calling thread is about to terminate, or once + * CRNG_RESEED_INTERVAL has elapsed. */ wait_event_interruptible_timeout(random_write_wait, !system_wq || kthread_should_stop() || - input_pool.entropy_count <= random_write_wakeup_bits, + input_pool.entropy_count < POOL_MIN_BITS, CRNG_RESEED_INTERVAL); mix_pool_bytes(buffer, count); credit_entropy_bits(entropy); From b07fcd9e53fa778fd00b650a6b0b39a11dcdedb4 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 4 Feb 2022 01:45:53 +0100 Subject: [PATCH 0157/1842] random: make credit_entropy_bits() always safe commit a49c010e61e1938be851f5e49ac219d49b704103 upstream. This is called from various hwgenerator drivers, so rather than having one "safe" version for userspace and one "unsafe" version for the kernel, just make everything safe; the checks are cheap and sensible to have anyway. 
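As a rough illustration of what "always safe" means in practice, here is a userspace C sketch (assumptions: POOL_BITS = 256, a plain int in place of the lock-free pool counter, and -1 standing in for -EINVAL). The clamping now lives inside credit_entropy_bits() itself, so a caller like the RNDADDTOENTCNT ioctl path in the diff below only has to reject negative counts.

#include <stdio.h>

#define POOL_BITS 256

static int entropy_count;

static void credit_entropy_bits(int nbits)
{
	if (nbits <= 0)
		return;			/* ignore zero/negative credits */
	if (nbits > POOL_BITS)
		nbits = POOL_BITS;	/* cap absurd values from drivers */
	entropy_count += nbits;
	if (entropy_count > POOL_BITS)
		entropy_count = POOL_BITS;
}

/* What a caller like the RNDADDTOENTCNT ioctl reduces to. */
static int add_to_ent_count(int ent_count)
{
	if (ent_count < 0)
		return -1;		/* -EINVAL in the real ioctl */
	credit_entropy_bits(ent_count);
	return 0;
}

int main(void)
{
	add_to_ent_count(100000);	/* silently clamped to POOL_BITS */
	add_to_ent_count(-5);		/* rejected */
	printf("entropy_count = %d\n", entropy_count);	/* prints 256 */
	return 0;
}
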
Reported-by: Sultan Alsawaf Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 29 +++++++++-------------------- 1 file changed, 9 insertions(+), 20 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 6e8176f4a448..d5a2714a8126 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -447,18 +447,15 @@ static void process_random_ready_list(void) spin_unlock_irqrestore(&random_ready_list_lock, flags); } -/* - * Credit (or debit) the entropy store with n bits of entropy. - * Use credit_entropy_bits_safe() if the value comes from userspace - * or otherwise should be checked for extreme values. - */ static void credit_entropy_bits(int nbits) { int entropy_count, orig; - if (!nbits) + if (nbits <= 0) return; + nbits = min(nbits, POOL_BITS); + do { orig = READ_ONCE(input_pool.entropy_count); entropy_count = min(POOL_BITS, orig + nbits); @@ -470,18 +467,6 @@ static void credit_entropy_bits(int nbits) crng_reseed(&primary_crng, true); } -static int credit_entropy_bits_safe(int nbits) -{ - if (nbits < 0) - return -EINVAL; - - /* Cap the value to avoid overflows */ - nbits = min(nbits, POOL_BITS); - - credit_entropy_bits(nbits); - return 0; -} - /********************************************************************* * * CRNG using CHACHA20 @@ -1526,7 +1511,10 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EPERM; if (get_user(ent_count, p)) return -EFAULT; - return credit_entropy_bits_safe(ent_count); + if (ent_count < 0) + return -EINVAL; + credit_entropy_bits(ent_count); + return 0; case RNDADDENTROPY: if (!capable(CAP_SYS_ADMIN)) return -EPERM; @@ -1539,7 +1527,8 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) retval = write_pool((const char __user *)p, size); if (retval < 0) return retval; - return credit_entropy_bits_safe(ent_count); + credit_entropy_bits(ent_count); + return 0; case RNDZAPENTCNT: case RNDCLEARPOOL: /* From 8d07e2a22687ec0b424916543ec4a544dc85b681 Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Fri, 4 Feb 2022 14:17:33 -0800 Subject: [PATCH 0158/1842] random: remove use_input_pool parameter from crng_reseed() commit 5d58ea3a31cc98b9fa563f6921d3d043bf0103d1 upstream. The primary_crng is always reseeded from the input_pool, while the NUMA crngs are always reseeded from the primary_crng. Remove the redundant 'use_input_pool' parameter from crng_reseed() and just directly check whether the crng is the primary_crng. Signed-off-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index d5a2714a8126..8dc1e22bb241 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -365,7 +365,7 @@ static struct { static void extract_entropy(void *buf, size_t nbytes); -static void crng_reseed(struct crng_state *crng, bool use_input_pool); +static void crng_reseed(struct crng_state *crng); /* * This function adds bytes into the entropy "pool". 
It does not @@ -464,7 +464,7 @@ static void credit_entropy_bits(int nbits) trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_); if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) - crng_reseed(&primary_crng, true); + crng_reseed(&primary_crng); } /********************************************************************* @@ -701,7 +701,7 @@ static int crng_slow_load(const u8 *cp, size_t len) return 1; } -static void crng_reseed(struct crng_state *crng, bool use_input_pool) +static void crng_reseed(struct crng_state *crng) { unsigned long flags; int i; @@ -710,7 +710,7 @@ static void crng_reseed(struct crng_state *crng, bool use_input_pool) u32 key[8]; } buf; - if (use_input_pool) { + if (crng == &primary_crng) { int entropy_count; do { entropy_count = READ_ONCE(input_pool.entropy_count); @@ -748,7 +748,7 @@ static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]) init_time = READ_ONCE(crng->init_time); if (time_after(READ_ONCE(crng_global_init_time), init_time) || time_after(jiffies, init_time + CRNG_RESEED_INTERVAL)) - crng_reseed(crng, crng == &primary_crng); + crng_reseed(crng); } spin_lock_irqsave(&crng->lock, flags); chacha20_block(&crng->state[0], out); @@ -1547,7 +1547,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EPERM; if (crng_init < 2) return -ENODATA; - crng_reseed(&primary_crng, true); + crng_reseed(&primary_crng); WRITE_ONCE(crng_global_init_time, jiffies - 1); return 0; default: From 0762b7d1f1add397fe511c076b4871500a991ff0 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 28 Jan 2022 23:29:45 +0100 Subject: [PATCH 0159/1842] random: remove batched entropy locking MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 77760fd7f7ae3dfd03668204e708d1568d75447d upstream. Rather than use spinlocks to protect batched entropy, we can instead disable interrupts locally, since we're dealing with per-cpu data, and manage resets with a basic generation counter. At the same time, we can't quite do this on PREEMPT_RT, where we still want spinlocks-as- mutexes semantics. So we use a local_lock_t, which provides the right behavior for each. Because this is a per-cpu lock, that generation counter is still doing the necessary CPU-to-CPU communication. This should improve performance a bit. It will also fix the linked splat that Jonathan received with a PROVE_RAW_LOCK_NESTING=y. Reviewed-by: Sebastian Andrzej Siewior Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Suggested-by: Andy Lutomirski Reported-by: Jonathan Neuschäfer Tested-by: Jonathan Neuschäfer Link: https://lore.kernel.org/lkml/YfMa0QgsjCVdRAvJ@latitude/ Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 55 ++++++++++++++++++++++--------------------- 1 file changed, 28 insertions(+), 27 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 8dc1e22bb241..cf42ee10633e 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1721,13 +1721,16 @@ struct ctl_table random_table[] = { }; #endif /* CONFIG_SYSCTL */ +static atomic_t batch_generation = ATOMIC_INIT(0); + struct batched_entropy { union { u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)]; u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)]; }; + local_lock_t lock; unsigned int position; - spinlock_t batch_lock; + int generation; }; /* @@ -1739,7 +1742,7 @@ struct batched_entropy { * point prior. 
*/ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = { - .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock), + .lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock) }; u64 get_random_u64(void) @@ -1748,67 +1751,65 @@ u64 get_random_u64(void) unsigned long flags; struct batched_entropy *batch; static void *previous; + int next_gen; warn_unseeded_randomness(&previous); + local_lock_irqsave(&batched_entropy_u64.lock, flags); batch = raw_cpu_ptr(&batched_entropy_u64); - spin_lock_irqsave(&batch->batch_lock, flags); - if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) { + + next_gen = atomic_read(&batch_generation); + if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0 || + next_gen != batch->generation) { extract_crng((u8 *)batch->entropy_u64); batch->position = 0; + batch->generation = next_gen; } + ret = batch->entropy_u64[batch->position++]; - spin_unlock_irqrestore(&batch->batch_lock, flags); + local_unlock_irqrestore(&batched_entropy_u64.lock, flags); return ret; } EXPORT_SYMBOL(get_random_u64); static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = { - .batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock), + .lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock) }; + u32 get_random_u32(void) { u32 ret; unsigned long flags; struct batched_entropy *batch; static void *previous; + int next_gen; warn_unseeded_randomness(&previous); + local_lock_irqsave(&batched_entropy_u32.lock, flags); batch = raw_cpu_ptr(&batched_entropy_u32); - spin_lock_irqsave(&batch->batch_lock, flags); - if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) { + + next_gen = atomic_read(&batch_generation); + if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0 || + next_gen != batch->generation) { extract_crng((u8 *)batch->entropy_u32); batch->position = 0; + batch->generation = next_gen; } + ret = batch->entropy_u32[batch->position++]; - spin_unlock_irqrestore(&batch->batch_lock, flags); + local_unlock_irqrestore(&batched_entropy_u32.lock, flags); return ret; } EXPORT_SYMBOL(get_random_u32); /* It's important to invalidate all potential batched entropy that might * be stored before the crng is initialized, which we can do lazily by - * simply resetting the counter to zero so that it's re-extracted on the - * next usage. */ + * bumping the generation counter. + */ static void invalidate_batched_entropy(void) { - int cpu; - unsigned long flags; - - for_each_possible_cpu(cpu) { - struct batched_entropy *batched_entropy; - - batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu); - spin_lock_irqsave(&batched_entropy->batch_lock, flags); - batched_entropy->position = 0; - spin_unlock(&batched_entropy->batch_lock); - - batched_entropy = per_cpu_ptr(&batched_entropy_u64, cpu); - spin_lock(&batched_entropy->batch_lock); - batched_entropy->position = 0; - spin_unlock_irqrestore(&batched_entropy->batch_lock, flags); - } + atomic_inc(&batch_generation); } /** From 1d1582e5fe5293561b868dcbea6ecef3976a90b0 Mon Sep 17 00:00:00 2001 From: Dominik Brodowski Date: Sat, 5 Feb 2022 11:34:57 +0100 Subject: [PATCH 0160/1842] random: fix locking in crng_fast_load() commit 7c2fe2b32bf76441ff5b7a425b384e5f75aa530a upstream. crng_init is protected by primary_crng->lock, so keep holding that lock when incrementing crng_init from 0 to 1 in crng_fast_load(). The call to pr_notice() can wait until the lock is released; this code path cannot be reached twice, as crng_fast_load() aborts early if crng_init > 0. 
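The pattern is easier to see in isolation. Below is a userspace C sketch, with pthread_mutex_t standing in for primary_crng.lock, a local flag standing in for the crng_init == 1 recheck the patch performs after unlock, and the key mixing elided: the 0 -> 1 transition of the init state stays under the lock that protects it, while the (potentially slow) notice is printed only after the lock is dropped.

#include <pthread.h>
#include <stdio.h>

#define CRNG_INIT_CNT_THRESH 64

static pthread_mutex_t crng_lock = PTHREAD_MUTEX_INITIALIZER;
static int crng_init;			/* protected by crng_lock */
static int crng_init_cnt;		/* protected by crng_lock */

static void fast_load(int nbytes)
{
	int init_done = 0;

	pthread_mutex_lock(&crng_lock);
	if (crng_init == 0) {
		crng_init_cnt += nbytes;	/* key mixing elided in this sketch */
		if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
			crng_init = 1;		/* state transition under the lock */
			init_done = 1;
		}
	}
	pthread_mutex_unlock(&crng_lock);

	if (init_done)
		printf("fast init done\n");	/* pr_notice() deferred past unlock */
}

int main(void)
{
	fast_load(32);
	fast_load(32);
	return 0;
}
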
Signed-off-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index cf42ee10633e..317864de62c8 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -647,12 +647,13 @@ static size_t crng_fast_load(const u8 *cp, size_t len) p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp; cp++; crng_init_cnt++; len--; ret++; } - spin_unlock_irqrestore(&primary_crng.lock, flags); if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { invalidate_batched_entropy(); crng_init = 1; - pr_notice("fast init done\n"); } + spin_unlock_irqrestore(&primary_crng.lock, flags); + if (crng_init == 1) + pr_notice("fast init done\n"); return ret; } From c36e71b5a52e3a9c0f60b7d633de1980a1cf8652 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 8 Feb 2022 12:18:33 +0100 Subject: [PATCH 0161/1842] random: use RDSEED instead of RDRAND in entropy extraction commit 28f425e573e906a4c15f8392cc2b1561ef448595 upstream. When /dev/random was directly connected with entropy extraction, without any expansion stage, extract_buf() was called for every 10 bytes of data read from /dev/random. For that reason, RDRAND was used rather than RDSEED. At the same time, crng_reseed() was still only called every 5 minutes, so there RDSEED made sense. Those olden days were also a time when the entropy collector did not use a cryptographic hash function, which meant most bets were off in terms of real preimage resistance. For that reason too it didn't matter _that_ much whether RDSEED was mixed in before or after entropy extraction; both choices were sort of bad. But now we have a cryptographic hash function at work, and with that we get real preimage resistance. We also now only call extract_entropy() every 5 minutes, rather than every 10 bytes. This allows us to do two important things. First, we can switch to using RDSEED in extract_entropy(), as Dominik suggested. Second, we can ensure that RDSEED input always goes into the cryptographic hash function with other things before being used directly. This eliminates a category of attacks in which the CPU knows the current state of the crng and knows that we're going to xor RDSEED into it, and so it computes a malicious RDSEED. By going through our hash function, it would require the CPU to compute a preimage on the fly, which isn't going to happen. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Suggested-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 22 +++++++++------------- 1 file changed, 9 insertions(+), 13 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 317864de62c8..39132383adb8 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -727,13 +727,8 @@ static void crng_reseed(struct crng_state *crng) CHACHA_KEY_SIZE); } spin_lock_irqsave(&crng->lock, flags); - for (i = 0; i < 8; i++) { - unsigned long rv; - if (!arch_get_random_seed_long(&rv) && - !arch_get_random_long(&rv)) - rv = random_get_entropy(); - crng->state[i + 4] ^= buf.key[i] ^ rv; - } + for (i = 0; i < 8; i++) + crng->state[i + 4] ^= buf.key[i]; memzero_explicit(&buf, sizeof(buf)); WRITE_ONCE(crng->init_time, jiffies); spin_unlock_irqrestore(&crng->lock, flags); @@ -1054,16 +1049,17 @@ static void extract_entropy(void *buf, size_t nbytes) unsigned long flags; u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; struct { - unsigned long rdrand[32 / sizeof(long)]; + unsigned long rdseed[32 / sizeof(long)]; size_t counter; } block; size_t i; trace_extract_entropy(nbytes, input_pool.entropy_count); - for (i = 0; i < ARRAY_SIZE(block.rdrand); ++i) { - if (!arch_get_random_long(&block.rdrand[i])) - block.rdrand[i] = random_get_entropy(); + for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) { + if (!arch_get_random_seed_long(&block.rdseed[i]) && + !arch_get_random_long(&block.rdseed[i])) + block.rdseed[i] = random_get_entropy(); } spin_lock_irqsave(&input_pool.lock, flags); @@ -1071,7 +1067,7 @@ static void extract_entropy(void *buf, size_t nbytes) /* seed = HASHPRF(last_key, entropy_input) */ blake2s_final(&input_pool.hash, seed); - /* next_key = HASHPRF(seed, RDRAND || 0) */ + /* next_key = HASHPRF(seed, RDSEED || 0) */ block.counter = 0; blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed)); blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key)); @@ -1081,7 +1077,7 @@ static void extract_entropy(void *buf, size_t nbytes) while (nbytes) { i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE); - /* output = HASHPRF(seed, RDRAND || ++counter) */ + /* output = HASHPRF(seed, RDSEED || ++counter) */ ++block.counter; blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed)); nbytes -= i; From 70377ee0740caaf264d3451252b9d8dea6684b1d Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 6 Feb 2022 23:51:41 +0100 Subject: [PATCH 0162/1842] random: get rid of secondary crngs commit a9412d510ab9a9ba411fea612903631d2e1f1601 upstream. As the comment said, this is indeed a "hack". Since it was introduced, it's been a constant state machine nightmare, with lots of subtle early boot issues and a wildly complex set of machinery to keep everything in sync. Rather than continuing to play whack-a-mole with this approach, this commit simply removes it entirely. This commit is preparation for "random: use simpler fast key erasure flow on per-cpu keys" in this series, which introduces a simpler (and faster) mechanism to accomplish the same thing. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 229 ++++++++++-------------------------------- 1 file changed, 55 insertions(+), 174 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 39132383adb8..a1c1b5b0b0a8 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -323,14 +323,11 @@ static struct crng_state primary_crng = { * its value (from 0->1->2). */ static int crng_init = 0; -static bool crng_need_final_init = false; #define crng_ready() (likely(crng_init > 1)) static int crng_init_cnt = 0; -static unsigned long crng_global_init_time = 0; #define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE) -static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]); -static void _crng_backtrack_protect(struct crng_state *crng, - u8 tmp[CHACHA_BLOCK_SIZE], int used); +static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]); +static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used); static void process_random_ready_list(void); static void _get_random_bytes(void *buf, int nbytes); @@ -365,7 +362,7 @@ static struct { static void extract_entropy(void *buf, size_t nbytes); -static void crng_reseed(struct crng_state *crng); +static void crng_reseed(void); /* * This function adds bytes into the entropy "pool". It does not @@ -464,7 +461,7 @@ static void credit_entropy_bits(int nbits) trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_); if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) - crng_reseed(&primary_crng); + crng_reseed(); } /********************************************************************* @@ -477,16 +474,7 @@ static void credit_entropy_bits(int nbits) static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); -/* - * Hack to deal with crazy userspace progams when they are all trying - * to access /dev/urandom in parallel. The programs are almost - * certainly doing something terribly wrong, but we'll work around - * their brain damage. 
- */ -static struct crng_state **crng_node_pool __read_mostly; - static void invalidate_batched_entropy(void); -static void numa_crng_init(void); static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); static int __init parse_trust_cpu(char *arg) @@ -495,24 +483,6 @@ static int __init parse_trust_cpu(char *arg) } early_param("random.trust_cpu", parse_trust_cpu); -static bool crng_init_try_arch(struct crng_state *crng) -{ - int i; - bool arch_init = true; - unsigned long rv; - - for (i = 4; i < 16; i++) { - if (!arch_get_random_seed_long(&rv) && - !arch_get_random_long(&rv)) { - rv = random_get_entropy(); - arch_init = false; - } - crng->state[i] ^= rv; - } - - return arch_init; -} - static bool __init crng_init_try_arch_early(void) { int i; @@ -531,100 +501,17 @@ static bool __init crng_init_try_arch_early(void) return arch_init; } -static void crng_initialize_secondary(struct crng_state *crng) -{ - chacha_init_consts(crng->state); - _get_random_bytes(&crng->state[4], sizeof(u32) * 12); - crng_init_try_arch(crng); - crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1; -} - -static void __init crng_initialize_primary(void) +static void __init crng_initialize(void) { extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); - numa_crng_init(); crng_init = 2; pr_notice("crng init done (trusting CPU's manufacturer)\n"); } primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1; } -static void crng_finalize_init(void) -{ - if (!system_wq) { - /* We can't call numa_crng_init until we have workqueues, - * so mark this for processing later. */ - crng_need_final_init = true; - return; - } - - invalidate_batched_entropy(); - numa_crng_init(); - crng_init = 2; - crng_need_final_init = false; - process_random_ready_list(); - wake_up_interruptible(&crng_init_wait); - kill_fasync(&fasync, SIGIO, POLL_IN); - pr_notice("crng init done\n"); - if (unseeded_warning.missed) { - pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", - unseeded_warning.missed); - unseeded_warning.missed = 0; - } - if (urandom_warning.missed) { - pr_notice("%d urandom warning(s) missed due to ratelimiting\n", - urandom_warning.missed); - urandom_warning.missed = 0; - } -} - -static void do_numa_crng_init(struct work_struct *work) -{ - int i; - struct crng_state *crng; - struct crng_state **pool; - - pool = kcalloc(nr_node_ids, sizeof(*pool), GFP_KERNEL | __GFP_NOFAIL); - for_each_online_node(i) { - crng = kmalloc_node(sizeof(struct crng_state), - GFP_KERNEL | __GFP_NOFAIL, i); - spin_lock_init(&crng->lock); - crng_initialize_secondary(crng); - pool[i] = crng; - } - /* pairs with READ_ONCE() in select_crng() */ - if (cmpxchg_release(&crng_node_pool, NULL, pool) != NULL) { - for_each_node(i) - kfree(pool[i]); - kfree(pool); - } -} - -static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init); - -static void numa_crng_init(void) -{ - if (IS_ENABLED(CONFIG_NUMA)) - schedule_work(&numa_crng_init_work); -} - -static struct crng_state *select_crng(void) -{ - if (IS_ENABLED(CONFIG_NUMA)) { - struct crng_state **pool; - int nid = numa_node_id(); - - /* pairs with cmpxchg_release() in do_numa_crng_init() */ - pool = READ_ONCE(crng_node_pool); - if (pool && pool[nid]) - return pool[nid]; - } - - return &primary_crng; -} - /* * crng_fast_load() can be called by code in the interrupt service * path. So we can't afford to dilly-dally. 
Returns the number of @@ -702,68 +589,71 @@ static int crng_slow_load(const u8 *cp, size_t len) return 1; } -static void crng_reseed(struct crng_state *crng) +static void crng_reseed(void) { unsigned long flags; - int i; + int i, entropy_count; union { u8 block[CHACHA_BLOCK_SIZE]; u32 key[8]; } buf; - if (crng == &primary_crng) { - int entropy_count; - do { - entropy_count = READ_ONCE(input_pool.entropy_count); - if (entropy_count < POOL_MIN_BITS) - return; - } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); - extract_entropy(buf.key, sizeof(buf.key)); - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - } else { - _extract_crng(&primary_crng, buf.block); - _crng_backtrack_protect(&primary_crng, buf.block, - CHACHA_KEY_SIZE); - } - spin_lock_irqsave(&crng->lock, flags); + do { + entropy_count = READ_ONCE(input_pool.entropy_count); + if (entropy_count < POOL_MIN_BITS) + return; + } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); + extract_entropy(buf.key, sizeof(buf.key)); + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); + + spin_lock_irqsave(&primary_crng.lock, flags); for (i = 0; i < 8; i++) - crng->state[i + 4] ^= buf.key[i]; + primary_crng.state[i + 4] ^= buf.key[i]; memzero_explicit(&buf, sizeof(buf)); - WRITE_ONCE(crng->init_time, jiffies); - spin_unlock_irqrestore(&crng->lock, flags); - if (crng == &primary_crng && crng_init < 2) - crng_finalize_init(); -} - -static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE]) -{ - unsigned long flags, init_time; - - if (crng_ready()) { - init_time = READ_ONCE(crng->init_time); - if (time_after(READ_ONCE(crng_global_init_time), init_time) || - time_after(jiffies, init_time + CRNG_RESEED_INTERVAL)) - crng_reseed(crng); + WRITE_ONCE(primary_crng.init_time, jiffies); + spin_unlock_irqrestore(&primary_crng.lock, flags); + if (crng_init < 2) { + invalidate_batched_entropy(); + crng_init = 2; + process_random_ready_list(); + wake_up_interruptible(&crng_init_wait); + kill_fasync(&fasync, SIGIO, POLL_IN); + pr_notice("crng init done\n"); + if (unseeded_warning.missed) { + pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", + unseeded_warning.missed); + unseeded_warning.missed = 0; + } + if (urandom_warning.missed) { + pr_notice("%d urandom warning(s) missed due to ratelimiting\n", + urandom_warning.missed); + urandom_warning.missed = 0; + } } - spin_lock_irqsave(&crng->lock, flags); - chacha20_block(&crng->state[0], out); - if (crng->state[12] == 0) - crng->state[13]++; - spin_unlock_irqrestore(&crng->lock, flags); } static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]) { - _extract_crng(select_crng(), out); + unsigned long flags, init_time; + + if (crng_ready()) { + init_time = READ_ONCE(primary_crng.init_time); + if (time_after(jiffies, init_time + CRNG_RESEED_INTERVAL)) + crng_reseed(); + } + spin_lock_irqsave(&primary_crng.lock, flags); + chacha20_block(&primary_crng.state[0], out); + if (primary_crng.state[12] == 0) + primary_crng.state[13]++; + spin_unlock_irqrestore(&primary_crng.lock, flags); } /* * Use the leftover bytes from the CRNG block output (if there is * enough) to mutate the CRNG key to provide backtracking protection. 
*/ -static void _crng_backtrack_protect(struct crng_state *crng, - u8 tmp[CHACHA_BLOCK_SIZE], int used) +static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used) { unsigned long flags; u32 *s, *d; @@ -774,17 +664,12 @@ static void _crng_backtrack_protect(struct crng_state *crng, extract_crng(tmp); used = 0; } - spin_lock_irqsave(&crng->lock, flags); + spin_lock_irqsave(&primary_crng.lock, flags); s = (u32 *)&tmp[used]; - d = &crng->state[4]; + d = &primary_crng.state[4]; for (i = 0; i < 8; i++) *d++ ^= *s++; - spin_unlock_irqrestore(&crng->lock, flags); -} - -static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used) -{ - _crng_backtrack_protect(select_crng(), tmp, used); + spin_unlock_irqrestore(&primary_crng.lock, flags); } static ssize_t extract_crng_user(void __user *buf, size_t nbytes) @@ -1371,10 +1256,7 @@ static void __init init_std_data(void) int __init rand_initialize(void) { init_std_data(); - if (crng_need_final_init) - crng_finalize_init(); - crng_initialize_primary(); - crng_global_init_time = jiffies; + crng_initialize(); if (ratelimit_disable) { urandom_warning.interval = 0; unseeded_warning.interval = 0; @@ -1544,8 +1426,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EPERM; if (crng_init < 2) return -ENODATA; - crng_reseed(&primary_crng); - WRITE_ONCE(crng_global_init_time, jiffies - 1); + crng_reseed(); return 0; default: return -EINVAL; From c521bf08ee694964e1ac0fe21b85271522d2fbe1 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 8 Feb 2022 12:40:14 +0100 Subject: [PATCH 0163/1842] random: inline leaves of rand_initialize() commit 8566417221fcec51346ec164e920dacb979c6b5f upstream. This is a preparatory commit for the following one. We simply inline the various functions that rand_initialize() calls that have no other callers. The compiler was doing this anyway before. Doing this will allow us to reorganize this after. We can then move the trust_cpu and parse_trust_cpu definitions a bit closer to where they're actually used, which makes the code easier to read. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 92 ++++++++++++++++--------------------------- 1 file changed, 34 insertions(+), 58 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index a1c1b5b0b0a8..c7b3256c543b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -476,42 +476,6 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); static void invalidate_batched_entropy(void); -static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); -static int __init parse_trust_cpu(char *arg) -{ - return kstrtobool(arg, &trust_cpu); -} -early_param("random.trust_cpu", parse_trust_cpu); - -static bool __init crng_init_try_arch_early(void) -{ - int i; - bool arch_init = true; - unsigned long rv; - - for (i = 4; i < 16; i++) { - if (!arch_get_random_seed_long_early(&rv) && - !arch_get_random_long_early(&rv)) { - rv = random_get_entropy(); - arch_init = false; - } - primary_crng.state[i] ^= rv; - } - - return arch_init; -} - -static void __init crng_initialize(void) -{ - extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); - if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) { - invalidate_batched_entropy(); - crng_init = 2; - pr_notice("crng init done (trusting CPU's manufacturer)\n"); - } - primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1; -} - /* * crng_fast_load() can be called by code in the interrupt service * path. So we can't afford to dilly-dally. Returns the number of @@ -1220,28 +1184,12 @@ int __must_check get_random_bytes_arch(void *buf, int nbytes) } EXPORT_SYMBOL(get_random_bytes_arch); -/* - * init_std_data - initialize pool with system data - * - * This function clears the pool's entropy count and mixes some system - * data into the pool to prepare it for use. The pool is not cleared - * as that can only decrease the entropy in the pool. 
- */ -static void __init init_std_data(void) +static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); +static int __init parse_trust_cpu(char *arg) { - int i; - ktime_t now = ktime_get_real(); - unsigned long rv; - - mix_pool_bytes(&now, sizeof(now)); - for (i = BLAKE2S_BLOCK_SIZE; i > 0; i -= sizeof(rv)) { - if (!arch_get_random_seed_long(&rv) && - !arch_get_random_long(&rv)) - rv = random_get_entropy(); - mix_pool_bytes(&rv, sizeof(rv)); - } - mix_pool_bytes(utsname(), sizeof(*(utsname()))); + return kstrtobool(arg, &trust_cpu); } +early_param("random.trust_cpu", parse_trust_cpu); /* * Note that setup_arch() may call add_device_randomness() @@ -1255,8 +1203,36 @@ static void __init init_std_data(void) */ int __init rand_initialize(void) { - init_std_data(); - crng_initialize(); + int i; + ktime_t now = ktime_get_real(); + bool arch_init = true; + unsigned long rv; + + mix_pool_bytes(&now, sizeof(now)); + for (i = BLAKE2S_BLOCK_SIZE; i > 0; i -= sizeof(rv)) { + if (!arch_get_random_seed_long(&rv) && + !arch_get_random_long(&rv)) + rv = random_get_entropy(); + mix_pool_bytes(&rv, sizeof(rv)); + } + mix_pool_bytes(utsname(), sizeof(*(utsname()))); + + extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); + for (i = 4; i < 16; i++) { + if (!arch_get_random_seed_long_early(&rv) && + !arch_get_random_long_early(&rv)) { + rv = random_get_entropy(); + arch_init = false; + } + primary_crng.state[i] ^= rv; + } + if (arch_init && trust_cpu && crng_init < 2) { + invalidate_batched_entropy(); + crng_init = 2; + pr_notice("crng init done (trusting CPU's manufacturer)\n"); + } + primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1; + if (ratelimit_disable) { urandom_warning.interval = 0; unseeded_warning.interval = 0; From 16a6e4ae71e238c70b630f7ceed1d742c38b7ef3 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 8 Feb 2022 12:44:28 +0100 Subject: [PATCH 0164/1842] random: ensure early RDSEED goes through mixer on init commit a02cf3d0dd77244fd5333ac48d78871de459ae6d upstream. Continuing the reasoning of "random: use RDSEED instead of RDRAND in entropy extraction" from this series, at init time we also don't want to be xoring RDSEED directly into the crng. Instead it's safer to put it into our entropy collector and then re-extract it, so that it goes through a hash function with preimage resistance. As a matter of hygiene, we also order these now so that the RDSEED byte are hashed in first, followed by the bytes that are likely more predictable (e.g. utsname()). Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 16 +++++----------- 1 file changed, 5 insertions(+), 11 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index c7b3256c543b..092b31354cf3 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1208,24 +1208,18 @@ int __init rand_initialize(void) bool arch_init = true; unsigned long rv; - mix_pool_bytes(&now, sizeof(now)); for (i = BLAKE2S_BLOCK_SIZE; i > 0; i -= sizeof(rv)) { - if (!arch_get_random_seed_long(&rv) && - !arch_get_random_long(&rv)) - rv = random_get_entropy(); - mix_pool_bytes(&rv, sizeof(rv)); - } - mix_pool_bytes(utsname(), sizeof(*(utsname()))); - - extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); - for (i = 4; i < 16; i++) { if (!arch_get_random_seed_long_early(&rv) && !arch_get_random_long_early(&rv)) { rv = random_get_entropy(); arch_init = false; } - primary_crng.state[i] ^= rv; + mix_pool_bytes(&rv, sizeof(rv)); } + mix_pool_bytes(&now, sizeof(now)); + mix_pool_bytes(utsname(), sizeof(*(utsname()))); + + extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); if (arch_init && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); crng_init = 2; From 7a5b9ca583f9f00ab0f0c713379de40a66dc353a Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 8 Feb 2022 13:00:11 +0100 Subject: [PATCH 0165/1842] random: do not xor RDRAND when writing into /dev/random commit 91c2afca290ed3034841c8c8532e69ed9e16cf34 upstream. Continuing the reasoning of "random: ensure early RDSEED goes through mixer on init", we don't want RDRAND interacting with anything without going through the mixer function, as a backdoored CPU could presumably cancel out data during an xor, which it'd have a harder time doing when being forced through a cryptographic hash function. There's actually no need at all to be calling RDRAND in write_pool(), because before we extract from the pool, we always do so with 32 bytes of RDSEED hashed in at that stage. Xoring at this stage is needless and introduces a minor liability. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 14 ++------------ 1 file changed, 2 insertions(+), 12 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 092b31354cf3..3f6510e2db92 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1305,25 +1305,15 @@ static __poll_t random_poll(struct file *file, poll_table *wait) static int write_pool(const char __user *buffer, size_t count) { size_t bytes; - u32 t, buf[16]; + u8 buf[BLAKE2S_BLOCK_SIZE]; const char __user *p = buffer; while (count > 0) { - int b, i = 0; - bytes = min(count, sizeof(buf)); - if (copy_from_user(&buf, p, bytes)) + if (copy_from_user(buf, p, bytes)) return -EFAULT; - - for (b = bytes; b > 0; b -= sizeof(u32), i++) { - if (!arch_get_random_int(&t)) - break; - buf[i] ^= t; - } - count -= bytes; p += bytes; - mix_pool_bytes(buf, bytes); cond_resched(); } From 732872aa2c412457eae31589681d4eba96263e2f Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 9 Feb 2022 01:56:35 +0100 Subject: [PATCH 0166/1842] random: absorb fast pool into input pool after fast load commit c30c575db4858f0bbe5e315ff2e529c782f33a1f upstream. During crng_init == 0, we never credit entropy in add_interrupt_ randomness(), but instead dump it directly into the primary_crng. 
That's fine, except for the fact that we then wind up throwing away that entropy later when we switch to extracting from the input pool and xoring into (and later in this series overwriting) the primary_crng key. The two other early init sites -- add_hwgenerator_randomness()'s use crng_fast_load() and add_device_ randomness()'s use of crng_slow_load() -- always additionally give their inputs to the input pool. But not add_interrupt_randomness(). This commit fixes that shortcoming by calling mix_pool_bytes() after crng_fast_load() in add_interrupt_randomness(). That's partially verboten on PREEMPT_RT, where it implies taking spinlock_t from an IRQ handler. But this also only happens during early boot and then never again after that. Plus it's a trylock so it has the same considerations as calling crng_fast_load(), which we're already using. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Suggested-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index 3f6510e2db92..d3ad07bb8990 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -850,6 +850,10 @@ void add_interrupt_randomness(int irq) crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) { fast_pool->count = 0; fast_pool->last = now; + if (spin_trylock(&input_pool.lock)) { + _mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + spin_unlock(&input_pool.lock); + } } return; } From 95026060d80955bd699523381bb8f412ba92e3c0 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 7 Feb 2022 15:08:49 +0100 Subject: [PATCH 0167/1842] random: use simpler fast key erasure flow on per-cpu keys MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 186873c549df11b63e17062f863654e1501e1524 upstream. Rather than the clunky NUMA full ChaCha state system we had prior, this commit is closer to the original "fast key erasure RNG" proposal from , by simply treating ChaCha keys on a per-cpu basis. All entropy is extracted to a base crng key of 32 bytes. This base crng has a birthdate and a generation counter. When we go to take bytes from the crng, we first check if the birthdate is too old; if it is, we reseed per usual. Then we start working on a per-cpu crng. This per-cpu crng makes sure that it has the same generation counter as the base crng. If it doesn't, it does fast key erasure with the base crng key and uses the output as its new per-cpu key, and then updates its local generation counter. Then, using this per-cpu state, we do ordinary fast key erasure. Half of this first block is used to overwrite the per-cpu crng key for the next call -- this is the fast key erasure RNG idea -- and the other half, along with the ChaCha state, is returned to the caller. If the caller desires more than this remaining half, it can generate more ChaCha blocks, unlocked, using the now detached ChaCha state that was just returned. Crypto-wise, this is more or less what we were doing before, but this simply makes it more explicit and ensures that we always have backtrack protection by not playing games with a shared block counter. The flow looks like this: ──extract()──► base_crng.key ◄──memcpy()───┐ │ │ └──chacha()──────┬─► new_base_key └─► crngs[n].key ◄──memcpy()───┐ │ │ └──chacha()───┬─► new_key └─► random_bytes │ └────► There are a few hairy details around early init. 
Just as was done before, prior to having gathered enough entropy, crng_fast_load() and crng_slow_load() dump bytes directly into the base crng, and when we go to take bytes from the crng, in that case, we're doing fast key erasure with the base crng rather than the fast unlocked per-cpu crngs. This is fine as that's only the state of affairs during very early boot; once the crng initializes we never use these paths again. In the process of all this, the APIs into the crng become a bit simpler: we have get_random_bytes(buf, len) and get_random_bytes_user(buf, len), which both do what you'd expect. All of the details of fast key erasure and per-cpu selection happen only in a very short critical section of crng_make_state(), which selects the right per-cpu key, does the fast key erasure, and returns a local state to the caller's stack. So, we no longer have a need for a separate backtrack function, as this happens all at once here. The API then allows us to extend backtrack protection to batched entropy without really having to do much at all. The result is a bit simpler than before and has fewer foot guns. The init time state machine also gets a lot simpler as we don't need to wait for workqueues to come online and do deferred work. And the multi-core performance should be increased significantly, by virtue of having hardly any locking on the fast path. Cc: Theodore Ts'o Cc: Dominik Brodowski Cc: Sebastian Andrzej Siewior Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 395 ++++++++++++++++++++++++------------------ 1 file changed, 229 insertions(+), 166 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index d3ad07bb8990..626c53c21b7e 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -67,63 +67,19 @@ * Exported interfaces ---- kernel output * -------------------------------------- * - * The primary kernel interface is + * The primary kernel interfaces are: * * void get_random_bytes(void *buf, int nbytes); - * - * This interface will return the requested number of random bytes, - * and place it in the requested buffer. This is equivalent to a - * read from /dev/urandom. - * - * For less critical applications, there are the functions: - * * u32 get_random_u32() * u64 get_random_u64() * unsigned int get_random_int() * unsigned long get_random_long() * - * These are produced by a cryptographic RNG seeded from get_random_bytes, - * and so do not deplete the entropy pool as much. These are recommended - * for most in-kernel operations *if the result is going to be stored in - * the kernel*. - * - * Specifically, the get_random_int() family do not attempt to do - * "anti-backtracking". If you capture the state of the kernel (e.g. - * by snapshotting the VM), you can figure out previous get_random_int() - * return values. But if the value is stored in the kernel anyway, - * this is not a problem. - * - * It *is* safe to expose get_random_int() output to attackers (e.g. as - * network cookies); given outputs 1..n, it's not feasible to predict - * outputs 0 or n+1. The only concern is an attacker who breaks into - * the kernel later; the get_random_int() engine is not reseeded as - * often as the get_random_bytes() one. - * - * get_random_bytes() is needed for keys that need to stay secret after - * they are erased from the kernel. For example, any key that will - * be wrapped and stored encrypted. 
And session encryption keys: we'd - * like to know that after the session is closed and the keys erased, - * the plaintext is unrecoverable to someone who recorded the ciphertext. - * - * But for network ports/cookies, stack canaries, PRNG seeds, address - * space layout randomization, session *authentication* keys, or other - * applications where the sensitive data is stored in the kernel in - * plaintext for as long as it's sensitive, the get_random_int() family - * is just fine. - * - * Consider ASLR. We want to keep the address space secret from an - * outside attacker while the process is running, but once the address - * space is torn down, it's of no use to an attacker any more. And it's - * stored in kernel data structures as long as it's alive, so worrying - * about an attacker's ability to extrapolate it from the get_random_int() - * CRNG is silly. - * - * Even some cryptographic keys are safe to generate with get_random_int(). - * In particular, keys for SipHash are generally fine. Here, knowledge - * of the key authorizes you to do something to a kernel object (inject - * packets to a network connection, or flood a hash table), and the - * key is stored with the object being protected. Once it goes away, - * we no longer care if anyone knows the key. + * These interfaces will return the requested number of random bytes + * into the given buffer or as a return value. This is equivalent to a + * read from /dev/urandom. The get_random_{u32,u64,int,long}() family + * of functions may be higher performance for one-off random integers, + * because they do a bit of buffering. * * prandom_u32() * ------------- @@ -300,20 +256,6 @@ static struct fasync_struct *fasync; static DEFINE_SPINLOCK(random_ready_list_lock); static LIST_HEAD(random_ready_list); -struct crng_state { - u32 state[16]; - unsigned long init_time; - spinlock_t lock; -}; - -static struct crng_state primary_crng = { - .lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock), - .state[0] = CHACHA_CONSTANT_EXPA, - .state[1] = CHACHA_CONSTANT_ND_3, - .state[2] = CHACHA_CONSTANT_2_BY, - .state[3] = CHACHA_CONSTANT_TE_K, -}; - /* * crng_init = 0 --> Uninitialized * 1 --> Initialized @@ -325,9 +267,6 @@ static struct crng_state primary_crng = { static int crng_init = 0; #define crng_ready() (likely(crng_init > 1)) static int crng_init_cnt = 0; -#define CRNG_INIT_CNT_THRESH (2 * CHACHA_KEY_SIZE) -static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]); -static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used); static void process_random_ready_list(void); static void _get_random_bytes(void *buf, int nbytes); @@ -470,7 +409,30 @@ static void credit_entropy_bits(int nbits) * *********************************************************************/ -#define CRNG_RESEED_INTERVAL (300 * HZ) +enum { + CRNG_RESEED_INTERVAL = 300 * HZ, + CRNG_INIT_CNT_THRESH = 2 * CHACHA_KEY_SIZE +}; + +static struct { + u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long)); + unsigned long birth; + unsigned long generation; + spinlock_t lock; +} base_crng = { + .lock = __SPIN_LOCK_UNLOCKED(base_crng.lock) +}; + +struct crng { + u8 key[CHACHA_KEY_SIZE]; + unsigned long generation; + local_lock_t lock; +}; + +static DEFINE_PER_CPU(struct crng, crngs) = { + .generation = ULONG_MAX, + .lock = INIT_LOCAL_LOCK(crngs.lock), +}; static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); @@ -487,22 +449,22 @@ static size_t crng_fast_load(const u8 *cp, size_t len) u8 *p; size_t ret = 0; - if (!spin_trylock_irqsave(&primary_crng.lock, flags)) + if 
(!spin_trylock_irqsave(&base_crng.lock, flags)) return 0; if (crng_init != 0) { - spin_unlock_irqrestore(&primary_crng.lock, flags); + spin_unlock_irqrestore(&base_crng.lock, flags); return 0; } - p = (u8 *)&primary_crng.state[4]; + p = base_crng.key; while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { - p[crng_init_cnt % CHACHA_KEY_SIZE] ^= *cp; + p[crng_init_cnt % sizeof(base_crng.key)] ^= *cp; cp++; crng_init_cnt++; len--; ret++; } if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { invalidate_batched_entropy(); crng_init = 1; } - spin_unlock_irqrestore(&primary_crng.lock, flags); + spin_unlock_irqrestore(&base_crng.lock, flags); if (crng_init == 1) pr_notice("fast init done\n"); return ret; @@ -527,14 +489,14 @@ static int crng_slow_load(const u8 *cp, size_t len) unsigned long flags; static u8 lfsr = 1; u8 tmp; - unsigned int i, max = CHACHA_KEY_SIZE; + unsigned int i, max = sizeof(base_crng.key); const u8 *src_buf = cp; - u8 *dest_buf = (u8 *)&primary_crng.state[4]; + u8 *dest_buf = base_crng.key; - if (!spin_trylock_irqsave(&primary_crng.lock, flags)) + if (!spin_trylock_irqsave(&base_crng.lock, flags)) return 0; if (crng_init != 0) { - spin_unlock_irqrestore(&primary_crng.lock, flags); + spin_unlock_irqrestore(&base_crng.lock, flags); return 0; } if (len > max) @@ -545,38 +507,50 @@ static int crng_slow_load(const u8 *cp, size_t len) lfsr >>= 1; if (tmp & 1) lfsr ^= 0xE1; - tmp = dest_buf[i % CHACHA_KEY_SIZE]; - dest_buf[i % CHACHA_KEY_SIZE] ^= src_buf[i % len] ^ lfsr; + tmp = dest_buf[i % sizeof(base_crng.key)]; + dest_buf[i % sizeof(base_crng.key)] ^= src_buf[i % len] ^ lfsr; lfsr += (tmp << 3) | (tmp >> 5); } - spin_unlock_irqrestore(&primary_crng.lock, flags); + spin_unlock_irqrestore(&base_crng.lock, flags); return 1; } static void crng_reseed(void) { unsigned long flags; - int i, entropy_count; - union { - u8 block[CHACHA_BLOCK_SIZE]; - u32 key[8]; - } buf; + int entropy_count; + unsigned long next_gen; + u8 key[CHACHA_KEY_SIZE]; + /* + * First we make sure we have POOL_MIN_BITS of entropy in the pool, + * and then we drain all of it. Only then can we extract a new key. + */ do { entropy_count = READ_ONCE(input_pool.entropy_count); if (entropy_count < POOL_MIN_BITS) return; } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); - extract_entropy(buf.key, sizeof(buf.key)); + extract_entropy(key, sizeof(key)); wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); - spin_lock_irqsave(&primary_crng.lock, flags); - for (i = 0; i < 8; i++) - primary_crng.state[i + 4] ^= buf.key[i]; - memzero_explicit(&buf, sizeof(buf)); - WRITE_ONCE(primary_crng.init_time, jiffies); - spin_unlock_irqrestore(&primary_crng.lock, flags); + /* + * We copy the new key into the base_crng, overwriting the old one, + * and update the generation counter. We avoid hitting ULONG_MAX, + * because the per-cpu crngs are initialized to ULONG_MAX, so this + * forces new CPUs that come online to always initialize. 
+ */ + spin_lock_irqsave(&base_crng.lock, flags); + memcpy(base_crng.key, key, sizeof(base_crng.key)); + next_gen = base_crng.generation + 1; + if (next_gen == ULONG_MAX) + ++next_gen; + WRITE_ONCE(base_crng.generation, next_gen); + WRITE_ONCE(base_crng.birth, jiffies); + spin_unlock_irqrestore(&base_crng.lock, flags); + memzero_explicit(key, sizeof(key)); + if (crng_init < 2) { invalidate_batched_entropy(); crng_init = 2; @@ -597,77 +571,143 @@ static void crng_reseed(void) } } -static void extract_crng(u8 out[CHACHA_BLOCK_SIZE]) +/* + * The general form here is based on a "fast key erasure RNG" from + * . It generates a ChaCha + * block using the provided key, and then immediately overwites that + * key with half the block. It returns the resultant ChaCha state to the + * user, along with the second half of the block containing 32 bytes of + * random data that may be used; random_data_len may not be greater than + * 32. + */ +static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], + u32 chacha_state[CHACHA_STATE_WORDS], + u8 *random_data, size_t random_data_len) { - unsigned long flags, init_time; + u8 first_block[CHACHA_BLOCK_SIZE]; - if (crng_ready()) { - init_time = READ_ONCE(primary_crng.init_time); - if (time_after(jiffies, init_time + CRNG_RESEED_INTERVAL)) - crng_reseed(); - } - spin_lock_irqsave(&primary_crng.lock, flags); - chacha20_block(&primary_crng.state[0], out); - if (primary_crng.state[12] == 0) - primary_crng.state[13]++; - spin_unlock_irqrestore(&primary_crng.lock, flags); + BUG_ON(random_data_len > 32); + + chacha_init_consts(chacha_state); + memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE); + memset(&chacha_state[12], 0, sizeof(u32) * 4); + chacha20_block(chacha_state, first_block); + + memcpy(key, first_block, CHACHA_KEY_SIZE); + memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len); + memzero_explicit(first_block, sizeof(first_block)); } /* - * Use the leftover bytes from the CRNG block output (if there is - * enough) to mutate the CRNG key to provide backtracking protection. + * This function returns a ChaCha state that you may use for generating + * random data. It also returns up to 32 bytes on its own of random data + * that may be used; random_data_len may not be greater than 32. */ -static void crng_backtrack_protect(u8 tmp[CHACHA_BLOCK_SIZE], int used) +static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], + u8 *random_data, size_t random_data_len) { unsigned long flags; - u32 *s, *d; - int i; + struct crng *crng; - used = round_up(used, sizeof(u32)); - if (used + CHACHA_KEY_SIZE > CHACHA_BLOCK_SIZE) { - extract_crng(tmp); - used = 0; + BUG_ON(random_data_len > 32); + + /* + * For the fast path, we check whether we're ready, unlocked first, and + * then re-check once locked later. In the case where we're really not + * ready, we do fast key erasure with the base_crng directly, because + * this is what crng_{fast,slow}_load mutate during early init. 
+ */ + if (unlikely(!crng_ready())) { + bool ready; + + spin_lock_irqsave(&base_crng.lock, flags); + ready = crng_ready(); + if (!ready) + crng_fast_key_erasure(base_crng.key, chacha_state, + random_data, random_data_len); + spin_unlock_irqrestore(&base_crng.lock, flags); + if (!ready) + return; } - spin_lock_irqsave(&primary_crng.lock, flags); - s = (u32 *)&tmp[used]; - d = &primary_crng.state[4]; - for (i = 0; i < 8; i++) - *d++ ^= *s++; - spin_unlock_irqrestore(&primary_crng.lock, flags); + + /* + * If the base_crng is more than 5 minutes old, we reseed, which + * in turn bumps the generation counter that we check below. + */ + if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL))) + crng_reseed(); + + local_lock_irqsave(&crngs.lock, flags); + crng = raw_cpu_ptr(&crngs); + + /* + * If our per-cpu crng is older than the base_crng, then it means + * somebody reseeded the base_crng. In that case, we do fast key + * erasure on the base_crng, and use its output as the new key + * for our per-cpu crng. This brings us up to date with base_crng. + */ + if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) { + spin_lock(&base_crng.lock); + crng_fast_key_erasure(base_crng.key, chacha_state, + crng->key, sizeof(crng->key)); + crng->generation = base_crng.generation; + spin_unlock(&base_crng.lock); + } + + /* + * Finally, when we've made it this far, our per-cpu crng has an up + * to date key, and we can do fast key erasure with it to produce + * some random data and a ChaCha state for the caller. All other + * branches of this function are "unlikely", so most of the time we + * should wind up here immediately. + */ + crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len); + local_unlock_irqrestore(&crngs.lock, flags); } -static ssize_t extract_crng_user(void __user *buf, size_t nbytes) +static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) { - ssize_t ret = 0, i = CHACHA_BLOCK_SIZE; - u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); - int large_request = (nbytes > 256); + bool large_request = nbytes > 256; + ssize_t ret = 0, len; + u32 chacha_state[CHACHA_STATE_WORDS]; + u8 output[CHACHA_BLOCK_SIZE]; + + if (!nbytes) + return 0; + + len = min_t(ssize_t, 32, nbytes); + crng_make_state(chacha_state, output, len); + + if (copy_to_user(buf, output, len)) + return -EFAULT; + nbytes -= len; + buf += len; + ret += len; while (nbytes) { if (large_request && need_resched()) { - if (signal_pending(current)) { - if (ret == 0) - ret = -ERESTARTSYS; + if (signal_pending(current)) break; - } schedule(); } - extract_crng(tmp); - i = min_t(int, nbytes, CHACHA_BLOCK_SIZE); - if (copy_to_user(buf, tmp, i)) { + chacha20_block(chacha_state, output); + if (unlikely(chacha_state[12] == 0)) + ++chacha_state[13]; + + len = min_t(ssize_t, nbytes, CHACHA_BLOCK_SIZE); + if (copy_to_user(buf, output, len)) { ret = -EFAULT; break; } - nbytes -= i; - buf += i; - ret += i; + nbytes -= len; + buf += len; + ret += len; } - crng_backtrack_protect(tmp, i); - - /* Wipe data just written to memory */ - memzero_explicit(tmp, sizeof(tmp)); + memzero_explicit(chacha_state, sizeof(chacha_state)); + memzero_explicit(output, sizeof(output)); return ret; } @@ -976,23 +1016,36 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, void */ static void _get_random_bytes(void *buf, int nbytes) { - u8 tmp[CHACHA_BLOCK_SIZE] __aligned(4); + u32 chacha_state[CHACHA_STATE_WORDS]; + u8 tmp[CHACHA_BLOCK_SIZE]; + ssize_t len; trace_get_random_bytes(nbytes, 
_RET_IP_); - while (nbytes >= CHACHA_BLOCK_SIZE) { - extract_crng(buf); - buf += CHACHA_BLOCK_SIZE; + if (!nbytes) + return; + + len = min_t(ssize_t, 32, nbytes); + crng_make_state(chacha_state, buf, len); + nbytes -= len; + buf += len; + + while (nbytes) { + if (nbytes < CHACHA_BLOCK_SIZE) { + chacha20_block(chacha_state, tmp); + memcpy(buf, tmp, nbytes); + memzero_explicit(tmp, sizeof(tmp)); + break; + } + + chacha20_block(chacha_state, buf); + if (unlikely(chacha_state[12] == 0)) + ++chacha_state[13]; nbytes -= CHACHA_BLOCK_SIZE; + buf += CHACHA_BLOCK_SIZE; } - if (nbytes > 0) { - extract_crng(tmp); - memcpy(buf, tmp, nbytes); - crng_backtrack_protect(tmp, nbytes); - } else - crng_backtrack_protect(tmp, CHACHA_BLOCK_SIZE); - memzero_explicit(tmp, sizeof(tmp)); + memzero_explicit(chacha_state, sizeof(chacha_state)); } void get_random_bytes(void *buf, int nbytes) @@ -1223,13 +1276,12 @@ int __init rand_initialize(void) mix_pool_bytes(&now, sizeof(now)); mix_pool_bytes(utsname(), sizeof(*(utsname()))); - extract_entropy(&primary_crng.state[4], sizeof(u32) * 12); + extract_entropy(base_crng.key, sizeof(base_crng.key)); if (arch_init && trust_cpu && crng_init < 2) { invalidate_batched_entropy(); crng_init = 2; pr_notice("crng init done (trusting CPU's manufacturer)\n"); } - primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1; if (ratelimit_disable) { urandom_warning.interval = 0; @@ -1261,7 +1313,7 @@ static ssize_t urandom_read_nowarn(struct file *file, char __user *buf, int ret; nbytes = min_t(size_t, nbytes, INT_MAX >> 6); - ret = extract_crng_user(buf, nbytes); + ret = get_random_bytes_user(buf, nbytes); trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count); return ret; } @@ -1567,8 +1619,15 @@ static atomic_t batch_generation = ATOMIC_INIT(0); struct batched_entropy { union { - u64 entropy_u64[CHACHA_BLOCK_SIZE / sizeof(u64)]; - u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)]; + /* + * We make this 1.5x a ChaCha block, so that we get the + * remaining 32 bytes from fast key erasure, plus one full + * block from the detached ChaCha state. We can increase + * the size of this later if needed so long as we keep the + * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. + */ + u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))]; + u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; }; local_lock_t lock; unsigned int position; @@ -1577,14 +1636,13 @@ struct batched_entropy { /* * Get a random word for internal kernel use only. The quality of the random - * number is good as /dev/urandom, but there is no backtrack protection, with - * the goal of being quite fast and not depleting entropy. In order to ensure - * that the randomness provided by this function is okay, the function - * wait_for_random_bytes() should be called and return 0 at least once at any - * point prior. + * number is good as /dev/urandom. In order to ensure that the randomness + * provided by this function is okay, the function wait_for_random_bytes() + * should be called and return 0 at least once at any point prior. 
*/ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = { - .lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock) + .lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock), + .position = UINT_MAX }; u64 get_random_u64(void) @@ -1601,21 +1659,24 @@ u64 get_random_u64(void) batch = raw_cpu_ptr(&batched_entropy_u64); next_gen = atomic_read(&batch_generation); - if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0 || + if (batch->position >= ARRAY_SIZE(batch->entropy_u64) || next_gen != batch->generation) { - extract_crng((u8 *)batch->entropy_u64); + _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); batch->position = 0; batch->generation = next_gen; } - ret = batch->entropy_u64[batch->position++]; + ret = batch->entropy_u64[batch->position]; + batch->entropy_u64[batch->position] = 0; + ++batch->position; local_unlock_irqrestore(&batched_entropy_u64.lock, flags); return ret; } EXPORT_SYMBOL(get_random_u64); static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = { - .lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock) + .lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock), + .position = UINT_MAX }; u32 get_random_u32(void) @@ -1632,14 +1693,16 @@ u32 get_random_u32(void) batch = raw_cpu_ptr(&batched_entropy_u32); next_gen = atomic_read(&batch_generation); - if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0 || + if (batch->position >= ARRAY_SIZE(batch->entropy_u32) || next_gen != batch->generation) { - extract_crng((u8 *)batch->entropy_u32); + _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); batch->position = 0; batch->generation = next_gen; } - ret = batch->entropy_u32[batch->position++]; + ret = batch->entropy_u32[batch->position]; + batch->entropy_u32[batch->position] = 0; + ++batch->position; local_unlock_irqrestore(&batched_entropy_u32.lock, flags); return ret; } From 655a69cb41e008943ea2f0990141863159126b82 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 8 Feb 2022 19:23:17 +0100 Subject: [PATCH 0168/1842] random: use hash function for crng_slow_load() commit 66e4c2b9541503d721e936cc3898c9f25f4591ff upstream. Since we have a hash function that's really fast, and the goal of crng_slow_load() is reportedly to "touch all of the crng's state", we can just hash the old state together with the new state and call it a day. This way we dont need to reason about another LFSR or worry about various attacks there. This code is only ever used at early boot and then never again. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 40 ++++++++++++++-------------------------- 1 file changed, 14 insertions(+), 26 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 626c53c21b7e..866b855c1609 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -477,42 +477,30 @@ static size_t crng_fast_load(const u8 *cp, size_t len) * all), and (2) it doesn't have the performance constraints of * crng_fast_load(). * - * So we do something more comprehensive which is guaranteed to touch - * all of the primary_crng's state, and which uses a LFSR with a - * period of 255 as part of the mixing algorithm. Finally, we do - * *not* advance crng_init_cnt since buffer we may get may be something - * like a fixed DMI table (for example), which might very well be - * unique to the machine, but is otherwise unvarying. + * So, we simply hash the contents in with the current key. 
Finally, + * we do *not* advance crng_init_cnt since buffer we may get may be + * something like a fixed DMI table (for example), which might very + * well be unique to the machine, but is otherwise unvarying. */ -static int crng_slow_load(const u8 *cp, size_t len) +static void crng_slow_load(const u8 *cp, size_t len) { unsigned long flags; - static u8 lfsr = 1; - u8 tmp; - unsigned int i, max = sizeof(base_crng.key); - const u8 *src_buf = cp; - u8 *dest_buf = base_crng.key; + struct blake2s_state hash; + + blake2s_init(&hash, sizeof(base_crng.key)); if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return 0; + return; if (crng_init != 0) { spin_unlock_irqrestore(&base_crng.lock, flags); - return 0; + return; } - if (len > max) - max = len; - for (i = 0; i < max; i++) { - tmp = lfsr; - lfsr >>= 1; - if (tmp & 1) - lfsr ^= 0xE1; - tmp = dest_buf[i % sizeof(base_crng.key)]; - dest_buf[i % sizeof(base_crng.key)] ^= src_buf[i % len] ^ lfsr; - lfsr += (tmp << 3) | (tmp >> 5); - } + blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); + blake2s_update(&hash, cp, len); + blake2s_final(&hash, base_crng.key); + spin_unlock_irqrestore(&base_crng.lock, flags); - return 1; } static void crng_reseed(void) From 07280d2c3f33d47741f42411eb8c976b70c6657a Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 9 Feb 2022 14:43:25 +0100 Subject: [PATCH 0169/1842] random: make more consistent use of integer types commit 04ec96b768c9dd43946b047c3da60dcc66431370 upstream. We've been using a flurry of int, unsigned int, size_t, and ssize_t. Let's unify all of this into size_t where it makes sense, as it does in most places, and leave ssize_t for return values with possible errors. In addition, keeping with the convention of other functions in this file, functions that are dealing with raw bytes now take void * consistently instead of a mix of that and u8 *, because much of the time we're actually passing some other structure that is then interpreted as bytes by the function. We also take the opportunity to fix the outdated and incorrect comment in get_random_bytes_arch(). Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 123 +++++++++++++++------------------- include/linux/hw_random.h | 2 +- include/linux/random.h | 10 +-- include/trace/events/random.h | 79 +++++++++++----------- 4 files changed, 99 insertions(+), 115 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 866b855c1609..357fefa23b96 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -69,7 +69,7 @@ * * The primary kernel interfaces are: * - * void get_random_bytes(void *buf, int nbytes); + * void get_random_bytes(void *buf, size_t nbytes); * u32 get_random_u32() * u64 get_random_u64() * unsigned int get_random_int() @@ -97,14 +97,14 @@ * The current exported interfaces for gathering environmental noise * from the devices are: * - * void add_device_randomness(const void *buf, unsigned int size); + * void add_device_randomness(const void *buf, size_t size); * void add_input_randomness(unsigned int type, unsigned int code, * unsigned int value); * void add_interrupt_randomness(int irq); * void add_disk_randomness(struct gendisk *disk); - * void add_hwgenerator_randomness(const char *buffer, size_t count, + * void add_hwgenerator_randomness(const void *buffer, size_t count, * size_t entropy); - * void add_bootloader_randomness(const void *buf, unsigned int size); + * void add_bootloader_randomness(const void *buf, size_t size); * * add_device_randomness() is for adding data to the random pool that * is likely to differ between two devices (or possibly even per boot). @@ -268,7 +268,7 @@ static int crng_init = 0; #define crng_ready() (likely(crng_init > 1)) static int crng_init_cnt = 0; static void process_random_ready_list(void); -static void _get_random_bytes(void *buf, int nbytes); +static void _get_random_bytes(void *buf, size_t nbytes); static struct ratelimit_state unseeded_warning = RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); @@ -290,7 +290,7 @@ MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); static struct { struct blake2s_state hash; spinlock_t lock; - int entropy_count; + unsigned int entropy_count; } input_pool = { .hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE), BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4, @@ -308,18 +308,12 @@ static void crng_reseed(void); * update the entropy estimate. The caller should call * credit_entropy_bits if this is appropriate. 
*/ -static void _mix_pool_bytes(const void *in, int nbytes) +static void _mix_pool_bytes(const void *in, size_t nbytes) { blake2s_update(&input_pool.hash, in, nbytes); } -static void __mix_pool_bytes(const void *in, int nbytes) -{ - trace_mix_pool_bytes_nolock(nbytes, _RET_IP_); - _mix_pool_bytes(in, nbytes); -} - -static void mix_pool_bytes(const void *in, int nbytes) +static void mix_pool_bytes(const void *in, size_t nbytes) { unsigned long flags; @@ -383,18 +377,18 @@ static void process_random_ready_list(void) spin_unlock_irqrestore(&random_ready_list_lock, flags); } -static void credit_entropy_bits(int nbits) +static void credit_entropy_bits(size_t nbits) { - int entropy_count, orig; + unsigned int entropy_count, orig, add; - if (nbits <= 0) + if (!nbits) return; - nbits = min(nbits, POOL_BITS); + add = min_t(size_t, nbits, POOL_BITS); do { orig = READ_ONCE(input_pool.entropy_count); - entropy_count = min(POOL_BITS, orig + nbits); + entropy_count = min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_); @@ -443,10 +437,10 @@ static void invalidate_batched_entropy(void); * path. So we can't afford to dilly-dally. Returns the number of * bytes processed from cp. */ -static size_t crng_fast_load(const u8 *cp, size_t len) +static size_t crng_fast_load(const void *cp, size_t len) { unsigned long flags; - u8 *p; + const u8 *src = (const u8 *)cp; size_t ret = 0; if (!spin_trylock_irqsave(&base_crng.lock, flags)) @@ -455,10 +449,9 @@ static size_t crng_fast_load(const u8 *cp, size_t len) spin_unlock_irqrestore(&base_crng.lock, flags); return 0; } - p = base_crng.key; while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { - p[crng_init_cnt % sizeof(base_crng.key)] ^= *cp; - cp++; crng_init_cnt++; len--; ret++; + base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^= *src; + src++; crng_init_cnt++; len--; ret++; } if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { invalidate_batched_entropy(); @@ -482,7 +475,7 @@ static size_t crng_fast_load(const u8 *cp, size_t len) * something like a fixed DMI table (for example), which might very * well be unique to the machine, but is otherwise unvarying. */ -static void crng_slow_load(const u8 *cp, size_t len) +static void crng_slow_load(const void *cp, size_t len) { unsigned long flags; struct blake2s_state hash; @@ -656,14 +649,15 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) { bool large_request = nbytes > 256; - ssize_t ret = 0, len; + ssize_t ret = 0; + size_t len; u32 chacha_state[CHACHA_STATE_WORDS]; u8 output[CHACHA_BLOCK_SIZE]; if (!nbytes) return 0; - len = min_t(ssize_t, 32, nbytes); + len = min_t(size_t, 32, nbytes); crng_make_state(chacha_state, output, len); if (copy_to_user(buf, output, len)) @@ -683,7 +677,7 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) if (unlikely(chacha_state[12] == 0)) ++chacha_state[13]; - len = min_t(ssize_t, nbytes, CHACHA_BLOCK_SIZE); + len = min_t(size_t, nbytes, CHACHA_BLOCK_SIZE); if (copy_to_user(buf, output, len)) { ret = -EFAULT; break; @@ -721,7 +715,7 @@ struct timer_rand_state { * the entropy pool having similar initial state across largely * identical devices. 
*/ -void add_device_randomness(const void *buf, unsigned int size) +void add_device_randomness(const void *buf, size_t size) { unsigned long time = random_get_entropy() ^ jiffies; unsigned long flags; @@ -749,7 +743,7 @@ static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE; * keyboard scan codes, and 256 upwards for interrupts. * */ -static void add_timer_randomness(struct timer_rand_state *state, unsigned num) +static void add_timer_randomness(struct timer_rand_state *state, unsigned int num) { struct { long jiffies; @@ -793,7 +787,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num) * Round down by 1 bit on general principles, * and limit entropy estimate to 12 bits. */ - credit_entropy_bits(min_t(int, fls(delta >> 1), 11)); + credit_entropy_bits(min_t(unsigned int, fls(delta >> 1), 11)); } void add_input_randomness(unsigned int type, unsigned int code, @@ -874,8 +868,8 @@ void add_interrupt_randomness(int irq) add_interrupt_bench(cycles); if (unlikely(crng_init == 0)) { - if ((fast_pool->count >= 64) && - crng_fast_load((u8 *)fast_pool->pool, sizeof(fast_pool->pool)) > 0) { + if (fast_pool->count >= 64 && + crng_fast_load(fast_pool->pool, sizeof(fast_pool->pool)) > 0) { fast_pool->count = 0; fast_pool->last = now; if (spin_trylock(&input_pool.lock)) { @@ -893,7 +887,7 @@ void add_interrupt_randomness(int irq) return; fast_pool->last = now; - __mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + _mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); spin_unlock(&input_pool.lock); fast_pool->count = 0; @@ -1002,18 +996,18 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, void * wait_for_random_bytes() should be called and return 0 at least once * at any point prior. */ -static void _get_random_bytes(void *buf, int nbytes) +static void _get_random_bytes(void *buf, size_t nbytes) { u32 chacha_state[CHACHA_STATE_WORDS]; u8 tmp[CHACHA_BLOCK_SIZE]; - ssize_t len; + size_t len; trace_get_random_bytes(nbytes, _RET_IP_); if (!nbytes) return; - len = min_t(ssize_t, 32, nbytes); + len = min_t(size_t, 32, nbytes); crng_make_state(chacha_state, buf, len); nbytes -= len; buf += len; @@ -1036,7 +1030,7 @@ static void _get_random_bytes(void *buf, int nbytes) memzero_explicit(chacha_state, sizeof(chacha_state)); } -void get_random_bytes(void *buf, int nbytes) +void get_random_bytes(void *buf, size_t nbytes) { static void *previous; @@ -1197,25 +1191,19 @@ EXPORT_SYMBOL(del_random_ready_callback); /* * This function will use the architecture-specific hardware random - * number generator if it is available. The arch-specific hw RNG will - * almost certainly be faster than what we can do in software, but it - * is impossible to verify that it is implemented securely (as - * opposed, to, say, the AES encryption of a sequence number using a - * key known by the NSA). So it's useful if we need the speed, but - * only if we're willing to trust the hardware manufacturer not to - * have put in a back door. - * - * Return number of bytes filled in. + * number generator if it is available. It is not recommended for + * use. Use get_random_bytes() instead. It returns the number of + * bytes filled in. 
*/ -int __must_check get_random_bytes_arch(void *buf, int nbytes) +size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) { - int left = nbytes; + size_t left = nbytes; u8 *p = buf; trace_get_random_bytes_arch(left, _RET_IP_); while (left) { unsigned long v; - int chunk = min_t(int, left, sizeof(unsigned long)); + size_t chunk = min_t(size_t, left, sizeof(unsigned long)); if (!arch_get_random_long(&v)) break; @@ -1248,12 +1236,12 @@ early_param("random.trust_cpu", parse_trust_cpu); */ int __init rand_initialize(void) { - int i; + size_t i; ktime_t now = ktime_get_real(); bool arch_init = true; unsigned long rv; - for (i = BLAKE2S_BLOCK_SIZE; i > 0; i -= sizeof(rv)) { + for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { if (!arch_get_random_seed_long_early(&rv) && !arch_get_random_long_early(&rv)) { rv = random_get_entropy(); @@ -1302,7 +1290,7 @@ static ssize_t urandom_read_nowarn(struct file *file, char __user *buf, nbytes = min_t(size_t, nbytes, INT_MAX >> 6); ret = get_random_bytes_user(buf, nbytes); - trace_urandom_read(8 * nbytes, 0, input_pool.entropy_count); + trace_urandom_read(nbytes, input_pool.entropy_count); return ret; } @@ -1346,19 +1334,18 @@ static __poll_t random_poll(struct file *file, poll_table *wait) return mask; } -static int write_pool(const char __user *buffer, size_t count) +static int write_pool(const char __user *ubuf, size_t count) { - size_t bytes; - u8 buf[BLAKE2S_BLOCK_SIZE]; - const char __user *p = buffer; + size_t len; + u8 block[BLAKE2S_BLOCK_SIZE]; - while (count > 0) { - bytes = min(count, sizeof(buf)); - if (copy_from_user(buf, p, bytes)) + while (count) { + len = min(count, sizeof(block)); + if (copy_from_user(block, ubuf, len)) return -EFAULT; - count -= bytes; - p += bytes; - mix_pool_bytes(buf, bytes); + count -= len; + ubuf += len; + mix_pool_bytes(block, len); cond_resched(); } @@ -1368,7 +1355,7 @@ static int write_pool(const char __user *buffer, size_t count) static ssize_t random_write(struct file *file, const char __user *buffer, size_t count, loff_t *ppos) { - size_t ret; + int ret; ret = write_pool(buffer, count); if (ret) @@ -1464,8 +1451,6 @@ const struct file_operations urandom_fops = { SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, flags) { - int ret; - if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE)) return -EINVAL; @@ -1480,6 +1465,8 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, count = INT_MAX; if (!(flags & GRND_INSECURE) && !crng_ready()) { + int ret; + if (flags & GRND_NONBLOCK) return -EAGAIN; ret = wait_for_random_bytes(); @@ -1741,7 +1728,7 @@ unsigned long randomize_page(unsigned long start, unsigned long range) * Those devices may produce endless random bits and will be throttled * when our pool is full. */ -void add_hwgenerator_randomness(const char *buffer, size_t count, +void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { if (unlikely(crng_init == 0)) { @@ -1772,7 +1759,7 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); * it would be regarded as device data. * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER. 
*/ -void add_bootloader_randomness(const void *buf, unsigned int size) +void add_bootloader_randomness(const void *buf, size_t size) { if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER)) add_hwgenerator_randomness(buf, size, size * 8); diff --git a/include/linux/hw_random.h b/include/linux/hw_random.h index 8e6dd908da21..1a9fc38f8938 100644 --- a/include/linux/hw_random.h +++ b/include/linux/hw_random.h @@ -61,6 +61,6 @@ extern int devm_hwrng_register(struct device *dev, struct hwrng *rng); extern void hwrng_unregister(struct hwrng *rng); extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng); /** Feed random bits into the pool. */ -extern void add_hwgenerator_randomness(const char *buffer, size_t count, size_t entropy); +extern void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy); #endif /* LINUX_HWRANDOM_H_ */ diff --git a/include/linux/random.h b/include/linux/random.h index c45b2693e51f..e92efb39779c 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -20,8 +20,8 @@ struct random_ready_callback { struct module *owner; }; -extern void add_device_randomness(const void *, unsigned int); -extern void add_bootloader_randomness(const void *, unsigned int); +extern void add_device_randomness(const void *, size_t); +extern void add_bootloader_randomness(const void *, size_t); #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) static inline void add_latent_entropy(void) @@ -37,13 +37,13 @@ extern void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; extern void add_interrupt_randomness(int irq) __latent_entropy; -extern void get_random_bytes(void *buf, int nbytes); +extern void get_random_bytes(void *buf, size_t nbytes); extern int wait_for_random_bytes(void); extern int __init rand_initialize(void); extern bool rng_is_initialized(void); extern int add_random_ready_callback(struct random_ready_callback *rdy); extern void del_random_ready_callback(struct random_ready_callback *rdy); -extern int __must_check get_random_bytes_arch(void *buf, int nbytes); +extern size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); #ifndef MODULE extern const struct file_operations random_fops, urandom_fops; @@ -87,7 +87,7 @@ static inline unsigned long get_random_canary(void) /* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes). * Returns the result of the call to wait_for_random_bytes. 
*/ -static inline int get_random_bytes_wait(void *buf, int nbytes) +static inline int get_random_bytes_wait(void *buf, size_t nbytes) { int ret = wait_for_random_bytes(); get_random_bytes(buf, nbytes); diff --git a/include/trace/events/random.h b/include/trace/events/random.h index ad149aeaf42c..0609a2810a12 100644 --- a/include/trace/events/random.h +++ b/include/trace/events/random.h @@ -9,13 +9,13 @@ #include TRACE_EVENT(add_device_randomness, - TP_PROTO(int bytes, unsigned long IP), + TP_PROTO(size_t bytes, unsigned long IP), TP_ARGS(bytes, IP), TP_STRUCT__entry( - __field( int, bytes ) - __field(unsigned long, IP ) + __field(size_t, bytes ) + __field(unsigned long, IP ) ), TP_fast_assign( @@ -23,18 +23,18 @@ TRACE_EVENT(add_device_randomness, __entry->IP = IP; ), - TP_printk("bytes %d caller %pS", + TP_printk("bytes %zu caller %pS", __entry->bytes, (void *)__entry->IP) ); DECLARE_EVENT_CLASS(random__mix_pool_bytes, - TP_PROTO(int bytes, unsigned long IP), + TP_PROTO(size_t bytes, unsigned long IP), TP_ARGS(bytes, IP), TP_STRUCT__entry( - __field( int, bytes ) - __field(unsigned long, IP ) + __field(size_t, bytes ) + __field(unsigned long, IP ) ), TP_fast_assign( @@ -42,12 +42,12 @@ DECLARE_EVENT_CLASS(random__mix_pool_bytes, __entry->IP = IP; ), - TP_printk("input pool: bytes %d caller %pS", + TP_printk("input pool: bytes %zu caller %pS", __entry->bytes, (void *)__entry->IP) ); DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes, - TP_PROTO(int bytes, unsigned long IP), + TP_PROTO(size_t bytes, unsigned long IP), TP_ARGS(bytes, IP) ); @@ -59,13 +59,13 @@ DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock, ); TRACE_EVENT(credit_entropy_bits, - TP_PROTO(int bits, int entropy_count, unsigned long IP), + TP_PROTO(size_t bits, size_t entropy_count, unsigned long IP), TP_ARGS(bits, entropy_count, IP), TP_STRUCT__entry( - __field( int, bits ) - __field( int, entropy_count ) + __field(size_t, bits ) + __field(size_t, entropy_count ) __field(unsigned long, IP ) ), @@ -75,34 +75,34 @@ TRACE_EVENT(credit_entropy_bits, __entry->IP = IP; ), - TP_printk("input pool: bits %d entropy_count %d caller %pS", + TP_printk("input pool: bits %zu entropy_count %zu caller %pS", __entry->bits, __entry->entropy_count, (void *)__entry->IP) ); TRACE_EVENT(add_input_randomness, - TP_PROTO(int input_bits), + TP_PROTO(size_t input_bits), TP_ARGS(input_bits), TP_STRUCT__entry( - __field( int, input_bits ) + __field(size_t, input_bits ) ), TP_fast_assign( __entry->input_bits = input_bits; ), - TP_printk("input_pool_bits %d", __entry->input_bits) + TP_printk("input_pool_bits %zu", __entry->input_bits) ); TRACE_EVENT(add_disk_randomness, - TP_PROTO(dev_t dev, int input_bits), + TP_PROTO(dev_t dev, size_t input_bits), TP_ARGS(dev, input_bits), TP_STRUCT__entry( - __field( dev_t, dev ) - __field( int, input_bits ) + __field(dev_t, dev ) + __field(size_t, input_bits ) ), TP_fast_assign( @@ -110,17 +110,17 @@ TRACE_EVENT(add_disk_randomness, __entry->input_bits = input_bits; ), - TP_printk("dev %d,%d input_pool_bits %d", MAJOR(__entry->dev), + TP_printk("dev %d,%d input_pool_bits %zu", MAJOR(__entry->dev), MINOR(__entry->dev), __entry->input_bits) ); DECLARE_EVENT_CLASS(random__get_random_bytes, - TP_PROTO(int nbytes, unsigned long IP), + TP_PROTO(size_t nbytes, unsigned long IP), TP_ARGS(nbytes, IP), TP_STRUCT__entry( - __field( int, nbytes ) + __field(size_t, nbytes ) __field(unsigned long, IP ) ), @@ -129,29 +129,29 @@ DECLARE_EVENT_CLASS(random__get_random_bytes, __entry->IP = IP; ), - TP_printk("nbytes %d 
caller %pS", __entry->nbytes, (void *)__entry->IP) + TP_printk("nbytes %zu caller %pS", __entry->nbytes, (void *)__entry->IP) ); DEFINE_EVENT(random__get_random_bytes, get_random_bytes, - TP_PROTO(int nbytes, unsigned long IP), + TP_PROTO(size_t nbytes, unsigned long IP), TP_ARGS(nbytes, IP) ); DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch, - TP_PROTO(int nbytes, unsigned long IP), + TP_PROTO(size_t nbytes, unsigned long IP), TP_ARGS(nbytes, IP) ); DECLARE_EVENT_CLASS(random__extract_entropy, - TP_PROTO(int nbytes, int entropy_count), + TP_PROTO(size_t nbytes, size_t entropy_count), TP_ARGS(nbytes, entropy_count), TP_STRUCT__entry( - __field( int, nbytes ) - __field( int, entropy_count ) + __field( size_t, nbytes ) + __field( size_t, entropy_count ) ), TP_fast_assign( @@ -159,37 +159,34 @@ DECLARE_EVENT_CLASS(random__extract_entropy, __entry->entropy_count = entropy_count; ), - TP_printk("input pool: nbytes %d entropy_count %d", + TP_printk("input pool: nbytes %zu entropy_count %zu", __entry->nbytes, __entry->entropy_count) ); DEFINE_EVENT(random__extract_entropy, extract_entropy, - TP_PROTO(int nbytes, int entropy_count), + TP_PROTO(size_t nbytes, size_t entropy_count), TP_ARGS(nbytes, entropy_count) ); TRACE_EVENT(urandom_read, - TP_PROTO(int got_bits, int pool_left, int input_left), + TP_PROTO(size_t nbytes, size_t entropy_count), - TP_ARGS(got_bits, pool_left, input_left), + TP_ARGS(nbytes, entropy_count), TP_STRUCT__entry( - __field( int, got_bits ) - __field( int, pool_left ) - __field( int, input_left ) + __field( size_t, nbytes ) + __field( size_t, entropy_count ) ), TP_fast_assign( - __entry->got_bits = got_bits; - __entry->pool_left = pool_left; - __entry->input_left = input_left; + __entry->nbytes = nbytes; + __entry->entropy_count = entropy_count; ), - TP_printk("got_bits %d nonblocking_pool_entropy_left %d " - "input_entropy_left %d", __entry->got_bits, - __entry->pool_left, __entry->input_left) + TP_printk("reading: nbytes %zu entropy_count %zu", + __entry->nbytes, __entry->entropy_count) ); TRACE_EVENT(prandom_u32, From 63c1aae40ac16ed136188bb9c9b595c0147fe6fa Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 7 Feb 2022 23:37:13 +0100 Subject: [PATCH 0170/1842] random: remove outdated INT_MAX >> 6 check in urandom_read() commit 434537ae54ad37e93555de21b6ac8133d6d773a9 upstream. In 79a8468747c5 ("random: check for increase of entropy_count because of signed conversion"), a number of checks were added around what values were passed to account(), because account() was doing fancy fixed point fractional arithmetic, and a user had some ability to pass large values directly into it. One of things in that commit was limiting those values to INT_MAX >> 6. The first >> 3 was for bytes to bits, and the next >> 3 was for bits to 1/8 fractional bits. However, for several years now, urandom reads no longer touch entropy accounting, and so this check serves no purpose. The current flow is: urandom_read_nowarn()-->get_random_bytes_user()-->chacha20_block() Of course, we don't want that size_t to be truncated when adding it into the ssize_t. But we arrive at urandom_read_nowarn() in the first place either via ordinary fops, which limits reads to MAX_RW_COUNT, or via getrandom() which limits reads to INT_MAX. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 357fefa23b96..88f55921757f 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1286,9 +1286,8 @@ void rand_initialize_disk(struct gendisk *disk) static ssize_t urandom_read_nowarn(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) { - int ret; + ssize_t ret; - nbytes = min_t(size_t, nbytes, INT_MAX >> 6); ret = get_random_bytes_user(buf, nbytes); trace_urandom_read(nbytes, input_pool.entropy_count); return ret; From bb63851c25576e83cabe2d26c6548f3d5085d0e7 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 9 Feb 2022 18:42:13 +0100 Subject: [PATCH 0171/1842] random: zero buffer after reading entropy from userspace commit 7b5164fb1279bf0251371848e40bae646b59b3a8 upstream. This buffer may contain entropic data that shouldn't stick around longer than needed, so zero out the temporary buffer at the end of write_pool(). Reviewed-by: Dominik Brodowski Reviewed-by: Jann Horn Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 88f55921757f..610840356407 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1336,19 +1336,24 @@ static __poll_t random_poll(struct file *file, poll_table *wait) static int write_pool(const char __user *ubuf, size_t count) { size_t len; + int ret = 0; u8 block[BLAKE2S_BLOCK_SIZE]; while (count) { len = min(count, sizeof(block)); - if (copy_from_user(block, ubuf, len)) - return -EFAULT; + if (copy_from_user(block, ubuf, len)) { + ret = -EFAULT; + goto out; + } count -= len; ubuf += len; mix_pool_bytes(block, len); cond_resched(); } - return 0; +out: + memzero_explicit(block, sizeof(block)); + return ret; } static ssize_t random_write(struct file *file, const char __user *buffer, From adc32acf23db38eaac913dc5694935a18c95494f Mon Sep 17 00:00:00 2001 From: Dominik Brodowski Date: Wed, 9 Feb 2022 19:57:06 +0100 Subject: [PATCH 0172/1842] random: fix locking for crng_init in crng_reseed() commit 7191c628fe07b70d3f37de736d173d1b115396ed upstream. crng_init is protected by primary_crng->lock. Therefore, we need to hold this lock when increasing crng_init to 2. As we shouldn't hold this lock for too long, only hold it for those parts which require protection. Signed-off-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 610840356407..1da8687610ea 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -502,6 +502,7 @@ static void crng_reseed(void) int entropy_count; unsigned long next_gen; u8 key[CHACHA_KEY_SIZE]; + bool finalize_init = false; /* * First we make sure we have POOL_MIN_BITS of entropy in the pool, @@ -529,12 +530,14 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - spin_unlock_irqrestore(&base_crng.lock, flags); - memzero_explicit(key, sizeof(key)); - if (crng_init < 2) { invalidate_batched_entropy(); crng_init = 2; + finalize_init = true; + } + spin_unlock_irqrestore(&base_crng.lock, flags); + memzero_explicit(key, sizeof(key)); + if (finalize_init) { process_random_ready_list(); wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); From 28683a18853796b9fbb68febc62f3b8253f5b4af Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 9 Feb 2022 22:46:48 +0100 Subject: [PATCH 0173/1842] random: tie batched entropy generation to base_crng generation commit 0791e8b655cc373718f0f58800fdc625a3447ac5 upstream. Now that we have an explicit base_crng generation counter, we don't need a separate one for batched entropy. Rather, we can just move the generation forward every time we change crng_init state or update the base_crng key. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 29 ++++++++--------------------- 1 file changed, 8 insertions(+), 21 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 1da8687610ea..14574a211951 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -430,8 +430,6 @@ static DEFINE_PER_CPU(struct crng, crngs) = { static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); -static void invalidate_batched_entropy(void); - /* * crng_fast_load() can be called by code in the interrupt service * path. So we can't afford to dilly-dally. 
Returns the number of @@ -454,7 +452,7 @@ static size_t crng_fast_load(const void *cp, size_t len) src++; crng_init_cnt++; len--; ret++; } if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { - invalidate_batched_entropy(); + ++base_crng.generation; crng_init = 1; } spin_unlock_irqrestore(&base_crng.lock, flags); @@ -531,7 +529,6 @@ static void crng_reseed(void) WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); if (crng_init < 2) { - invalidate_batched_entropy(); crng_init = 2; finalize_init = true; } @@ -1256,8 +1253,9 @@ int __init rand_initialize(void) mix_pool_bytes(utsname(), sizeof(*(utsname()))); extract_entropy(base_crng.key, sizeof(base_crng.key)); + ++base_crng.generation; + if (arch_init && trust_cpu && crng_init < 2) { - invalidate_batched_entropy(); crng_init = 2; pr_notice("crng init done (trusting CPU's manufacturer)\n"); } @@ -1597,8 +1595,6 @@ struct ctl_table random_table[] = { }; #endif /* CONFIG_SYSCTL */ -static atomic_t batch_generation = ATOMIC_INIT(0); - struct batched_entropy { union { /* @@ -1612,8 +1608,8 @@ struct batched_entropy { u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; }; local_lock_t lock; + unsigned long generation; unsigned int position; - int generation; }; /* @@ -1633,14 +1629,14 @@ u64 get_random_u64(void) unsigned long flags; struct batched_entropy *batch; static void *previous; - int next_gen; + unsigned long next_gen; warn_unseeded_randomness(&previous); local_lock_irqsave(&batched_entropy_u64.lock, flags); batch = raw_cpu_ptr(&batched_entropy_u64); - next_gen = atomic_read(&batch_generation); + next_gen = READ_ONCE(base_crng.generation); if (batch->position >= ARRAY_SIZE(batch->entropy_u64) || next_gen != batch->generation) { _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); @@ -1667,14 +1663,14 @@ u32 get_random_u32(void) unsigned long flags; struct batched_entropy *batch; static void *previous; - int next_gen; + unsigned long next_gen; warn_unseeded_randomness(&previous); local_lock_irqsave(&batched_entropy_u32.lock, flags); batch = raw_cpu_ptr(&batched_entropy_u32); - next_gen = atomic_read(&batch_generation); + next_gen = READ_ONCE(base_crng.generation); if (batch->position >= ARRAY_SIZE(batch->entropy_u32) || next_gen != batch->generation) { _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); @@ -1690,15 +1686,6 @@ u32 get_random_u32(void) } EXPORT_SYMBOL(get_random_u32); -/* It's important to invalidate all potential batched entropy that might - * be stored before the crng is initialized, which we can do lazily by - * bumping the generation counter. - */ -static void invalidate_batched_entropy(void) -{ - atomic_inc(&batch_generation); -} - /** * randomize_page - Generate a random, page aligned address * @start: The smallest acceptable address the caller will take. From 17ad693cd21451924d4267df9235916b0982b455 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 10 Feb 2022 16:35:24 +0100 Subject: [PATCH 0174/1842] random: remove ifdef'd out interrupt bench commit 95e6060c20a7f5db60163274c5222a725ac118f9 upstream. With tools like kbench9000 giving more finegrained responses, and this basically never having been used ever since it was initially added, let's just get rid of this. There *is* still work to be done on the interrupt handler, but this really isn't the way it's being developed. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- Documentation/admin-guide/sysctl/kernel.rst | 9 ----- drivers/char/random.c | 40 --------------------- 2 files changed, 49 deletions(-) diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst index ffc392289c68..c776ecc7a81f 100644 --- a/Documentation/admin-guide/sysctl/kernel.rst +++ b/Documentation/admin-guide/sysctl/kernel.rst @@ -1023,15 +1023,6 @@ This is a directory, with the following entries: are woken up. This file is writable for compatibility purposes, but writing to it has no effect on any RNG behavior. -If ``drivers/char/random.c`` is built with ``ADD_INTERRUPT_BENCH`` -defined, these additional entries are present: - -* ``add_interrupt_avg_cycles``: the average number of cycles between - interrupts used to feed the pool; - -* ``add_interrupt_avg_deviation``: the standard deviation seen on the - number of cycles between interrupts used to feed the pool. - randomize_va_space ================== diff --git a/drivers/char/random.c b/drivers/char/random.c index 14574a211951..d1d01be190cc 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -240,8 +240,6 @@ #define CREATE_TRACE_POINTS #include -/* #define ADD_INTERRUPT_BENCH */ - enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */ @@ -808,27 +806,6 @@ EXPORT_SYMBOL_GPL(add_input_randomness); static DEFINE_PER_CPU(struct fast_pool, irq_randomness); -#ifdef ADD_INTERRUPT_BENCH -static unsigned long avg_cycles, avg_deviation; - -#define AVG_SHIFT 8 /* Exponential average factor k=1/256 */ -#define FIXED_1_2 (1 << (AVG_SHIFT - 1)) - -static void add_interrupt_bench(cycles_t start) -{ - long delta = random_get_entropy() - start; - - /* Use a weighted moving average */ - delta = delta - ((avg_cycles + FIXED_1_2) >> AVG_SHIFT); - avg_cycles += delta; - /* And average deviation */ - delta = abs(delta) - ((avg_deviation + FIXED_1_2) >> AVG_SHIFT); - avg_deviation += delta; -} -#else -#define add_interrupt_bench(x) -#endif - static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) { u32 *ptr = (u32 *)regs; @@ -865,7 +842,6 @@ void add_interrupt_randomness(int irq) (sizeof(ip) > 4) ? ip >> 32 : get_reg(fast_pool, regs); fast_mix(fast_pool); - add_interrupt_bench(cycles); if (unlikely(crng_init == 0)) { if (fast_pool->count >= 64 && @@ -1575,22 +1551,6 @@ struct ctl_table random_table[] = { .mode = 0444, .proc_handler = proc_do_uuid, }, -#ifdef ADD_INTERRUPT_BENCH - { - .procname = "add_interrupt_avg_cycles", - .data = &avg_cycles, - .maxlen = sizeof(avg_cycles), - .mode = 0444, - .proc_handler = proc_doulongvec_minmax, - }, - { - .procname = "add_interrupt_avg_deviation", - .data = &avg_deviation, - .maxlen = sizeof(avg_deviation), - .mode = 0444, - .proc_handler = proc_doulongvec_minmax, - }, -#endif { } }; #endif /* CONFIG_SYSCTL */ From 9342656c013d3f3c3667298f0e2e6b0cfb69f0e8 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 10 Feb 2022 16:40:44 +0100 Subject: [PATCH 0175/1842] random: remove unused tracepoints commit 14c174633f349cb41ea90c2c0aaddac157012f74 upstream. These explicit tracepoints aren't really used and show sign of aging. It's work to keep these up to date, and before I attempted to keep them up to date, they weren't up to date, which indicates that they're not really used. These days there are better ways of introspecting anyway. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 30 +---- include/trace/events/random.h | 212 ---------------------------------- lib/random32.c | 3 +- 3 files changed, 4 insertions(+), 241 deletions(-) delete mode 100644 include/trace/events/random.h diff --git a/drivers/char/random.c b/drivers/char/random.c index d1d01be190cc..b5889b52a251 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -237,9 +237,6 @@ #include #include -#define CREATE_TRACE_POINTS -#include - enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */ @@ -315,7 +312,6 @@ static void mix_pool_bytes(const void *in, size_t nbytes) { unsigned long flags; - trace_mix_pool_bytes(nbytes, _RET_IP_); spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(in, nbytes); spin_unlock_irqrestore(&input_pool.lock, flags); @@ -389,8 +385,6 @@ static void credit_entropy_bits(size_t nbits) entropy_count = min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); - trace_credit_entropy_bits(nbits, entropy_count, _RET_IP_); - if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) crng_reseed(); } @@ -721,7 +715,6 @@ void add_device_randomness(const void *buf, size_t size) if (!crng_ready() && size) crng_slow_load(buf, size); - trace_add_device_randomness(size, _RET_IP_); spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(buf, size); _mix_pool_bytes(&time, sizeof(time)); @@ -800,7 +793,6 @@ void add_input_randomness(unsigned int type, unsigned int code, last_value = value; add_timer_randomness(&input_timer_state, (type << 4) ^ code ^ (code >> 4) ^ value); - trace_add_input_randomness(input_pool.entropy_count); } EXPORT_SYMBOL_GPL(add_input_randomness); @@ -880,7 +872,6 @@ void add_disk_randomness(struct gendisk *disk) return; /* first major is 1, so we get >= 0x200 here */ add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); - trace_add_disk_randomness(disk_devt(disk), input_pool.entropy_count); } EXPORT_SYMBOL_GPL(add_disk_randomness); #endif @@ -905,8 +896,6 @@ static void extract_entropy(void *buf, size_t nbytes) } block; size_t i; - trace_extract_entropy(nbytes, input_pool.entropy_count); - for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) { if (!arch_get_random_seed_long(&block.rdseed[i]) && !arch_get_random_long(&block.rdseed[i])) @@ -978,8 +967,6 @@ static void _get_random_bytes(void *buf, size_t nbytes) u8 tmp[CHACHA_BLOCK_SIZE]; size_t len; - trace_get_random_bytes(nbytes, _RET_IP_); - if (!nbytes) return; @@ -1176,7 +1163,6 @@ size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) size_t left = nbytes; u8 *p = buf; - trace_get_random_bytes_arch(left, _RET_IP_); while (left) { unsigned long v; size_t chunk = min_t(size_t, left, sizeof(unsigned long)); @@ -1260,16 +1246,6 @@ void rand_initialize_disk(struct gendisk *disk) } #endif -static ssize_t urandom_read_nowarn(struct file *file, char __user *buf, - size_t nbytes, loff_t *ppos) -{ - ssize_t ret; - - ret = get_random_bytes_user(buf, nbytes); - trace_urandom_read(nbytes, input_pool.entropy_count); - return ret; -} - static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) { @@ -1282,7 +1258,7 @@ static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, current->comm, nbytes); } - return urandom_read_nowarn(file, buf, nbytes, ppos); + return get_random_bytes_user(buf, nbytes); } static ssize_t random_read(struct file *file, char __user *buf, size_t 
nbytes, @@ -1293,7 +1269,7 @@ static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes, ret = wait_for_random_bytes(); if (ret != 0) return ret; - return urandom_read_nowarn(file, buf, nbytes, ppos); + return get_random_bytes_user(buf, nbytes); } static __poll_t random_poll(struct file *file, poll_table *wait) @@ -1454,7 +1430,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, if (unlikely(ret)) return ret; } - return urandom_read_nowarn(NULL, buf, count, NULL); + return get_random_bytes_user(buf, count); } /******************************************************************** diff --git a/include/trace/events/random.h b/include/trace/events/random.h deleted file mode 100644 index 0609a2810a12..000000000000 --- a/include/trace/events/random.h +++ /dev/null @@ -1,212 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#undef TRACE_SYSTEM -#define TRACE_SYSTEM random - -#if !defined(_TRACE_RANDOM_H) || defined(TRACE_HEADER_MULTI_READ) -#define _TRACE_RANDOM_H - -#include -#include - -TRACE_EVENT(add_device_randomness, - TP_PROTO(size_t bytes, unsigned long IP), - - TP_ARGS(bytes, IP), - - TP_STRUCT__entry( - __field(size_t, bytes ) - __field(unsigned long, IP ) - ), - - TP_fast_assign( - __entry->bytes = bytes; - __entry->IP = IP; - ), - - TP_printk("bytes %zu caller %pS", - __entry->bytes, (void *)__entry->IP) -); - -DECLARE_EVENT_CLASS(random__mix_pool_bytes, - TP_PROTO(size_t bytes, unsigned long IP), - - TP_ARGS(bytes, IP), - - TP_STRUCT__entry( - __field(size_t, bytes ) - __field(unsigned long, IP ) - ), - - TP_fast_assign( - __entry->bytes = bytes; - __entry->IP = IP; - ), - - TP_printk("input pool: bytes %zu caller %pS", - __entry->bytes, (void *)__entry->IP) -); - -DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes, - TP_PROTO(size_t bytes, unsigned long IP), - - TP_ARGS(bytes, IP) -); - -DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock, - TP_PROTO(int bytes, unsigned long IP), - - TP_ARGS(bytes, IP) -); - -TRACE_EVENT(credit_entropy_bits, - TP_PROTO(size_t bits, size_t entropy_count, unsigned long IP), - - TP_ARGS(bits, entropy_count, IP), - - TP_STRUCT__entry( - __field(size_t, bits ) - __field(size_t, entropy_count ) - __field(unsigned long, IP ) - ), - - TP_fast_assign( - __entry->bits = bits; - __entry->entropy_count = entropy_count; - __entry->IP = IP; - ), - - TP_printk("input pool: bits %zu entropy_count %zu caller %pS", - __entry->bits, __entry->entropy_count, (void *)__entry->IP) -); - -TRACE_EVENT(add_input_randomness, - TP_PROTO(size_t input_bits), - - TP_ARGS(input_bits), - - TP_STRUCT__entry( - __field(size_t, input_bits ) - ), - - TP_fast_assign( - __entry->input_bits = input_bits; - ), - - TP_printk("input_pool_bits %zu", __entry->input_bits) -); - -TRACE_EVENT(add_disk_randomness, - TP_PROTO(dev_t dev, size_t input_bits), - - TP_ARGS(dev, input_bits), - - TP_STRUCT__entry( - __field(dev_t, dev ) - __field(size_t, input_bits ) - ), - - TP_fast_assign( - __entry->dev = dev; - __entry->input_bits = input_bits; - ), - - TP_printk("dev %d,%d input_pool_bits %zu", MAJOR(__entry->dev), - MINOR(__entry->dev), __entry->input_bits) -); - -DECLARE_EVENT_CLASS(random__get_random_bytes, - TP_PROTO(size_t nbytes, unsigned long IP), - - TP_ARGS(nbytes, IP), - - TP_STRUCT__entry( - __field(size_t, nbytes ) - __field(unsigned long, IP ) - ), - - TP_fast_assign( - __entry->nbytes = nbytes; - __entry->IP = IP; - ), - - TP_printk("nbytes %zu caller %pS", __entry->nbytes, (void *)__entry->IP) -); - 
-DEFINE_EVENT(random__get_random_bytes, get_random_bytes, - TP_PROTO(size_t nbytes, unsigned long IP), - - TP_ARGS(nbytes, IP) -); - -DEFINE_EVENT(random__get_random_bytes, get_random_bytes_arch, - TP_PROTO(size_t nbytes, unsigned long IP), - - TP_ARGS(nbytes, IP) -); - -DECLARE_EVENT_CLASS(random__extract_entropy, - TP_PROTO(size_t nbytes, size_t entropy_count), - - TP_ARGS(nbytes, entropy_count), - - TP_STRUCT__entry( - __field( size_t, nbytes ) - __field( size_t, entropy_count ) - ), - - TP_fast_assign( - __entry->nbytes = nbytes; - __entry->entropy_count = entropy_count; - ), - - TP_printk("input pool: nbytes %zu entropy_count %zu", - __entry->nbytes, __entry->entropy_count) -); - - -DEFINE_EVENT(random__extract_entropy, extract_entropy, - TP_PROTO(size_t nbytes, size_t entropy_count), - - TP_ARGS(nbytes, entropy_count) -); - -TRACE_EVENT(urandom_read, - TP_PROTO(size_t nbytes, size_t entropy_count), - - TP_ARGS(nbytes, entropy_count), - - TP_STRUCT__entry( - __field( size_t, nbytes ) - __field( size_t, entropy_count ) - ), - - TP_fast_assign( - __entry->nbytes = nbytes; - __entry->entropy_count = entropy_count; - ), - - TP_printk("reading: nbytes %zu entropy_count %zu", - __entry->nbytes, __entry->entropy_count) -); - -TRACE_EVENT(prandom_u32, - - TP_PROTO(unsigned int ret), - - TP_ARGS(ret), - - TP_STRUCT__entry( - __field( unsigned int, ret) - ), - - TP_fast_assign( - __entry->ret = ret; - ), - - TP_printk("ret=%u" , __entry->ret) -); - -#endif /* _TRACE_RANDOM_H */ - -/* This part must be outside protection */ -#include diff --git a/lib/random32.c b/lib/random32.c index 4d0e05e471d7..3c19820796d0 100644 --- a/lib/random32.c +++ b/lib/random32.c @@ -39,8 +39,8 @@ #include #include #include +#include #include -#include /** * prandom_u32_state - seeded pseudo-random number generator. @@ -386,7 +386,6 @@ u32 prandom_u32(void) struct siprand_state *state = get_cpu_ptr(&net_rand_state); u32 res = siprand_u32(state); - trace_prandom_u32(res); put_cpu_ptr(&net_rand_state); return res; } From ae1b8f19542f29cf733bd0c1de04b6844bd9e089 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 10 Feb 2022 16:43:57 +0100 Subject: [PATCH 0176/1842] random: add proper SPDX header commit a07fdae346c35c6ba286af1c88e0effcfa330bf9 upstream. Convert the current license into the SPDX notation of "(GPL-2.0 OR BSD-3-Clause)". This infers GPL-2.0 from the text "ALTERNATIVELY, this product may be distributed under the terms of the GNU General Public License, in which case the provisions of the GPL are required INSTEAD OF the above restrictions" and it infers BSD-3-Clause from the verbatim BSD 3 clause license in the file. Cc: Thomas Gleixner Cc: Theodore Ts'o Cc: Dominik Brodowski Reviewed-by: Greg Kroah-Hartman Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 37 +------------------------------------ 1 file changed, 1 insertion(+), 36 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index b5889b52a251..ef9c96d8bdfd 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1,44 +1,9 @@ +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) /* - * random.c -- A strong random number generator - * * Copyright (C) 2017-2022 Jason A. Donenfeld . All Rights Reserved. - * * Copyright Matt Mackall , 2003, 2004, 2005 - * * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All * rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * 1. Redistributions of source code must retain the above copyright - * notice, and the entire permission notice in its entirety, - * including the disclaimer of warranties. - * 2. Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * 3. The name of the author may not be used to endorse or promote - * products derived from this software without specific prior - * written permission. - * - * ALTERNATIVELY, this product may be distributed under the terms of - * the GNU General Public License, in which case the provisions of the GPL are - * required INSTEAD OF the above restrictions. (This clause is - * necessary due to a potential bad interaction between the GPL and - * the restrictions contained in a BSD-style copyright.) - * - * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED - * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES - * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF - * WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE - * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT - * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR - * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF - * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE - * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH - * DAMAGE. */ /* From 0971c1c2fdc63002329bd74c1db40624cd1e3583 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 10 Feb 2022 17:01:27 +0100 Subject: [PATCH 0177/1842] random: deobfuscate irq u32/u64 contributions commit b2f408fe403800c91a49f6589d95b6759ce1b30b upstream. In the irq handler, we fill out 16 bytes differently on 32-bit and 64-bit platforms, and for 32-bit vs 64-bit cycle counters, which doesn't always correspond with the bitness of the platform. Whether or not you like this strangeness, it is a matter of fact. But it might not be a fact you well realized until now, because the code that loaded the irq info into 4 32-bit words was quite confusing. Instead, this commit makes everything explicit by having separate (compile-time) branches for 32-bit and 64-bit types. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 49 ++++++++++++++++++++++++------------------- 1 file changed, 28 insertions(+), 21 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index ef9c96d8bdfd..291b7e8d2393 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -283,7 +283,10 @@ static void mix_pool_bytes(const void *in, size_t nbytes) } struct fast_pool { - u32 pool[4]; + union { + u32 pool32[4]; + u64 pool64[2]; + }; unsigned long last; u16 reg_idx; u8 count; @@ -294,10 +297,10 @@ struct fast_pool { * collector. It's hardcoded for an 128 bit pool and assumes that any * locks that might be needed are taken by the caller. 
*/ -static void fast_mix(struct fast_pool *f) +static void fast_mix(u32 pool[4]) { - u32 a = f->pool[0], b = f->pool[1]; - u32 c = f->pool[2], d = f->pool[3]; + u32 a = pool[0], b = pool[1]; + u32 c = pool[2], d = pool[3]; a += b; c += d; b = rol32(b, 6); d = rol32(d, 27); @@ -315,9 +318,8 @@ static void fast_mix(struct fast_pool *f) b = rol32(b, 16); d = rol32(d, 14); d ^= a; b ^= c; - f->pool[0] = a; f->pool[1] = b; - f->pool[2] = c; f->pool[3] = d; - f->count++; + pool[0] = a; pool[1] = b; + pool[2] = c; pool[3] = d; } static void process_random_ready_list(void) @@ -784,29 +786,34 @@ void add_interrupt_randomness(int irq) struct pt_regs *regs = get_irq_regs(); unsigned long now = jiffies; cycles_t cycles = random_get_entropy(); - u32 c_high, j_high; - u64 ip; if (cycles == 0) cycles = get_reg(fast_pool, regs); - c_high = (sizeof(cycles) > 4) ? cycles >> 32 : 0; - j_high = (sizeof(now) > 4) ? now >> 32 : 0; - fast_pool->pool[0] ^= cycles ^ j_high ^ irq; - fast_pool->pool[1] ^= now ^ c_high; - ip = regs ? instruction_pointer(regs) : _RET_IP_; - fast_pool->pool[2] ^= ip; - fast_pool->pool[3] ^= - (sizeof(ip) > 4) ? ip >> 32 : get_reg(fast_pool, regs); - fast_mix(fast_pool); + if (sizeof(cycles) == 8) + fast_pool->pool64[0] ^= cycles ^ rol64(now, 32) ^ irq; + else { + fast_pool->pool32[0] ^= cycles ^ irq; + fast_pool->pool32[1] ^= now; + } + + if (sizeof(unsigned long) == 8) + fast_pool->pool64[1] ^= regs ? instruction_pointer(regs) : _RET_IP_; + else { + fast_pool->pool32[2] ^= regs ? instruction_pointer(regs) : _RET_IP_; + fast_pool->pool32[3] ^= get_reg(fast_pool, regs); + } + + fast_mix(fast_pool->pool32); + ++fast_pool->count; if (unlikely(crng_init == 0)) { if (fast_pool->count >= 64 && - crng_fast_load(fast_pool->pool, sizeof(fast_pool->pool)) > 0) { + crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) { fast_pool->count = 0; fast_pool->last = now; if (spin_trylock(&input_pool.lock)) { - _mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); spin_unlock(&input_pool.lock); } } @@ -820,7 +827,7 @@ void add_interrupt_randomness(int irq) return; fast_pool->last = now; - _mix_pool_bytes(&fast_pool->pool, sizeof(fast_pool->pool)); + _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); spin_unlock(&input_pool.lock); fast_pool->count = 0; From b3901816545ec65b6c6d4c85a8b80605eb035c99 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:19:49 +0100 Subject: [PATCH 0178/1842] random: introduce drain_entropy() helper to declutter crng_reseed() commit 246c03dd899164d0186b6d685d6387f228c28d93 upstream. In preparation for separating responsibilities, break out the entropy count management part of crng_reseed() into its own function. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 36 +++++++++++++++++++++++------------- 1 file changed, 23 insertions(+), 13 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 291b7e8d2393..d7ae07da8878 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -260,6 +260,7 @@ static struct { }; static void extract_entropy(void *buf, size_t nbytes); +static bool drain_entropy(void *buf, size_t nbytes); static void crng_reseed(void); @@ -456,23 +457,13 @@ static void crng_slow_load(const void *cp, size_t len) static void crng_reseed(void) { unsigned long flags; - int entropy_count; unsigned long next_gen; u8 key[CHACHA_KEY_SIZE]; bool finalize_init = false; - /* - * First we make sure we have POOL_MIN_BITS of entropy in the pool, - * and then we drain all of it. Only then can we extract a new key. - */ - do { - entropy_count = READ_ONCE(input_pool.entropy_count); - if (entropy_count < POOL_MIN_BITS) - return; - } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); - extract_entropy(key, sizeof(key)); - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); + /* Only reseed if we can, to prevent brute forcing a small amount of new bits. */ + if (!drain_entropy(key, sizeof(key))) + return; /* * We copy the new key into the base_crng, overwriting the old one, @@ -900,6 +891,25 @@ static void extract_entropy(void *buf, size_t nbytes) memzero_explicit(&block, sizeof(block)); } +/* + * First we make sure we have POOL_MIN_BITS of entropy in the pool, and then we + * set the entropy count to zero (but don't actually touch any data). Only then + * can we extract a new key with extract_entropy(). + */ +static bool drain_entropy(void *buf, size_t nbytes) +{ + unsigned int entropy_count; + do { + entropy_count = READ_ONCE(input_pool.entropy_count); + if (entropy_count < POOL_MIN_BITS) + return false; + } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); + extract_entropy(buf, nbytes); + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); + return true; +} + #define warn_unseeded_randomness(previous) \ _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) From 7b0f36f7c252409d7af62be2c2f9fb87d9322b75 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:28:33 +0100 Subject: [PATCH 0179/1842] random: remove useless header comment commit 6071a6c0fba2d747742cadcbb3ba26ed756ed73b upstream. This really adds nothing at all useful. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/linux/random.h | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/include/linux/random.h b/include/linux/random.h index e92efb39779c..37e1e8c43d7e 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -1,9 +1,5 @@ /* SPDX-License-Identifier: GPL-2.0 */ -/* - * include/linux/random.h - * - * Include file for the random number generator. - */ + #ifndef _LINUX_RANDOM_H #define _LINUX_RANDOM_H From 6c9cee15555dd032c2e7d92061adb6aadc87d33f Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 13:41:41 +0100 Subject: [PATCH 0180/1842] random: remove whitespace and reorder includes commit 87e7d5abad0cbc9312dea7f889a57d294c1a5fcc upstream. This is purely cosmetic. Future work involves figuring out which of these headers we need and which we don't. 
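As a side note on the entropy-count handling reorganized two patches earlier: drain_entropy() gates key extraction behind a lock-free "check, then atomically drain" of the credited bits. A minimal sketch of that pattern in plain C11 follows; it is illustrative only, and every name in it (pool_entropy_count, try_drain_entropy, the local POOL_MIN_BITS macro, the memset placeholder) is a hypothetical stand-in rather than the kernel's API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define POOL_MIN_BITS 256	/* stand-in threshold, mirroring the pool's minimum credit */

static _Atomic unsigned int pool_entropy_count;	/* hypothetical stand-in for input_pool.entropy_count */

/*
 * Give up early if fewer than POOL_MIN_BITS are credited; otherwise race
 * to swap the counter to zero.  Only the winner of that compare-exchange
 * race proceeds to produce key material, which mirrors how drain_entropy()
 * gates extract_entropy() with a cmpxchg loop.
 */
static bool try_drain_entropy(unsigned char *key, size_t len)
{
	unsigned int have;

	do {
		have = atomic_load(&pool_entropy_count);
		if (have < POOL_MIN_BITS)
			return false;	/* not enough entropy credited yet */
	} while (!atomic_compare_exchange_strong(&pool_entropy_count, &have, 0));

	memset(key, 0, len);	/* placeholder for extract_entropy(key, len) */
	return true;
}

The compare-exchange loop means concurrent callers cannot both claim the same credited bits: whoever swaps the counter to zero first is the only one allowed to extract, and everyone else bails out and waits for new credits.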
Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index d7ae07da8878..007eee1fcb8f 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -193,11 +193,10 @@ #include #include #include +#include #include #include - #include -#include #include #include #include From 6b1ffb3b5a08ecf57fe411ba2efa7f31f45f47aa Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:53:34 +0100 Subject: [PATCH 0181/1842] random: group initialization wait functions commit 5f1bb112006b104b3e2a1e1b39bbb9b2617581e6 upstream. This pulls all of the readiness waiting-focused functions into the first labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 347 ++++++++++++++++++++++-------------------- 1 file changed, 179 insertions(+), 168 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 007eee1fcb8f..c68a3e33ef2c 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -201,6 +201,185 @@ #include #include +/********************************************************************* + * + * Initialization and readiness waiting. + * + * Much of the RNG infrastructure is devoted to various dependencies + * being able to wait until the RNG has collected enough entropy and + * is ready for safe consumption. + * + *********************************************************************/ + +/* + * crng_init = 0 --> Uninitialized + * 1 --> Initialized + * 2 --> Initialized from input_pool + * + * crng_init is protected by base_crng->lock, and only increases + * its value (from 0->1->2). + */ +static int crng_init = 0; +#define crng_ready() (likely(crng_init > 1)) +/* Various types of waiters for crng_init->2 transition. */ +static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); +static struct fasync_struct *fasync; +static DEFINE_SPINLOCK(random_ready_list_lock); +static LIST_HEAD(random_ready_list); + +/* Control how we warn userspace. */ +static struct ratelimit_state unseeded_warning = + RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); +static struct ratelimit_state urandom_warning = + RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); +static int ratelimit_disable __read_mostly; +module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); +MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); + +/* + * Returns whether or not the input pool has been seeded and thus guaranteed + * to supply cryptographically secure random numbers. This applies to: the + * /dev/urandom device, the get_random_bytes function, and the get_random_{u32, + * ,u64,int,long} family of functions. + * + * Returns: true if the input pool has been seeded. + * false if the input pool has not been seeded. + */ +bool rng_is_initialized(void) +{ + return crng_ready(); +} +EXPORT_SYMBOL(rng_is_initialized); + +/* Used by wait_for_random_bytes(), and considered an entropy collector, below. */ +static void try_to_generate_entropy(void); + +/* + * Wait for the input pool to be seeded and thus guaranteed to supply + * cryptographically secure random numbers. This applies to: the /dev/urandom + * device, the get_random_bytes function, and the get_random_{u32,u64,int,long} + * family of functions. 
Using any of these functions without first calling + * this function forfeits the guarantee of security. + * + * Returns: 0 if the input pool has been seeded. + * -ERESTARTSYS if the function was interrupted by a signal. + */ +int wait_for_random_bytes(void) +{ + if (likely(crng_ready())) + return 0; + + do { + int ret; + ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ); + if (ret) + return ret > 0 ? 0 : ret; + + try_to_generate_entropy(); + } while (!crng_ready()); + + return 0; +} +EXPORT_SYMBOL(wait_for_random_bytes); + +/* + * Add a callback function that will be invoked when the input + * pool is initialised. + * + * returns: 0 if callback is successfully added + * -EALREADY if pool is already initialised (callback not called) + * -ENOENT if module for callback is not alive + */ +int add_random_ready_callback(struct random_ready_callback *rdy) +{ + struct module *owner; + unsigned long flags; + int err = -EALREADY; + + if (crng_ready()) + return err; + + owner = rdy->owner; + if (!try_module_get(owner)) + return -ENOENT; + + spin_lock_irqsave(&random_ready_list_lock, flags); + if (crng_ready()) + goto out; + + owner = NULL; + + list_add(&rdy->list, &random_ready_list); + err = 0; + +out: + spin_unlock_irqrestore(&random_ready_list_lock, flags); + + module_put(owner); + + return err; +} +EXPORT_SYMBOL(add_random_ready_callback); + +/* + * Delete a previously registered readiness callback function. + */ +void del_random_ready_callback(struct random_ready_callback *rdy) +{ + unsigned long flags; + struct module *owner = NULL; + + spin_lock_irqsave(&random_ready_list_lock, flags); + if (!list_empty(&rdy->list)) { + list_del_init(&rdy->list); + owner = rdy->owner; + } + spin_unlock_irqrestore(&random_ready_list_lock, flags); + + module_put(owner); +} +EXPORT_SYMBOL(del_random_ready_callback); + +static void process_random_ready_list(void) +{ + unsigned long flags; + struct random_ready_callback *rdy, *tmp; + + spin_lock_irqsave(&random_ready_list_lock, flags); + list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) { + struct module *owner = rdy->owner; + + list_del_init(&rdy->list); + rdy->func(rdy); + module_put(owner); + } + spin_unlock_irqrestore(&random_ready_list_lock, flags); +} + +#define warn_unseeded_randomness(previous) \ + _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) + +static void _warn_unseeded_randomness(const char *func_name, void *caller, void **previous) +{ +#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM + const bool print_once = false; +#else + static bool print_once __read_mostly; +#endif + + if (print_once || crng_ready() || + (previous && (caller == READ_ONCE(*previous)))) + return; + WRITE_ONCE(*previous, caller); +#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM + print_once = true; +#endif + if (__ratelimit(&unseeded_warning)) + printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", + func_name, caller, crng_init); +} + + enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */ @@ -210,34 +389,8 @@ enum { * Static global variables */ static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); -static struct fasync_struct *fasync; -static DEFINE_SPINLOCK(random_ready_list_lock); -static LIST_HEAD(random_ready_list); - -/* - * crng_init = 0 --> Uninitialized - * 1 --> Initialized - * 2 --> Initialized from input_pool - * - * crng_init is protected by primary_crng->lock, and only increases - * its value (from 0->1->2). 
- */ -static int crng_init = 0; -#define crng_ready() (likely(crng_init > 1)) static int crng_init_cnt = 0; -static void process_random_ready_list(void); -static void _get_random_bytes(void *buf, size_t nbytes); - -static struct ratelimit_state unseeded_warning = - RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); -static struct ratelimit_state urandom_warning = - RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); - -static int ratelimit_disable __read_mostly; - -module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); -MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); /********************************************************************** * @@ -322,22 +475,6 @@ static void fast_mix(u32 pool[4]) pool[2] = c; pool[3] = d; } -static void process_random_ready_list(void) -{ - unsigned long flags; - struct random_ready_callback *rdy, *tmp; - - spin_lock_irqsave(&random_ready_list_lock, flags); - list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) { - struct module *owner = rdy->owner; - - list_del_init(&rdy->list); - rdy->func(rdy); - module_put(owner); - } - spin_unlock_irqrestore(&random_ready_list_lock, flags); -} - static void credit_entropy_bits(size_t nbits) { unsigned int entropy_count, orig, add; @@ -387,8 +524,6 @@ static DEFINE_PER_CPU(struct crng, crngs) = { .lock = INIT_LOCAL_LOCK(crngs.lock), }; -static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); - /* * crng_fast_load() can be called by code in the interrupt service * path. So we can't afford to dilly-dally. Returns the number of @@ -909,29 +1044,6 @@ static bool drain_entropy(void *buf, size_t nbytes) return true; } -#define warn_unseeded_randomness(previous) \ - _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) - -static void _warn_unseeded_randomness(const char *func_name, void *caller, void **previous) -{ -#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM - const bool print_once = false; -#else - static bool print_once __read_mostly; -#endif - - if (print_once || crng_ready() || - (previous && (caller == READ_ONCE(*previous)))) - return; - WRITE_ONCE(*previous, caller); -#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM - print_once = true; -#endif - if (__ratelimit(&unseeded_warning)) - printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", - func_name, caller, crng_init); -} - /* * This function is the exported kernel interface. It returns some * number of good random numbers, suitable for key generation, seeding @@ -1032,107 +1144,6 @@ static void try_to_generate_entropy(void) mix_pool_bytes(&stack.now, sizeof(stack.now)); } -/* - * Wait for the urandom pool to be seeded and thus guaranteed to supply - * cryptographically secure random numbers. This applies to: the /dev/urandom - * device, the get_random_bytes function, and the get_random_{u32,u64,int,long} - * family of functions. Using any of these functions without first calling - * this function forfeits the guarantee of security. - * - * Returns: 0 if the urandom pool has been seeded. - * -ERESTARTSYS if the function was interrupted by a signal. - */ -int wait_for_random_bytes(void) -{ - if (likely(crng_ready())) - return 0; - - do { - int ret; - ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ); - if (ret) - return ret > 0 ? 
0 : ret; - - try_to_generate_entropy(); - } while (!crng_ready()); - - return 0; -} -EXPORT_SYMBOL(wait_for_random_bytes); - -/* - * Returns whether or not the urandom pool has been seeded and thus guaranteed - * to supply cryptographically secure random numbers. This applies to: the - * /dev/urandom device, the get_random_bytes function, and the get_random_{u32, - * ,u64,int,long} family of functions. - * - * Returns: true if the urandom pool has been seeded. - * false if the urandom pool has not been seeded. - */ -bool rng_is_initialized(void) -{ - return crng_ready(); -} -EXPORT_SYMBOL(rng_is_initialized); - -/* - * Add a callback function that will be invoked when the nonblocking - * pool is initialised. - * - * returns: 0 if callback is successfully added - * -EALREADY if pool is already initialised (callback not called) - * -ENOENT if module for callback is not alive - */ -int add_random_ready_callback(struct random_ready_callback *rdy) -{ - struct module *owner; - unsigned long flags; - int err = -EALREADY; - - if (crng_ready()) - return err; - - owner = rdy->owner; - if (!try_module_get(owner)) - return -ENOENT; - - spin_lock_irqsave(&random_ready_list_lock, flags); - if (crng_ready()) - goto out; - - owner = NULL; - - list_add(&rdy->list, &random_ready_list); - err = 0; - -out: - spin_unlock_irqrestore(&random_ready_list_lock, flags); - - module_put(owner); - - return err; -} -EXPORT_SYMBOL(add_random_ready_callback); - -/* - * Delete a previously registered readiness callback function. - */ -void del_random_ready_callback(struct random_ready_callback *rdy) -{ - unsigned long flags; - struct module *owner = NULL; - - spin_lock_irqsave(&random_ready_list_lock, flags); - if (!list_empty(&rdy->list)) { - list_del_init(&rdy->list); - owner = rdy->owner; - } - spin_unlock_irqrestore(&random_ready_list_lock, flags); - - module_put(owner); -} -EXPORT_SYMBOL(del_random_ready_callback); - /* * This function will use the architecture-specific hardware random * number generator if it is available. It is not recommended for From d7e5b1925a67a1861658bb5a3c2640eb8fbdd4d1 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:53:34 +0100 Subject: [PATCH 0182/1842] random: group crng functions commit 3655adc7089da4f8ca74cec8fcef73ea5101430e upstream. This pulls all of the crng-focused functions into the second labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 1002 +++++++++++++++++++++-------------------- 1 file changed, 515 insertions(+), 487 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index c68a3e33ef2c..202252596ccb 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -380,6 +380,521 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, void } +/********************************************************************* + * + * Fast key erasure RNG, the "crng". + * + * These functions expand entropy from the entropy extractor into + * long streams for external consumption using the "fast key erasure" + * RNG described at . 
+ * + * There are a few exported interfaces for use by other drivers: + * + * void get_random_bytes(void *buf, size_t nbytes) + * u32 get_random_u32() + * u64 get_random_u64() + * unsigned int get_random_int() + * unsigned long get_random_long() + * + * These interfaces will return the requested number of random bytes + * into the given buffer or as a return value. This is equivalent to + * a read from /dev/urandom. The integer family of functions may be + * higher performance for one-off random integers, because they do a + * bit of buffering. + * + *********************************************************************/ + +enum { + CRNG_RESEED_INTERVAL = 300 * HZ, + CRNG_INIT_CNT_THRESH = 2 * CHACHA_KEY_SIZE +}; + +static struct { + u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long)); + unsigned long birth; + unsigned long generation; + spinlock_t lock; +} base_crng = { + .lock = __SPIN_LOCK_UNLOCKED(base_crng.lock) +}; + +struct crng { + u8 key[CHACHA_KEY_SIZE]; + unsigned long generation; + local_lock_t lock; +}; + +static DEFINE_PER_CPU(struct crng, crngs) = { + .generation = ULONG_MAX, + .lock = INIT_LOCAL_LOCK(crngs.lock), +}; + +/* Used by crng_reseed() to extract a new seed from the input pool. */ +static bool drain_entropy(void *buf, size_t nbytes); + +/* + * This extracts a new crng key from the input pool, but only if there is a + * sufficient amount of entropy available, in order to mitigate bruteforcing + * of newly added bits. + */ +static void crng_reseed(void) +{ + unsigned long flags; + unsigned long next_gen; + u8 key[CHACHA_KEY_SIZE]; + bool finalize_init = false; + + /* Only reseed if we can, to prevent brute forcing a small amount of new bits. */ + if (!drain_entropy(key, sizeof(key))) + return; + + /* + * We copy the new key into the base_crng, overwriting the old one, + * and update the generation counter. We avoid hitting ULONG_MAX, + * because the per-cpu crngs are initialized to ULONG_MAX, so this + * forces new CPUs that come online to always initialize. + */ + spin_lock_irqsave(&base_crng.lock, flags); + memcpy(base_crng.key, key, sizeof(base_crng.key)); + next_gen = base_crng.generation + 1; + if (next_gen == ULONG_MAX) + ++next_gen; + WRITE_ONCE(base_crng.generation, next_gen); + WRITE_ONCE(base_crng.birth, jiffies); + if (crng_init < 2) { + crng_init = 2; + finalize_init = true; + } + spin_unlock_irqrestore(&base_crng.lock, flags); + memzero_explicit(key, sizeof(key)); + if (finalize_init) { + process_random_ready_list(); + wake_up_interruptible(&crng_init_wait); + kill_fasync(&fasync, SIGIO, POLL_IN); + pr_notice("crng init done\n"); + if (unseeded_warning.missed) { + pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", + unseeded_warning.missed); + unseeded_warning.missed = 0; + } + if (urandom_warning.missed) { + pr_notice("%d urandom warning(s) missed due to ratelimiting\n", + urandom_warning.missed); + urandom_warning.missed = 0; + } + } +} + +/* + * This generates a ChaCha block using the provided key, and then + * immediately overwites that key with half the block. It returns + * the resultant ChaCha state to the user, along with the second + * half of the block containing 32 bytes of random data that may + * be used; random_data_len may not be greater than 32. 
+ */ +static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], + u32 chacha_state[CHACHA_STATE_WORDS], + u8 *random_data, size_t random_data_len) +{ + u8 first_block[CHACHA_BLOCK_SIZE]; + + BUG_ON(random_data_len > 32); + + chacha_init_consts(chacha_state); + memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE); + memset(&chacha_state[12], 0, sizeof(u32) * 4); + chacha20_block(chacha_state, first_block); + + memcpy(key, first_block, CHACHA_KEY_SIZE); + memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len); + memzero_explicit(first_block, sizeof(first_block)); +} + +/* + * This function returns a ChaCha state that you may use for generating + * random data. It also returns up to 32 bytes on its own of random data + * that may be used; random_data_len may not be greater than 32. + */ +static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], + u8 *random_data, size_t random_data_len) +{ + unsigned long flags; + struct crng *crng; + + BUG_ON(random_data_len > 32); + + /* + * For the fast path, we check whether we're ready, unlocked first, and + * then re-check once locked later. In the case where we're really not + * ready, we do fast key erasure with the base_crng directly, because + * this is what crng_{fast,slow}_load mutate during early init. + */ + if (unlikely(!crng_ready())) { + bool ready; + + spin_lock_irqsave(&base_crng.lock, flags); + ready = crng_ready(); + if (!ready) + crng_fast_key_erasure(base_crng.key, chacha_state, + random_data, random_data_len); + spin_unlock_irqrestore(&base_crng.lock, flags); + if (!ready) + return; + } + + /* + * If the base_crng is more than 5 minutes old, we reseed, which + * in turn bumps the generation counter that we check below. + */ + if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL))) + crng_reseed(); + + local_lock_irqsave(&crngs.lock, flags); + crng = raw_cpu_ptr(&crngs); + + /* + * If our per-cpu crng is older than the base_crng, then it means + * somebody reseeded the base_crng. In that case, we do fast key + * erasure on the base_crng, and use its output as the new key + * for our per-cpu crng. This brings us up to date with base_crng. + */ + if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) { + spin_lock(&base_crng.lock); + crng_fast_key_erasure(base_crng.key, chacha_state, + crng->key, sizeof(crng->key)); + crng->generation = base_crng.generation; + spin_unlock(&base_crng.lock); + } + + /* + * Finally, when we've made it this far, our per-cpu crng has an up + * to date key, and we can do fast key erasure with it to produce + * some random data and a ChaCha state for the caller. All other + * branches of this function are "unlikely", so most of the time we + * should wind up here immediately. + */ + crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len); + local_unlock_irqrestore(&crngs.lock, flags); +} + +/* + * This function is for crng_init == 0 only. + * + * crng_fast_load() can be called by code in the interrupt service + * path. So we can't afford to dilly-dally. Returns the number of + * bytes processed from cp. 
+ */ +static size_t crng_fast_load(const void *cp, size_t len) +{ + static int crng_init_cnt = 0; + unsigned long flags; + const u8 *src = (const u8 *)cp; + size_t ret = 0; + + if (!spin_trylock_irqsave(&base_crng.lock, flags)) + return 0; + if (crng_init != 0) { + spin_unlock_irqrestore(&base_crng.lock, flags); + return 0; + } + while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { + base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^= *src; + src++; crng_init_cnt++; len--; ret++; + } + if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { + ++base_crng.generation; + crng_init = 1; + } + spin_unlock_irqrestore(&base_crng.lock, flags); + if (crng_init == 1) + pr_notice("fast init done\n"); + return ret; +} + +/* + * This function is for crng_init == 0 only. + * + * crng_slow_load() is called by add_device_randomness, which has two + * attributes. (1) We can't trust the buffer passed to it is + * guaranteed to be unpredictable (so it might not have any entropy at + * all), and (2) it doesn't have the performance constraints of + * crng_fast_load(). + * + * So, we simply hash the contents in with the current key. Finally, + * we do *not* advance crng_init_cnt since buffer we may get may be + * something like a fixed DMI table (for example), which might very + * well be unique to the machine, but is otherwise unvarying. + */ +static void crng_slow_load(const void *cp, size_t len) +{ + unsigned long flags; + struct blake2s_state hash; + + blake2s_init(&hash, sizeof(base_crng.key)); + + if (!spin_trylock_irqsave(&base_crng.lock, flags)) + return; + if (crng_init != 0) { + spin_unlock_irqrestore(&base_crng.lock, flags); + return; + } + + blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); + blake2s_update(&hash, cp, len); + blake2s_final(&hash, base_crng.key); + + spin_unlock_irqrestore(&base_crng.lock, flags); +} + +static void _get_random_bytes(void *buf, size_t nbytes) +{ + u32 chacha_state[CHACHA_STATE_WORDS]; + u8 tmp[CHACHA_BLOCK_SIZE]; + size_t len; + + if (!nbytes) + return; + + len = min_t(size_t, 32, nbytes); + crng_make_state(chacha_state, buf, len); + nbytes -= len; + buf += len; + + while (nbytes) { + if (nbytes < CHACHA_BLOCK_SIZE) { + chacha20_block(chacha_state, tmp); + memcpy(buf, tmp, nbytes); + memzero_explicit(tmp, sizeof(tmp)); + break; + } + + chacha20_block(chacha_state, buf); + if (unlikely(chacha_state[12] == 0)) + ++chacha_state[13]; + nbytes -= CHACHA_BLOCK_SIZE; + buf += CHACHA_BLOCK_SIZE; + } + + memzero_explicit(chacha_state, sizeof(chacha_state)); +} + +/* + * This function is the exported kernel interface. It returns some + * number of good random numbers, suitable for key generation, seeding + * TCP sequence numbers, etc. It does not rely on the hardware random + * number generator. For random bytes direct from the hardware RNG + * (when available), use get_random_bytes_arch(). In order to ensure + * that the randomness provided by this function is okay, the function + * wait_for_random_bytes() should be called and return 0 at least once + * at any point prior. 
+ */ +void get_random_bytes(void *buf, size_t nbytes) +{ + static void *previous; + + warn_unseeded_randomness(&previous); + _get_random_bytes(buf, nbytes); +} +EXPORT_SYMBOL(get_random_bytes); + +static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) +{ + bool large_request = nbytes > 256; + ssize_t ret = 0; + size_t len; + u32 chacha_state[CHACHA_STATE_WORDS]; + u8 output[CHACHA_BLOCK_SIZE]; + + if (!nbytes) + return 0; + + len = min_t(size_t, 32, nbytes); + crng_make_state(chacha_state, output, len); + + if (copy_to_user(buf, output, len)) + return -EFAULT; + nbytes -= len; + buf += len; + ret += len; + + while (nbytes) { + if (large_request && need_resched()) { + if (signal_pending(current)) + break; + schedule(); + } + + chacha20_block(chacha_state, output); + if (unlikely(chacha_state[12] == 0)) + ++chacha_state[13]; + + len = min_t(size_t, nbytes, CHACHA_BLOCK_SIZE); + if (copy_to_user(buf, output, len)) { + ret = -EFAULT; + break; + } + + nbytes -= len; + buf += len; + ret += len; + } + + memzero_explicit(chacha_state, sizeof(chacha_state)); + memzero_explicit(output, sizeof(output)); + return ret; +} + +/* + * Batched entropy returns random integers. The quality of the random + * number is good as /dev/urandom. In order to ensure that the randomness + * provided by this function is okay, the function wait_for_random_bytes() + * should be called and return 0 at least once at any point prior. + */ +struct batched_entropy { + union { + /* + * We make this 1.5x a ChaCha block, so that we get the + * remaining 32 bytes from fast key erasure, plus one full + * block from the detached ChaCha state. We can increase + * the size of this later if needed so long as we keep the + * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. + */ + u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))]; + u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; + }; + local_lock_t lock; + unsigned long generation; + unsigned int position; +}; + + +static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = { + .lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock), + .position = UINT_MAX +}; + +u64 get_random_u64(void) +{ + u64 ret; + unsigned long flags; + struct batched_entropy *batch; + static void *previous; + unsigned long next_gen; + + warn_unseeded_randomness(&previous); + + local_lock_irqsave(&batched_entropy_u64.lock, flags); + batch = raw_cpu_ptr(&batched_entropy_u64); + + next_gen = READ_ONCE(base_crng.generation); + if (batch->position >= ARRAY_SIZE(batch->entropy_u64) || + next_gen != batch->generation) { + _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); + batch->position = 0; + batch->generation = next_gen; + } + + ret = batch->entropy_u64[batch->position]; + batch->entropy_u64[batch->position] = 0; + ++batch->position; + local_unlock_irqrestore(&batched_entropy_u64.lock, flags); + return ret; +} +EXPORT_SYMBOL(get_random_u64); + +static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = { + .lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock), + .position = UINT_MAX +}; + +u32 get_random_u32(void) +{ + u32 ret; + unsigned long flags; + struct batched_entropy *batch; + static void *previous; + unsigned long next_gen; + + warn_unseeded_randomness(&previous); + + local_lock_irqsave(&batched_entropy_u32.lock, flags); + batch = raw_cpu_ptr(&batched_entropy_u32); + + next_gen = READ_ONCE(base_crng.generation); + if (batch->position >= ARRAY_SIZE(batch->entropy_u32) || + next_gen != batch->generation) { + 
_get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); + batch->position = 0; + batch->generation = next_gen; + } + + ret = batch->entropy_u32[batch->position]; + batch->entropy_u32[batch->position] = 0; + ++batch->position; + local_unlock_irqrestore(&batched_entropy_u32.lock, flags); + return ret; +} +EXPORT_SYMBOL(get_random_u32); + +/** + * randomize_page - Generate a random, page aligned address + * @start: The smallest acceptable address the caller will take. + * @range: The size of the area, starting at @start, within which the + * random address must fall. + * + * If @start + @range would overflow, @range is capped. + * + * NOTE: Historical use of randomize_range, which this replaces, presumed that + * @start was already page aligned. We now align it regardless. + * + * Return: A page aligned address within [start, start + range). On error, + * @start is returned. + */ +unsigned long randomize_page(unsigned long start, unsigned long range) +{ + if (!PAGE_ALIGNED(start)) { + range -= PAGE_ALIGN(start) - start; + start = PAGE_ALIGN(start); + } + + if (start > ULONG_MAX - range) + range = ULONG_MAX - start; + + range >>= PAGE_SHIFT; + + if (range == 0) + return start; + + return start + (get_random_long() % range << PAGE_SHIFT); +} + +/* + * This function will use the architecture-specific hardware random + * number generator if it is available. It is not recommended for + * use. Use get_random_bytes() instead. It returns the number of + * bytes filled in. + */ +size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) +{ + size_t left = nbytes; + u8 *p = buf; + + while (left) { + unsigned long v; + size_t chunk = min_t(size_t, left, sizeof(unsigned long)); + + if (!arch_get_random_long(&v)) + break; + + memcpy(p, &v, chunk); + p += chunk; + left -= chunk; + } + + return nbytes - left; +} +EXPORT_SYMBOL(get_random_bytes_arch); + enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */ @@ -390,8 +905,6 @@ enum { */ static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); -static int crng_init_cnt = 0; - /********************************************************************** * * OS independent entropy store. Here are the functions which handle @@ -493,290 +1006,6 @@ static void credit_entropy_bits(size_t nbits) crng_reseed(); } -/********************************************************************* - * - * CRNG using CHACHA20 - * - *********************************************************************/ - -enum { - CRNG_RESEED_INTERVAL = 300 * HZ, - CRNG_INIT_CNT_THRESH = 2 * CHACHA_KEY_SIZE -}; - -static struct { - u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long)); - unsigned long birth; - unsigned long generation; - spinlock_t lock; -} base_crng = { - .lock = __SPIN_LOCK_UNLOCKED(base_crng.lock) -}; - -struct crng { - u8 key[CHACHA_KEY_SIZE]; - unsigned long generation; - local_lock_t lock; -}; - -static DEFINE_PER_CPU(struct crng, crngs) = { - .generation = ULONG_MAX, - .lock = INIT_LOCAL_LOCK(crngs.lock), -}; - -/* - * crng_fast_load() can be called by code in the interrupt service - * path. So we can't afford to dilly-dally. Returns the number of - * bytes processed from cp. 
- */ -static size_t crng_fast_load(const void *cp, size_t len) -{ - unsigned long flags; - const u8 *src = (const u8 *)cp; - size_t ret = 0; - - if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return 0; - if (crng_init != 0) { - spin_unlock_irqrestore(&base_crng.lock, flags); - return 0; - } - while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { - base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^= *src; - src++; crng_init_cnt++; len--; ret++; - } - if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { - ++base_crng.generation; - crng_init = 1; - } - spin_unlock_irqrestore(&base_crng.lock, flags); - if (crng_init == 1) - pr_notice("fast init done\n"); - return ret; -} - -/* - * crng_slow_load() is called by add_device_randomness, which has two - * attributes. (1) We can't trust the buffer passed to it is - * guaranteed to be unpredictable (so it might not have any entropy at - * all), and (2) it doesn't have the performance constraints of - * crng_fast_load(). - * - * So, we simply hash the contents in with the current key. Finally, - * we do *not* advance crng_init_cnt since buffer we may get may be - * something like a fixed DMI table (for example), which might very - * well be unique to the machine, but is otherwise unvarying. - */ -static void crng_slow_load(const void *cp, size_t len) -{ - unsigned long flags; - struct blake2s_state hash; - - blake2s_init(&hash, sizeof(base_crng.key)); - - if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return; - if (crng_init != 0) { - spin_unlock_irqrestore(&base_crng.lock, flags); - return; - } - - blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); - blake2s_update(&hash, cp, len); - blake2s_final(&hash, base_crng.key); - - spin_unlock_irqrestore(&base_crng.lock, flags); -} - -static void crng_reseed(void) -{ - unsigned long flags; - unsigned long next_gen; - u8 key[CHACHA_KEY_SIZE]; - bool finalize_init = false; - - /* Only reseed if we can, to prevent brute forcing a small amount of new bits. */ - if (!drain_entropy(key, sizeof(key))) - return; - - /* - * We copy the new key into the base_crng, overwriting the old one, - * and update the generation counter. We avoid hitting ULONG_MAX, - * because the per-cpu crngs are initialized to ULONG_MAX, so this - * forces new CPUs that come online to always initialize. - */ - spin_lock_irqsave(&base_crng.lock, flags); - memcpy(base_crng.key, key, sizeof(base_crng.key)); - next_gen = base_crng.generation + 1; - if (next_gen == ULONG_MAX) - ++next_gen; - WRITE_ONCE(base_crng.generation, next_gen); - WRITE_ONCE(base_crng.birth, jiffies); - if (crng_init < 2) { - crng_init = 2; - finalize_init = true; - } - spin_unlock_irqrestore(&base_crng.lock, flags); - memzero_explicit(key, sizeof(key)); - if (finalize_init) { - process_random_ready_list(); - wake_up_interruptible(&crng_init_wait); - kill_fasync(&fasync, SIGIO, POLL_IN); - pr_notice("crng init done\n"); - if (unseeded_warning.missed) { - pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", - unseeded_warning.missed); - unseeded_warning.missed = 0; - } - if (urandom_warning.missed) { - pr_notice("%d urandom warning(s) missed due to ratelimiting\n", - urandom_warning.missed); - urandom_warning.missed = 0; - } - } -} - -/* - * The general form here is based on a "fast key erasure RNG" from - * . It generates a ChaCha - * block using the provided key, and then immediately overwites that - * key with half the block. 
It returns the resultant ChaCha state to the - * user, along with the second half of the block containing 32 bytes of - * random data that may be used; random_data_len may not be greater than - * 32. - */ -static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], - u32 chacha_state[CHACHA_STATE_WORDS], - u8 *random_data, size_t random_data_len) -{ - u8 first_block[CHACHA_BLOCK_SIZE]; - - BUG_ON(random_data_len > 32); - - chacha_init_consts(chacha_state); - memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE); - memset(&chacha_state[12], 0, sizeof(u32) * 4); - chacha20_block(chacha_state, first_block); - - memcpy(key, first_block, CHACHA_KEY_SIZE); - memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len); - memzero_explicit(first_block, sizeof(first_block)); -} - -/* - * This function returns a ChaCha state that you may use for generating - * random data. It also returns up to 32 bytes on its own of random data - * that may be used; random_data_len may not be greater than 32. - */ -static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], - u8 *random_data, size_t random_data_len) -{ - unsigned long flags; - struct crng *crng; - - BUG_ON(random_data_len > 32); - - /* - * For the fast path, we check whether we're ready, unlocked first, and - * then re-check once locked later. In the case where we're really not - * ready, we do fast key erasure with the base_crng directly, because - * this is what crng_{fast,slow}_load mutate during early init. - */ - if (unlikely(!crng_ready())) { - bool ready; - - spin_lock_irqsave(&base_crng.lock, flags); - ready = crng_ready(); - if (!ready) - crng_fast_key_erasure(base_crng.key, chacha_state, - random_data, random_data_len); - spin_unlock_irqrestore(&base_crng.lock, flags); - if (!ready) - return; - } - - /* - * If the base_crng is more than 5 minutes old, we reseed, which - * in turn bumps the generation counter that we check below. - */ - if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL))) - crng_reseed(); - - local_lock_irqsave(&crngs.lock, flags); - crng = raw_cpu_ptr(&crngs); - - /* - * If our per-cpu crng is older than the base_crng, then it means - * somebody reseeded the base_crng. In that case, we do fast key - * erasure on the base_crng, and use its output as the new key - * for our per-cpu crng. This brings us up to date with base_crng. - */ - if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) { - spin_lock(&base_crng.lock); - crng_fast_key_erasure(base_crng.key, chacha_state, - crng->key, sizeof(crng->key)); - crng->generation = base_crng.generation; - spin_unlock(&base_crng.lock); - } - - /* - * Finally, when we've made it this far, our per-cpu crng has an up - * to date key, and we can do fast key erasure with it to produce - * some random data and a ChaCha state for the caller. All other - * branches of this function are "unlikely", so most of the time we - * should wind up here immediately. 
- */ - crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len); - local_unlock_irqrestore(&crngs.lock, flags); -} - -static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) -{ - bool large_request = nbytes > 256; - ssize_t ret = 0; - size_t len; - u32 chacha_state[CHACHA_STATE_WORDS]; - u8 output[CHACHA_BLOCK_SIZE]; - - if (!nbytes) - return 0; - - len = min_t(size_t, 32, nbytes); - crng_make_state(chacha_state, output, len); - - if (copy_to_user(buf, output, len)) - return -EFAULT; - nbytes -= len; - buf += len; - ret += len; - - while (nbytes) { - if (large_request && need_resched()) { - if (signal_pending(current)) - break; - schedule(); - } - - chacha20_block(chacha_state, output); - if (unlikely(chacha_state[12] == 0)) - ++chacha_state[13]; - - len = min_t(size_t, nbytes, CHACHA_BLOCK_SIZE); - if (copy_to_user(buf, output, len)) { - ret = -EFAULT; - break; - } - - nbytes -= len; - buf += len; - ret += len; - } - - memzero_explicit(chacha_state, sizeof(chacha_state)); - memzero_explicit(output, sizeof(output)); - return ret; -} - /********************************************************************* * * Entropy input management @@ -1044,57 +1273,6 @@ static bool drain_entropy(void *buf, size_t nbytes) return true; } -/* - * This function is the exported kernel interface. It returns some - * number of good random numbers, suitable for key generation, seeding - * TCP sequence numbers, etc. It does not rely on the hardware random - * number generator. For random bytes direct from the hardware RNG - * (when available), use get_random_bytes_arch(). In order to ensure - * that the randomness provided by this function is okay, the function - * wait_for_random_bytes() should be called and return 0 at least once - * at any point prior. - */ -static void _get_random_bytes(void *buf, size_t nbytes) -{ - u32 chacha_state[CHACHA_STATE_WORDS]; - u8 tmp[CHACHA_BLOCK_SIZE]; - size_t len; - - if (!nbytes) - return; - - len = min_t(size_t, 32, nbytes); - crng_make_state(chacha_state, buf, len); - nbytes -= len; - buf += len; - - while (nbytes) { - if (nbytes < CHACHA_BLOCK_SIZE) { - chacha20_block(chacha_state, tmp); - memcpy(buf, tmp, nbytes); - memzero_explicit(tmp, sizeof(tmp)); - break; - } - - chacha20_block(chacha_state, buf); - if (unlikely(chacha_state[12] == 0)) - ++chacha_state[13]; - nbytes -= CHACHA_BLOCK_SIZE; - buf += CHACHA_BLOCK_SIZE; - } - - memzero_explicit(chacha_state, sizeof(chacha_state)); -} - -void get_random_bytes(void *buf, size_t nbytes) -{ - static void *previous; - - warn_unseeded_randomness(&previous); - _get_random_bytes(buf, nbytes); -} -EXPORT_SYMBOL(get_random_bytes); - /* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. Even if the timer is running on another @@ -1144,33 +1322,6 @@ static void try_to_generate_entropy(void) mix_pool_bytes(&stack.now, sizeof(stack.now)); } -/* - * This function will use the architecture-specific hardware random - * number generator if it is available. It is not recommended for - * use. Use get_random_bytes() instead. It returns the number of - * bytes filled in. 
- */ -size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) -{ - size_t left = nbytes; - u8 *p = buf; - - while (left) { - unsigned long v; - size_t chunk = min_t(size_t, left, sizeof(unsigned long)); - - if (!arch_get_random_long(&v)) - break; - - memcpy(p, &v, chunk); - p += chunk; - left -= chunk; - } - - return nbytes - left; -} -EXPORT_SYMBOL(get_random_bytes_arch); - static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); static int __init parse_trust_cpu(char *arg) { @@ -1523,129 +1674,6 @@ struct ctl_table random_table[] = { }; #endif /* CONFIG_SYSCTL */ -struct batched_entropy { - union { - /* - * We make this 1.5x a ChaCha block, so that we get the - * remaining 32 bytes from fast key erasure, plus one full - * block from the detached ChaCha state. We can increase - * the size of this later if needed so long as we keep the - * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. - */ - u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))]; - u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; - }; - local_lock_t lock; - unsigned long generation; - unsigned int position; -}; - -/* - * Get a random word for internal kernel use only. The quality of the random - * number is good as /dev/urandom. In order to ensure that the randomness - * provided by this function is okay, the function wait_for_random_bytes() - * should be called and return 0 at least once at any point prior. - */ -static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = { - .lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock), - .position = UINT_MAX -}; - -u64 get_random_u64(void) -{ - u64 ret; - unsigned long flags; - struct batched_entropy *batch; - static void *previous; - unsigned long next_gen; - - warn_unseeded_randomness(&previous); - - local_lock_irqsave(&batched_entropy_u64.lock, flags); - batch = raw_cpu_ptr(&batched_entropy_u64); - - next_gen = READ_ONCE(base_crng.generation); - if (batch->position >= ARRAY_SIZE(batch->entropy_u64) || - next_gen != batch->generation) { - _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); - batch->position = 0; - batch->generation = next_gen; - } - - ret = batch->entropy_u64[batch->position]; - batch->entropy_u64[batch->position] = 0; - ++batch->position; - local_unlock_irqrestore(&batched_entropy_u64.lock, flags); - return ret; -} -EXPORT_SYMBOL(get_random_u64); - -static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = { - .lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock), - .position = UINT_MAX -}; - -u32 get_random_u32(void) -{ - u32 ret; - unsigned long flags; - struct batched_entropy *batch; - static void *previous; - unsigned long next_gen; - - warn_unseeded_randomness(&previous); - - local_lock_irqsave(&batched_entropy_u32.lock, flags); - batch = raw_cpu_ptr(&batched_entropy_u32); - - next_gen = READ_ONCE(base_crng.generation); - if (batch->position >= ARRAY_SIZE(batch->entropy_u32) || - next_gen != batch->generation) { - _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); - batch->position = 0; - batch->generation = next_gen; - } - - ret = batch->entropy_u32[batch->position]; - batch->entropy_u32[batch->position] = 0; - ++batch->position; - local_unlock_irqrestore(&batched_entropy_u32.lock, flags); - return ret; -} -EXPORT_SYMBOL(get_random_u32); - -/** - * randomize_page - Generate a random, page aligned address - * @start: The smallest acceptable address the caller will take. 
- * @range: The size of the area, starting at @start, within which the - * random address must fall. - * - * If @start + @range would overflow, @range is capped. - * - * NOTE: Historical use of randomize_range, which this replaces, presumed that - * @start was already page aligned. We now align it regardless. - * - * Return: A page aligned address within [start, start + range). On error, - * @start is returned. - */ -unsigned long randomize_page(unsigned long start, unsigned long range) -{ - if (!PAGE_ALIGNED(start)) { - range -= PAGE_ALIGN(start) - start; - start = PAGE_ALIGN(start); - } - - if (start > ULONG_MAX - range) - range = ULONG_MAX - start; - - range >>= PAGE_SHIFT; - - if (range == 0) - return start; - - return start + (get_random_long() % range << PAGE_SHIFT); -} - /* Interface for in-kernel drivers of true hardware RNGs. * Those devices may produce endless random bits and will be throttled * when our pool is full. From e9ff357860ab9630926bde5c479d43f7f4fb31dd Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:53:34 +0100 Subject: [PATCH 0183/1842] random: group entropy extraction functions commit a5ed7cb1a7732ef11959332d507889fbc39ebbb4 upstream. This pulls all of the entropy extraction-focused functions into the third labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 218 +++++++++++++++++++++--------------------- 1 file changed, 110 insertions(+), 108 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 202252596ccb..170f16ac7f75 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -895,23 +895,36 @@ size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) } EXPORT_SYMBOL(get_random_bytes_arch); + +/********************************************************************** + * + * Entropy accumulation and extraction routines. + * + * Callers may add entropy via: + * + * static void mix_pool_bytes(const void *in, size_t nbytes) + * + * After which, if added entropy should be credited: + * + * static void credit_entropy_bits(size_t nbits) + * + * Finally, extract entropy via these two, with the latter one + * setting the entropy count to zero and extracting only if there + * is POOL_MIN_BITS entropy credited prior: + * + * static void extract_entropy(void *buf, size_t nbytes) + * static bool drain_entropy(void *buf, size_t nbytes) + * + **********************************************************************/ + enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */ }; -/* - * Static global variables - */ +/* For notifying userspace should write into /dev/random. */ static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); -/********************************************************************** - * - * OS independent entropy store. Here are the functions which handle - * storing entropy in an entropy pool. 
- * - **********************************************************************/ - static struct { struct blake2s_state hash; spinlock_t lock; @@ -924,21 +937,16 @@ static struct { .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), }; -static void extract_entropy(void *buf, size_t nbytes); -static bool drain_entropy(void *buf, size_t nbytes); - -static void crng_reseed(void); +static void _mix_pool_bytes(const void *in, size_t nbytes) +{ + blake2s_update(&input_pool.hash, in, nbytes); +} /* * This function adds bytes into the entropy "pool". It does not * update the entropy estimate. The caller should call * credit_entropy_bits if this is appropriate. */ -static void _mix_pool_bytes(const void *in, size_t nbytes) -{ - blake2s_update(&input_pool.hash, in, nbytes); -} - static void mix_pool_bytes(const void *in, size_t nbytes) { unsigned long flags; @@ -948,6 +956,89 @@ static void mix_pool_bytes(const void *in, size_t nbytes) spin_unlock_irqrestore(&input_pool.lock, flags); } +static void credit_entropy_bits(size_t nbits) +{ + unsigned int entropy_count, orig, add; + + if (!nbits) + return; + + add = min_t(size_t, nbits, POOL_BITS); + + do { + orig = READ_ONCE(input_pool.entropy_count); + entropy_count = min_t(unsigned int, POOL_BITS, orig + add); + } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); + + if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) + crng_reseed(); +} + +/* + * This is an HKDF-like construction for using the hashed collected entropy + * as a PRF key, that's then expanded block-by-block. + */ +static void extract_entropy(void *buf, size_t nbytes) +{ + unsigned long flags; + u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; + struct { + unsigned long rdseed[32 / sizeof(long)]; + size_t counter; + } block; + size_t i; + + for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) { + if (!arch_get_random_seed_long(&block.rdseed[i]) && + !arch_get_random_long(&block.rdseed[i])) + block.rdseed[i] = random_get_entropy(); + } + + spin_lock_irqsave(&input_pool.lock, flags); + + /* seed = HASHPRF(last_key, entropy_input) */ + blake2s_final(&input_pool.hash, seed); + + /* next_key = HASHPRF(seed, RDSEED || 0) */ + block.counter = 0; + blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed)); + blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key)); + + spin_unlock_irqrestore(&input_pool.lock, flags); + memzero_explicit(next_key, sizeof(next_key)); + + while (nbytes) { + i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE); + /* output = HASHPRF(seed, RDSEED || ++counter) */ + ++block.counter; + blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed)); + nbytes -= i; + buf += i; + } + + memzero_explicit(seed, sizeof(seed)); + memzero_explicit(&block, sizeof(block)); +} + +/* + * First we make sure we have POOL_MIN_BITS of entropy in the pool, and then we + * set the entropy count to zero (but don't actually touch any data). Only then + * can we extract a new key with extract_entropy(). 
+ */ +static bool drain_entropy(void *buf, size_t nbytes) +{ + unsigned int entropy_count; + do { + entropy_count = READ_ONCE(input_pool.entropy_count); + if (entropy_count < POOL_MIN_BITS) + return false; + } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); + extract_entropy(buf, nbytes); + wake_up_interruptible(&random_write_wait); + kill_fasync(&fasync, SIGIO, POLL_OUT); + return true; +} + struct fast_pool { union { u32 pool32[4]; @@ -988,24 +1079,6 @@ static void fast_mix(u32 pool[4]) pool[2] = c; pool[3] = d; } -static void credit_entropy_bits(size_t nbits) -{ - unsigned int entropy_count, orig, add; - - if (!nbits) - return; - - add = min_t(size_t, nbits, POOL_BITS); - - do { - orig = READ_ONCE(input_pool.entropy_count); - entropy_count = min_t(unsigned int, POOL_BITS, orig + add); - } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); - - if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) - crng_reseed(); -} - /********************************************************************* * * Entropy input management @@ -1202,77 +1275,6 @@ void add_disk_randomness(struct gendisk *disk) EXPORT_SYMBOL_GPL(add_disk_randomness); #endif -/********************************************************************* - * - * Entropy extraction routines - * - *********************************************************************/ - -/* - * This is an HKDF-like construction for using the hashed collected entropy - * as a PRF key, that's then expanded block-by-block. - */ -static void extract_entropy(void *buf, size_t nbytes) -{ - unsigned long flags; - u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; - struct { - unsigned long rdseed[32 / sizeof(long)]; - size_t counter; - } block; - size_t i; - - for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) { - if (!arch_get_random_seed_long(&block.rdseed[i]) && - !arch_get_random_long(&block.rdseed[i])) - block.rdseed[i] = random_get_entropy(); - } - - spin_lock_irqsave(&input_pool.lock, flags); - - /* seed = HASHPRF(last_key, entropy_input) */ - blake2s_final(&input_pool.hash, seed); - - /* next_key = HASHPRF(seed, RDSEED || 0) */ - block.counter = 0; - blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed)); - blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key)); - - spin_unlock_irqrestore(&input_pool.lock, flags); - memzero_explicit(next_key, sizeof(next_key)); - - while (nbytes) { - i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE); - /* output = HASHPRF(seed, RDSEED || ++counter) */ - ++block.counter; - blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed)); - nbytes -= i; - buf += i; - } - - memzero_explicit(seed, sizeof(seed)); - memzero_explicit(&block, sizeof(block)); -} - -/* - * First we make sure we have POOL_MIN_BITS of entropy in the pool, and then we - * set the entropy count to zero (but don't actually touch any data). Only then - * can we extract a new key with extract_entropy(). - */ -static bool drain_entropy(void *buf, size_t nbytes) -{ - unsigned int entropy_count; - do { - entropy_count = READ_ONCE(input_pool.entropy_count); - if (entropy_count < POOL_MIN_BITS) - return false; - } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); - extract_entropy(buf, nbytes); - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - return true; -} - /* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. 
Even if the timer is running on another From f04580811d26dbfa28a5af220077721e587fa779 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:53:34 +0100 Subject: [PATCH 0184/1842] random: group entropy collection functions commit 92c653cf14400946f376a29b828d6af7e01f38dd upstream. This pulls all of the entropy collection-focused functions into the fourth labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 370 +++++++++++++++++++++++------------------- 1 file changed, 206 insertions(+), 164 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 170f16ac7f75..e9857a4b76ec 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1039,60 +1039,112 @@ static bool drain_entropy(void *buf, size_t nbytes) return true; } -struct fast_pool { - union { - u32 pool32[4]; - u64 pool64[2]; - }; - unsigned long last; - u16 reg_idx; - u8 count; -}; + +/********************************************************************** + * + * Entropy collection routines. + * + * The following exported functions are used for pushing entropy into + * the above entropy accumulation routines: + * + * void add_device_randomness(const void *buf, size_t size); + * void add_input_randomness(unsigned int type, unsigned int code, + * unsigned int value); + * void add_disk_randomness(struct gendisk *disk); + * void add_hwgenerator_randomness(const void *buffer, size_t count, + * size_t entropy); + * void add_bootloader_randomness(const void *buf, size_t size); + * void add_interrupt_randomness(int irq); + * + * add_device_randomness() adds data to the input pool that + * is likely to differ between two devices (or possibly even per boot). + * This would be things like MAC addresses or serial numbers, or the + * read-out of the RTC. This does *not* credit any actual entropy to + * the pool, but it initializes the pool to different values for devices + * that might otherwise be identical and have very little entropy + * available to them (particularly common in the embedded world). + * + * add_input_randomness() uses the input layer interrupt timing, as well + * as the event type information from the hardware. + * + * add_disk_randomness() uses what amounts to the seek time of block + * layer request events, on a per-disk_devt basis, as input to the + * entropy pool. Note that high-speed solid state drives with very low + * seek times do not make for good sources of entropy, as their seek + * times are usually fairly consistent. + * + * The above two routines try to estimate how many bits of entropy + * to credit. They do this by keeping track of the first and second + * order deltas of the event timings. + * + * add_hwgenerator_randomness() is for true hardware RNGs, and will credit + * entropy as specified by the caller. If the entropy pool is full it will + * block until more entropy is needed. + * + * add_bootloader_randomness() is the same as add_hwgenerator_randomness() or + * add_device_randomness(), depending on whether or not the configuration + * option CONFIG_RANDOM_TRUST_BOOTLOADER is set. + * + * add_interrupt_randomness() uses the interrupt timing as random + * inputs to the entropy pool. Using the cycle counters and the irq source + * as inputs, it feeds the input pool roughly once a second or after 64 + * interrupts, crediting 1 bit of entropy for whichever comes first. 
+ * + **********************************************************************/ + +static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); +static int __init parse_trust_cpu(char *arg) +{ + return kstrtobool(arg, &trust_cpu); +} +early_param("random.trust_cpu", parse_trust_cpu); /* - * This is a fast mixing routine used by the interrupt randomness - * collector. It's hardcoded for an 128 bit pool and assumes that any - * locks that might be needed are taken by the caller. + * The first collection of entropy occurs at system boot while interrupts + * are still turned off. Here we push in RDSEED, a timestamp, and utsname(). + * Depending on the above configuration knob, RDSEED may be considered + * sufficient for initialization. Note that much earlier setup may already + * have pushed entropy into the input pool by the time we get here. */ -static void fast_mix(u32 pool[4]) +int __init rand_initialize(void) { - u32 a = pool[0], b = pool[1]; - u32 c = pool[2], d = pool[3]; + size_t i; + ktime_t now = ktime_get_real(); + bool arch_init = true; + unsigned long rv; - a += b; c += d; - b = rol32(b, 6); d = rol32(d, 27); - d ^= a; b ^= c; + for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { + if (!arch_get_random_seed_long_early(&rv) && + !arch_get_random_long_early(&rv)) { + rv = random_get_entropy(); + arch_init = false; + } + mix_pool_bytes(&rv, sizeof(rv)); + } + mix_pool_bytes(&now, sizeof(now)); + mix_pool_bytes(utsname(), sizeof(*(utsname()))); - a += b; c += d; - b = rol32(b, 16); d = rol32(d, 14); - d ^= a; b ^= c; + extract_entropy(base_crng.key, sizeof(base_crng.key)); + ++base_crng.generation; - a += b; c += d; - b = rol32(b, 6); d = rol32(d, 27); - d ^= a; b ^= c; + if (arch_init && trust_cpu && crng_init < 2) { + crng_init = 2; + pr_notice("crng init done (trusting CPU's manufacturer)\n"); + } - a += b; c += d; - b = rol32(b, 16); d = rol32(d, 14); - d ^= a; b ^= c; - - pool[0] = a; pool[1] = b; - pool[2] = c; pool[3] = d; + if (ratelimit_disable) { + urandom_warning.interval = 0; + unseeded_warning.interval = 0; + } + return 0; } -/********************************************************************* - * - * Entropy input management - * - *********************************************************************/ - /* There is one of these per entropy source */ struct timer_rand_state { cycles_t last_time; long last_delta, last_delta2; }; -#define INIT_TIMER_RAND_STATE { INITIAL_JIFFIES, }; - /* * Add device- or boot-specific data to the input pool to help * initialize it. @@ -1116,8 +1168,6 @@ void add_device_randomness(const void *buf, size_t size) } EXPORT_SYMBOL(add_device_randomness); -static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE; - /* * This function adds entropy to the entropy "pool" by using timing * delays. It uses the timer_rand_state structure to make an estimate @@ -1179,8 +1229,9 @@ void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) { static unsigned char last_value; + static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES }; - /* ignore autorepeat and the like */ + /* Ignore autorepeat and the like. */ if (value == last_value) return; @@ -1190,6 +1241,119 @@ void add_input_randomness(unsigned int type, unsigned int code, } EXPORT_SYMBOL_GPL(add_input_randomness); +#ifdef CONFIG_BLOCK +void add_disk_randomness(struct gendisk *disk) +{ + if (!disk || !disk->random) + return; + /* First major is 1, so we get >= 0x200 here. 
*/ + add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); +} +EXPORT_SYMBOL_GPL(add_disk_randomness); + +void rand_initialize_disk(struct gendisk *disk) +{ + struct timer_rand_state *state; + + /* + * If kzalloc returns null, we just won't use that entropy + * source. + */ + state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + state->last_time = INITIAL_JIFFIES; + disk->random = state; + } +} +#endif + +/* + * Interface for in-kernel drivers of true hardware RNGs. + * Those devices may produce endless random bits and will be throttled + * when our pool is full. + */ +void add_hwgenerator_randomness(const void *buffer, size_t count, + size_t entropy) +{ + if (unlikely(crng_init == 0)) { + size_t ret = crng_fast_load(buffer, count); + mix_pool_bytes(buffer, ret); + count -= ret; + buffer += ret; + if (!count || crng_init == 0) + return; + } + + /* + * Throttle writing if we're above the trickle threshold. + * We'll be woken up again once below POOL_MIN_BITS, when + * the calling thread is about to terminate, or once + * CRNG_RESEED_INTERVAL has elapsed. + */ + wait_event_interruptible_timeout(random_write_wait, + !system_wq || kthread_should_stop() || + input_pool.entropy_count < POOL_MIN_BITS, + CRNG_RESEED_INTERVAL); + mix_pool_bytes(buffer, count); + credit_entropy_bits(entropy); +} +EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); + +/* + * Handle random seed passed by bootloader. + * If the seed is trustworthy, it would be regarded as hardware RNGs. Otherwise + * it would be regarded as device data. + * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER. + */ +void add_bootloader_randomness(const void *buf, size_t size) +{ + if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER)) + add_hwgenerator_randomness(buf, size, size * 8); + else + add_device_randomness(buf, size); +} +EXPORT_SYMBOL_GPL(add_bootloader_randomness); + +struct fast_pool { + union { + u32 pool32[4]; + u64 pool64[2]; + }; + unsigned long last; + u16 reg_idx; + u8 count; +}; + +/* + * This is a fast mixing routine used by the interrupt randomness + * collector. It's hardcoded for an 128 bit pool and assumes that any + * locks that might be needed are taken by the caller. + */ +static void fast_mix(u32 pool[4]) +{ + u32 a = pool[0], b = pool[1]; + u32 c = pool[2], d = pool[3]; + + a += b; c += d; + b = rol32(b, 6); d = rol32(d, 27); + d ^= a; b ^= c; + + a += b; c += d; + b = rol32(b, 16); d = rol32(d, 14); + d ^= a; b ^= c; + + a += b; c += d; + b = rol32(b, 6); d = rol32(d, 27); + d ^= a; b ^= c; + + a += b; c += d; + b = rol32(b, 16); d = rol32(d, 14); + d ^= a; b ^= c; + + pool[0] = a; pool[1] = b; + pool[2] = c; pool[3] = d; +} + static DEFINE_PER_CPU(struct fast_pool, irq_randomness); static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) @@ -1259,22 +1423,11 @@ void add_interrupt_randomness(int irq) fast_pool->count = 0; - /* award one bit for the contents of the fast pool */ + /* Award one bit for the contents of the fast pool. */ credit_entropy_bits(1); } EXPORT_SYMBOL_GPL(add_interrupt_randomness); -#ifdef CONFIG_BLOCK -void add_disk_randomness(struct gendisk *disk) -{ - if (!disk || !disk->random) - return; - /* first major is 1, so we get >= 0x200 here */ - add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); -} -EXPORT_SYMBOL_GPL(add_disk_randomness); -#endif - /* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. 
Even if the timer is running on another @@ -1324,73 +1477,6 @@ static void try_to_generate_entropy(void) mix_pool_bytes(&stack.now, sizeof(stack.now)); } -static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); -static int __init parse_trust_cpu(char *arg) -{ - return kstrtobool(arg, &trust_cpu); -} -early_param("random.trust_cpu", parse_trust_cpu); - -/* - * Note that setup_arch() may call add_device_randomness() - * long before we get here. This allows seeding of the pools - * with some platform dependent data very early in the boot - * process. But it limits our options here. We must use - * statically allocated structures that already have all - * initializations complete at compile time. We should also - * take care not to overwrite the precious per platform data - * we were given. - */ -int __init rand_initialize(void) -{ - size_t i; - ktime_t now = ktime_get_real(); - bool arch_init = true; - unsigned long rv; - - for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { - if (!arch_get_random_seed_long_early(&rv) && - !arch_get_random_long_early(&rv)) { - rv = random_get_entropy(); - arch_init = false; - } - mix_pool_bytes(&rv, sizeof(rv)); - } - mix_pool_bytes(&now, sizeof(now)); - mix_pool_bytes(utsname(), sizeof(*(utsname()))); - - extract_entropy(base_crng.key, sizeof(base_crng.key)); - ++base_crng.generation; - - if (arch_init && trust_cpu && crng_init < 2) { - crng_init = 2; - pr_notice("crng init done (trusting CPU's manufacturer)\n"); - } - - if (ratelimit_disable) { - urandom_warning.interval = 0; - unseeded_warning.interval = 0; - } - return 0; -} - -#ifdef CONFIG_BLOCK -void rand_initialize_disk(struct gendisk *disk) -{ - struct timer_rand_state *state; - - /* - * If kzalloc returns null, we just won't use that entropy - * source. - */ - state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); - if (state) { - state->last_time = INITIAL_JIFFIES; - disk->random = state; - } -} -#endif - static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos) { @@ -1675,47 +1761,3 @@ struct ctl_table random_table[] = { { } }; #endif /* CONFIG_SYSCTL */ - -/* Interface for in-kernel drivers of true hardware RNGs. - * Those devices may produce endless random bits and will be throttled - * when our pool is full. - */ -void add_hwgenerator_randomness(const void *buffer, size_t count, - size_t entropy) -{ - if (unlikely(crng_init == 0)) { - size_t ret = crng_fast_load(buffer, count); - mix_pool_bytes(buffer, ret); - count -= ret; - buffer += ret; - if (!count || crng_init == 0) - return; - } - - /* Throttle writing if we're above the trickle threshold. - * We'll be woken up again once below POOL_MIN_BITS, when - * the calling thread is about to terminate, or once - * CRNG_RESEED_INTERVAL has elapsed. - */ - wait_event_interruptible_timeout(random_write_wait, - !system_wq || kthread_should_stop() || - input_pool.entropy_count < POOL_MIN_BITS, - CRNG_RESEED_INTERVAL); - mix_pool_bytes(buffer, count); - credit_entropy_bits(entropy); -} -EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); - -/* Handle random seed passed by bootloader. - * If the seed is trustworthy, it would be regarded as hardware RNGs. Otherwise - * it would be regarded as device data. - * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER. 
- */ -void add_bootloader_randomness(const void *buf, size_t size) -{ - if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER)) - add_hwgenerator_randomness(buf, size, size * 8); - else - add_device_randomness(buf, size); -} -EXPORT_SYMBOL_GPL(add_bootloader_randomness); From 21ae543e3afb807d6cf70f51b5795f6c53054779 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:53:34 +0100 Subject: [PATCH 0185/1842] random: group userspace read/write functions commit a6adf8e7a605250b911e94793fd077933709ff9e upstream. This pulls all of the userspace read/write-focused functions into the fifth labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 131 ++++++++++++++++++++++++++---------------- 1 file changed, 80 insertions(+), 51 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index e9857a4b76ec..7d005f7ebcf5 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1477,30 +1477,61 @@ static void try_to_generate_entropy(void) mix_pool_bytes(&stack.now, sizeof(stack.now)); } -static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, - loff_t *ppos) -{ - static int maxwarn = 10; - if (!crng_ready() && maxwarn > 0) { - maxwarn--; - if (__ratelimit(&urandom_warning)) - pr_notice("%s: uninitialized urandom read (%zd bytes read)\n", - current->comm, nbytes); +/********************************************************************** + * + * Userspace reader/writer interfaces. + * + * getrandom(2) is the primary modern interface into the RNG and should + * be used in preference to anything else. + * + * Reading from /dev/random has the same functionality as calling + * getrandom(2) with flags=0. In earlier versions, however, it had + * vastly different semantics and should therefore be avoided, to + * prevent backwards compatibility issues. + * + * Reading from /dev/urandom has the same functionality as calling + * getrandom(2) with flags=GRND_INSECURE. Because it does not block + * waiting for the RNG to be ready, it should not be used. + * + * Writing to either /dev/random or /dev/urandom adds entropy to + * the input pool but does not credit it. + * + * Polling on /dev/random indicates when the RNG is initialized, on + * the read side, and when it wants new entropy, on the write side. + * + * Both /dev/random and /dev/urandom have the same set of ioctls for + * adding entropy, getting the entropy count, zeroing the count, and + * reseeding the crng. + * + **********************************************************************/ + +SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, + flags) +{ + if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE)) + return -EINVAL; + + /* + * Requesting insecure and blocking randomness at the same time makes + * no sense. 
+ */ + if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM)) + return -EINVAL; + + if (count > INT_MAX) + count = INT_MAX; + + if (!(flags & GRND_INSECURE) && !crng_ready()) { + int ret; + + if (flags & GRND_NONBLOCK) + return -EAGAIN; + ret = wait_for_random_bytes(); + if (unlikely(ret)) + return ret; } - - return get_random_bytes_user(buf, nbytes); -} - -static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes, - loff_t *ppos) -{ - int ret; - - ret = wait_for_random_bytes(); - if (ret != 0) - return ret; - return get_random_bytes_user(buf, nbytes); + return get_random_bytes_user(buf, count); } static __poll_t random_poll(struct file *file, poll_table *wait) @@ -1552,6 +1583,32 @@ static ssize_t random_write(struct file *file, const char __user *buffer, return (ssize_t)count; } +static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, + loff_t *ppos) +{ + static int maxwarn = 10; + + if (!crng_ready() && maxwarn > 0) { + maxwarn--; + if (__ratelimit(&urandom_warning)) + pr_notice("%s: uninitialized urandom read (%zd bytes read)\n", + current->comm, nbytes); + } + + return get_random_bytes_user(buf, nbytes); +} + +static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes, + loff_t *ppos) +{ + int ret; + + ret = wait_for_random_bytes(); + if (ret != 0) + return ret; + return get_random_bytes_user(buf, nbytes); +} + static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) { int size, ent_count; @@ -1560,7 +1617,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) switch (cmd) { case RNDGETENTCNT: - /* inherently racy, no point locking */ + /* Inherently racy, no point locking. */ if (put_user(input_pool.entropy_count, p)) return -EFAULT; return 0; @@ -1636,34 +1693,6 @@ const struct file_operations urandom_fops = { .llseek = noop_llseek, }; -SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, - flags) -{ - if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE)) - return -EINVAL; - - /* - * Requesting insecure and blocking randomness at the same time makes - * no sense. - */ - if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM)) - return -EINVAL; - - if (count > INT_MAX) - count = INT_MAX; - - if (!(flags & GRND_INSECURE) && !crng_ready()) { - int ret; - - if (flags & GRND_NONBLOCK) - return -EAGAIN; - ret = wait_for_random_bytes(); - if (unlikely(ret)) - return ret; - } - return get_random_bytes_user(buf, count); -} - /******************************************************************** * * Sysctl interface From 6d1671b6d2531f06dd3aecad03c8eff0cbf83bff Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:53:34 +0100 Subject: [PATCH 0186/1842] random: group sysctl functions commit 0deff3c43206c24e746b1410f11125707ad3040e upstream. This pulls all of the sysctl-focused functions into the sixth labeled section. No functional changes. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 35 ++++++++++++++++++++++++++++++----- 1 file changed, 30 insertions(+), 5 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 7d005f7ebcf5..9a23983f3dac 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1693,9 +1693,34 @@ const struct file_operations urandom_fops = { .llseek = noop_llseek, }; + /******************************************************************** * - * Sysctl interface + * Sysctl interface. + * + * These are partly unused legacy knobs with dummy values to not break + * userspace and partly still useful things. They are usually accessible + * in /proc/sys/kernel/random/ and are as follows: + * + * - boot_id - a UUID representing the current boot. + * + * - uuid - a random UUID, different each time the file is read. + * + * - poolsize - the number of bits of entropy that the input pool can + * hold, tied to the POOL_BITS constant. + * + * - entropy_avail - the number of bits of entropy currently in the + * input pool. Always <= poolsize. + * + * - write_wakeup_threshold - the amount of entropy in the input pool + * below which write polls to /dev/random will unblock, requesting + * more entropy, tied to the POOL_MIN_BITS constant. It is writable + * to avoid breaking old userspaces, but writing to it does not + * change any behavior of the RNG. + * + * - urandom_min_reseed_secs - fixed to the meaningless value "60". + * It is writable to avoid breaking old userspaces, but writing + * to it does not change any behavior of the RNG. * ********************************************************************/ @@ -1703,8 +1728,8 @@ const struct file_operations urandom_fops = { #include -static int random_min_urandom_seed = 60; -static int random_write_wakeup_bits = POOL_MIN_BITS; +static int sysctl_random_min_urandom_seed = 60; +static int sysctl_random_write_wakeup_bits = POOL_MIN_BITS; static int sysctl_poolsize = POOL_BITS; static char sysctl_bootid[16]; @@ -1762,14 +1787,14 @@ struct ctl_table random_table[] = { }, { .procname = "write_wakeup_threshold", - .data = &random_write_wakeup_bits, + .data = &sysctl_random_write_wakeup_bits, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "urandom_min_reseed_secs", - .data = &random_min_urandom_seed, + .data = &sysctl_random_min_urandom_seed, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, From 7873321cd88ffa2deafd7a156da29949e5a969c7 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 12:29:33 +0100 Subject: [PATCH 0187/1842] random: rewrite header introductory comment commit 5f75d9f3babea8ae0a2d06724656874f41d317f5 upstream. Now that we've re-documented the various sections, we can remove the outdated text here and replace it with a high-level overview. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 177 +++++------------------------------------- 1 file changed, 18 insertions(+), 159 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 9a23983f3dac..2b462d9a1563 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -2,168 +2,27 @@ /* * Copyright (C) 2017-2022 Jason A. Donenfeld . All Rights Reserved. * Copyright Matt Mackall , 2003, 2004, 2005 - * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All - * rights reserved. 
- */ - -/* - * Exported interfaces ---- output - * =============================== + * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All rights reserved. * - * There are four exported interfaces; two for use within the kernel, - * and two for use from userspace. + * This driver produces cryptographically secure pseudorandom data. It is divided + * into roughly six sections, each with a section header: * - * Exported interfaces ---- userspace output - * ----------------------------------------- + * - Initialization and readiness waiting. + * - Fast key erasure RNG, the "crng". + * - Entropy accumulation and extraction routines. + * - Entropy collection routines. + * - Userspace reader/writer interfaces. + * - Sysctl interface. * - * The userspace interfaces are two character devices /dev/random and - * /dev/urandom. /dev/random is suitable for use when very high - * quality randomness is desired (for example, for key generation or - * one-time pads), as it will only return a maximum of the number of - * bits of randomness (as estimated by the random number generator) - * contained in the entropy pool. - * - * The /dev/urandom device does not have this limit, and will return - * as many bytes as are requested. As more and more random bytes are - * requested without giving time for the entropy pool to recharge, - * this will result in random numbers that are merely cryptographically - * strong. For many applications, however, this is acceptable. - * - * Exported interfaces ---- kernel output - * -------------------------------------- - * - * The primary kernel interfaces are: - * - * void get_random_bytes(void *buf, size_t nbytes); - * u32 get_random_u32() - * u64 get_random_u64() - * unsigned int get_random_int() - * unsigned long get_random_long() - * - * These interfaces will return the requested number of random bytes - * into the given buffer or as a return value. This is equivalent to a - * read from /dev/urandom. The get_random_{u32,u64,int,long}() family - * of functions may be higher performance for one-off random integers, - * because they do a bit of buffering. - * - * prandom_u32() - * ------------- - * - * For even weaker applications, see the pseudorandom generator - * prandom_u32(), prandom_max(), and prandom_bytes(). If the random - * numbers aren't security-critical at all, these are *far* cheaper. - * Useful for self-tests, random error simulation, randomized backoffs, - * and any other application where you trust that nobody is trying to - * maliciously mess with you by guessing the "random" numbers. - * - * Exported interfaces ---- input - * ============================== - * - * The current exported interfaces for gathering environmental noise - * from the devices are: - * - * void add_device_randomness(const void *buf, size_t size); - * void add_input_randomness(unsigned int type, unsigned int code, - * unsigned int value); - * void add_interrupt_randomness(int irq); - * void add_disk_randomness(struct gendisk *disk); - * void add_hwgenerator_randomness(const void *buffer, size_t count, - * size_t entropy); - * void add_bootloader_randomness(const void *buf, size_t size); - * - * add_device_randomness() is for adding data to the random pool that - * is likely to differ between two devices (or possibly even per boot). - * This would be things like MAC addresses or serial numbers, or the - * read-out of the RTC. 
This does *not* add any actual entropy to the - * pool, but it initializes the pool to different values for devices - * that might otherwise be identical and have very little entropy - * available to them (particularly common in the embedded world). - * - * add_input_randomness() uses the input layer interrupt timing, as well as - * the event type information from the hardware. - * - * add_interrupt_randomness() uses the interrupt timing as random - * inputs to the entropy pool. Using the cycle counters and the irq source - * as inputs, it feeds the randomness roughly once a second. - * - * add_disk_randomness() uses what amounts to the seek time of block - * layer request events, on a per-disk_devt basis, as input to the - * entropy pool. Note that high-speed solid state drives with very low - * seek times do not make for good sources of entropy, as their seek - * times are usually fairly consistent. - * - * All of these routines try to estimate how many bits of randomness a - * particular randomness source. They do this by keeping track of the - * first and second order deltas of the event timings. - * - * add_hwgenerator_randomness() is for true hardware RNGs, and will credit - * entropy as specified by the caller. If the entropy pool is full it will - * block until more entropy is needed. - * - * add_bootloader_randomness() is the same as add_hwgenerator_randomness() or - * add_device_randomness(), depending on whether or not the configuration - * option CONFIG_RANDOM_TRUST_BOOTLOADER is set. - * - * Ensuring unpredictability at system startup - * ============================================ - * - * When any operating system starts up, it will go through a sequence - * of actions that are fairly predictable by an adversary, especially - * if the start-up does not involve interaction with a human operator. - * This reduces the actual number of bits of unpredictability in the - * entropy pool below the value in entropy_count. In order to - * counteract this effect, it helps to carry information in the - * entropy pool across shut-downs and start-ups. To do this, put the - * following lines an appropriate script which is run during the boot - * sequence: - * - * echo "Initializing random number generator..." - * random_seed=/var/run/random-seed - * # Carry a random seed from start-up to start-up - * # Load and then save the whole entropy pool - * if [ -f $random_seed ]; then - * cat $random_seed >/dev/urandom - * else - * touch $random_seed - * fi - * chmod 600 $random_seed - * dd if=/dev/urandom of=$random_seed count=1 bs=512 - * - * and the following lines in an appropriate script which is run as - * the system is shutdown: - * - * # Carry a random seed from shut-down to start-up - * # Save the whole entropy pool - * echo "Saving random seed..." - * random_seed=/var/run/random-seed - * touch $random_seed - * chmod 600 $random_seed - * dd if=/dev/urandom of=$random_seed count=1 bs=512 - * - * For example, on most modern systems using the System V init - * scripts, such code fragments would be found in - * /etc/rc.d/init.d/random. On older Linux systems, the correct script - * location might be in /etc/rcb.d/rc.local or /etc/rc.d/rc.0. - * - * Effectively, these commands cause the contents of the entropy pool - * to be saved at shut-down time and reloaded into the entropy pool at - * start-up. (The 'dd' in the addition to the bootup script is to - * make sure that /etc/random-seed is different for every start-up, - * even if the system crashes without executing rc.0.) 
Even with - * complete knowledge of the start-up activities, predicting the state - * of the entropy pool requires knowledge of the previous history of - * the system. - * - * Configuring the /dev/random driver under Linux - * ============================================== - * - * The /dev/random driver under Linux uses minor numbers 8 and 9 of - * the /dev/mem major number (#1). So if your system does not have - * /dev/random and /dev/urandom created already, they can be created - * by using the commands: - * - * mknod /dev/random c 1 8 - * mknod /dev/urandom c 1 9 + * The high level overview is that there is one input pool, into which + * various pieces of data are hashed. Some of that data is then "credited" as + * having a certain number of bits of entropy. When enough bits of entropy are + * available, the hash is finalized and handed as a key to a stream cipher that + * expands it indefinitely for various consumers. This key is periodically + * refreshed as the various entropy collectors, described below, add data to the + * input pool and credit it. There is currently no Fortuna-like scheduler + * involved, which can lead to malicious entropy sources causing a premature + * reseed, and the entropy estimates are, at best, conservative guesses. */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt From 5d73e69a5dd41bbad0d6bdf2f00e3f0739124657 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 4 Feb 2022 16:15:46 +0100 Subject: [PATCH 0188/1842] random: defer fast pool mixing to worker MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 58340f8e952b613e0ead0bed58b97b05bf4743c5 upstream. On PREEMPT_RT, it's problematic to take spinlocks from hard irq handlers. We can fix this by deferring to a workqueue the dumping of the fast pool into the input pool. We accomplish this with some careful rules on fast_pool->count: - When it's incremented to >= 64, we schedule the work. - If the top bit is set, we never schedule the work, even if >= 64. - The worker is responsible for setting it back to 0 when it's done. There are two small issues around using workqueues for this purpose that we work around. The first issue is that mix_interrupt_randomness() might be migrated to another CPU during CPU hotplug. This issue is rectified by checking that it hasn't been migrated (after disabling irqs). If it has been migrated, then we set the count to zero, so that when the CPU comes online again, it can requeue the work. As part of this, we switch to using an atomic_t, so that the increment in the irq handler doesn't wipe out the zeroing if the CPU comes back online while this worker is running. The second issue is that, though relatively minor in effect, we probably want to make sure we get a consistent view of the pool onto the stack, in case it's interrupted by an irq while reading. To do this, we don't reenable irqs until after the copy. There are only 18 instructions between the cli and sti, so this is a pretty tiny window. Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Theodore Ts'o Cc: Jonathan Neuschäfer Acked-by: Sebastian Andrzej Siewior Reviewed-by: Sultan Alsawaf Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 63 +++++++++++++++++++++++++++++++++---------- 1 file changed, 49 insertions(+), 14 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 2b462d9a1563..8728de2f4450 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1178,9 +1178,10 @@ struct fast_pool { u32 pool32[4]; u64 pool64[2]; }; + struct work_struct mix; unsigned long last; + atomic_t count; u16 reg_idx; - u8 count; }; /* @@ -1230,12 +1231,49 @@ static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) return *ptr; } +static void mix_interrupt_randomness(struct work_struct *work) +{ + struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix); + u32 pool[4]; + + /* Check to see if we're running on the wrong CPU due to hotplug. */ + local_irq_disable(); + if (fast_pool != this_cpu_ptr(&irq_randomness)) { + local_irq_enable(); + /* + * If we are unlucky enough to have been moved to another CPU, + * during CPU hotplug while the CPU was shutdown then we set + * our count to zero atomically so that when the CPU comes + * back online, it can enqueue work again. The _release here + * pairs with the atomic_inc_return_acquire in + * add_interrupt_randomness(). + */ + atomic_set_release(&fast_pool->count, 0); + return; + } + + /* + * Copy the pool to the stack so that the mixer always has a + * consistent view, before we reenable irqs again. + */ + memcpy(pool, fast_pool->pool32, sizeof(pool)); + atomic_set(&fast_pool->count, 0); + fast_pool->last = jiffies; + local_irq_enable(); + + mix_pool_bytes(pool, sizeof(pool)); + credit_entropy_bits(1); + memzero_explicit(pool, sizeof(pool)); +} + void add_interrupt_randomness(int irq) { + enum { MIX_INFLIGHT = 1U << 31 }; struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); struct pt_regs *regs = get_irq_regs(); unsigned long now = jiffies; cycles_t cycles = random_get_entropy(); + unsigned int new_count; if (cycles == 0) cycles = get_reg(fast_pool, regs); @@ -1255,12 +1293,13 @@ void add_interrupt_randomness(int irq) } fast_mix(fast_pool->pool32); - ++fast_pool->count; + /* The _acquire here pairs with the atomic_set_release in mix_interrupt_randomness(). */ + new_count = (unsigned int)atomic_inc_return_acquire(&fast_pool->count); if (unlikely(crng_init == 0)) { - if (fast_pool->count >= 64 && + if (new_count >= 64 && crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) { - fast_pool->count = 0; + atomic_set(&fast_pool->count, 0); fast_pool->last = now; if (spin_trylock(&input_pool.lock)) { _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); @@ -1270,20 +1309,16 @@ void add_interrupt_randomness(int irq) return; } - if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ)) + if (new_count & MIX_INFLIGHT) return; - if (!spin_trylock(&input_pool.lock)) + if (new_count < 64 && !time_after(now, fast_pool->last + HZ)) return; - fast_pool->last = now; - _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); - spin_unlock(&input_pool.lock); - - fast_pool->count = 0; - - /* Award one bit for the contents of the fast pool. */ - credit_entropy_bits(1); + if (unlikely(!fast_pool->mix.func)) + INIT_WORK(&fast_pool->mix, mix_interrupt_randomness); + atomic_or(MIX_INFLIGHT, &fast_pool->count); + queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix); } EXPORT_SYMBOL_GPL(add_interrupt_randomness); From f656bd0011fd128d482fb75f45cd564578984473 Mon Sep 17 00:00:00 2001 From: "Jason A. 
Donenfeld" Date: Sat, 12 Feb 2022 01:26:17 +0100 Subject: [PATCH 0189/1842] random: do not take pool spinlock at boot commit afba0b80b977b2a8f16234f2acd982f82710ba33 upstream. Since rand_initialize() is run while interrupts are still off and nothing else is running, we don't need to repeatedly take and release the pool spinlock, especially in the RDSEED loop. Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 8728de2f4450..1d0249e017d0 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -978,10 +978,10 @@ int __init rand_initialize(void) rv = random_get_entropy(); arch_init = false; } - mix_pool_bytes(&rv, sizeof(rv)); + _mix_pool_bytes(&rv, sizeof(rv)); } - mix_pool_bytes(&now, sizeof(now)); - mix_pool_bytes(utsname(), sizeof(*(utsname()))); + _mix_pool_bytes(&now, sizeof(now)); + _mix_pool_bytes(utsname(), sizeof(*(utsname()))); extract_entropy(base_crng.key, sizeof(base_crng.key)); ++base_crng.generation; From 684e9fe92d440b7700672118675ae8a8ace228bb Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 12 Feb 2022 23:54:09 +0100 Subject: [PATCH 0190/1842] random: unify early init crng load accounting commit da792c6d5f59a76c10a310c5d4c93428fd18f996 upstream. crng_fast_load() and crng_slow_load() have different semantics: - crng_fast_load() xors and accounts with crng_init_cnt. - crng_slow_load() hashes and doesn't account. However add_hwgenerator_randomness() can afford to hash (it's called from a kthread), and it should account. Additionally, ones that can afford to hash don't need to take a trylock but can take a normal lock. So, we combine these into one function, crng_pre_init_inject(), which allows us to control these in a uniform way. This will make it simpler later to simplify this all down when the time comes for that. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 112 ++++++++++++++++++++++-------------------- 1 file changed, 58 insertions(+), 54 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 1d0249e017d0..fdfb0dec6ae6 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -386,7 +386,7 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], * For the fast path, we check whether we're ready, unlocked first, and * then re-check once locked later. In the case where we're really not * ready, we do fast key erasure with the base_crng directly, because - * this is what crng_{fast,slow}_load mutate during early init. + * this is what crng_pre_init_inject() mutates during early init. */ if (unlikely(!crng_ready())) { bool ready; @@ -437,72 +437,75 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], } /* - * This function is for crng_init == 0 only. + * This function is for crng_init == 0 only. It loads entropy directly + * into the crng's key, without going through the input pool. It is, + * generally speaking, not very safe, but we use this only at early + * boot time when it's better to have something there rather than + * nothing. * - * crng_fast_load() can be called by code in the interrupt service - * path. So we can't afford to dilly-dally. Returns the number of - * bytes processed from cp. + * There are two paths, a slow one and a fast one. 
The slow one + * hashes the input along with the current key. The fast one simply + * xors it in, and should only be used from interrupt context. + * + * If account is set, then the crng_init_cnt counter is incremented. + * This shouldn't be set by functions like add_device_randomness(), + * where we can't trust the buffer passed to it is guaranteed to be + * unpredictable (so it might not have any entropy at all). + * + * Returns the number of bytes processed from input, which is bounded + * by CRNG_INIT_CNT_THRESH if account is true. */ -static size_t crng_fast_load(const void *cp, size_t len) +static size_t crng_pre_init_inject(const void *input, size_t len, + bool fast, bool account) { static int crng_init_cnt = 0; unsigned long flags; - const u8 *src = (const u8 *)cp; - size_t ret = 0; - if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return 0; + if (fast) { + if (!spin_trylock_irqsave(&base_crng.lock, flags)) + return 0; + } else { + spin_lock_irqsave(&base_crng.lock, flags); + } + if (crng_init != 0) { spin_unlock_irqrestore(&base_crng.lock, flags); return 0; } - while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) { - base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^= *src; - src++; crng_init_cnt++; len--; ret++; + + if (account) + len = min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt); + + if (fast) { + const u8 *src = input; + size_t i; + + for (i = 0; i < len; ++i) + base_crng.key[(crng_init_cnt + i) % + sizeof(base_crng.key)] ^= src[i]; + } else { + struct blake2s_state hash; + + blake2s_init(&hash, sizeof(base_crng.key)); + blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); + blake2s_update(&hash, input, len); + blake2s_final(&hash, base_crng.key); } - if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { - ++base_crng.generation; - crng_init = 1; + + if (account) { + crng_init_cnt += len; + if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { + ++base_crng.generation; + crng_init = 1; + } } + spin_unlock_irqrestore(&base_crng.lock, flags); + if (crng_init == 1) pr_notice("fast init done\n"); - return ret; -} -/* - * This function is for crng_init == 0 only. - * - * crng_slow_load() is called by add_device_randomness, which has two - * attributes. (1) We can't trust the buffer passed to it is - * guaranteed to be unpredictable (so it might not have any entropy at - * all), and (2) it doesn't have the performance constraints of - * crng_fast_load(). - * - * So, we simply hash the contents in with the current key. Finally, - * we do *not* advance crng_init_cnt since buffer we may get may be - * something like a fixed DMI table (for example), which might very - * well be unique to the machine, but is otherwise unvarying. 
- */ -static void crng_slow_load(const void *cp, size_t len) -{ - unsigned long flags; - struct blake2s_state hash; - - blake2s_init(&hash, sizeof(base_crng.key)); - - if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return; - if (crng_init != 0) { - spin_unlock_irqrestore(&base_crng.lock, flags); - return; - } - - blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); - blake2s_update(&hash, cp, len); - blake2s_final(&hash, base_crng.key); - - spin_unlock_irqrestore(&base_crng.lock, flags); + return len; } static void _get_random_bytes(void *buf, size_t nbytes) @@ -1018,7 +1021,7 @@ void add_device_randomness(const void *buf, size_t size) unsigned long flags; if (!crng_ready() && size) - crng_slow_load(buf, size); + crng_pre_init_inject(buf, size, false, false); spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(buf, size); @@ -1135,7 +1138,7 @@ void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { if (unlikely(crng_init == 0)) { - size_t ret = crng_fast_load(buffer, count); + size_t ret = crng_pre_init_inject(buffer, count, false, true); mix_pool_bytes(buffer, ret); count -= ret; buffer += ret; @@ -1298,7 +1301,8 @@ void add_interrupt_randomness(int irq) if (unlikely(crng_init == 0)) { if (new_count >= 64 && - crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) { + crng_pre_init_inject(fast_pool->pool32, sizeof(fast_pool->pool32), + true, true) > 0) { atomic_set(&fast_pool->count, 0); fast_pool->last = now; if (spin_trylock(&input_pool.lock)) { From 32252548b50fca325d8964b945dbb787934ab861 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 12 Feb 2022 23:57:38 +0100 Subject: [PATCH 0191/1842] random: check for crng_init == 0 in add_device_randomness() commit 1daf2f387652bf3a7044aea042f5023b3f6b189b upstream. This has no real functional change, as crng_pre_init_inject() (and before that, crng_slow_init()) always checks for == 0, not >= 2. So correct the outer unlocked change to reflect that. Before this used crng_ready(), which was not correct. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index fdfb0dec6ae6..adf42f7d7413 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1020,7 +1020,7 @@ void add_device_randomness(const void *buf, size_t size) unsigned long time = random_get_entropy() ^ jiffies; unsigned long flags; - if (!crng_ready() && size) + if (crng_init == 0 && size) crng_pre_init_inject(buf, size, false, false); spin_lock_irqsave(&input_pool.lock, flags); From 6e1cb84cc6a09ae7bb471fa88f1815450e6fcadf Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 13 Feb 2022 16:17:01 +0100 Subject: [PATCH 0192/1842] random: pull add_hwgenerator_randomness() declaration into random.h commit b777c38239fec5a528e59f55b379e31b1a187524 upstream. add_hwgenerator_randomness() is a function implemented and documented inside of random.c. It is the way that hardware RNGs push data into it. Therefore, it should be declared in random.h. Otherwise sparse complains with: random.c:1137:6: warning: symbol 'add_hwgenerator_randomness' was not declared. Should it be static? The alternative would be to include hw_random.h into random.c, but that wouldn't really be good for anything except slowing down compile time. 
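For reference, a minimal sketch of how an in-kernel true hardware RNG driver feeds this interface. The driver-side read helper example_hwrng_read() below is hypothetical; only the add_hwgenerator_randomness() call itself reflects the declaration being moved into random.h, and its third argument is an entropy credit in bits (so a fully trusted source credits 8 bits per byte, as add_bootloader_randomness() does elsewhere in this series):

    #include <linux/types.h>
    #include <linux/random.h>

    /* Hypothetical helper: fills buf from the hardware RNG, returns bytes read. */
    static size_t example_hwrng_read(u8 *buf, size_t len);

    static void example_hwrng_feed(void)
    {
            u8 buf[32];
            size_t n = example_hwrng_read(buf, sizeof(buf));

            /* Mix the bytes into the input pool and credit n * 8 bits of entropy. */
            add_hwgenerator_randomness(buf, n, n * 8);
    }
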
Cc: Matt Mackall Cc: Theodore Ts'o Acked-by: Herbert Xu Reviewed-by: Eric Biggers Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/hw_random/core.c | 1 + include/linux/hw_random.h | 2 -- include/linux/random.h | 2 ++ 3 files changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c index 8c1c47dd9f46..5749998feaa4 100644 --- a/drivers/char/hw_random/core.c +++ b/drivers/char/hw_random/core.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include diff --git a/include/linux/hw_random.h b/include/linux/hw_random.h index 1a9fc38f8938..aa1d4da03538 100644 --- a/include/linux/hw_random.h +++ b/include/linux/hw_random.h @@ -60,7 +60,5 @@ extern int devm_hwrng_register(struct device *dev, struct hwrng *rng); /** Unregister a Hardware Random Number Generator driver. */ extern void hwrng_unregister(struct hwrng *rng); extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng); -/** Feed random bits into the pool. */ -extern void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy); #endif /* LINUX_HWRANDOM_H_ */ diff --git a/include/linux/random.h b/include/linux/random.h index 37e1e8c43d7e..d7354de9351e 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -32,6 +32,8 @@ static inline void add_latent_entropy(void) {} extern void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; extern void add_interrupt_randomness(int irq) __latent_entropy; +extern void add_hwgenerator_randomness(const void *buffer, size_t count, + size_t entropy); extern void get_random_bytes(void *buf, size_t nbytes); extern int wait_for_random_bytes(void); From 5064550d422dca59828eb4d29ef4ae00b965c20d Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 13 Feb 2022 22:48:04 +0100 Subject: [PATCH 0193/1842] random: clear fast pool, crng, and batches in cpuhp bring up commit 3191dd5a1179ef0fad5a050a1702ae98b6251e8f upstream. For the irq randomness fast pool, rather than having to use expensive atomics, which were visibly the most expensive thing in the entire irq handler, simply take care of the extreme edge case of resetting count to zero in the cpuhp online handler, just after workqueues have been reenabled. This simplifies the code a bit and lets us use vanilla variables rather than atomics, and performance should be improved. As well, very early on when the CPU comes up, while interrupts are still disabled, we clear out the per-cpu crng and its batches, so that it always starts with fresh randomness. Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Theodore Ts'o Cc: Sultan Alsawaf Cc: Dominik Brodowski Acked-by: Sebastian Andrzej Siewior Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 62 +++++++++++++++++++++++++++++--------- include/linux/cpuhotplug.h | 2 ++ include/linux/random.h | 5 +++ kernel/cpu.c | 11 +++++++ 4 files changed, 65 insertions(+), 15 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index adf42f7d7413..6c36940e2c6e 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -698,6 +698,25 @@ u32 get_random_u32(void) } EXPORT_SYMBOL(get_random_u32); +#ifdef CONFIG_SMP +/* + * This function is called when the CPU is coming up, with entry + * CPUHP_RANDOM_PREPARE, which comes before CPUHP_WORKQUEUE_PREP. 
+ */ +int random_prepare_cpu(unsigned int cpu) +{ + /* + * When the cpu comes back online, immediately invalidate both + * the per-cpu crng and all batches, so that we serve fresh + * randomness. + */ + per_cpu_ptr(&crngs, cpu)->generation = ULONG_MAX; + per_cpu_ptr(&batched_entropy_u32, cpu)->position = UINT_MAX; + per_cpu_ptr(&batched_entropy_u64, cpu)->position = UINT_MAX; + return 0; +} +#endif + /** * randomize_page - Generate a random, page aligned address * @start: The smallest acceptable address the caller will take. @@ -1183,7 +1202,7 @@ struct fast_pool { }; struct work_struct mix; unsigned long last; - atomic_t count; + unsigned int count; u16 reg_idx; }; @@ -1219,6 +1238,29 @@ static void fast_mix(u32 pool[4]) static DEFINE_PER_CPU(struct fast_pool, irq_randomness); +#ifdef CONFIG_SMP +/* + * This function is called when the CPU has just come online, with + * entry CPUHP_AP_RANDOM_ONLINE, just after CPUHP_AP_WORKQUEUE_ONLINE. + */ +int random_online_cpu(unsigned int cpu) +{ + /* + * During CPU shutdown and before CPU onlining, add_interrupt_ + * randomness() may schedule mix_interrupt_randomness(), and + * set the MIX_INFLIGHT flag. However, because the worker can + * be scheduled on a different CPU during this period, that + * flag will never be cleared. For that reason, we zero out + * the flag here, which runs just after workqueues are onlined + * for the CPU again. This also has the effect of setting the + * irq randomness count to zero so that new accumulated irqs + * are fresh. + */ + per_cpu_ptr(&irq_randomness, cpu)->count = 0; + return 0; +} +#endif + static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) { u32 *ptr = (u32 *)regs; @@ -1243,15 +1285,6 @@ static void mix_interrupt_randomness(struct work_struct *work) local_irq_disable(); if (fast_pool != this_cpu_ptr(&irq_randomness)) { local_irq_enable(); - /* - * If we are unlucky enough to have been moved to another CPU, - * during CPU hotplug while the CPU was shutdown then we set - * our count to zero atomically so that when the CPU comes - * back online, it can enqueue work again. The _release here - * pairs with the atomic_inc_return_acquire in - * add_interrupt_randomness(). - */ - atomic_set_release(&fast_pool->count, 0); return; } @@ -1260,7 +1293,7 @@ static void mix_interrupt_randomness(struct work_struct *work) * consistent view, before we reenable irqs again. */ memcpy(pool, fast_pool->pool32, sizeof(pool)); - atomic_set(&fast_pool->count, 0); + fast_pool->count = 0; fast_pool->last = jiffies; local_irq_enable(); @@ -1296,14 +1329,13 @@ void add_interrupt_randomness(int irq) } fast_mix(fast_pool->pool32); - /* The _acquire here pairs with the atomic_set_release in mix_interrupt_randomness(). 
*/ - new_count = (unsigned int)atomic_inc_return_acquire(&fast_pool->count); + new_count = ++fast_pool->count; if (unlikely(crng_init == 0)) { if (new_count >= 64 && crng_pre_init_inject(fast_pool->pool32, sizeof(fast_pool->pool32), true, true) > 0) { - atomic_set(&fast_pool->count, 0); + fast_pool->count = 0; fast_pool->last = now; if (spin_trylock(&input_pool.lock)) { _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); @@ -1321,7 +1353,7 @@ void add_interrupt_randomness(int irq) if (unlikely(!fast_pool->mix.func)) INIT_WORK(&fast_pool->mix, mix_interrupt_randomness); - atomic_or(MIX_INFLIGHT, &fast_pool->count); + fast_pool->count |= MIX_INFLIGHT; queue_work_on(raw_smp_processor_id(), system_highpri_wq, &fast_pool->mix); } EXPORT_SYMBOL_GPL(add_interrupt_randomness); diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index 8fb893ed205e..fc945f9df2c1 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -61,6 +61,7 @@ enum cpuhp_state { CPUHP_LUSTRE_CFS_DEAD, CPUHP_AP_ARM_CACHE_B15_RAC_DEAD, CPUHP_PADATA_DEAD, + CPUHP_RANDOM_PREPARE, CPUHP_WORKQUEUE_PREP, CPUHP_POWER_NUMA_PREPARE, CPUHP_HRTIMERS_PREPARE, @@ -187,6 +188,7 @@ enum cpuhp_state { CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE, CPUHP_AP_WATCHDOG_ONLINE, CPUHP_AP_WORKQUEUE_ONLINE, + CPUHP_AP_RANDOM_ONLINE, CPUHP_AP_RCUTREE_ONLINE, CPUHP_AP_BASE_CACHEINFO_ONLINE, CPUHP_AP_ONLINE_DYN, diff --git a/include/linux/random.h b/include/linux/random.h index d7354de9351e..6148b8d1ccf3 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -156,4 +156,9 @@ static inline bool __init arch_get_random_long_early(unsigned long *v) } #endif +#ifdef CONFIG_SMP +extern int random_prepare_cpu(unsigned int cpu); +extern int random_online_cpu(unsigned int cpu); +#endif + #endif /* _LINUX_RANDOM_H */ diff --git a/kernel/cpu.c b/kernel/cpu.c index c06ced18f78a..3c9ee966c56a 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #define CREATE_TRACE_POINTS @@ -1581,6 +1582,11 @@ static struct cpuhp_step cpuhp_hp_states[] = { .startup.single = perf_event_init_cpu, .teardown.single = perf_event_exit_cpu, }, + [CPUHP_RANDOM_PREPARE] = { + .name = "random:prepare", + .startup.single = random_prepare_cpu, + .teardown.single = NULL, + }, [CPUHP_WORKQUEUE_PREP] = { .name = "workqueue:prepare", .startup.single = workqueue_prepare_cpu, @@ -1697,6 +1703,11 @@ static struct cpuhp_step cpuhp_hp_states[] = { .startup.single = workqueue_online_cpu, .teardown.single = workqueue_offline_cpu, }, + [CPUHP_AP_RANDOM_ONLINE] = { + .name = "random:online", + .startup.single = random_online_cpu, + .teardown.single = NULL, + }, [CPUHP_AP_RCUTREE_ONLINE] = { .name = "RCU/tree:online", .startup.single = rcutree_online_cpu, From c47f215ab36d678e41dd3aa7710b542e79b0fed1 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 22 Feb 2022 13:46:10 +0100 Subject: [PATCH 0194/1842] random: round-robin registers as ulong, not u32 commit da3951ebdcd1cb1d5c750e08cd05aee7b0c04d9a upstream. When the interrupt handler does not have a valid cycle counter, it calls get_reg() to read a register from the irq stack, in round-robin. Currently it does this assuming that registers are 32-bit. This is _probably_ the case, and probably all platforms without cycle counters are in fact 32-bit platforms. But maybe not, and either way, it's not quite correct. This commit fixes that to deal with `unsigned long` rather than `u32`. 
Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 6c36940e2c6e..35991d87d619 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1261,15 +1261,15 @@ int random_online_cpu(unsigned int cpu) } #endif -static u32 get_reg(struct fast_pool *f, struct pt_regs *regs) +static unsigned long get_reg(struct fast_pool *f, struct pt_regs *regs) { - u32 *ptr = (u32 *)regs; + unsigned long *ptr = (unsigned long *)regs; unsigned int idx; if (regs == NULL) return 0; idx = READ_ONCE(f->reg_idx); - if (idx >= sizeof(struct pt_regs) / sizeof(u32)) + if (idx >= sizeof(struct pt_regs) / sizeof(unsigned long)) idx = 0; ptr += idx++; WRITE_ONCE(f->reg_idx, idx); From 9b0e0e27140d007241772cadc7df1e05292f465d Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 22 Feb 2022 14:01:57 +0100 Subject: [PATCH 0195/1842] random: only wake up writers after zap if threshold was passed commit a3f9e8910e1584d7725ef7d5ac870920d42d0bb4 upstream. The only time that we need to wake up /dev/random writers on RNDCLEARPOOL/RNDZAPPOOL is when we're changing from a value that is greater than or equal to POOL_MIN_BITS to zero, because if we're changing from below POOL_MIN_BITS to zero, the writers are already unblocked. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 35991d87d619..6a5da2fe2305 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1582,7 +1582,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) */ if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (xchg(&input_pool.entropy_count, 0)) { + if (xchg(&input_pool.entropy_count, 0) >= POOL_MIN_BITS) { wake_up_interruptible(&random_write_wait); kill_fasync(&fasync, SIGIO, POLL_OUT); } From 47f0e89b71e281a659c34824176f19d58a003c2d Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 24 Feb 2022 23:04:56 +0100 Subject: [PATCH 0196/1842] random: cleanup UUID handling commit 64276a9939ff414f2f0db38036cf4e1a0a703394 upstream. Rather than hard coding various lengths, we can use the right constants. Strings should be `char *` while buffers should be `u8 *`. Rather than have a nonsensical and unused maxlength, just remove it. Finally, use snprintf instead of sprintf, just out of good hygiene. As well, remove the old comment about returning a binary UUID via the binary sysctl syscall. That syscall was removed from the kernel in 5.5, and actually, the "uuid_strategy" function and related infrastructure for even serving it via the binary sysctl syscall was removed with 894d2491153a ("sysctl drivers: Remove dead binary sysctl support") back in 2.6.33. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 29 +++++++++++++---------------- 1 file changed, 13 insertions(+), 16 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 6a5da2fe2305..9785f78f49ce 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1661,22 +1661,25 @@ const struct file_operations urandom_fops = { static int sysctl_random_min_urandom_seed = 60; static int sysctl_random_write_wakeup_bits = POOL_MIN_BITS; static int sysctl_poolsize = POOL_BITS; -static char sysctl_bootid[16]; +static u8 sysctl_bootid[UUID_SIZE]; /* * This function is used to return both the bootid UUID, and random - * UUID. The difference is in whether table->data is NULL; if it is, + * UUID. The difference is in whether table->data is NULL; if it is, * then a new UUID is generated and returned to the user. - * - * If the user accesses this via the proc interface, the UUID will be - * returned as an ASCII string in the standard UUID format; if via the - * sysctl system call, as 16 bytes of binary data. */ static int proc_do_uuid(struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { - struct ctl_table fake_table; - unsigned char buf[64], tmp_uuid[16], *uuid; + u8 tmp_uuid[UUID_SIZE], *uuid; + char uuid_string[UUID_STRING_LEN + 1]; + struct ctl_table fake_table = { + .data = uuid_string, + .maxlen = UUID_STRING_LEN + }; + + if (write) + return -EPERM; uuid = table->data; if (!uuid) { @@ -1691,12 +1694,8 @@ static int proc_do_uuid(struct ctl_table *table, int write, void *buffer, spin_unlock(&bootid_spinlock); } - sprintf(buf, "%pU", uuid); - - fake_table.data = buf; - fake_table.maxlen = sizeof(buf); - - return proc_dostring(&fake_table, write, buffer, lenp, ppos); + snprintf(uuid_string, sizeof(uuid_string), "%pU", uuid); + return proc_dostring(&fake_table, 0, buffer, lenp, ppos); } extern struct ctl_table random_table[]; @@ -1732,13 +1731,11 @@ struct ctl_table random_table[] = { { .procname = "boot_id", .data = &sysctl_bootid, - .maxlen = 16, .mode = 0444, .proc_handler = proc_do_uuid, }, { .procname = "uuid", - .maxlen = 16, .mode = 0444, .proc_handler = proc_do_uuid, }, From 192d4c6cb3e23869d4a51975c8d6aac354510d6b Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 24 Feb 2022 18:30:58 +0100 Subject: [PATCH 0197/1842] random: unify cycles_t and jiffies usage and types commit abded93ec1e9692920fe309f07f40bd1035f2940 upstream. random_get_entropy() returns a cycles_t, not an unsigned long, which is sometimes 64 bits on various 32-bit platforms, including x86. Conversely, jiffies is always unsigned long. This commit fixes things to use cycles_t for fields that use random_get_entropy(), named "cycles", and unsigned long for fields that use jiffies, named "now". It's also good to mix in a cycles_t and a jiffies in the same way for both add_device_randomness and add_timer_randomness, rather than using xor in one case. Finally, we unify the order of these volatile reads, always reading the more precise cycles counter, and then jiffies, so that the cycle counter is as close to the event as possible. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 56 +++++++++++++++++++++---------------------- 1 file changed, 27 insertions(+), 29 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 9785f78f49ce..27d9edfce3e0 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1020,12 +1020,6 @@ int __init rand_initialize(void) return 0; } -/* There is one of these per entropy source */ -struct timer_rand_state { - cycles_t last_time; - long last_delta, last_delta2; -}; - /* * Add device- or boot-specific data to the input pool to help * initialize it. @@ -1036,19 +1030,26 @@ struct timer_rand_state { */ void add_device_randomness(const void *buf, size_t size) { - unsigned long time = random_get_entropy() ^ jiffies; - unsigned long flags; + cycles_t cycles = random_get_entropy(); + unsigned long flags, now = jiffies; if (crng_init == 0 && size) crng_pre_init_inject(buf, size, false, false); spin_lock_irqsave(&input_pool.lock, flags); + _mix_pool_bytes(&cycles, sizeof(cycles)); + _mix_pool_bytes(&now, sizeof(now)); _mix_pool_bytes(buf, size); - _mix_pool_bytes(&time, sizeof(time)); spin_unlock_irqrestore(&input_pool.lock, flags); } EXPORT_SYMBOL(add_device_randomness); +/* There is one of these per entropy source */ +struct timer_rand_state { + unsigned long last_time; + long last_delta, last_delta2; +}; + /* * This function adds entropy to the entropy "pool" by using timing * delays. It uses the timer_rand_state structure to make an estimate @@ -1057,29 +1058,26 @@ EXPORT_SYMBOL(add_device_randomness); * The number "num" is also added to the pool - it should somehow describe * the type of event which just happened. This is currently 0-255 for * keyboard scan codes, and 256 upwards for interrupts. - * */ static void add_timer_randomness(struct timer_rand_state *state, unsigned int num) { - struct { - long jiffies; - unsigned int cycles; - unsigned int num; - } sample; + cycles_t cycles = random_get_entropy(); + unsigned long flags, now = jiffies; long delta, delta2, delta3; - sample.jiffies = jiffies; - sample.cycles = random_get_entropy(); - sample.num = num; - mix_pool_bytes(&sample, sizeof(sample)); + spin_lock_irqsave(&input_pool.lock, flags); + _mix_pool_bytes(&cycles, sizeof(cycles)); + _mix_pool_bytes(&now, sizeof(now)); + _mix_pool_bytes(&num, sizeof(num)); + spin_unlock_irqrestore(&input_pool.lock, flags); /* * Calculate number of bits of randomness we probably added. * We take into account the first, second and third-order deltas * in order to make our estimate. 
*/ - delta = sample.jiffies - READ_ONCE(state->last_time); - WRITE_ONCE(state->last_time, sample.jiffies); + delta = now - READ_ONCE(state->last_time); + WRITE_ONCE(state->last_time, now); delta2 = delta - READ_ONCE(state->last_delta); WRITE_ONCE(state->last_delta, delta); @@ -1305,10 +1303,10 @@ static void mix_interrupt_randomness(struct work_struct *work) void add_interrupt_randomness(int irq) { enum { MIX_INFLIGHT = 1U << 31 }; + cycles_t cycles = random_get_entropy(); + unsigned long now = jiffies; struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); struct pt_regs *regs = get_irq_regs(); - unsigned long now = jiffies; - cycles_t cycles = random_get_entropy(); unsigned int new_count; if (cycles == 0) @@ -1383,28 +1381,28 @@ static void entropy_timer(struct timer_list *t) static void try_to_generate_entropy(void) { struct { - unsigned long now; + cycles_t cycles; struct timer_list timer; } stack; - stack.now = random_get_entropy(); + stack.cycles = random_get_entropy(); /* Slow counter - or none. Don't even bother */ - if (stack.now == random_get_entropy()) + if (stack.cycles == random_get_entropy()) return; timer_setup_on_stack(&stack.timer, entropy_timer, 0); while (!crng_ready()) { if (!timer_pending(&stack.timer)) mod_timer(&stack.timer, jiffies + 1); - mix_pool_bytes(&stack.now, sizeof(stack.now)); + mix_pool_bytes(&stack.cycles, sizeof(stack.cycles)); schedule(); - stack.now = random_get_entropy(); + stack.cycles = random_get_entropy(); } del_timer_sync(&stack.timer); destroy_timer_on_stack(&stack.timer); - mix_pool_bytes(&stack.now, sizeof(stack.now)); + mix_pool_bytes(&stack.cycles, sizeof(stack.cycles)); } From 0964a76fd58b5ea7a96d243fc23616b2377627e7 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 13 Feb 2022 18:25:07 +0100 Subject: [PATCH 0198/1842] random: do crng pre-init loading in worker rather than irq commit c2a7de4feb6e09f23af7accc0f882a8fa92e7ae5 upstream. Taking spinlocks from IRQ context is generally problematic for PREEMPT_RT. That is, in part, why we take trylocks instead. However, a spin_try_lock() is also problematic since another spin_lock() invocation can potentially PI-boost the wrong task, as the spin_try_lock() is invoked from an IRQ-context, so the task on CPU (random task or idle) is not the actual owner. Additionally, by deferring the crng pre-init loading to the worker, we can use the cryptographic hash function rather than xor, which is perhaps a meaningful difference when considering this data has only been through the relatively weak fast_mix() function. The biggest downside of this approach is that the pre-init loading is now deferred until later, which means things that need random numbers after interrupts are enabled, but before workqueues are running -- or before this particular worker manages to run -- are going to get into trouble. Hopefully in the real world, this window is rather small, especially since this code won't run until 64 interrupts had occurred. Cc: Sultan Alsawaf Cc: Thomas Gleixner Cc: Peter Zijlstra Cc: Eric Biggers Cc: Theodore Ts'o Acked-by: Sebastian Andrzej Siewior Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 65 +++++++++++++------------------------------ 1 file changed, 19 insertions(+), 46 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 27d9edfce3e0..140514f36f73 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -443,10 +443,6 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], * boot time when it's better to have something there rather than * nothing. * - * There are two paths, a slow one and a fast one. The slow one - * hashes the input along with the current key. The fast one simply - * xors it in, and should only be used from interrupt context. - * * If account is set, then the crng_init_cnt counter is incremented. * This shouldn't be set by functions like add_device_randomness(), * where we can't trust the buffer passed to it is guaranteed to be @@ -455,19 +451,15 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], * Returns the number of bytes processed from input, which is bounded * by CRNG_INIT_CNT_THRESH if account is true. */ -static size_t crng_pre_init_inject(const void *input, size_t len, - bool fast, bool account) +static size_t crng_pre_init_inject(const void *input, size_t len, bool account) { static int crng_init_cnt = 0; + struct blake2s_state hash; unsigned long flags; - if (fast) { - if (!spin_trylock_irqsave(&base_crng.lock, flags)) - return 0; - } else { - spin_lock_irqsave(&base_crng.lock, flags); - } + blake2s_init(&hash, sizeof(base_crng.key)); + spin_lock_irqsave(&base_crng.lock, flags); if (crng_init != 0) { spin_unlock_irqrestore(&base_crng.lock, flags); return 0; @@ -476,21 +468,9 @@ static size_t crng_pre_init_inject(const void *input, size_t len, if (account) len = min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt); - if (fast) { - const u8 *src = input; - size_t i; - - for (i = 0; i < len; ++i) - base_crng.key[(crng_init_cnt + i) % - sizeof(base_crng.key)] ^= src[i]; - } else { - struct blake2s_state hash; - - blake2s_init(&hash, sizeof(base_crng.key)); - blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); - blake2s_update(&hash, input, len); - blake2s_final(&hash, base_crng.key); - } + blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); + blake2s_update(&hash, input, len); + blake2s_final(&hash, base_crng.key); if (account) { crng_init_cnt += len; @@ -1034,7 +1014,7 @@ void add_device_randomness(const void *buf, size_t size) unsigned long flags, now = jiffies; if (crng_init == 0 && size) - crng_pre_init_inject(buf, size, false, false); + crng_pre_init_inject(buf, size, false); spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(&cycles, sizeof(cycles)); @@ -1155,7 +1135,7 @@ void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { if (unlikely(crng_init == 0)) { - size_t ret = crng_pre_init_inject(buffer, count, false, true); + size_t ret = crng_pre_init_inject(buffer, count, true); mix_pool_bytes(buffer, ret); count -= ret; buffer += ret; @@ -1295,8 +1275,14 @@ static void mix_interrupt_randomness(struct work_struct *work) fast_pool->last = jiffies; local_irq_enable(); - mix_pool_bytes(pool, sizeof(pool)); - credit_entropy_bits(1); + if (unlikely(crng_init == 0)) { + crng_pre_init_inject(pool, sizeof(pool), true); + mix_pool_bytes(pool, sizeof(pool)); + } else { + mix_pool_bytes(pool, sizeof(pool)); + credit_entropy_bits(1); + } + memzero_explicit(pool, sizeof(pool)); } @@ -1329,24 +1315,11 @@ void add_interrupt_randomness(int irq) 
fast_mix(fast_pool->pool32); new_count = ++fast_pool->count; - if (unlikely(crng_init == 0)) { - if (new_count >= 64 && - crng_pre_init_inject(fast_pool->pool32, sizeof(fast_pool->pool32), - true, true) > 0) { - fast_pool->count = 0; - fast_pool->last = now; - if (spin_trylock(&input_pool.lock)) { - _mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32)); - spin_unlock(&input_pool.lock); - } - } - return; - } - if (new_count & MIX_INFLIGHT) return; - if (new_count < 64 && !time_after(now, fast_pool->last + HZ)) + if (new_count < 64 && (!time_after(now, fast_pool->last + HZ) || + unlikely(crng_init == 0))) return; if (unlikely(!fast_pool->mix.func)) From 4c74ca006afe2410a48a7cddf9a3211d325b267a Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 28 Feb 2022 13:57:57 +0100 Subject: [PATCH 0199/1842] random: give sysctl_random_min_urandom_seed a more sensible value commit d0efdf35a6a71d307a250199af6fce122a7c7e11 upstream. This isn't used by anything or anywhere, but we can't delete it due to compatibility. So at least give it the correct value of what it's supposed to be instead of a garbage one. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 140514f36f73..389c58e11af5 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1619,7 +1619,7 @@ const struct file_operations urandom_fops = { * to avoid breaking old userspaces, but writing to it does not * change any behavior of the RNG. * - * - urandom_min_reseed_secs - fixed to the meaningless value "60". + * - urandom_min_reseed_secs - fixed to the value CRNG_RESEED_INTERVAL. * It is writable to avoid breaking old userspaces, but writing * to it does not change any behavior of the RNG. * @@ -1629,7 +1629,7 @@ const struct file_operations urandom_fops = { #include -static int sysctl_random_min_urandom_seed = 60; +static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ; static int sysctl_random_write_wakeup_bits = POOL_MIN_BITS; static int sysctl_poolsize = POOL_BITS; static u8 sysctl_bootid[UUID_SIZE]; From 66307429b5df542004c43d822f80ea1c61703898 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 28 Feb 2022 14:00:52 +0100 Subject: [PATCH 0200/1842] random: don't let 644 read-only sysctls be written to commit 77553cf8f44863b31da242cf24671d76ddb61597 upstream. We leave around these old sysctls for compatibility, and we keep them "writable" for compatibility, but even after writing, we should keep reporting the same value. This is consistent with how userspaces tend to use sysctl_random_write_wakeup_bits, writing to it, and then later reading from it and using the value. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 389c58e11af5..ee430c1753a9 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1669,6 +1669,13 @@ static int proc_do_uuid(struct ctl_table *table, int write, void *buffer, return proc_dostring(&fake_table, 0, buffer, lenp, ppos); } +/* The same as proc_dointvec, but writes don't change anything. */ +static int proc_do_rointvec(struct ctl_table *table, int write, void *buffer, + size_t *lenp, loff_t *ppos) +{ + return write ? 
0 : proc_dointvec(table, 0, buffer, lenp, ppos); +} + extern struct ctl_table random_table[]; struct ctl_table random_table[] = { { @@ -1690,14 +1697,14 @@ struct ctl_table random_table[] = { .data = &sysctl_random_write_wakeup_bits, .maxlen = sizeof(int), .mode = 0644, - .proc_handler = proc_dointvec, + .proc_handler = proc_do_rointvec, }, { .procname = "urandom_min_reseed_secs", .data = &sysctl_random_min_urandom_seed, .maxlen = sizeof(int), .mode = 0644, - .proc_handler = proc_dointvec, + .proc_handler = proc_do_rointvec, }, { .procname = "boot_id", From 849e7b744cf2c6bfc47ebb099e29bd3e7b783182 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 1 Mar 2022 20:03:49 +0100 Subject: [PATCH 0201/1842] random: replace custom notifier chain with standard one commit 5acd35487dc911541672b3ffc322851769c32a56 upstream. We previously rolled our own randomness readiness notifier, which only has two users in the whole kernel. Replace this with a more standard atomic notifier block that serves the same purpose with less code. Also unexport the symbols, because no modules use it, only unconditional builtins. The only drawback is that it's possible for a notification handler returning the "stop" code to prevent further processing, but given that there are only two users, and that we're unexporting this anyway, that doesn't seem like a significant drawback for the simplification we receive here. Cc: Greg Kroah-Hartman Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski [Jason: for stable, also backported to crypto/drbg.c, not unexporting.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- crypto/drbg.c | 17 +++++------ drivers/char/random.c | 69 +++++++++++++----------------------------- include/crypto/drbg.h | 2 +- include/linux/random.h | 10 ++---- lib/random32.c | 13 +++++--- lib/vsprintf.c | 10 +++--- 6 files changed, 47 insertions(+), 74 deletions(-) diff --git a/crypto/drbg.c b/crypto/drbg.c index 3132967a1749..19ea8d6628ff 100644 --- a/crypto/drbg.c +++ b/crypto/drbg.c @@ -1490,12 +1490,13 @@ static int drbg_generate_long(struct drbg_state *drbg, return 0; } -static void drbg_schedule_async_seed(struct random_ready_callback *rdy) +static int drbg_schedule_async_seed(struct notifier_block *nb, unsigned long action, void *data) { - struct drbg_state *drbg = container_of(rdy, struct drbg_state, + struct drbg_state *drbg = container_of(nb, struct drbg_state, random_ready); schedule_work(&drbg->seed_work); + return 0; } static int drbg_prepare_hrng(struct drbg_state *drbg) @@ -1510,10 +1511,8 @@ static int drbg_prepare_hrng(struct drbg_state *drbg) INIT_WORK(&drbg->seed_work, drbg_async_seed); - drbg->random_ready.owner = THIS_MODULE; - drbg->random_ready.func = drbg_schedule_async_seed; - - err = add_random_ready_callback(&drbg->random_ready); + drbg->random_ready.notifier_call = drbg_schedule_async_seed; + err = register_random_ready_notifier(&drbg->random_ready); switch (err) { case 0: @@ -1524,7 +1523,7 @@ static int drbg_prepare_hrng(struct drbg_state *drbg) fallthrough; default: - drbg->random_ready.func = NULL; + drbg->random_ready.notifier_call = NULL; return err; } @@ -1628,8 +1627,8 @@ static int drbg_instantiate(struct drbg_state *drbg, struct drbg_string *pers, */ static int drbg_uninstantiate(struct drbg_state *drbg) { - if (drbg->random_ready.func) { - del_random_ready_callback(&drbg->random_ready); + if (drbg->random_ready.notifier_call) { + unregister_random_ready_notifier(&drbg->random_ready); cancel_work_sync(&drbg->seed_work); } diff --git 
a/drivers/char/random.c b/drivers/char/random.c index ee430c1753a9..1421bd53f576 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -83,8 +83,8 @@ static int crng_init = 0; /* Various types of waiters for crng_init->2 transition. */ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); static struct fasync_struct *fasync; -static DEFINE_SPINLOCK(random_ready_list_lock); -static LIST_HEAD(random_ready_list); +static DEFINE_SPINLOCK(random_ready_chain_lock); +static RAW_NOTIFIER_HEAD(random_ready_chain); /* Control how we warn userspace. */ static struct ratelimit_state unseeded_warning = @@ -147,72 +147,45 @@ EXPORT_SYMBOL(wait_for_random_bytes); * * returns: 0 if callback is successfully added * -EALREADY if pool is already initialised (callback not called) - * -ENOENT if module for callback is not alive */ -int add_random_ready_callback(struct random_ready_callback *rdy) +int register_random_ready_notifier(struct notifier_block *nb) { - struct module *owner; unsigned long flags; - int err = -EALREADY; + int ret = -EALREADY; if (crng_ready()) - return err; + return ret; - owner = rdy->owner; - if (!try_module_get(owner)) - return -ENOENT; - - spin_lock_irqsave(&random_ready_list_lock, flags); - if (crng_ready()) - goto out; - - owner = NULL; - - list_add(&rdy->list, &random_ready_list); - err = 0; - -out: - spin_unlock_irqrestore(&random_ready_list_lock, flags); - - module_put(owner); - - return err; + spin_lock_irqsave(&random_ready_chain_lock, flags); + if (!crng_ready()) + ret = raw_notifier_chain_register(&random_ready_chain, nb); + spin_unlock_irqrestore(&random_ready_chain_lock, flags); + return ret; } -EXPORT_SYMBOL(add_random_ready_callback); +EXPORT_SYMBOL(register_random_ready_notifier); /* * Delete a previously registered readiness callback function. 
*/ -void del_random_ready_callback(struct random_ready_callback *rdy) +int unregister_random_ready_notifier(struct notifier_block *nb) { unsigned long flags; - struct module *owner = NULL; + int ret; - spin_lock_irqsave(&random_ready_list_lock, flags); - if (!list_empty(&rdy->list)) { - list_del_init(&rdy->list); - owner = rdy->owner; - } - spin_unlock_irqrestore(&random_ready_list_lock, flags); - - module_put(owner); + spin_lock_irqsave(&random_ready_chain_lock, flags); + ret = raw_notifier_chain_unregister(&random_ready_chain, nb); + spin_unlock_irqrestore(&random_ready_chain_lock, flags); + return ret; } -EXPORT_SYMBOL(del_random_ready_callback); +EXPORT_SYMBOL(unregister_random_ready_notifier); static void process_random_ready_list(void) { unsigned long flags; - struct random_ready_callback *rdy, *tmp; - spin_lock_irqsave(&random_ready_list_lock, flags); - list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) { - struct module *owner = rdy->owner; - - list_del_init(&rdy->list); - rdy->func(rdy); - module_put(owner); - } - spin_unlock_irqrestore(&random_ready_list_lock, flags); + spin_lock_irqsave(&random_ready_chain_lock, flags); + raw_notifier_call_chain(&random_ready_chain, 0, NULL); + spin_unlock_irqrestore(&random_ready_chain_lock, flags); } #define warn_unseeded_randomness(previous) \ diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h index c4165126937e..88e4d145f7cd 100644 --- a/include/crypto/drbg.h +++ b/include/crypto/drbg.h @@ -136,7 +136,7 @@ struct drbg_state { const struct drbg_state_ops *d_ops; const struct drbg_core *core; struct drbg_string test_data; - struct random_ready_callback random_ready; + struct notifier_block random_ready; }; static inline __u8 drbg_statelen(struct drbg_state *drbg) diff --git a/include/linux/random.h b/include/linux/random.h index 6148b8d1ccf3..d4abda9e2348 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -10,11 +10,7 @@ #include -struct random_ready_callback { - struct list_head list; - void (*func)(struct random_ready_callback *rdy); - struct module *owner; -}; +struct notifier_block; extern void add_device_randomness(const void *, size_t); extern void add_bootloader_randomness(const void *, size_t); @@ -39,8 +35,8 @@ extern void get_random_bytes(void *buf, size_t nbytes); extern int wait_for_random_bytes(void); extern int __init rand_initialize(void); extern bool rng_is_initialized(void); -extern int add_random_ready_callback(struct random_ready_callback *rdy); -extern void del_random_ready_callback(struct random_ready_callback *rdy); +extern int register_random_ready_notifier(struct notifier_block *nb); +extern int unregister_random_ready_notifier(struct notifier_block *nb); extern size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); #ifndef MODULE diff --git a/lib/random32.c b/lib/random32.c index 3c19820796d0..f0ab17c2244b 100644 --- a/lib/random32.c +++ b/lib/random32.c @@ -40,6 +40,7 @@ #include #include #include +#include #include /** @@ -551,9 +552,11 @@ static void prandom_reseed(struct timer_list *unused) * To avoid worrying about whether it's safe to delay that interrupt * long enough to seed all CPUs, just schedule an immediate timer event. 
*/ -static void prandom_timer_start(struct random_ready_callback *unused) +static int prandom_timer_start(struct notifier_block *nb, + unsigned long action, void *data) { mod_timer(&seed_timer, jiffies); + return 0; } #ifdef CONFIG_RANDOM32_SELFTEST @@ -617,13 +620,13 @@ core_initcall(prandom32_state_selftest); */ static int __init prandom_init_late(void) { - static struct random_ready_callback random_ready = { - .func = prandom_timer_start + static struct notifier_block random_ready = { + .notifier_call = prandom_timer_start }; - int ret = add_random_ready_callback(&random_ready); + int ret = register_random_ready_notifier(&random_ready); if (ret == -EALREADY) { - prandom_timer_start(&random_ready); + prandom_timer_start(&random_ready, 0, NULL); ret = 0; } return ret; diff --git a/lib/vsprintf.c b/lib/vsprintf.c index 8ade1a86d818..daf32a489dc0 100644 --- a/lib/vsprintf.c +++ b/lib/vsprintf.c @@ -756,14 +756,16 @@ static void enable_ptr_key_workfn(struct work_struct *work) static DECLARE_WORK(enable_ptr_key_work, enable_ptr_key_workfn); -static void fill_random_ptr_key(struct random_ready_callback *unused) +static int fill_random_ptr_key(struct notifier_block *nb, + unsigned long action, void *data) { /* This may be in an interrupt handler. */ queue_work(system_unbound_wq, &enable_ptr_key_work); + return 0; } -static struct random_ready_callback random_ready = { - .func = fill_random_ptr_key +static struct notifier_block random_ready = { + .notifier_call = fill_random_ptr_key }; static int __init initialize_ptr_random(void) @@ -777,7 +779,7 @@ static int __init initialize_ptr_random(void) return 0; } - ret = add_random_ready_callback(&random_ready); + ret = register_random_ready_notifier(&random_ready); if (!ret) { return 0; } else if (ret == -EALREADY) { From 95a1c94a1bd7be9843bd6e1a05dbb1e637ebb1b0 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 11 Feb 2022 14:58:44 +0100 Subject: [PATCH 0202/1842] random: use SipHash as interrupt entropy accumulator commit f5eab0e2db4f881fb2b62b3fdad5b9be673dd7ae upstream. The current fast_mix() function is a piece of classic mailing list crypto, where it just sort of sprung up by an anonymous author without a lot of real analysis of what precisely it was accomplishing. As an ARX permutation alone, there are some easily searchable differential trails in it, and as a means of preventing malicious interrupts, it completely fails, since it xors new data into the entire state every time. It can't really be analyzed as a random permutation, because it clearly isn't, and it can't be analyzed as an interesting linear algebraic structure either, because it's also not that. There really is very little one can say about it in terms of entropy accumulation. It might diffuse bits, some of the time, maybe, we hope, I guess. But for the most part, it fails to accomplish anything concrete. As a reminder, the simple goal of add_interrupt_randomness() is to simply accumulate entropy until ~64 interrupts have elapsed, and then dump it into the main input pool, which uses a cryptographic hash. It would be nice to have something cryptographically strong in the interrupt handler itself, in case a malicious interrupt compromises a per-cpu fast pool within the 64 interrupts / 1 second window, and then inside of that same window somehow can control its return address and cycle counter, even if that's a bit far fetched. 
However, with a very CPU-limited budget, actually doing that remains an active research project (and perhaps there'll be something useful for Linux to come out of it). And while the abundance of caution would be nice, this isn't *currently* the security model, and we don't yet have a fast enough solution to make it our security model. Plus there's not exactly a pressing need to do that. (And for the avoidance of doubt, the actual cluster of 64 accumulated interrupts still gets dumped into our cryptographically secure input pool.) So, for now we are going to stick with the existing interrupt security model, which assumes that each cluster of 64 interrupt data samples is mostly non-malicious and not colluding with an infoleaker. With this as our goal, we have a few more choices, simply aiming to accumulate entropy, while discarding the least amount of it. We know from that random oracles, instantiated as computational hash functions, make good entropy accumulators and extractors, which is the justification for using BLAKE2s in the main input pool. As mentioned, we don't have that luxury here, but we also don't have the same security model requirements, because we're assuming that there aren't malicious inputs. A pseudorandom function instance can approximately behave like a random oracle, provided that the key is uniformly random. But since we're not concerned with malicious inputs, we can pick a fixed key, which is not secret, knowing that "nature" won't interact with a sufficiently chosen fixed key by accident. So we pick a PRF with a fixed initial key, and accumulate into it continuously, dumping the result every 64 interrupts into our cryptographically secure input pool. For this, we make use of SipHash-1-x on 64-bit and HalfSipHash-1-x on 32-bit, which are already in use in the kernel's hsiphash family of functions and achieve the same performance as the function they replace. It would be nice to do two rounds, but we don't exactly have the CPU budget handy for that, and one round alone is already sufficient. As mentioned, we start with a fixed initial key (zeros is fine), and allow SipHash's symmetry breaking constants to turn that into a useful starting point. Also, since we're dumping the result (or half of it on 64-bit so as to tax our hash function the same amount on all platforms) into the cryptographically secure input pool, there's no point in finalizing SipHash's output, since it'll wind up being finalized by something much stronger. This means that all we need to do is use the ordinary round function word-by-word, as normal SipHash does. Simplified, the flow is as follows: Initialize: siphash_state_t state; siphash_init(&state, key={0, 0, 0, 0}); Update (accumulate) on interrupt: siphash_update(&state, interrupt_data_and_timing); Dump into input pool after 64 interrupts: blake2s_update(&input_pool, &state, sizeof(state) / 2); The result of all of this is that the security model is unchanged from before -- we assume non-malicious inputs -- yet we now implement that model with a stronger argument. I would like to emphasize, again, that the purpose of this commit is to improve the existing design, by making it analyzable, without changing any fundamental assumptions. 
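A compilable sketch of that flow for the 64-bit case (userspace model; the struct and function names are illustrative, and the real code also provides a HalfSipHash-1-x variant for 32-bit):

    #include <stdint.h>
    #include <string.h>

    static inline uint64_t rol64(uint64_t x, int r)
    {
            return (x << r) | (x >> (64 - r));
    }

    /* Fixed, non-secret SipHash initial state: this is an accumulator, not a MAC. */
    struct irq_acc { uint64_t v[4]; };

    static void irq_acc_init(struct irq_acc *a)
    {
            a->v[0] = 0x736f6d6570736575ULL;
            a->v[1] = 0x646f72616e646f6dULL;
            a->v[2] = 0x6c7967656e657261ULL;
            a->v[3] = 0x7465646279746573ULL;
    }

    /* One SipHash-1-x step: xor the word in, run a single SIPROUND, xor it out. */
    static void irq_acc_update(struct irq_acc *a, uint64_t m)
    {
            uint64_t *s = a->v;

            s[3] ^= m;
            s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32);
            s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2];
            s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0];
            s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32);
            s[0] ^= m;
    }

    /* After ~64 events, only half of the state (16 bytes) is handed to the
     * BLAKE2s input pool; no finalization is needed, since the pool hashes it anyway. */
    static void irq_acc_dump(const struct irq_acc *a, uint8_t out[16])
    {
            memcpy(out, a->v, 16);
    }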
There may well be value down the road in changing up the existing design, using something cryptographically strong, or simply using a ring buffer of samples rather than having a fast_mix() at all, or changing which and how much data we collect each interrupt so that we can use something linear, or a variety of other ideas. This commit does not invalidate the potential for those in the future. For example, in the future, if we're able to characterize the data we're collecting on each interrupt, we may be able to inch toward information theoretic accumulators. shows that `s = ror32(s, 7) ^ x` and `s = ror64(s, 19) ^ x` make very good accumulators for 2-monotone distributions, which would apply to timestamp counters, like random_get_entropy() or jiffies, but would not apply to our current combination of the two values, or to the various function addresses and register values we mix in. Alternatively, shows that max-period linear functions with no non-trivial invariant subspace make good extractors, used in the form `s = f(s) ^ x`. However, this only works if the input data is both identical and independent, and obviously a collection of address values and counters fails; so it goes with theoretical papers. Future directions here may involve trying to characterize more precisely what we actually need to collect in the interrupt handler, and building something specific around that. However, as mentioned, the morass of data we're gathering at the interrupt handler presently defies characterization, and so we use SipHash for now, which works well and performs well. Cc: Theodore Ts'o Cc: Greg Kroah-Hartman Reviewed-by: Jean-Philippe Aumasson Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 94 +++++++++++++++++++++++++------------------ 1 file changed, 55 insertions(+), 39 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 1421bd53f576..fe2533a1642a 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1147,48 +1147,51 @@ void add_bootloader_randomness(const void *buf, size_t size) EXPORT_SYMBOL_GPL(add_bootloader_randomness); struct fast_pool { - union { - u32 pool32[4]; - u64 pool64[2]; - }; struct work_struct mix; + unsigned long pool[4]; unsigned long last; unsigned int count; u16 reg_idx; }; +static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = { +#ifdef CONFIG_64BIT + /* SipHash constants */ + .pool = { 0x736f6d6570736575UL, 0x646f72616e646f6dUL, + 0x6c7967656e657261UL, 0x7465646279746573UL } +#else + /* HalfSipHash constants */ + .pool = { 0, 0, 0x6c796765U, 0x74656462U } +#endif +}; + /* - * This is a fast mixing routine used by the interrupt randomness - * collector. It's hardcoded for an 128 bit pool and assumes that any - * locks that might be needed are taken by the caller. + * This is [Half]SipHash-1-x, starting from an empty key. Because + * the key is fixed, it assumes that its inputs are non-malicious, + * and therefore this has no security on its own. s represents the + * 128 or 256-bit SipHash state, while v represents a 128-bit input. 
*/ -static void fast_mix(u32 pool[4]) +static void fast_mix(unsigned long s[4], const unsigned long *v) { - u32 a = pool[0], b = pool[1]; - u32 c = pool[2], d = pool[3]; + size_t i; - a += b; c += d; - b = rol32(b, 6); d = rol32(d, 27); - d ^= a; b ^= c; - - a += b; c += d; - b = rol32(b, 16); d = rol32(d, 14); - d ^= a; b ^= c; - - a += b; c += d; - b = rol32(b, 6); d = rol32(d, 27); - d ^= a; b ^= c; - - a += b; c += d; - b = rol32(b, 16); d = rol32(d, 14); - d ^= a; b ^= c; - - pool[0] = a; pool[1] = b; - pool[2] = c; pool[3] = d; + for (i = 0; i < 16 / sizeof(long); ++i) { + s[3] ^= v[i]; +#ifdef CONFIG_64BIT + s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32); + s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2]; + s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0]; + s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32); +#else + s[0] += s[1]; s[1] = rol32(s[1], 5); s[1] ^= s[0]; s[0] = rol32(s[0], 16); + s[2] += s[3]; s[3] = rol32(s[3], 8); s[3] ^= s[2]; + s[0] += s[3]; s[3] = rol32(s[3], 7); s[3] ^= s[0]; + s[2] += s[1]; s[1] = rol32(s[1], 13); s[1] ^= s[2]; s[2] = rol32(s[2], 16); +#endif + s[0] ^= v[i]; + } } -static DEFINE_PER_CPU(struct fast_pool, irq_randomness); - #ifdef CONFIG_SMP /* * This function is called when the CPU has just come online, with @@ -1230,7 +1233,15 @@ static unsigned long get_reg(struct fast_pool *f, struct pt_regs *regs) static void mix_interrupt_randomness(struct work_struct *work) { struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix); - u32 pool[4]; + /* + * The size of the copied stack pool is explicitly 16 bytes so that we + * tax mix_pool_byte()'s compression function the same amount on all + * platforms. This means on 64-bit we copy half the pool into this, + * while on 32-bit we copy all of it. The entropy is supposed to be + * sufficiently dispersed between bits that in the sponge-like + * half case, on average we don't wind up "losing" some. + */ + u8 pool[16]; /* Check to see if we're running on the wrong CPU due to hotplug. */ local_irq_disable(); @@ -1243,7 +1254,7 @@ static void mix_interrupt_randomness(struct work_struct *work) * Copy the pool to the stack so that the mixer always has a * consistent view, before we reenable irqs again. */ - memcpy(pool, fast_pool->pool32, sizeof(pool)); + memcpy(pool, fast_pool->pool, sizeof(pool)); fast_pool->count = 0; fast_pool->last = jiffies; local_irq_enable(); @@ -1267,25 +1278,30 @@ void add_interrupt_randomness(int irq) struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); struct pt_regs *regs = get_irq_regs(); unsigned int new_count; + union { + u32 u32[4]; + u64 u64[2]; + unsigned long longs[16 / sizeof(long)]; + } irq_data; if (cycles == 0) cycles = get_reg(fast_pool, regs); if (sizeof(cycles) == 8) - fast_pool->pool64[0] ^= cycles ^ rol64(now, 32) ^ irq; + irq_data.u64[0] = cycles ^ rol64(now, 32) ^ irq; else { - fast_pool->pool32[0] ^= cycles ^ irq; - fast_pool->pool32[1] ^= now; + irq_data.u32[0] = cycles ^ irq; + irq_data.u32[1] = now; } if (sizeof(unsigned long) == 8) - fast_pool->pool64[1] ^= regs ? instruction_pointer(regs) : _RET_IP_; + irq_data.u64[1] = regs ? instruction_pointer(regs) : _RET_IP_; else { - fast_pool->pool32[2] ^= regs ? instruction_pointer(regs) : _RET_IP_; - fast_pool->pool32[3] ^= get_reg(fast_pool, regs); + irq_data.u32[2] = regs ? 
instruction_pointer(regs) : _RET_IP_; + irq_data.u32[3] = get_reg(fast_pool, regs); } - fast_mix(fast_pool->pool32); + fast_mix(fast_pool->pool, irq_data.longs); new_count = ++fast_pool->count; if (new_count & MIX_INFLIGHT) From 9891211dfe0362877a3ba0c4b6ef96cef968cc71 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 8 Mar 2022 11:20:17 -0700 Subject: [PATCH 0203/1842] random: make consistent usage of crng_ready() commit a96cfe2d427064325ecbf56df8816c6b871ec285 upstream. Rather than sometimes checking `crng_init < 2`, we should always use the crng_ready() macro, so that should we change anything later, it's consistent. Additionally, that macro already has a likely() around it, which means we don't need to open code our own likely() and unlikely() annotations. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 19 +++++++------------ 1 file changed, 7 insertions(+), 12 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index fe2533a1642a..90a3a09999d5 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -125,18 +125,13 @@ static void try_to_generate_entropy(void); */ int wait_for_random_bytes(void) { - if (likely(crng_ready())) - return 0; - - do { + while (!crng_ready()) { int ret; ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ); if (ret) return ret > 0 ? 0 : ret; - try_to_generate_entropy(); - } while (!crng_ready()); - + } return 0; } EXPORT_SYMBOL(wait_for_random_bytes); @@ -293,7 +288,7 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - if (crng_init < 2) { + if (!crng_ready()) { crng_init = 2; finalize_init = true; } @@ -361,7 +356,7 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], * ready, we do fast key erasure with the base_crng directly, because * this is what crng_pre_init_inject() mutates during early init. */ - if (unlikely(!crng_ready())) { + if (!crng_ready()) { bool ready; spin_lock_irqsave(&base_crng.lock, flags); @@ -804,7 +799,7 @@ static void credit_entropy_bits(size_t nbits) entropy_count = min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); - if (crng_init < 2 && entropy_count >= POOL_MIN_BITS) + if (!crng_ready() && entropy_count >= POOL_MIN_BITS) crng_reseed(); } @@ -961,7 +956,7 @@ int __init rand_initialize(void) extract_entropy(base_crng.key, sizeof(base_crng.key)); ++base_crng.generation; - if (arch_init && trust_cpu && crng_init < 2) { + if (arch_init && trust_cpu && !crng_ready()) { crng_init = 2; pr_notice("crng init done (trusting CPU's manufacturer)\n"); } @@ -1550,7 +1545,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) case RNDRESEEDCRNG: if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (crng_init < 2) + if (!crng_ready()) return -ENODATA; crng_reseed(); return 0; From 20da9c6079df82d3c6e53e5d196950e30ddf5252 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 8 Mar 2022 23:32:34 -0700 Subject: [PATCH 0204/1842] random: reseed more often immediately after booting commit 7a7ff644aeaf071d433caffb3b8ea57354b55bd3 upstream. In order to chip away at the "premature first" problem, we augment our existing entropy accounting with more frequent reseedings at boot. 
The idea is that at boot, we're getting entropy from various places, and we're not very sure which of early boot entropy is good and which isn't. Even when we're crediting the entropy, we're still not totally certain that it's any good. Since boot is the one time (aside from a compromise) that we have zero entropy, it's important that we shepherd entropy into the crng fairly often. At the same time, we don't want a "premature next" problem, whereby an attacker can brute force individual bits of added entropy. In lieu of going full-on Fortuna (for now), we can pick a simpler strategy of just reseeding more often during the first 5 minutes after boot. This is still bounded by the 256-bit entropy credit requirement, so we'll skip a reseeding if we haven't reached that, but in case entropy /is/ coming in, this ensures that it makes its way into the crng rather rapidly during these early stages. Ordinarily we reseed if the previous reseeding is 300 seconds old. This commit changes things so that for the first 600 seconds of boot time, we reseed if the previous reseeding is uptime / 2 seconds old. That means that we'll reseed at the very least double the uptime of the previous reseeding. Cc: Theodore Ts'o Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 90a3a09999d5..4187cc8e4cb7 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -337,6 +337,28 @@ static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], memzero_explicit(first_block, sizeof(first_block)); } +/* + * Return whether the crng seed is considered to be sufficiently + * old that a reseeding might be attempted. This happens if the last + * reseeding was CRNG_RESEED_INTERVAL ago, or during early boot, at + * an interval proportional to the uptime. + */ +static bool crng_has_old_seed(void) +{ + static bool early_boot = true; + unsigned long interval = CRNG_RESEED_INTERVAL; + + if (unlikely(READ_ONCE(early_boot))) { + time64_t uptime = ktime_get_seconds(); + if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2) + WRITE_ONCE(early_boot, false); + else + interval = max_t(unsigned int, 5 * HZ, + (unsigned int)uptime / 2 * HZ); + } + return time_after(jiffies, READ_ONCE(base_crng.birth) + interval); +} + /* * This function returns a ChaCha state that you may use for generating * random data. It also returns up to 32 bytes on its own of random data @@ -370,10 +392,10 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], } /* - * If the base_crng is more than 5 minutes old, we reseed, which - * in turn bumps the generation counter that we check below. + * If the base_crng is old enough, we try to reseed, which in turn + * bumps the generation counter that we check below. */ - if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL))) + if (unlikely(crng_has_old_seed())) crng_reseed(); local_lock_irqsave(&crngs.lock, flags); From 083ab33951e45a4e7521241ee9e1570139bf66f0 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 8 Mar 2022 10:12:16 -0700 Subject: [PATCH 0205/1842] random: check for signal and try earlier when generating entropy commit 3e504d2026eb6c8762cd6040ae57db166516824a upstream. Rather than waiting a full second in an interruptable waiter before trying to generate entropy, try to generate entropy first and wait second. 
While waiting one second might give an extra second for getting entropy from elsewhere, we're already pretty late in the init process here, and whatever else is generating entropy will still continue to contribute. This has implications on signal handling: we call try_to_generate_entropy() from wait_for_random_bytes(), and wait_for_random_bytes() always uses wait_event_interruptible_timeout() when waiting, since it's called by userspace code in restartable contexts, where signals can pend. Since try_to_generate_entropy() now runs first, if a signal is pending, it's necessary for try_to_generate_entropy() to check for signals, since it won't hit the wait until after try_to_generate_entropy() has returned. And even before this change, when entering a busy loop in try_to_generate_entropy(), we should have been checking to see if any signals are pending, so that a process doesn't get stuck in that loop longer than expected. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 4187cc8e4cb7..4c0a12c9052b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -127,10 +127,11 @@ int wait_for_random_bytes(void) { while (!crng_ready()) { int ret; + + try_to_generate_entropy(); ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ); if (ret) return ret > 0 ? 0 : ret; - try_to_generate_entropy(); } return 0; } @@ -1371,7 +1372,7 @@ static void try_to_generate_entropy(void) return; timer_setup_on_stack(&stack.timer, entropy_timer, 0); - while (!crng_ready()) { + while (!crng_ready() && !signal_pending(current)) { if (!timer_pending(&stack.timer)) mod_timer(&stack.timer, jiffies + 1); mix_pool_bytes(&stack.cycles, sizeof(stack.cycles)); From 7cb6782146b88cb020b0b0d37e0a56d8fc6b134c Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 21 Mar 2022 18:48:05 -0600 Subject: [PATCH 0206/1842] random: skip fast_init if hwrng provides large chunk of entropy commit af704c856e888fb044b058d731d61b46eeec499d upstream. At boot time, EFI calls add_bootloader_randomness(), which in turn calls add_hwgenerator_randomness(). Currently add_hwgenerator_randomness() feeds the first 64 bytes of randomness to the "fast init" non-crypto-grade phase. But if add_hwgenerator_randomness() gets called with more than POOL_MIN_BITS of entropy, there's no point in passing it off to the "fast init" stage, since that's enough entropy to bootstrap the real RNG. The "fast init" stage is just there to provide _something_ in the case where we don't have enough entropy to properly bootstrap the RNG. But if we do have enough entropy to bootstrap the RNG, the current logic doesn't serve a purpose. So, in the case where we're passed greater than or equal to POOL_MIN_BITS of entropy, this commit makes us skip the "fast init" phase. Cc: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 4c0a12c9052b..382ff01aeb78 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1125,7 +1125,7 @@ void rand_initialize_disk(struct gendisk *disk) void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { - if (unlikely(crng_init == 0)) { + if (unlikely(crng_init == 0 && entropy < POOL_MIN_BITS)) { size_t ret = crng_pre_init_inject(buffer, count, true); mix_pool_bytes(buffer, ret); count -= ret; From f3bc5eca83d37a1a723d4c378a167a83d1b9c771 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 22 Mar 2022 21:43:12 -0600 Subject: [PATCH 0207/1842] random: treat bootloader trust toggle the same way as cpu trust toggle commit d97c68d178fbf8aaaf21b69b446f2dfb13909316 upstream. If CONFIG_RANDOM_TRUST_CPU is set, the RNG initializes using RDRAND. But, the user can disable (or enable) this behavior by setting `random.trust_cpu=0/1` on the kernel command line. This allows system builders to do reasonable things while avoiding howls from tinfoil hatters. (Or vice versa.) CONFIG_RANDOM_TRUST_BOOTLOADER is basically the same thing, but regards the seed passed via EFI or device tree, which might come from RDRAND or a TPM or somewhere else. In order to allow distros to more easily enable this while avoiding those same howls (or vice versa), this commit adds the corresponding `random.trust_bootloader=0/1` toggle. Cc: Theodore Ts'o Cc: Graham Christensen Reviewed-by: Ard Biesheuvel Reviewed-by: Dominik Brodowski Link: https://github.com/NixOS/nixpkgs/pull/165355 Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- Documentation/admin-guide/kernel-parameters.txt | 6 ++++++ drivers/char/Kconfig | 3 ++- drivers/char/random.c | 8 +++++++- 3 files changed, 15 insertions(+), 2 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 611172f68bb5..5e34deec819f 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4035,6 +4035,12 @@ fully seed the kernel's CRNG. Default is controlled by CONFIG_RANDOM_TRUST_CPU. + random.trust_bootloader={on,off} + [KNL] Enable or disable trusting the use of a + seed passed by the bootloader (if available) to + fully seed the kernel's CRNG. Default is controlled + by CONFIG_RANDOM_TRUST_BOOTLOADER. + ras=option[,option,...] [KNL] RAS-specific options cec_disable [X86] diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig index d229a2d0c017..3e2703a49632 100644 --- a/drivers/char/Kconfig +++ b/drivers/char/Kconfig @@ -495,4 +495,5 @@ config RANDOM_TRUST_BOOTLOADER device randomness. Say Y here to assume the entropy provided by the booloader is trustworthy so it will be added to the kernel's entropy pool. Otherwise, say N here so it will be regarded as device input that - only mixes the entropy pool. + only mixes the entropy pool. This can also be configured at boot with + "random.trust_bootloader=on/off". 
diff --git a/drivers/char/random.c b/drivers/char/random.c index 382ff01aeb78..fd7b234c2d3e 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -945,11 +945,17 @@ static bool drain_entropy(void *buf, size_t nbytes) **********************************************************************/ static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); +static bool trust_bootloader __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER); static int __init parse_trust_cpu(char *arg) { return kstrtobool(arg, &trust_cpu); } +static int __init parse_trust_bootloader(char *arg) +{ + return kstrtobool(arg, &trust_bootloader); +} early_param("random.trust_cpu", parse_trust_cpu); +early_param("random.trust_bootloader", parse_trust_bootloader); /* * The first collection of entropy occurs at system boot while interrupts @@ -1157,7 +1163,7 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); */ void add_bootloader_randomness(const void *buf, size_t size) { - if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER)) + if (trust_bootloader) add_hwgenerator_randomness(buf, size, size * 8); else add_device_randomness(buf, size); From bb563d06c5bc3d08bd5c8665d6b1da6865114cfd Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 22 Mar 2022 22:21:52 -0600 Subject: [PATCH 0208/1842] random: re-add removed comment about get_random_{u32,u64} reseeding commit dd7aa36e535797926d8eb311da7151919130139d upstream. The comment about get_random_{u32,u64}() not invoking reseeding got added in an unrelated commit, that then was recently reverted by 0313bc278dac ("Revert "random: block in /dev/urandom""). So this adds that little comment snippet back, and improves the wording a bit too. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index fd7b234c2d3e..d6b8a14c25b6 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -226,9 +226,10 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, void * * These interfaces will return the requested number of random bytes * into the given buffer or as a return value. This is equivalent to - * a read from /dev/urandom. The integer family of functions may be - * higher performance for one-off random integers, because they do a - * bit of buffering. + * a read from /dev/urandom. The u32, u64, int, and long family of + * functions may be higher performance for one-off random integers, + * because they do a bit of buffering and do not invoke reseeding + * until the buffer is emptied. * *********************************************************************/ From be0d4e3e96adb6a9bb68a237a33d95bf2b9d3143 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 31 Mar 2022 11:01:01 -0400 Subject: [PATCH 0209/1842] random: mix build-time latent entropy into pool at init commit 1754abb3e7583c570666fa1e1ee5b317e88c89a0 upstream. Prior, the "input_pool_data" array needed no real initialization, and so it was easy to mark it with __latent_entropy to populate it during compile-time. In switching to using a hash function, this required us to specifically initialize it to some specific state, which means we dropped the __latent_entropy attribute. An unfortunate side effect was this meant the pool was no longer seeded using compile-time random data. 
In order to bring this back, we declare an array in rand_initialize() with __latent_entropy and call mix_pool_bytes() on that at init, which accomplishes the same thing as before. We make this __initconst, so that it doesn't take up space at runtime after init. Fixes: 6e8ec2552c7d ("random: use computational hash for entropy extraction") Reviewed-by: Dominik Brodowski Reviewed-by: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index d6b8a14c25b6..e21e645836c5 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -972,6 +972,11 @@ int __init rand_initialize(void) bool arch_init = true; unsigned long rv; +#if defined(LATENT_ENTROPY_PLUGIN) + static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy; + _mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed)); +#endif + for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { if (!arch_get_random_seed_long_early(&rv) && !arch_get_random_long_early(&rv)) { From bb515a5beff279443f54802d20d609f7294c98a4 Mon Sep 17 00:00:00 2001 From: Jan Varho Date: Mon, 4 Apr 2022 19:42:30 +0300 Subject: [PATCH 0210/1842] random: do not split fast init input in add_hwgenerator_randomness() commit 527a9867af29ff89f278d037db704e0ed50fb666 upstream. add_hwgenerator_randomness() tries to only use the required amount of input for fast init, but credits all the entropy, rather than a fraction of it. Since it's hard to determine how much entropy is left over out of a non-uniformly random sample, either give it all to fast init or credit it, but don't attempt to do both. In the process, we can clean up the injection code to no longer need to return a value. Signed-off-by: Jan Varho [Jason: expanded commit message] Fixes: 73c7733f122e ("random: do not throw away excess input to crng_fast_load") Cc: stable@vger.kernel.org # 5.17+, requires af704c856e88 Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 23 ++++++----------------- 1 file changed, 6 insertions(+), 17 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index e21e645836c5..ae38e4edcbd7 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -439,11 +439,8 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], * This shouldn't be set by functions like add_device_randomness(), * where we can't trust the buffer passed to it is guaranteed to be * unpredictable (so it might not have any entropy at all). - * - * Returns the number of bytes processed from input, which is bounded - * by CRNG_INIT_CNT_THRESH if account is true.
*/ -static size_t crng_pre_init_inject(const void *input, size_t len, bool account) +static void crng_pre_init_inject(const void *input, size_t len, bool account) { static int crng_init_cnt = 0; struct blake2s_state hash; @@ -454,18 +451,15 @@ static size_t crng_pre_init_inject(const void *input, size_t len, bool account) spin_lock_irqsave(&base_crng.lock, flags); if (crng_init != 0) { spin_unlock_irqrestore(&base_crng.lock, flags); - return 0; + return; } - if (account) - len = min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt); - blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); blake2s_update(&hash, input, len); blake2s_final(&hash, base_crng.key); if (account) { - crng_init_cnt += len; + crng_init_cnt += min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt); if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { ++base_crng.generation; crng_init = 1; @@ -476,8 +470,6 @@ static size_t crng_pre_init_inject(const void *input, size_t len, bool account) if (crng_init == 1) pr_notice("fast init done\n"); - - return len; } static void _get_random_bytes(void *buf, size_t nbytes) @@ -1138,12 +1130,9 @@ void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { if (unlikely(crng_init == 0 && entropy < POOL_MIN_BITS)) { - size_t ret = crng_pre_init_inject(buffer, count, true); - mix_pool_bytes(buffer, ret); - count -= ret; - buffer += ret; - if (!count || crng_init == 0) - return; + crng_pre_init_inject(buffer, count, true); + mix_pool_bytes(buffer, count); + return; } /* From 26ee8fa4dfda061ecdb8d7be5b8f38292376e07e Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 5 Apr 2022 16:40:51 +0200 Subject: [PATCH 0211/1842] random: do not allow user to keep crng key around on stack commit aba120cc101788544aa3e2c30c8da88513892350 upstream. The fast key erasure RNG design relies on the key that's used to be used and then discarded. We do this, making judicious use of memzero_explicit(). However, reads to /dev/urandom and calls to getrandom() involve a copy_to_user(), and userspace can use FUSE or userfaultfd, or make a massive call, dynamically remap memory addresses as it goes, and set the process priority to idle, in order to keep a kernel stack alive indefinitely. By probing /proc/sys/kernel/random/entropy_avail to learn when the crng key is refreshed, a malicious userspace could mount this attack every 5 minutes thereafter, breaking the crng's forward secrecy. In order to fix this, we just overwrite the stack's key with the first 32 bytes of the "free" fast key erasure output. If we're returning <= 32 bytes to the user, then we can still return those bytes directly, so that short reads don't become slower. And for long reads, the difference is hopefully lost in the amortization, so it doesn't change much, with that amortization helping variously for medium reads. We don't need to do this for get_random_bytes() and the various kernel-space callers, and later, if we ever switch to always batching, this won't be necessary either, so there's no need to change the API of these functions. Cc: Theodore Ts'o Reviewed-by: Jann Horn Fixes: c92e040d575a ("random: add backtracking protection to the CRNG") Fixes: 186873c549df ("random: use simpler fast key erasure flow on per-cpu keys") Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 35 +++++++++++++++++++++++------------ 1 file changed, 23 insertions(+), 12 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index ae38e4edcbd7..ab366f50cd9c 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -534,19 +534,29 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) if (!nbytes) return 0; - len = min_t(size_t, 32, nbytes); - crng_make_state(chacha_state, output, len); + /* + * Immediately overwrite the ChaCha key at index 4 with random + * bytes, in case userspace causes copy_to_user() below to sleep + * forever, so that we still retain forward secrecy in that case. + */ + crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE); + /* + * However, if we're doing a read of len <= 32, we don't need to + * use chacha_state after, so we can simply return those bytes to + * the user directly. + */ + if (nbytes <= CHACHA_KEY_SIZE) { + ret = copy_to_user(buf, &chacha_state[4], nbytes) ? -EFAULT : nbytes; + goto out_zero_chacha; + } - if (copy_to_user(buf, output, len)) - return -EFAULT; - nbytes -= len; - buf += len; - ret += len; - - while (nbytes) { + do { if (large_request && need_resched()) { - if (signal_pending(current)) + if (signal_pending(current)) { + if (!ret) + ret = -ERESTARTSYS; break; + } schedule(); } @@ -563,10 +573,11 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) nbytes -= len; buf += len; ret += len; - } + } while (nbytes); - memzero_explicit(chacha_state, sizeof(chacha_state)); memzero_explicit(output, sizeof(output)); +out_zero_chacha: + memzero_explicit(chacha_state, sizeof(chacha_state)); return ret; } From 9641d9b4303f654636610ac6f001196ec5edd7e5 Mon Sep 17 00:00:00 2001 From: Jann Horn Date: Tue, 5 Apr 2022 18:39:31 +0200 Subject: [PATCH 0212/1842] random: check for signal_pending() outside of need_resched() check commit 1448769c9cdb69ad65287f4f7ab58bc5f2f5d7ba upstream. signal_pending() checks TIF_NOTIFY_SIGNAL and TIF_SIGPENDING, which signal that the task should bail out of the syscall when possible. This is a separate concept from need_resched(), which checks TIF_NEED_RESCHED, signaling that the task should preempt. In particular, with the current code, the signal_pending() bailout probably won't work reliably. Change this to look like other functions that read lots of data, such as read_zero(). Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Jann Horn Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index ab366f50cd9c..4ac213109812 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -551,13 +551,13 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) } do { - if (large_request && need_resched()) { + if (large_request) { if (signal_pending(current)) { if (!ret) ret = -ERESTARTSYS; break; } - schedule(); + cond_resched(); } chacha20_block(chacha_state, output); From 1805d20dfb67b3503e08a2ca0de714faf6e06fc4 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Wed, 6 Apr 2022 02:36:16 +0200 Subject: [PATCH 0213/1842] random: check for signals every PAGE_SIZE chunk of /dev/[u]random commit e3c1c4fd9e6d14059ed93ebfe15e1c57793b1a05 upstream. 
In 1448769c9cdb ("random: check for signal_pending() outside of need_resched() check"), Jann pointed out that we previously were only checking the TIF_NOTIFY_SIGNAL and TIF_SIGPENDING flags if the process had TIF_NEED_RESCHED set, which meant that, in practice, super long reads to /dev/[u]random would delay signal handling by a long time. I tried this using the below program, and indeed I wasn't able to interrupt a /dev/urandom read until after several megabytes had been read. The bug he fixed has always been there, and so code that reads from /dev/urandom without checking the return value of read() has mostly worked for a long time, for most sizes, not just for <= 256. Maybe it makes sense to keep that code working. The reason it was so small prior, ignoring the fact that it didn't work anyway, was likely because /dev/random used to block, and that could happen for pretty large lengths of time while entropy was gathered. But now, it's just a chacha20 call, which is extremely fast and is just operating on pure data, without having to wait for some external event. In that sense, /dev/[u]random is a lot more like /dev/zero. Taking a page out of /dev/zero's read_zero() function, it always returns at least one chunk, and then checks for signals after each chunk. Chunk sizes there are of length PAGE_SIZE. Let's just copy the same thing for /dev/[u]random, and check for signals and cond_resched() for every PAGE_SIZE amount of data. This makes the behavior more consistent with expectations, and should mitigate the impact of Jann's fix for the age-old signal check bug. ---- test program ---- #include <unistd.h> #include <signal.h> #include <stdio.h> #include <sys/random.h> static unsigned char x[~0U]; static void handle(int) { } int main(int argc, char *argv[]) { pid_t pid = getpid(), child; signal(SIGUSR1, handle); if (!(child = fork())) { for (;;) kill(pid, SIGUSR1); } pause(); printf("interrupted after reading %zd bytes\n", getrandom(x, sizeof(x), 0)); kill(child, SIGTERM); return 0; } Cc: Jann Horn Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 4ac213109812..154865d6108c 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -525,7 +525,6 @@ EXPORT_SYMBOL(get_random_bytes); static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) { - bool large_request = nbytes > 256; ssize_t ret = 0; size_t len; u32 chacha_state[CHACHA_STATE_WORDS]; @@ -551,15 +550,6 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) } do { - if (large_request) { - if (signal_pending(current)) { - if (!ret) - ret = -ERESTARTSYS; - break; - } - cond_resched(); - } - chacha20_block(chacha_state, output); if (unlikely(chacha_state[12] == 0)) ++chacha_state[13]; @@ -573,6 +563,13 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) nbytes -= len; buf += len; ret += len; + + BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0); + if (!(ret % PAGE_SIZE) && nbytes) { + if (signal_pending(current)) + break; + cond_resched(); + } } while (nbytes); memzero_explicit(output, sizeof(output)); From 72a9ec8d75142aaf818cdf1492b5a421009d8b81 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 7 Apr 2022 21:23:08 +0200 Subject: [PATCH 0214/1842] random: allow partial reads if later user copies fail commit 5209aed5137880fa229746cb521f715e55596460 upstream.
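Both the patch above and this one change what a reader of /dev/[u]random observes when a long read is cut short partway through, whether by a pending signal or by a failing copy_to_user(): the caller gets the bytes copied so far rather than an outright error. Well-written user-space callers should already cope with that, since short reads and EINTR are ordinary POSIX behavior for read(). The following is a minimal sketch of such a caller; it is plain user-space C and not part of either patch.

---- illustration: robust /dev/urandom reader (sketch, not part of either patch) ----

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Fill buf with len random bytes, retrying on short reads and EINTR. */
static int fill_random(void *buf, size_t len)
{
	unsigned char *p = buf;
	int fd = open("/dev/urandom", O_RDONLY);

	if (fd < 0)
		return -1;
	while (len) {
		ssize_t n = read(fd, p, len);

		if (n <= 0) {
			if (n < 0 && errno == EINTR)
				continue;	/* interrupted before any copy: retry */
			close(fd);
			return -1;
		}
		/* A short read is not an error: account for it and keep going. */
		p += n;
		len -= (size_t)n;
	}
	close(fd);
	return 0;
}

int main(void)
{
	unsigned char key[64];

	if (fill_random(key, sizeof(key)) != 0) {
		perror("fill_random");
		return 1;
	}
	printf("read %zu random bytes\n", sizeof(key));
	return 0;
}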
Rather than failing entirely if a copy_to_user() fails at some point, instead we should return a partial read for the amount that succeeded prior, unless none succeeded at all, in which case we return -EFAULT as before. This makes it consistent with other reader interfaces. For example, the following snippet for /dev/zero outputs "4" followed by "1": int fd; void *x = mmap(NULL, 4096, PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); assert(x != MAP_FAILED); fd = open("/dev/zero", O_RDONLY); assert(fd >= 0); printf("%zd\n", read(fd, x, 4)); printf("%zd\n", read(fd, x + 4095, 4)); close(fd); This brings that same standard behavior to the various RNG reader interfaces. While we're at it, we can streamline the loop logic a little bit. Suggested-by: Linus Torvalds Cc: Jann Horn Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 154865d6108c..4ec60f4cb086 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -525,8 +525,7 @@ EXPORT_SYMBOL(get_random_bytes); static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) { - ssize_t ret = 0; - size_t len; + size_t len, left, ret = 0; u32 chacha_state[CHACHA_STATE_WORDS]; u8 output[CHACHA_BLOCK_SIZE]; @@ -545,37 +544,40 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) * the user directly. */ if (nbytes <= CHACHA_KEY_SIZE) { - ret = copy_to_user(buf, &chacha_state[4], nbytes) ? -EFAULT : nbytes; + ret = nbytes - copy_to_user(buf, &chacha_state[4], nbytes); goto out_zero_chacha; } - do { + for (;;) { chacha20_block(chacha_state, output); if (unlikely(chacha_state[12] == 0)) ++chacha_state[13]; len = min_t(size_t, nbytes, CHACHA_BLOCK_SIZE); - if (copy_to_user(buf, output, len)) { - ret = -EFAULT; + left = copy_to_user(buf, output, len); + if (left) { + ret += len - left; break; } - nbytes -= len; buf += len; ret += len; + nbytes -= len; + if (!nbytes) + break; BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0); - if (!(ret % PAGE_SIZE) && nbytes) { + if (ret % PAGE_SIZE == 0) { if (signal_pending(current)) break; cond_resched(); } - } while (nbytes); + } memzero_explicit(output, sizeof(output)); out_zero_chacha: memzero_explicit(chacha_state, sizeof(chacha_state)); - return ret; + return ret ? ret : -EFAULT; } /* From a1b5c849d855c97f75375c564321961ad18b8f46 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:14:57 +0200 Subject: [PATCH 0215/1842] random: make random_get_entropy() return an unsigned long commit b0c3e796f24b588b862b61ce235d3c9417dc8983 upstream. Some implementations were returning type `unsigned long`, while others that fell back to get_cycles() were implicitly returning a `cycles_t` or an untyped constant int literal. That makes for weird and confusing code, and basically all code in the kernel already handled it like it was an `unsigned long`. I recently tried to handle it as the largest type it could be, a `cycles_t`, but doing so doesn't really help with much. Instead let's just make random_get_entropy() return an unsigned long all the time. This also matches the commonly used `arch_get_random_long()` function, so now RDRAND and RDTSC return the same sized integer, which means one can fallback to the other more gracefully. Cc: Dominik Brodowski Cc: Theodore Ts'o Acked-by: Thomas Gleixner Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 20 +++++++------------- include/linux/timex.h | 2 +- 2 files changed, 8 insertions(+), 14 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 4ec60f4cb086..8b09d608c802 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1015,7 +1015,7 @@ int __init rand_initialize(void) */ void add_device_randomness(const void *buf, size_t size) { - cycles_t cycles = random_get_entropy(); + unsigned long cycles = random_get_entropy(); unsigned long flags, now = jiffies; if (crng_init == 0 && size) @@ -1046,8 +1046,7 @@ struct timer_rand_state { */ static void add_timer_randomness(struct timer_rand_state *state, unsigned int num) { - cycles_t cycles = random_get_entropy(); - unsigned long flags, now = jiffies; + unsigned long cycles = random_get_entropy(), now = jiffies, flags; long delta, delta2, delta3; spin_lock_irqsave(&input_pool.lock, flags); @@ -1302,8 +1301,7 @@ static void mix_interrupt_randomness(struct work_struct *work) void add_interrupt_randomness(int irq) { enum { MIX_INFLIGHT = 1U << 31 }; - cycles_t cycles = random_get_entropy(); - unsigned long now = jiffies; + unsigned long cycles = random_get_entropy(), now = jiffies; struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); struct pt_regs *regs = get_irq_regs(); unsigned int new_count; @@ -1316,16 +1314,12 @@ void add_interrupt_randomness(int irq) if (cycles == 0) cycles = get_reg(fast_pool, regs); - if (sizeof(cycles) == 8) + if (sizeof(unsigned long) == 8) { irq_data.u64[0] = cycles ^ rol64(now, 32) ^ irq; - else { + irq_data.u64[1] = regs ? instruction_pointer(regs) : _RET_IP_; + } else { irq_data.u32[0] = cycles ^ irq; irq_data.u32[1] = now; - } - - if (sizeof(unsigned long) == 8) - irq_data.u64[1] = regs ? instruction_pointer(regs) : _RET_IP_; - else { irq_data.u32[2] = regs ? instruction_pointer(regs) : _RET_IP_; irq_data.u32[3] = get_reg(fast_pool, regs); } @@ -1372,7 +1366,7 @@ static void entropy_timer(struct timer_list *t) static void try_to_generate_entropy(void) { struct { - cycles_t cycles; + unsigned long cycles; struct timer_list timer; } stack; diff --git a/include/linux/timex.h b/include/linux/timex.h index ce0859763670..e5154378076d 100644 --- a/include/linux/timex.h +++ b/include/linux/timex.h @@ -75,7 +75,7 @@ * By default we use get_cycles() for this purpose, but individual * architectures may override this in their asm/timex.h header file. */ -#define random_get_entropy() get_cycles() +#define random_get_entropy() ((unsigned long)get_cycles()) #endif /* From 9dff512945f19afc91515f7b4e0ffe06fab417ed Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 18 Apr 2022 20:57:31 +0200 Subject: [PATCH 0216/1842] random: document crng_fast_key_erasure() destination possibility commit 8717627d6ac53251ee012c3c7aca392f29f38a42 upstream. This reverts 35a33ff3807d ("random: use memmove instead of memcpy for remaining 32 bytes"), which was made on a totally bogus basis. The thing it was worried about overlapping came from the stack, not from one of its arguments, as Eric pointed out. But the fact that this confusion even happened draws attention to the fact that it's a bit non-obvious that the random_data parameter can alias chacha_state, and in fact should do so when the caller can't rely on the stack being cleared in a timely manner. So this commit documents that. Reported-by: Eric Biggers Reviewed-by: Eric Biggers Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index 8b09d608c802..7035739d1924 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -320,6 +320,13 @@ static void crng_reseed(void) * the resultant ChaCha state to the user, along with the second * half of the block containing 32 bytes of random data that may * be used; random_data_len may not be greater than 32. + * + * The returned ChaCha state contains within it a copy of the old + * key value, at index 4, so the state should always be zeroed out + * immediately after using in order to maintain forward secrecy. + * If the state cannot be erased in a timely manner, then it is + * safer to set the random_data parameter to &chacha_state[4] so + * that this function overwrites it before returning. */ static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], u32 chacha_state[CHACHA_STATE_WORDS], From ec25e386d38190441a6a13423ec5fe842409c254 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 3 May 2022 21:43:58 +0200 Subject: [PATCH 0217/1842] random: fix sysctl documentation nits commit 069c4ea6871c18bd368f27756e0f91ffb524a788 upstream. A semicolon was missing, and the almost-alphabetical-but-not ordering was confusing, so regroup these by category instead. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- Documentation/admin-guide/sysctl/kernel.rst | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst index c776ecc7a81f..a4b1ebc2e70b 100644 --- a/Documentation/admin-guide/sysctl/kernel.rst +++ b/Documentation/admin-guide/sysctl/kernel.rst @@ -1006,6 +1006,9 @@ This is a directory, with the following entries: * ``boot_id``: a UUID generated the first time this is retrieved, and unvarying after that; +* ``uuid``: a UUID generated every time this is retrieved (this can + thus be used to generate UUIDs at will); + * ``entropy_avail``: the pool's entropy count, in bits; * ``poolsize``: the entropy pool size, in bits; @@ -1013,10 +1016,7 @@ This is a directory, with the following entries: * ``urandom_min_reseed_secs``: obsolete (used to determine the minimum number of seconds between urandom pool reseeding). This file is writable for compatibility purposes, but writing to it has no effect - on any RNG behavior. - -* ``uuid``: a UUID generated every time this is retrieved (this can - thus be used to generate UUIDs at will); + on any RNG behavior; * ``write_wakeup_threshold``: when the entropy count drops below this (as a number of bits), processes waiting to write to ``/dev/random`` From 7d9eab78bed97625619c30b4dbc27a4702c589ad Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 5 May 2022 02:20:22 +0200 Subject: [PATCH 0218/1842] init: call time_init() before rand_initialize() commit fe222a6ca2d53c38433cba5d3be62a39099e708e upstream. Currently time_init() is called after rand_initialize(), but rand_initialize() makes use of the timer on various platforms, and sometimes this timer needs to be initialized by time_init() first. In order for random_get_entropy() to not return zero during early boot when it's potentially used as an entropy source, reverse the order of these two calls. The block doing random initialization was right before time_init() before, so changing the order shouldn't have any complicated effects. 
Cc: Andrew Morton Reviewed-by: Stafford Horne Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- init/main.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/init/main.c b/init/main.c index 3526eaec7508..5fd0da8629e5 100644 --- a/init/main.c +++ b/init/main.c @@ -952,11 +952,13 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) hrtimers_init(); softirq_init(); timekeeping_init(); + time_init(); /* * For best initial stack canary entropy, prepare it after: * - setup_arch() for any UEFI RNG entropy and boot cmdline access * - timekeeping_init() for ktime entropy used in rand_initialize() + * - time_init() for making random_get_entropy() work on some platforms * - rand_initialize() to get any arch-specific entropy like RDRAND * - add_latent_entropy() to get any latent entropy * - adding command line entropy @@ -966,7 +968,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) add_device_randomness(command_line, strlen(command_line)); boot_init_stack_canary(); - time_init(); perf_event_init(); profile_init(); call_function_init(); From c895438b172c7071b720b0ccefe25500526b7a0d Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 23 Apr 2022 21:11:41 +0200 Subject: [PATCH 0219/1842] ia64: define get_cycles macro for arch-override commit 57c0900b91d8891ab43f0e6b464d059fda51d102 upstream. Itanium defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/ia64/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/ia64/include/asm/timex.h b/arch/ia64/include/asm/timex.h index 869a3ac6bf23..7ccc077a60be 100644 --- a/arch/ia64/include/asm/timex.h +++ b/arch/ia64/include/asm/timex.h @@ -39,6 +39,7 @@ get_cycles (void) ret = ia64_getreg(_IA64_REG_AR_ITC); return ret; } +#define get_cycles get_cycles extern void ia64_cpu_local_tick (void); extern unsigned long long ia64_native_sched_clock (void); From 641d1fbd96676cb3c7c987aea4792349117fdbe6 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 23 Apr 2022 21:11:41 +0200 Subject: [PATCH 0220/1842] s390: define get_cycles macro for arch-override commit 2e3df523256cb9836de8441e9c791a796759bb3c upstream. S390x defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Acked-by: Heiko Carstens Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/s390/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/s390/include/asm/timex.h b/arch/s390/include/asm/timex.h index 289aaff4d365..588aa0f2c842 100644 --- a/arch/s390/include/asm/timex.h +++ b/arch/s390/include/asm/timex.h @@ -172,6 +172,7 @@ static inline cycles_t get_cycles(void) { return (cycles_t) get_tod_clock() >> 2; } +#define get_cycles get_cycles int get_phys_clock(unsigned long *clock); void init_cpu_timer(void); From c55a863c304e7301237c6c057350cf01d4ca9716 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 23 Apr 2022 21:11:41 +0200 Subject: [PATCH 0221/1842] parisc: define get_cycles macro for arch-override commit 8865bbe6ba1120e67f72201b7003a16202cd42be upstream. PA-RISC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Helge Deller Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/parisc/include/asm/timex.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/parisc/include/asm/timex.h b/arch/parisc/include/asm/timex.h index 06b510f8172e..b4622cb06a75 100644 --- a/arch/parisc/include/asm/timex.h +++ b/arch/parisc/include/asm/timex.h @@ -13,9 +13,10 @@ typedef unsigned long cycles_t; -static inline cycles_t get_cycles (void) +static inline cycles_t get_cycles(void) { return mfctl(16); } +#define get_cycles get_cycles #endif From 30ee01bcdc2cd684d9aec469cbc1881384846dd1 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 23 Apr 2022 21:11:41 +0200 Subject: [PATCH 0222/1842] alpha: define get_cycles macro for arch-override commit 1097710bc9660e1e588cf2186a35db3d95c4d258 upstream. Alpha defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Richard Henderson Cc: Ivan Kokshaysky Acked-by: Matt Turner Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/alpha/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/alpha/include/asm/timex.h b/arch/alpha/include/asm/timex.h index b565cc6f408e..f89798da8a14 100644 --- a/arch/alpha/include/asm/timex.h +++ b/arch/alpha/include/asm/timex.h @@ -28,5 +28,6 @@ static inline cycles_t get_cycles (void) __asm__ __volatile__ ("rpcc %0" : "=r"(ret)); return ret; } +#define get_cycles get_cycles #endif From 07b5d0b3e2cc435d9ae44b52df4f0a846170962b Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 23 Apr 2022 21:11:41 +0200 Subject: [PATCH 0223/1842] powerpc: define get_cycles macro for arch-override commit 408835832158df0357e18e96da7f2d1ed6b80e7f upstream. PowerPC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. 
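The `#define get_cycles get_cycles` dance that this and the surrounding patches keep referring to is a plain preprocessor idiom: defining a macro that expands to the function's own name makes the definition visible to #ifdef in generic code, while ordinary calls to get_cycles() keep working unchanged. Below is a condensed stand-alone sketch of the idiom; it is illustrative only and not copied from any one of these headers.

---- illustration: get_cycles detection idiom (sketch, not from any of these patches) ----

#include <stdio.h>

/* What an arch header provides. */
static inline unsigned long get_cycles(void)
{
	return 42;	/* stand-in for a real cycle counter read */
}
#define get_cycles get_cycles	/* lets generic code detect the definition */

/* What generic code (here, a stand-in for linux/timex.h) can then do. */
#ifdef get_cycles
#define random_get_entropy() ((unsigned long)get_cycles())
#else
#define random_get_entropy() random_get_entropy_fallback()
#endif

int main(void)
{
	printf("random_get_entropy() = %lu\n", random_get_entropy());
	return 0;
}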
While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Acked-by: Michael Ellerman Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/powerpc/include/asm/timex.h b/arch/powerpc/include/asm/timex.h index 95988870a57b..171602fd358e 100644 --- a/arch/powerpc/include/asm/timex.h +++ b/arch/powerpc/include/asm/timex.h @@ -19,6 +19,7 @@ static inline cycles_t get_cycles(void) { return mftb(); } +#define get_cycles get_cycles #endif /* __KERNEL__ */ #endif /* _ASM_POWERPC_TIMEX_H */ From 7690be1adf8a5b8cc5dd5530009b5004209461b3 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 10 Apr 2022 16:49:50 +0200 Subject: [PATCH 0224/1842] timekeeping: Add raw clock fallback for random_get_entropy() commit 1366992e16bddd5e2d9a561687f367f9f802e2e4 upstream. The addition of random_get_entropy_fallback() provides access to whichever time source has the highest frequency, which is useful for gathering entropy on platforms without available cycle counters. It's not necessarily as good as being able to quickly access a cycle counter that the CPU has, but it's still something, even when it falls back to being jiffies-based. In the event that a given arch does not define get_cycles(), falling back to the get_cycles() default implementation that returns 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Finally, since random_get_entropy_fallback() is used during extremely early boot when randomizing freelists in mm_init(), it can be called before timekeeping has been initialized. In that case there really is nothing we can do; jiffies hasn't even started ticking yet. So just give up and return 0. Suggested-by: Thomas Gleixner Signed-off-by: Jason A. Donenfeld Reviewed-by: Thomas Gleixner Cc: Arnd Bergmann Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/linux/timex.h | 8 ++++++++ kernel/time/timekeeping.c | 15 +++++++++++++++ 2 files changed, 23 insertions(+) diff --git a/include/linux/timex.h b/include/linux/timex.h index e5154378076d..2efab9a806a9 100644 --- a/include/linux/timex.h +++ b/include/linux/timex.h @@ -62,6 +62,8 @@ #include #include +unsigned long random_get_entropy_fallback(void); + #include #ifndef random_get_entropy @@ -74,8 +76,14 @@ * * By default we use get_cycles() for this purpose, but individual * architectures may override this in their asm/timex.h header file. + * If a given arch does not have get_cycles(), then we fallback to + * using random_get_entropy_fallback(). 
*/ +#ifdef get_cycles #define random_get_entropy() ((unsigned long)get_cycles()) +#else +#define random_get_entropy() random_get_entropy_fallback() +#endif #endif /* diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c index cc4dc2857a87..e12ce2821dba 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -2378,6 +2379,20 @@ static int timekeeping_validate_timex(const struct __kernel_timex *txc) return 0; } +/** + * random_get_entropy_fallback - Returns the raw clock source value, + * used by random.c for platforms with no valid random_get_entropy(). + */ +unsigned long random_get_entropy_fallback(void) +{ + struct tk_read_base *tkr = &tk_core.timekeeper.tkr_mono; + struct clocksource *clock = READ_ONCE(tkr->clock); + + if (unlikely(timekeeping_suspended || !clock)) + return 0; + return clock->read(clock); +} +EXPORT_SYMBOL_GPL(random_get_entropy_fallback); /** * do_adjtimex() - Accessor function to NTP __do_adjtimex function From 84397906a60373e7764553788cd3d1f04b640f80 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0225/1842] m68k: use fallback for random_get_entropy() instead of zero commit 0f392c95391f2d708b12971a07edaa7973f9eece upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Geert Uytterhoeven Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/m68k/include/asm/timex.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/m68k/include/asm/timex.h b/arch/m68k/include/asm/timex.h index 6a21d9358280..f4a7a340f4ca 100644 --- a/arch/m68k/include/asm/timex.h +++ b/arch/m68k/include/asm/timex.h @@ -35,7 +35,7 @@ static inline unsigned long random_get_entropy(void) { if (mach_random_get_entropy) return mach_random_get_entropy(); - return 0; + return random_get_entropy_fallback(); } #define random_get_entropy random_get_entropy From 714def449776d69dcf29b408b41ce6beff8e2074 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0226/1842] riscv: use fallback for random_get_entropy() instead of zero commit 6d01238623faa9425f820353d2066baf6c9dc872 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Paul Walmsley Acked-by: Palmer Dabbelt Reviewed-by: Palmer Dabbelt Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/riscv/include/asm/timex.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h index 81de51e6aa32..a06697846e69 100644 --- a/arch/riscv/include/asm/timex.h +++ b/arch/riscv/include/asm/timex.h @@ -41,7 +41,7 @@ static inline u32 get_cycles_hi(void) static inline unsigned long random_get_entropy(void) { if (unlikely(clint_time_val == NULL)) - return 0; + return random_get_entropy_fallback(); return get_cycles(); } #define random_get_entropy() random_get_entropy() From d5531246afcf798cdb6b82143ade30fa1d7513e9 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0227/1842] mips: use fallback for random_get_entropy() instead of just c0 random commit 1c99c6a7c3c599a68321b01b9ec243215ede5a68 upstream. For situations in which we don't have a c0 counter register available, we've been falling back to reading the c0 "random" register, which is usually bounded by the amount of TLB entries and changes every other cycle or so. This means it wraps extremely often. We can do better by combining this fast-changing counter with a potentially slower-changing counter from random_get_entropy_fallback() in the more significant bits. This commit combines the two, taking into account that the changing bits are in a different bit position depending on the CPU model. In addition, we previously were falling back to 0 for ancient CPUs that Linux does not support anyway; remove that dead path entirely. Cc: Thomas Gleixner Cc: Arnd Bergmann Tested-by: Maciej W. Rozycki Acked-by: Thomas Bogendoerfer Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/mips/include/asm/timex.h | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/arch/mips/include/asm/timex.h b/arch/mips/include/asm/timex.h index 8026baf46e72..2e107886f97a 100644 --- a/arch/mips/include/asm/timex.h +++ b/arch/mips/include/asm/timex.h @@ -76,25 +76,24 @@ static inline cycles_t get_cycles(void) else return 0; /* no usable counter */ } +#define get_cycles get_cycles /* * Like get_cycles - but where c0_count is not available we desperately * use c0_random in an attempt to get at least a little bit of entropy. - * - * R6000 and R6000A neither have a count register nor a random register. - * That leaves no entropy source in the CPU itself. */ static inline unsigned long random_get_entropy(void) { - unsigned int prid = read_c0_prid(); - unsigned int imp = prid & PRID_IMP_MASK; + unsigned int c0_random; - if (can_use_mips_counter(prid)) + if (can_use_mips_counter(read_c0_prid())) return read_c0_count(); - else if (likely(imp != PRID_IMP_R6000 && imp != PRID_IMP_R6000A)) - return read_c0_random(); + + if (cpu_has_3kex) + c0_random = (read_c0_random() >> 8) & 0x3f; else - return 0; /* no usable register */ + c0_random = read_c0_random() & 0x3f; + return (random_get_entropy_fallback() << 6) | (0x3f - c0_random); } #define random_get_entropy random_get_entropy From fdca775081527364621857957655207d83035376 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0228/1842] arm: use fallback for random_get_entropy() instead of zero commit ff8a8f59c99f6a7c656387addc4d9f2247d75077 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. 
Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Reviewed-by: Russell King (Oracle) Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/arm/include/asm/timex.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm/include/asm/timex.h b/arch/arm/include/asm/timex.h index 7c3b3671d6c2..6d1337c169cd 100644 --- a/arch/arm/include/asm/timex.h +++ b/arch/arm/include/asm/timex.h @@ -11,5 +11,6 @@ typedef unsigned long cycles_t; #define get_cycles() ({ cycles_t c; read_current_timer(&c) ? 0 : c; }) +#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback()) #endif From 0b93f40cbe9712563bb15329768bd73c9c19ac03 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0229/1842] nios2: use fallback for random_get_entropy() instead of zero commit c04e72700f2293013dab40208e809369378f224c upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Dinh Nguyen Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/nios2/include/asm/timex.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/nios2/include/asm/timex.h b/arch/nios2/include/asm/timex.h index a769f871b28d..40a1adc9bd03 100644 --- a/arch/nios2/include/asm/timex.h +++ b/arch/nios2/include/asm/timex.h @@ -8,5 +8,8 @@ typedef unsigned long cycles_t; extern cycles_t get_cycles(void); +#define get_cycles get_cycles + +#define random_get_entropy() (((unsigned long)get_cycles()) ?: random_get_entropy_fallback()) #endif From 25d4fdf1f0f85e81e71d7c7e6cbcceb37d2ef65a Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0230/1842] x86/tsc: Use fallback for random_get_entropy() instead of zero commit 3bd4abc07a267e6a8b33d7f8717136e18f921c53 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is suboptimal. Instead, fallback to calling random_get_entropy_fallback(), which isn't extremely high precision or guaranteed to be entropic, but is certainly better than returning zero all the time. If CONFIG_X86_TSC=n, then it's possible for the kernel to run on systems without RDTSC, such as 486 and certain 586, so the fallback code is only required for that case. As well, fix up both the new function and the get_cycles() function from which it was derived to use cpu_feature_enabled() rather than boot_cpu_has(), and use !IS_ENABLED() instead of #ifndef. Signed-off-by: Jason A. Donenfeld Reviewed-by: Thomas Gleixner Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Borislav Petkov Cc: x86@kernel.org Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/timex.h | 9 +++++++++ arch/x86/include/asm/tsc.h | 7 +++---- 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/timex.h b/arch/x86/include/asm/timex.h index a4a8b1b16c0c..956e4145311b 100644 --- a/arch/x86/include/asm/timex.h +++ b/arch/x86/include/asm/timex.h @@ -5,6 +5,15 @@ #include #include +static inline unsigned long random_get_entropy(void) +{ + if (!IS_ENABLED(CONFIG_X86_TSC) && + !cpu_feature_enabled(X86_FEATURE_TSC)) + return random_get_entropy_fallback(); + return rdtsc(); +} +#define random_get_entropy random_get_entropy + /* Assume we use the PIT time source for the clock tick */ #define CLOCK_TICK_RATE PIT_TICK_RATE diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h index 01a300a9700b..fbdc3d951494 100644 --- a/arch/x86/include/asm/tsc.h +++ b/arch/x86/include/asm/tsc.h @@ -20,13 +20,12 @@ extern void disable_TSC(void); static inline cycles_t get_cycles(void) { -#ifndef CONFIG_X86_TSC - if (!boot_cpu_has(X86_FEATURE_TSC)) + if (!IS_ENABLED(CONFIG_X86_TSC) && + !cpu_feature_enabled(X86_FEATURE_TSC)) return 0; -#endif - return rdtsc(); } +#define get_cycles get_cycles extern struct system_counterval_t convert_art_to_tsc(u64 art); extern struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns); From a5092be129cf950aefcfd31789dba81f4d9337ac Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0231/1842] um: use fallback for random_get_entropy() instead of zero commit 9f13fb0cd11ed2327abff69f6501a2c124c88b5a upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Richard Weinberger Cc: Anton Ivanov Acked-by: Johannes Berg Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/um/include/asm/timex.h | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/arch/um/include/asm/timex.h b/arch/um/include/asm/timex.h index e392a9a5bc9b..9f27176adb26 100644 --- a/arch/um/include/asm/timex.h +++ b/arch/um/include/asm/timex.h @@ -2,13 +2,8 @@ #ifndef __UM_TIMEX_H #define __UM_TIMEX_H -typedef unsigned long cycles_t; - -static inline cycles_t get_cycles (void) -{ - return 0; -} - #define CLOCK_TICK_RATE (HZ) +#include + #endif From e1ea0e26d3e43a374695f56722f5be7ce5c9cd0e Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0232/1842] sparc: use fallback for random_get_entropy() instead of zero commit ac9756c79797bb98972736b13cfb239fd2cffb79 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. 
It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: David S. Miller Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/sparc/include/asm/timex_32.h | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/arch/sparc/include/asm/timex_32.h b/arch/sparc/include/asm/timex_32.h index 542915b46209..f86326a6f89e 100644 --- a/arch/sparc/include/asm/timex_32.h +++ b/arch/sparc/include/asm/timex_32.h @@ -9,8 +9,6 @@ #define CLOCK_TICK_RATE 1193180 /* Underlying HZ */ -/* XXX Maybe do something better at some point... -DaveM */ -typedef unsigned long cycles_t; -#define get_cycles() (0) +#include #endif From ffcfdd5de9d0287da52522fbcd1bbba52c81b3ef Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 8 Apr 2022 18:03:13 +0200 Subject: [PATCH 0233/1842] xtensa: use fallback for random_get_entropy() instead of zero commit e10e2f58030c5c211d49042a8c2a1b93d40b2ffb upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Max Filippov Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/xtensa/include/asm/timex.h | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/arch/xtensa/include/asm/timex.h b/arch/xtensa/include/asm/timex.h index 233ec75e60c6..3f2462f2d027 100644 --- a/arch/xtensa/include/asm/timex.h +++ b/arch/xtensa/include/asm/timex.h @@ -29,10 +29,6 @@ extern unsigned long ccount_freq; -typedef unsigned long long cycles_t; - -#define get_cycles() (0) - void local_timer_setup(unsigned cpu); /* @@ -59,4 +55,6 @@ static inline void set_linux_timer (unsigned long ccompare) xtensa_set_sr(ccompare, SREG_CCOMPARE + LINUX_TIMER); } +#include + #endif /* _XTENSA_TIMEX_H */ From f4c98fe1d1005f021d78db102e6c52fc8b89c33b Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 12 Apr 2022 19:59:57 +0200 Subject: [PATCH 0234/1842] random: insist on random_get_entropy() existing in order to simplify commit 4b758eda851eb9336ca86a0041a4d3da55f66511 upstream. All platforms are now guaranteed to provide some value for random_get_entropy(). In case some bug leads to this not being so, we print a warning, because that indicates that something is really very wrong (and likely other things are impacted too). This should never be hit, but it's a good and cheap way of finding out if something ever is problematic. Since we now have viable fallback code for random_get_entropy() on all platforms, which is, in the worst case, not worse than jiffies, we can count on getting the best possible value out of it. 
That means there's no longer a use for using jiffies as entropy input. It also means we no longer have a reason for doing the round-robin register flow in the IRQ handler, which was always of fairly dubious value. Instead we can greatly simplify the IRQ handler inputs and also unify the construction between 64-bits and 32-bits. We now collect the cycle counter and the return address, since those are the two things that matter. Because the return address and the irq number are likely related, to the extent we mix in the irq number, we can just xor it into the top unchanging bytes of the return address, rather than the bottom changing bytes of the cycle counter as before. Then, we can do a fixed 2 rounds of SipHash/HSipHash. Finally, we use the same construction of hashing only half of the [H]SipHash state on 32-bit and 64-bit. We're not actually discarding any entropy, since that entropy is carried through until the next time. And more importantly, it lets us do the same sponge-like construction everywhere. Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 86 +++++++++++++------------------------------ 1 file changed, 26 insertions(+), 60 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 7035739d1924..f04cf5d8bcf7 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1022,15 +1022,14 @@ int __init rand_initialize(void) */ void add_device_randomness(const void *buf, size_t size) { - unsigned long cycles = random_get_entropy(); - unsigned long flags, now = jiffies; + unsigned long entropy = random_get_entropy(); + unsigned long flags; if (crng_init == 0 && size) crng_pre_init_inject(buf, size, false); spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(&cycles, sizeof(cycles)); - _mix_pool_bytes(&now, sizeof(now)); + _mix_pool_bytes(&entropy, sizeof(entropy)); _mix_pool_bytes(buf, size); spin_unlock_irqrestore(&input_pool.lock, flags); } @@ -1053,12 +1052,11 @@ struct timer_rand_state { */ static void add_timer_randomness(struct timer_rand_state *state, unsigned int num) { - unsigned long cycles = random_get_entropy(), now = jiffies, flags; + unsigned long entropy = random_get_entropy(), now = jiffies, flags; long delta, delta2, delta3; spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(&cycles, sizeof(cycles)); - _mix_pool_bytes(&now, sizeof(now)); + _mix_pool_bytes(&entropy, sizeof(entropy)); _mix_pool_bytes(&num, sizeof(num)); spin_unlock_irqrestore(&input_pool.lock, flags); @@ -1186,7 +1184,6 @@ struct fast_pool { unsigned long pool[4]; unsigned long last; unsigned int count; - u16 reg_idx; }; static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = { @@ -1204,13 +1201,13 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = { * This is [Half]SipHash-1-x, starting from an empty key. Because * the key is fixed, it assumes that its inputs are non-malicious, * and therefore this has no security on its own. s represents the - * 128 or 256-bit SipHash state, while v represents a 128-bit input. + * four-word SipHash state, while v represents a two-word input. 
*/ -static void fast_mix(unsigned long s[4], const unsigned long *v) +static void fast_mix(unsigned long s[4], const unsigned long v[2]) { size_t i; - for (i = 0; i < 16 / sizeof(long); ++i) { + for (i = 0; i < 2; ++i) { s[3] ^= v[i]; #ifdef CONFIG_64BIT s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32); @@ -1250,33 +1247,17 @@ int random_online_cpu(unsigned int cpu) } #endif -static unsigned long get_reg(struct fast_pool *f, struct pt_regs *regs) -{ - unsigned long *ptr = (unsigned long *)regs; - unsigned int idx; - - if (regs == NULL) - return 0; - idx = READ_ONCE(f->reg_idx); - if (idx >= sizeof(struct pt_regs) / sizeof(unsigned long)) - idx = 0; - ptr += idx++; - WRITE_ONCE(f->reg_idx, idx); - return *ptr; -} - static void mix_interrupt_randomness(struct work_struct *work) { struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix); /* - * The size of the copied stack pool is explicitly 16 bytes so that we - * tax mix_pool_byte()'s compression function the same amount on all - * platforms. This means on 64-bit we copy half the pool into this, - * while on 32-bit we copy all of it. The entropy is supposed to be - * sufficiently dispersed between bits that in the sponge-like - * half case, on average we don't wind up "losing" some. + * The size of the copied stack pool is explicitly 2 longs so that we + * only ever ingest half of the siphash output each time, retaining + * the other half as the next "key" that carries over. The entropy is + * supposed to be sufficiently dispersed between bits so on average + * we don't wind up "losing" some. */ - u8 pool[16]; + unsigned long pool[2]; /* Check to see if we're running on the wrong CPU due to hotplug. */ local_irq_disable(); @@ -1308,36 +1289,21 @@ static void mix_interrupt_randomness(struct work_struct *work) void add_interrupt_randomness(int irq) { enum { MIX_INFLIGHT = 1U << 31 }; - unsigned long cycles = random_get_entropy(), now = jiffies; + unsigned long entropy = random_get_entropy(); struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness); struct pt_regs *regs = get_irq_regs(); unsigned int new_count; - union { - u32 u32[4]; - u64 u64[2]; - unsigned long longs[16 / sizeof(long)]; - } irq_data; - if (cycles == 0) - cycles = get_reg(fast_pool, regs); - - if (sizeof(unsigned long) == 8) { - irq_data.u64[0] = cycles ^ rol64(now, 32) ^ irq; - irq_data.u64[1] = regs ? instruction_pointer(regs) : _RET_IP_; - } else { - irq_data.u32[0] = cycles ^ irq; - irq_data.u32[1] = now; - irq_data.u32[2] = regs ? instruction_pointer(regs) : _RET_IP_; - irq_data.u32[3] = get_reg(fast_pool, regs); - } - - fast_mix(fast_pool->pool, irq_data.longs); + fast_mix(fast_pool->pool, (unsigned long[2]){ + entropy, + (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq) + }); new_count = ++fast_pool->count; if (new_count & MIX_INFLIGHT) return; - if (new_count < 64 && (!time_after(now, fast_pool->last + HZ) || + if (new_count < 64 && (!time_is_before_jiffies(fast_pool->last + HZ) || unlikely(crng_init == 0))) return; @@ -1373,28 +1339,28 @@ static void entropy_timer(struct timer_list *t) static void try_to_generate_entropy(void) { struct { - unsigned long cycles; + unsigned long entropy; struct timer_list timer; } stack; - stack.cycles = random_get_entropy(); + stack.entropy = random_get_entropy(); /* Slow counter - or none. 
Don't even bother */ - if (stack.cycles == random_get_entropy()) + if (stack.entropy == random_get_entropy()) return; timer_setup_on_stack(&stack.timer, entropy_timer, 0); while (!crng_ready() && !signal_pending(current)) { if (!timer_pending(&stack.timer)) mod_timer(&stack.timer, jiffies + 1); - mix_pool_bytes(&stack.cycles, sizeof(stack.cycles)); + mix_pool_bytes(&stack.entropy, sizeof(stack.entropy)); schedule(); - stack.cycles = random_get_entropy(); + stack.entropy = random_get_entropy(); } del_timer_sync(&stack.timer); destroy_timer_on_stack(&stack.timer); - mix_pool_bytes(&stack.cycles, sizeof(stack.cycles)); + mix_pool_bytes(&stack.entropy, sizeof(stack.entropy)); } From 273aebb50be6ce16e1b056cf9b39447821a5ac35 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 3 May 2022 14:14:32 +0200 Subject: [PATCH 0235/1842] random: do not use batches when !crng_ready() commit cbe89e5a375a51bbb952929b93fa973416fea74e upstream. It's too hard to keep the batches synchronized, and pointless anyway, since in !crng_ready(), we're updating the base_crng key really often, where batching only hurts. So instead, if the crng isn't ready, just call into get_random_bytes(). At this stage nothing is performance critical anyhow. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index f04cf5d8bcf7..01097606fed7 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -467,10 +467,8 @@ static void crng_pre_init_inject(const void *input, size_t len, bool account) if (account) { crng_init_cnt += min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt); - if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) { - ++base_crng.generation; + if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) crng_init = 1; - } } spin_unlock_irqrestore(&base_crng.lock, flags); @@ -626,6 +624,11 @@ u64 get_random_u64(void) warn_unseeded_randomness(&previous); + if (!crng_ready()) { + _get_random_bytes(&ret, sizeof(ret)); + return ret; + } + local_lock_irqsave(&batched_entropy_u64.lock, flags); batch = raw_cpu_ptr(&batched_entropy_u64); @@ -660,6 +663,11 @@ u32 get_random_u32(void) warn_unseeded_randomness(&previous); + if (!crng_ready()) { + _get_random_bytes(&ret, sizeof(ret)); + return ret; + } + local_lock_irqsave(&batched_entropy_u32.lock, flags); batch = raw_cpu_ptr(&batched_entropy_u32); From 24d32756857804ec8b425f2fe04ab809abcea0f2 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 30 Apr 2022 15:08:20 +0200 Subject: [PATCH 0236/1842] random: use first 128 bits of input as fast init commit 5c3b747ef54fa2a7318776777f6044540d99f721 upstream. Before, the first 64 bytes of input, regardless of how entropic it was, would be used to mutate the crng base key directly, and none of those bytes would be credited as having entropy. Then 256 bits of credited input would be accumulated, and only then would the rng transition from the earlier "fast init" phase into being actually initialized. The thinking was that by mixing and matching fast init and real init, an attacker who compromised the fast init state, considered easy to do given how little entropy might be in those first 64 bytes, would then be able to bruteforce bits from the actual initialization. By keeping these separate, bruteforcing became impossible. 
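To put numbers on the scheme described above (a sketch using the constants that appear in the diffs below; CHACHA_KEY_SIZE and BLAKE2S_HASH_SIZE are both 32 bytes):

	enum {
		CRNG_INIT_CNT_THRESH = 2 * 32,			/* = 64 bytes: old uncredited fast-init input */
		POOL_MIN_BITS        = 32 * 8,			/* = 256 bits of credit needed for full initialization */
		POOL_FAST_INIT_BITS  = POOL_MIN_BITS / 2,	/* = 128 bits: the fast-init threshold this patch adds */
	};
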
However, by not crediting potentially creditable bits from those first 64 bytes of input, we delay initialization, and actually make the problem worse, because it means the user is drawing worse random numbers for a longer period of time. Instead, we can take the first 128 bits as fast init, and allow them to be credited, and then hold off on the next 128 bits until they've accumulated. This is still a wide enough margin to prevent bruteforcing the rng state, while still initializing much faster. Then, rather than trying to piecemeal inject into the base crng key at various points, instead just extract from the pool when we need it, for the crng_init==0 phase. Performance may even be better for the various inputs here, since there are likely more calls to mix_pool_bytes() then there are to get_random_bytes() during this phase of system execution. Since the preinit injection code is gone, bootloader randomness can then do something significantly more straight forward, removing the weird system_wq hack in hwgenerator randomness. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 146 ++++++++++++++---------------------------- 1 file changed, 49 insertions(+), 97 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 01097606fed7..9beba425c508 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -233,10 +233,7 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, void * *********************************************************************/ -enum { - CRNG_RESEED_INTERVAL = 300 * HZ, - CRNG_INIT_CNT_THRESH = 2 * CHACHA_KEY_SIZE -}; +enum { CRNG_RESEED_INTERVAL = 300 * HZ }; static struct { u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long)); @@ -260,6 +257,8 @@ static DEFINE_PER_CPU(struct crng, crngs) = { /* Used by crng_reseed() to extract a new seed from the input pool. */ static bool drain_entropy(void *buf, size_t nbytes); +/* Used by crng_make_state() to extract a new seed when crng_init==0. */ +static void extract_entropy(void *buf, size_t nbytes); /* * This extracts a new crng key from the input pool, but only if there is a @@ -384,17 +383,20 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], /* * For the fast path, we check whether we're ready, unlocked first, and * then re-check once locked later. In the case where we're really not - * ready, we do fast key erasure with the base_crng directly, because - * this is what crng_pre_init_inject() mutates during early init. + * ready, we do fast key erasure with the base_crng directly, extracting + * when crng_init==0. */ if (!crng_ready()) { bool ready; spin_lock_irqsave(&base_crng.lock, flags); ready = crng_ready(); - if (!ready) + if (!ready) { + if (crng_init == 0) + extract_entropy(base_crng.key, sizeof(base_crng.key)); crng_fast_key_erasure(base_crng.key, chacha_state, random_data, random_data_len); + } spin_unlock_irqrestore(&base_crng.lock, flags); if (!ready) return; @@ -435,48 +437,6 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], local_unlock_irqrestore(&crngs.lock, flags); } -/* - * This function is for crng_init == 0 only. It loads entropy directly - * into the crng's key, without going through the input pool. It is, - * generally speaking, not very safe, but we use this only at early - * boot time when it's better to have something there rather than - * nothing. - * - * If account is set, then the crng_init_cnt counter is incremented. 
- * This shouldn't be set by functions like add_device_randomness(), - * where we can't trust the buffer passed to it is guaranteed to be - * unpredictable (so it might not have any entropy at all). - */ -static void crng_pre_init_inject(const void *input, size_t len, bool account) -{ - static int crng_init_cnt = 0; - struct blake2s_state hash; - unsigned long flags; - - blake2s_init(&hash, sizeof(base_crng.key)); - - spin_lock_irqsave(&base_crng.lock, flags); - if (crng_init != 0) { - spin_unlock_irqrestore(&base_crng.lock, flags); - return; - } - - blake2s_update(&hash, base_crng.key, sizeof(base_crng.key)); - blake2s_update(&hash, input, len); - blake2s_final(&hash, base_crng.key); - - if (account) { - crng_init_cnt += min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt); - if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) - crng_init = 1; - } - - spin_unlock_irqrestore(&base_crng.lock, flags); - - if (crng_init == 1) - pr_notice("fast init done\n"); -} - static void _get_random_bytes(void *buf, size_t nbytes) { u32 chacha_state[CHACHA_STATE_WORDS]; @@ -789,7 +749,8 @@ EXPORT_SYMBOL(get_random_bytes_arch); enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, - POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */ + POOL_MIN_BITS = POOL_BITS, /* No point in settling for less. */ + POOL_FAST_INIT_BITS = POOL_MIN_BITS / 2 }; /* For notifying userspace should write into /dev/random. */ @@ -826,24 +787,6 @@ static void mix_pool_bytes(const void *in, size_t nbytes) spin_unlock_irqrestore(&input_pool.lock, flags); } -static void credit_entropy_bits(size_t nbits) -{ - unsigned int entropy_count, orig, add; - - if (!nbits) - return; - - add = min_t(size_t, nbits, POOL_BITS); - - do { - orig = READ_ONCE(input_pool.entropy_count); - entropy_count = min_t(unsigned int, POOL_BITS, orig + add); - } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); - - if (!crng_ready() && entropy_count >= POOL_MIN_BITS) - crng_reseed(); -} - /* * This is an HKDF-like construction for using the hashed collected entropy * as a PRF key, that's then expanded block-by-block. @@ -909,6 +852,33 @@ static bool drain_entropy(void *buf, size_t nbytes) return true; } +static void credit_entropy_bits(size_t nbits) +{ + unsigned int entropy_count, orig, add; + unsigned long flags; + + if (!nbits) + return; + + add = min_t(size_t, nbits, POOL_BITS); + + do { + orig = READ_ONCE(input_pool.entropy_count); + entropy_count = min_t(unsigned int, POOL_BITS, orig + add); + } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); + + if (!crng_ready() && entropy_count >= POOL_MIN_BITS) + crng_reseed(); + else if (unlikely(crng_init == 0 && entropy_count >= POOL_FAST_INIT_BITS)) { + spin_lock_irqsave(&base_crng.lock, flags); + if (crng_init == 0) { + extract_entropy(base_crng.key, sizeof(base_crng.key)); + crng_init = 1; + } + spin_unlock_irqrestore(&base_crng.lock, flags); + } +} + /********************************************************************** * @@ -951,9 +921,9 @@ static bool drain_entropy(void *buf, size_t nbytes) * entropy as specified by the caller. If the entropy pool is full it will * block until more entropy is needed. * - * add_bootloader_randomness() is the same as add_hwgenerator_randomness() or - * add_device_randomness(), depending on whether or not the configuration - * option CONFIG_RANDOM_TRUST_BOOTLOADER is set. 
+ * add_bootloader_randomness() is called by bootloader drivers, such as EFI + * and device tree, and credits its input depending on whether or not the + * configuration option CONFIG_RANDOM_TRUST_BOOTLOADER is set. * * add_interrupt_randomness() uses the interrupt timing as random * inputs to the entropy pool. Using the cycle counters and the irq source @@ -1033,9 +1003,6 @@ void add_device_randomness(const void *buf, size_t size) unsigned long entropy = random_get_entropy(); unsigned long flags; - if (crng_init == 0 && size) - crng_pre_init_inject(buf, size, false); - spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(&entropy, sizeof(entropy)); _mix_pool_bytes(buf, size); @@ -1151,12 +1118,6 @@ void rand_initialize_disk(struct gendisk *disk) void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { - if (unlikely(crng_init == 0 && entropy < POOL_MIN_BITS)) { - crng_pre_init_inject(buffer, count, true); - mix_pool_bytes(buffer, count); - return; - } - /* * Throttle writing if we're above the trickle threshold. * We'll be woken up again once below POOL_MIN_BITS, when @@ -1164,7 +1125,7 @@ void add_hwgenerator_randomness(const void *buffer, size_t count, * CRNG_RESEED_INTERVAL has elapsed. */ wait_event_interruptible_timeout(random_write_wait, - !system_wq || kthread_should_stop() || + kthread_should_stop() || input_pool.entropy_count < POOL_MIN_BITS, CRNG_RESEED_INTERVAL); mix_pool_bytes(buffer, count); @@ -1173,17 +1134,14 @@ void add_hwgenerator_randomness(const void *buffer, size_t count, EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); /* - * Handle random seed passed by bootloader. - * If the seed is trustworthy, it would be regarded as hardware RNGs. Otherwise - * it would be regarded as device data. - * The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER. + * Handle random seed passed by bootloader, and credit it if + * CONFIG_RANDOM_TRUST_BOOTLOADER is set. */ void add_bootloader_randomness(const void *buf, size_t size) { + mix_pool_bytes(buf, size); if (trust_bootloader) - add_hwgenerator_randomness(buf, size, size * 8); - else - add_device_randomness(buf, size); + credit_entropy_bits(size * 8); } EXPORT_SYMBOL_GPL(add_bootloader_randomness); @@ -1283,13 +1241,8 @@ static void mix_interrupt_randomness(struct work_struct *work) fast_pool->last = jiffies; local_irq_enable(); - if (unlikely(crng_init == 0)) { - crng_pre_init_inject(pool, sizeof(pool), true); - mix_pool_bytes(pool, sizeof(pool)); - } else { - mix_pool_bytes(pool, sizeof(pool)); - credit_entropy_bits(1); - } + mix_pool_bytes(pool, sizeof(pool)); + credit_entropy_bits(1); memzero_explicit(pool, sizeof(pool)); } @@ -1311,8 +1264,7 @@ void add_interrupt_randomness(int irq) if (new_count & MIX_INFLIGHT) return; - if (new_count < 64 && (!time_is_before_jiffies(fast_pool->last + HZ) || - unlikely(crng_init == 0))) + if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ)) return; if (unlikely(!fast_pool->mix.func)) From ce3c4ff381865888c6375d3bc21f1eb867b6e4f0 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 30 Apr 2022 22:03:29 +0200 Subject: [PATCH 0237/1842] random: do not pretend to handle premature next security model commit e85c0fc1d94c52483a603651748d4c76d6aa1c6b upstream. Per the thread linked below, "premature next" is not considered to be a realistic threat model, and leads to more serious security problems. "Premature next" is the scenario in which: - Attacker compromises the current state of a fully initialized RNG via some kind of infoleak. 
- New bits of entropy are added directly to the key used to generate the /dev/urandom stream, without any buffering or pooling. - Attacker then, somehow having read access to /dev/urandom, samples RNG output and brute forces the individual new bits that were added. - Result: the RNG never "recovers" from the initial compromise, a so-called violation of what academics term "post-compromise security". The usual solutions to this involve some form of delaying when entropy gets mixed into the crng. With Fortuna, this involves multiple input buckets. With what the Linux RNG was trying to do prior, this involves entropy estimation. However, by delaying when entropy gets mixed in, it also means that RNG compromises are extremely dangerous during the window of time before the RNG has gathered enough entropy, during which time nonces may become predictable (or repeated), ephemeral keys may not be secret, and so forth. Moreover, it's unclear how realistic "premature next" is from an attack perspective, if these attacks even make sense in practice. Put together -- and discussed in more detail in the thread below -- these constitute grounds for just doing away with the current code that pretends to handle premature next. I say "pretends" because it wasn't doing an especially great job at it either; should we change our mind about this direction, we would probably implement Fortuna to "fix" the "problem", in which case, removing the pretend solution still makes sense. This also reduces the crng reseed period from 5 minutes down to 1 minute. The rationale from the thread might lead us toward reducing that even further in the future (or even eliminating it), but that remains a topic of a future commit. At a high level, this patch changes semantics from: Before: Seed for the first time after 256 "bits" of estimated entropy have been accumulated since the system booted. Thereafter, reseed once every five minutes, but only if 256 new "bits" have been accumulated since the last reseeding. After: Seed for the first time after 256 "bits" of estimated entropy have been accumulated since the system booted. Thereafter, reseed once every minute. Most of this patch is renaming and removing: POOL_MIN_BITS becomes POOL_INIT_BITS, credit_entropy_bits() becomes credit_init_bits(), crng_reseed() loses its "force" parameter since it's now always true, the drain_entropy() function no longer has any use so it's removed, entropy estimation is skipped if we've already init'd, the various notifiers for "low on entropy" are now only active prior to init, and finally, some documentation comments are cleaned up here and there. Link: https://lore.kernel.org/lkml/YmlMGx6+uigkGiZ0@zx2c4.com/ Cc: Theodore Ts'o Cc: Nadia Heninger Cc: Tom Ristenpart Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 176 +++++++++++++++--------------------------- 1 file changed, 63 insertions(+), 113 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 9beba425c508..5be537f80bab 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -15,14 +15,12 @@ * - Sysctl interface. * * The high level overview is that there is one input pool, into which - * various pieces of data are hashed. Some of that data is then "credited" as - * having a certain number of bits of entropy. When enough bits of entropy are - * available, the hash is finalized and handed as a key to a stream cipher that - * expands it indefinitely for various consumers. 
This key is periodically - * refreshed as the various entropy collectors, described below, add data to the - * input pool and credit it. There is currently no Fortuna-like scheduler - * involved, which can lead to malicious entropy sources causing a premature - * reseed, and the entropy estimates are, at best, conservative guesses. + * various pieces of data are hashed. Prior to initialization, some of that + * data is then "credited" as having a certain number of bits of entropy. + * When enough bits of entropy are available, the hash is finalized and + * handed as a key to a stream cipher that expands it indefinitely for + * various consumers. This key is periodically refreshed as the various + * entropy collectors, described below, add data to the input pool. */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt @@ -233,7 +231,10 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller, void * *********************************************************************/ -enum { CRNG_RESEED_INTERVAL = 300 * HZ }; +enum { + CRNG_RESEED_START_INTERVAL = HZ, + CRNG_RESEED_INTERVAL = 60 * HZ +}; static struct { u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long)); @@ -255,16 +256,10 @@ static DEFINE_PER_CPU(struct crng, crngs) = { .lock = INIT_LOCAL_LOCK(crngs.lock), }; -/* Used by crng_reseed() to extract a new seed from the input pool. */ -static bool drain_entropy(void *buf, size_t nbytes); -/* Used by crng_make_state() to extract a new seed when crng_init==0. */ +/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */ static void extract_entropy(void *buf, size_t nbytes); -/* - * This extracts a new crng key from the input pool, but only if there is a - * sufficient amount of entropy available, in order to mitigate bruteforcing - * of newly added bits. - */ +/* This extracts a new crng key from the input pool. */ static void crng_reseed(void) { unsigned long flags; @@ -272,9 +267,7 @@ static void crng_reseed(void) u8 key[CHACHA_KEY_SIZE]; bool finalize_init = false; - /* Only reseed if we can, to prevent brute forcing a small amount of new bits. */ - if (!drain_entropy(key, sizeof(key))) - return; + extract_entropy(key, sizeof(key)); /* * We copy the new key into the base_crng, overwriting the old one, @@ -346,10 +339,10 @@ static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE], } /* - * Return whether the crng seed is considered to be sufficiently - * old that a reseeding might be attempted. This happens if the last - * reseeding was CRNG_RESEED_INTERVAL ago, or during early boot, at - * an interval proportional to the uptime. + * Return whether the crng seed is considered to be sufficiently old + * that a reseeding is needed. This happens if the last reseeding + * was CRNG_RESEED_INTERVAL ago, or during early boot, at an interval + * proportional to the uptime. */ static bool crng_has_old_seed(void) { @@ -361,7 +354,7 @@ static bool crng_has_old_seed(void) if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2) WRITE_ONCE(early_boot, false); else - interval = max_t(unsigned int, 5 * HZ, + interval = max_t(unsigned int, CRNG_RESEED_START_INTERVAL, (unsigned int)uptime / 2 * HZ); } return time_after(jiffies, READ_ONCE(base_crng.birth) + interval); @@ -403,8 +396,8 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], } /* - * If the base_crng is old enough, we try to reseed, which in turn - * bumps the generation counter that we check below. 
+ * If the base_crng is old enough, we reseed, which in turn bumps the + * generation counter that we check below. */ if (unlikely(crng_has_old_seed())) crng_reseed(); @@ -736,30 +729,24 @@ EXPORT_SYMBOL(get_random_bytes_arch); * * After which, if added entropy should be credited: * - * static void credit_entropy_bits(size_t nbits) + * static void credit_init_bits(size_t nbits) * - * Finally, extract entropy via these two, with the latter one - * setting the entropy count to zero and extracting only if there - * is POOL_MIN_BITS entropy credited prior: + * Finally, extract entropy via: * * static void extract_entropy(void *buf, size_t nbytes) - * static bool drain_entropy(void *buf, size_t nbytes) * **********************************************************************/ enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, - POOL_MIN_BITS = POOL_BITS, /* No point in settling for less. */ - POOL_FAST_INIT_BITS = POOL_MIN_BITS / 2 + POOL_INIT_BITS = POOL_BITS, /* No point in settling for less. */ + POOL_FAST_INIT_BITS = POOL_INIT_BITS / 2 }; -/* For notifying userspace should write into /dev/random. */ -static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); - static struct { struct blake2s_state hash; spinlock_t lock; - unsigned int entropy_count; + unsigned int init_bits; } input_pool = { .hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE), BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4, @@ -774,9 +761,9 @@ static void _mix_pool_bytes(const void *in, size_t nbytes) } /* - * This function adds bytes into the entropy "pool". It does not - * update the entropy estimate. The caller should call - * credit_entropy_bits if this is appropriate. + * This function adds bytes into the input pool. It does not + * update the initialization bit counter; the caller should call + * credit_init_bits if this is appropriate. */ static void mix_pool_bytes(const void *in, size_t nbytes) { @@ -833,43 +820,24 @@ static void extract_entropy(void *buf, size_t nbytes) memzero_explicit(&block, sizeof(block)); } -/* - * First we make sure we have POOL_MIN_BITS of entropy in the pool, and then we - * set the entropy count to zero (but don't actually touch any data). Only then - * can we extract a new key with extract_entropy(). 
- */ -static bool drain_entropy(void *buf, size_t nbytes) +static void credit_init_bits(size_t nbits) { - unsigned int entropy_count; - do { - entropy_count = READ_ONCE(input_pool.entropy_count); - if (entropy_count < POOL_MIN_BITS) - return false; - } while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count); - extract_entropy(buf, nbytes); - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - return true; -} - -static void credit_entropy_bits(size_t nbits) -{ - unsigned int entropy_count, orig, add; + unsigned int init_bits, orig, add; unsigned long flags; - if (!nbits) + if (crng_ready() || !nbits) return; add = min_t(size_t, nbits, POOL_BITS); do { - orig = READ_ONCE(input_pool.entropy_count); - entropy_count = min_t(unsigned int, POOL_BITS, orig + add); - } while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig); + orig = READ_ONCE(input_pool.init_bits); + init_bits = min_t(unsigned int, POOL_BITS, orig + add); + } while (cmpxchg(&input_pool.init_bits, orig, init_bits) != orig); - if (!crng_ready() && entropy_count >= POOL_MIN_BITS) + if (!crng_ready() && init_bits >= POOL_INIT_BITS) crng_reseed(); - else if (unlikely(crng_init == 0 && entropy_count >= POOL_FAST_INIT_BITS)) { + else if (unlikely(crng_init == 0 && init_bits >= POOL_FAST_INIT_BITS)) { spin_lock_irqsave(&base_crng.lock, flags); if (crng_init == 0) { extract_entropy(base_crng.key, sizeof(base_crng.key)); @@ -975,13 +943,10 @@ int __init rand_initialize(void) _mix_pool_bytes(&now, sizeof(now)); _mix_pool_bytes(utsname(), sizeof(*(utsname()))); - extract_entropy(base_crng.key, sizeof(base_crng.key)); - ++base_crng.generation; - - if (arch_init && trust_cpu && !crng_ready()) { - crng_init = 2; - pr_notice("crng init done (trusting CPU's manufacturer)\n"); - } + if (crng_ready()) + crng_reseed(); + else if (arch_init && trust_cpu) + credit_init_bits(BLAKE2S_BLOCK_SIZE * 8); if (ratelimit_disable) { urandom_warning.interval = 0; @@ -1035,6 +1000,9 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned int nu _mix_pool_bytes(&num, sizeof(num)); spin_unlock_irqrestore(&input_pool.lock, flags); + if (crng_ready()) + return; + /* * Calculate number of bits of randomness we probably added. * We take into account the first, second and third-order deltas @@ -1065,7 +1033,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned int nu * Round down by 1 bit on general principles, * and limit entropy estimate to 12 bits. */ - credit_entropy_bits(min_t(unsigned int, fls(delta >> 1), 11)); + credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11)); } void add_input_randomness(unsigned int type, unsigned int code, @@ -1118,18 +1086,15 @@ void rand_initialize_disk(struct gendisk *disk) void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy) { - /* - * Throttle writing if we're above the trickle threshold. - * We'll be woken up again once below POOL_MIN_BITS, when - * the calling thread is about to terminate, or once - * CRNG_RESEED_INTERVAL has elapsed. - */ - wait_event_interruptible_timeout(random_write_wait, - kthread_should_stop() || - input_pool.entropy_count < POOL_MIN_BITS, - CRNG_RESEED_INTERVAL); mix_pool_bytes(buffer, count); - credit_entropy_bits(entropy); + credit_init_bits(entropy); + + /* + * Throttle writing to once every CRNG_RESEED_INTERVAL, unless + * we're not yet initialized. 
+ */ + if (!kthread_should_stop() && crng_ready()) + schedule_timeout_interruptible(CRNG_RESEED_INTERVAL); } EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); @@ -1141,7 +1106,7 @@ void add_bootloader_randomness(const void *buf, size_t size) { mix_pool_bytes(buf, size); if (trust_bootloader) - credit_entropy_bits(size * 8); + credit_init_bits(size * 8); } EXPORT_SYMBOL_GPL(add_bootloader_randomness); @@ -1242,7 +1207,7 @@ static void mix_interrupt_randomness(struct work_struct *work) local_irq_enable(); mix_pool_bytes(pool, sizeof(pool)); - credit_entropy_bits(1); + credit_init_bits(1); memzero_explicit(pool, sizeof(pool)); } @@ -1289,7 +1254,7 @@ EXPORT_SYMBOL_GPL(add_interrupt_randomness); */ static void entropy_timer(struct timer_list *t) { - credit_entropy_bits(1); + credit_init_bits(1); } /* @@ -1382,16 +1347,8 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, static __poll_t random_poll(struct file *file, poll_table *wait) { - __poll_t mask; - poll_wait(file, &crng_init_wait, wait); - poll_wait(file, &random_write_wait, wait); - mask = 0; - if (crng_ready()) - mask |= EPOLLIN | EPOLLRDNORM; - if (input_pool.entropy_count < POOL_MIN_BITS) - mask |= EPOLLOUT | EPOLLWRNORM; - return mask; + return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM; } static int write_pool(const char __user *ubuf, size_t count) @@ -1464,7 +1421,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) switch (cmd) { case RNDGETENTCNT: /* Inherently racy, no point locking. */ - if (put_user(input_pool.entropy_count, p)) + if (put_user(input_pool.init_bits, p)) return -EFAULT; return 0; case RNDADDTOENTCNT: @@ -1474,7 +1431,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EFAULT; if (ent_count < 0) return -EINVAL; - credit_entropy_bits(ent_count); + credit_init_bits(ent_count); return 0; case RNDADDENTROPY: if (!capable(CAP_SYS_ADMIN)) @@ -1488,20 +1445,13 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) retval = write_pool((const char __user *)p, size); if (retval < 0) return retval; - credit_entropy_bits(ent_count); + credit_init_bits(ent_count); return 0; case RNDZAPENTCNT: case RNDCLEARPOOL: - /* - * Clear the entropy pool counters. We no longer clear - * the entropy pool, as that's silly. - */ + /* No longer has any effect. */ if (!capable(CAP_SYS_ADMIN)) return -EPERM; - if (xchg(&input_pool.entropy_count, 0) >= POOL_MIN_BITS) { - wake_up_interruptible(&random_write_wait); - kill_fasync(&fasync, SIGIO, POLL_OUT); - } return 0; case RNDRESEEDCRNG: if (!capable(CAP_SYS_ADMIN)) @@ -1560,7 +1510,7 @@ const struct file_operations urandom_fops = { * * - write_wakeup_threshold - the amount of entropy in the input pool * below which write polls to /dev/random will unblock, requesting - * more entropy, tied to the POOL_MIN_BITS constant. It is writable + * more entropy, tied to the POOL_INIT_BITS constant. It is writable * to avoid breaking old userspaces, but writing to it does not * change any behavior of the RNG. 
* @@ -1575,7 +1525,7 @@ const struct file_operations urandom_fops = { #include static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ; -static int sysctl_random_write_wakeup_bits = POOL_MIN_BITS; +static int sysctl_random_write_wakeup_bits = POOL_INIT_BITS; static int sysctl_poolsize = POOL_BITS; static u8 sysctl_bootid[UUID_SIZE]; @@ -1632,7 +1582,7 @@ struct ctl_table random_table[] = { }, { .procname = "entropy_avail", - .data = &input_pool.entropy_count, + .data = &input_pool.init_bits, .maxlen = sizeof(int), .mode = 0444, .proc_handler = proc_dointvec, From 999b0c9e8a97d8763edf0529cff573331f59162a Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 6 May 2022 18:27:38 +0200 Subject: [PATCH 0238/1842] random: order timer entropy functions below interrupt functions commit a4b5c26b79ffdfcfb816c198f2fc2b1e7b5b580f upstream. There are no code changes here; this is just a reordering of functions, so that in subsequent commits, the timer entropy functions can call into the interrupt ones. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 238 +++++++++++++++++++++--------------------- 1 file changed, 119 insertions(+), 119 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 5be537f80bab..b5c58997cd27 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -856,13 +856,13 @@ static void credit_init_bits(size_t nbits) * the above entropy accumulation routines: * * void add_device_randomness(const void *buf, size_t size); - * void add_input_randomness(unsigned int type, unsigned int code, - * unsigned int value); - * void add_disk_randomness(struct gendisk *disk); * void add_hwgenerator_randomness(const void *buffer, size_t count, * size_t entropy); * void add_bootloader_randomness(const void *buf, size_t size); * void add_interrupt_randomness(int irq); + * void add_input_randomness(unsigned int type, unsigned int code, + * unsigned int value); + * void add_disk_randomness(struct gendisk *disk); * * add_device_randomness() adds data to the input pool that * is likely to differ between two devices (or possibly even per boot). @@ -872,19 +872,6 @@ static void credit_init_bits(size_t nbits) * that might otherwise be identical and have very little entropy * available to them (particularly common in the embedded world). * - * add_input_randomness() uses the input layer interrupt timing, as well - * as the event type information from the hardware. - * - * add_disk_randomness() uses what amounts to the seek time of block - * layer request events, on a per-disk_devt basis, as input to the - * entropy pool. Note that high-speed solid state drives with very low - * seek times do not make for good sources of entropy, as their seek - * times are usually fairly consistent. - * - * The above two routines try to estimate how many bits of entropy - * to credit. They do this by keeping track of the first and second - * order deltas of the event timings. - * * add_hwgenerator_randomness() is for true hardware RNGs, and will credit * entropy as specified by the caller. If the entropy pool is full it will * block until more entropy is needed. @@ -898,6 +885,19 @@ static void credit_init_bits(size_t nbits) * as inputs, it feeds the input pool roughly once a second or after 64 * interrupts, crediting 1 bit of entropy for whichever comes first. * + * add_input_randomness() uses the input layer interrupt timing, as well + * as the event type information from the hardware. 
+ * + * add_disk_randomness() uses what amounts to the seek time of block + * layer request events, on a per-disk_devt basis, as input to the + * entropy pool. Note that high-speed solid state drives with very low + * seek times do not make for good sources of entropy, as their seek + * times are usually fairly consistent. + * + * The last two routines try to estimate how many bits of entropy + * to credit. They do this by keeping track of the first and second + * order deltas of the event timings. + * **********************************************************************/ static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU); @@ -975,109 +975,6 @@ void add_device_randomness(const void *buf, size_t size) } EXPORT_SYMBOL(add_device_randomness); -/* There is one of these per entropy source */ -struct timer_rand_state { - unsigned long last_time; - long last_delta, last_delta2; -}; - -/* - * This function adds entropy to the entropy "pool" by using timing - * delays. It uses the timer_rand_state structure to make an estimate - * of how many bits of entropy this call has added to the pool. - * - * The number "num" is also added to the pool - it should somehow describe - * the type of event which just happened. This is currently 0-255 for - * keyboard scan codes, and 256 upwards for interrupts. - */ -static void add_timer_randomness(struct timer_rand_state *state, unsigned int num) -{ - unsigned long entropy = random_get_entropy(), now = jiffies, flags; - long delta, delta2, delta3; - - spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(&entropy, sizeof(entropy)); - _mix_pool_bytes(&num, sizeof(num)); - spin_unlock_irqrestore(&input_pool.lock, flags); - - if (crng_ready()) - return; - - /* - * Calculate number of bits of randomness we probably added. - * We take into account the first, second and third-order deltas - * in order to make our estimate. - */ - delta = now - READ_ONCE(state->last_time); - WRITE_ONCE(state->last_time, now); - - delta2 = delta - READ_ONCE(state->last_delta); - WRITE_ONCE(state->last_delta, delta); - - delta3 = delta2 - READ_ONCE(state->last_delta2); - WRITE_ONCE(state->last_delta2, delta2); - - if (delta < 0) - delta = -delta; - if (delta2 < 0) - delta2 = -delta2; - if (delta3 < 0) - delta3 = -delta3; - if (delta > delta2) - delta = delta2; - if (delta > delta3) - delta = delta3; - - /* - * delta is now minimum absolute delta. - * Round down by 1 bit on general principles, - * and limit entropy estimate to 12 bits. - */ - credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11)); -} - -void add_input_randomness(unsigned int type, unsigned int code, - unsigned int value) -{ - static unsigned char last_value; - static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES }; - - /* Ignore autorepeat and the like. */ - if (value == last_value) - return; - - last_value = value; - add_timer_randomness(&input_timer_state, - (type << 4) ^ code ^ (code >> 4) ^ value); -} -EXPORT_SYMBOL_GPL(add_input_randomness); - -#ifdef CONFIG_BLOCK -void add_disk_randomness(struct gendisk *disk) -{ - if (!disk || !disk->random) - return; - /* First major is 1, so we get >= 0x200 here. */ - add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); -} -EXPORT_SYMBOL_GPL(add_disk_randomness); - -void rand_initialize_disk(struct gendisk *disk) -{ - struct timer_rand_state *state; - - /* - * If kzalloc returns null, we just won't use that entropy - * source. 
- */ - state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); - if (state) { - state->last_time = INITIAL_JIFFIES; - disk->random = state; - } -} -#endif - /* * Interface for in-kernel drivers of true hardware RNGs. * Those devices may produce endless random bits and will be throttled @@ -1239,6 +1136,109 @@ void add_interrupt_randomness(int irq) } EXPORT_SYMBOL_GPL(add_interrupt_randomness); +/* There is one of these per entropy source */ +struct timer_rand_state { + unsigned long last_time; + long last_delta, last_delta2; +}; + +/* + * This function adds entropy to the entropy "pool" by using timing + * delays. It uses the timer_rand_state structure to make an estimate + * of how many bits of entropy this call has added to the pool. + * + * The number "num" is also added to the pool - it should somehow describe + * the type of event which just happened. This is currently 0-255 for + * keyboard scan codes, and 256 upwards for interrupts. + */ +static void add_timer_randomness(struct timer_rand_state *state, unsigned int num) +{ + unsigned long entropy = random_get_entropy(), now = jiffies, flags; + long delta, delta2, delta3; + + spin_lock_irqsave(&input_pool.lock, flags); + _mix_pool_bytes(&entropy, sizeof(entropy)); + _mix_pool_bytes(&num, sizeof(num)); + spin_unlock_irqrestore(&input_pool.lock, flags); + + if (crng_ready()) + return; + + /* + * Calculate number of bits of randomness we probably added. + * We take into account the first, second and third-order deltas + * in order to make our estimate. + */ + delta = now - READ_ONCE(state->last_time); + WRITE_ONCE(state->last_time, now); + + delta2 = delta - READ_ONCE(state->last_delta); + WRITE_ONCE(state->last_delta, delta); + + delta3 = delta2 - READ_ONCE(state->last_delta2); + WRITE_ONCE(state->last_delta2, delta2); + + if (delta < 0) + delta = -delta; + if (delta2 < 0) + delta2 = -delta2; + if (delta3 < 0) + delta3 = -delta3; + if (delta > delta2) + delta = delta2; + if (delta > delta3) + delta = delta3; + + /* + * delta is now minimum absolute delta. + * Round down by 1 bit on general principles, + * and limit entropy estimate to 12 bits. + */ + credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11)); +} + +void add_input_randomness(unsigned int type, unsigned int code, + unsigned int value) +{ + static unsigned char last_value; + static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES }; + + /* Ignore autorepeat and the like. */ + if (value == last_value) + return; + + last_value = value; + add_timer_randomness(&input_timer_state, + (type << 4) ^ code ^ (code >> 4) ^ value); +} +EXPORT_SYMBOL_GPL(add_input_randomness); + +#ifdef CONFIG_BLOCK +void add_disk_randomness(struct gendisk *disk) +{ + if (!disk || !disk->random) + return; + /* First major is 1, so we get >= 0x200 here. */ + add_timer_randomness(disk->random, 0x100 + disk_devt(disk)); +} +EXPORT_SYMBOL_GPL(add_disk_randomness); + +void rand_initialize_disk(struct gendisk *disk) +{ + struct timer_rand_state *state; + + /* + * If kzalloc returns null, we just won't use that entropy + * source. + */ + state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + state->last_time = INITIAL_JIFFIES; + disk->random = state; + } +} +#endif + /* * Each time the timer fires, we expect that we got an unpredictable * jump in the cycle counter. Even if the timer is running on another From 18413472339bb78395514b45012cb63a6fba26aa Mon Sep 17 00:00:00 2001 From: "Jason A. 
Donenfeld" Date: Fri, 6 May 2022 18:30:51 +0200 Subject: [PATCH 0239/1842] random: do not use input pool from hard IRQs commit e3e33fc2ea7fcefd0d761db9d6219f83b4248f5c upstream. Years ago, a separate fast pool was added for interrupts, so that the cost associated with taking the input pool spinlocks and mixing into it would be avoided in places where latency is critical. However, one oversight was that add_input_randomness() and add_disk_randomness() still sometimes are called directly from the interrupt handler, rather than being deferred to a thread. This means that some unlucky interrupts will be caught doing a blake2s_compress() call and potentially spinning on input_pool.lock, which can also be taken by unprivileged users by writing into /dev/urandom. In order to fix this, add_timer_randomness() now checks whether it is being called from a hard IRQ and if so, just mixes into the per-cpu IRQ fast pool using fast_mix(), which is much faster and can be done lock-free. A nice consequence of this, as well, is that it means hard IRQ context FPU support is likely no longer useful. The entropy estimation algorithm used by add_timer_randomness() is also somewhat different than the one used for add_interrupt_randomness(). The former looks at deltas of deltas of deltas, while the latter just waits for 64 interrupts for one bit or for one second since the last bit. In order to bridge these, and since add_interrupt_randomness() runs after an add_timer_randomness() that's called from hard IRQ, we add to the fast pool credit the related amount, and then subtract one to account for add_interrupt_randomness()'s contribution. A downside of this, however, is that the num argument is potentially attacker controlled, which puts a bit more pressure on the fast_mix() sponge to do more than it's really intended to do. As a mitigating factor, the first 96 bits of input aren't attacker controlled (a cycle counter followed by zeros), which means it's essentially two rounds of siphash rather than one, which is somewhat better. It's also not that much different from add_interrupt_randomness()'s use of the irq stack instruction pointer register. Cc: Thomas Gleixner Cc: Filipe Manana Cc: Peter Zijlstra Cc: Borislav Petkov Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 51 ++++++++++++++++++++++++++++++------------- 1 file changed, 36 insertions(+), 15 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index b5c58997cd27..9e3a1e93533d 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1086,6 +1086,7 @@ static void mix_interrupt_randomness(struct work_struct *work) * we don't wind up "losing" some. */ unsigned long pool[2]; + unsigned int count; /* Check to see if we're running on the wrong CPU due to hotplug. */ local_irq_disable(); @@ -1099,12 +1100,13 @@ static void mix_interrupt_randomness(struct work_struct *work) * consistent view, before we reenable irqs again. */ memcpy(pool, fast_pool->pool, sizeof(pool)); + count = fast_pool->count; fast_pool->count = 0; fast_pool->last = jiffies; local_irq_enable(); mix_pool_bytes(pool, sizeof(pool)); - credit_init_bits(1); + credit_init_bits(max(1u, (count & U16_MAX) / 64)); memzero_explicit(pool, sizeof(pool)); } @@ -1144,22 +1146,30 @@ struct timer_rand_state { /* * This function adds entropy to the entropy "pool" by using timing - * delays. It uses the timer_rand_state structure to make an estimate - * of how many bits of entropy this call has added to the pool. 
- * - * The number "num" is also added to the pool - it should somehow describe - * the type of event which just happened. This is currently 0-255 for - * keyboard scan codes, and 256 upwards for interrupts. + * delays. It uses the timer_rand_state structure to make an estimate + * of how many bits of entropy this call has added to the pool. The + * value "num" is also added to the pool; it should somehow describe + * the type of event that just happened. */ static void add_timer_randomness(struct timer_rand_state *state, unsigned int num) { unsigned long entropy = random_get_entropy(), now = jiffies, flags; long delta, delta2, delta3; + unsigned int bits; - spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(&entropy, sizeof(entropy)); - _mix_pool_bytes(&num, sizeof(num)); - spin_unlock_irqrestore(&input_pool.lock, flags); + /* + * If we're in a hard IRQ, add_interrupt_randomness() will be called + * sometime after, so mix into the fast pool. + */ + if (in_irq()) { + fast_mix(this_cpu_ptr(&irq_randomness)->pool, + (unsigned long[2]){ entropy, num }); + } else { + spin_lock_irqsave(&input_pool.lock, flags); + _mix_pool_bytes(&entropy, sizeof(entropy)); + _mix_pool_bytes(&num, sizeof(num)); + spin_unlock_irqrestore(&input_pool.lock, flags); + } if (crng_ready()) return; @@ -1190,11 +1200,22 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned int nu delta = delta3; /* - * delta is now minimum absolute delta. - * Round down by 1 bit on general principles, - * and limit entropy estimate to 12 bits. + * delta is now minimum absolute delta. Round down by 1 bit + * on general principles, and limit entropy estimate to 11 bits. */ - credit_init_bits(min_t(unsigned int, fls(delta >> 1), 11)); + bits = min(fls(delta >> 1), 11); + + /* + * As mentioned above, if we're in a hard IRQ, add_interrupt_randomness() + * will run after this, which uses a different crediting scheme of 1 bit + * per every 64 interrupts. In order to let that function do accounting + * close to the one in this function, we credit a full 64/64 bit per bit, + * and then subtract one to account for the extra one added. + */ + if (in_irq()) + this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1; + else + credit_init_bits(bits); } void add_input_randomness(unsigned int type, unsigned int code, From 772edeb8c76abcfc37bb7f75e7679936b6c50b2c Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 6 May 2022 23:19:43 +0200 Subject: [PATCH 0240/1842] random: help compiler out with fast_mix() by using simpler arguments commit 791332b3cbb080510954a4c152ce02af8832eac9 upstream. Now that fast_mix() has more than one caller, gcc no longer inlines it. That's fine. But it also doesn't handle the compound literal argument we pass it very efficiently, nor does it handle the loop as well as it could. So just expand the code to spell out this function so that it generates the same code as it did before. Performance-wise, this now behaves as it did before the last commit. The difference in actual code size on x86 is 45 bytes, which is less than a cache line. Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 44 ++++++++++++++++++++++--------------------- 1 file changed, 23 insertions(+), 21 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 9e3a1e93533d..854e1d2cbd29 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1031,25 +1031,30 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = { * and therefore this has no security on its own. s represents the * four-word SipHash state, while v represents a two-word input. */ -static void fast_mix(unsigned long s[4], const unsigned long v[2]) +static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2) { - size_t i; - - for (i = 0; i < 2; ++i) { - s[3] ^= v[i]; #ifdef CONFIG_64BIT - s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32); - s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2]; - s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0]; - s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32); +#define PERM() do { \ + s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32); \ + s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2]; \ + s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0]; \ + s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32); \ +} while (0) #else - s[0] += s[1]; s[1] = rol32(s[1], 5); s[1] ^= s[0]; s[0] = rol32(s[0], 16); - s[2] += s[3]; s[3] = rol32(s[3], 8); s[3] ^= s[2]; - s[0] += s[3]; s[3] = rol32(s[3], 7); s[3] ^= s[0]; - s[2] += s[1]; s[1] = rol32(s[1], 13); s[1] ^= s[2]; s[2] = rol32(s[2], 16); +#define PERM() do { \ + s[0] += s[1]; s[1] = rol32(s[1], 5); s[1] ^= s[0]; s[0] = rol32(s[0], 16); \ + s[2] += s[3]; s[3] = rol32(s[3], 8); s[3] ^= s[2]; \ + s[0] += s[3]; s[3] = rol32(s[3], 7); s[3] ^= s[0]; \ + s[2] += s[1]; s[1] = rol32(s[1], 13); s[1] ^= s[2]; s[2] = rol32(s[2], 16); \ +} while (0) #endif - s[0] ^= v[i]; - } + + s[3] ^= v1; + PERM(); + s[0] ^= v1; + s[3] ^= v2; + PERM(); + s[0] ^= v2; } #ifdef CONFIG_SMP @@ -1119,10 +1124,8 @@ void add_interrupt_randomness(int irq) struct pt_regs *regs = get_irq_regs(); unsigned int new_count; - fast_mix(fast_pool->pool, (unsigned long[2]){ - entropy, - (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq) - }); + fast_mix(fast_pool->pool, entropy, + (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq)); new_count = ++fast_pool->count; if (new_count & MIX_INFLIGHT) @@ -1162,8 +1165,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned int nu * sometime after, so mix into the fast pool. */ if (in_irq()) { - fast_mix(this_cpu_ptr(&irq_randomness)->pool, - (unsigned long[2]){ entropy, num }); + fast_mix(this_cpu_ptr(&irq_randomness)->pool, entropy, num); } else { spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(&entropy, sizeof(entropy)); From 30e9f362661c0311a2a89531bcdbf98c3313e3c6 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 7 May 2022 14:03:46 +0200 Subject: [PATCH 0241/1842] siphash: use one source of truth for siphash permutations commit e73aaae2fa9024832e1f42e30c787c7baf61d014 upstream. The SipHash family of permutations is currently used in three places: - siphash.c itself, used in the ordinary way it was intended. - random32.c, in a construction from an anonymous contributor. - random.c, as part of its fast_mix function. Each one of these places reinvents the wheel with the same C code, same rotation constants, and same symmetry-breaking constants. 
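A quick sanity check that consolidating these is purely mechanical (a sketch assembled from the constants in the diff below, not new kernel code): the shared constants are the classic SipHash initialization words, and the prandom keys keep their exact old values, merely respelled.

	#define SIPHASH_CONST_0 0x736f6d6570736575ULL	/* "somepseu" */
	#define SIPHASH_CONST_1 0x646f72616e646f6dULL	/* "dorandom" */
	#define SIPHASH_CONST_2 0x6c7967656e657261ULL	/* "lygenera" */
	#define SIPHASH_CONST_3 0x7465646279746573ULL	/* "tedbytes" */

	/* prandom's keys are unchanged, just expressed via the new names: */
	/* PRND_K0: 0x736f6d6570736575 ^ 0x6c7967656e657261 == SIPHASH_CONST_0 ^ SIPHASH_CONST_2 */
	/* PRND_K1: 0x646f72616e646f6d ^ 0x7465646279746573 == SIPHASH_CONST_1 ^ SIPHASH_CONST_3 */

Likewise, random.c's FASTMIX_PERM and lib/siphash.c's SIPROUND/HSIPROUND become straight aliases for SIPHASH_PERMUTATION / HSIPHASH_PERMUTATION.
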
This commit tidies things up a bit by placing macros for the permutations and constants into siphash.h, where each of the three .c users can access them. It also leaves a note dissuading more users of them from emerging. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 30 +++++++----------------------- include/linux/prandom.h | 23 +++++++---------------- include/linux/siphash.h | 28 ++++++++++++++++++++++++++++ lib/siphash.c | 32 ++++++++++---------------------- 4 files changed, 52 insertions(+), 61 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 854e1d2cbd29..7fe0c398be85 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -51,6 +51,7 @@ #include #include #include +#include #include #include #include @@ -1016,12 +1017,11 @@ struct fast_pool { static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = { #ifdef CONFIG_64BIT - /* SipHash constants */ - .pool = { 0x736f6d6570736575UL, 0x646f72616e646f6dUL, - 0x6c7967656e657261UL, 0x7465646279746573UL } +#define FASTMIX_PERM SIPHASH_PERMUTATION + .pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 } #else - /* HalfSipHash constants */ - .pool = { 0, 0, 0x6c796765U, 0x74656462U } +#define FASTMIX_PERM HSIPHASH_PERMUTATION + .pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 } #endif }; @@ -1033,27 +1033,11 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = { */ static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2) { -#ifdef CONFIG_64BIT -#define PERM() do { \ - s[0] += s[1]; s[1] = rol64(s[1], 13); s[1] ^= s[0]; s[0] = rol64(s[0], 32); \ - s[2] += s[3]; s[3] = rol64(s[3], 16); s[3] ^= s[2]; \ - s[0] += s[3]; s[3] = rol64(s[3], 21); s[3] ^= s[0]; \ - s[2] += s[1]; s[1] = rol64(s[1], 17); s[1] ^= s[2]; s[2] = rol64(s[2], 32); \ -} while (0) -#else -#define PERM() do { \ - s[0] += s[1]; s[1] = rol32(s[1], 5); s[1] ^= s[0]; s[0] = rol32(s[0], 16); \ - s[2] += s[3]; s[3] = rol32(s[3], 8); s[3] ^= s[2]; \ - s[0] += s[3]; s[3] = rol32(s[3], 7); s[3] ^= s[0]; \ - s[2] += s[1]; s[1] = rol32(s[1], 13); s[1] ^= s[2]; s[2] = rol32(s[2], 16); \ -} while (0) -#endif - s[3] ^= v1; - PERM(); + FASTMIX_PERM(s[0], s[1], s[2], s[3]); s[0] ^= v1; s[3] ^= v2; - PERM(); + FASTMIX_PERM(s[0], s[1], s[2], s[3]); s[0] ^= v2; } diff --git a/include/linux/prandom.h b/include/linux/prandom.h index 056d31317e49..a4aadd2dc153 100644 --- a/include/linux/prandom.h +++ b/include/linux/prandom.h @@ -10,6 +10,7 @@ #include #include +#include u32 prandom_u32(void); void prandom_bytes(void *buf, size_t nbytes); @@ -27,15 +28,10 @@ DECLARE_PER_CPU(unsigned long, net_rand_noise); * The core SipHash round function. Each line can be executed in * parallel given enough CPU resources. 
*/ -#define PRND_SIPROUND(v0, v1, v2, v3) ( \ - v0 += v1, v1 = rol64(v1, 13), v2 += v3, v3 = rol64(v3, 16), \ - v1 ^= v0, v0 = rol64(v0, 32), v3 ^= v2, \ - v0 += v3, v3 = rol64(v3, 21), v2 += v1, v1 = rol64(v1, 17), \ - v3 ^= v0, v1 ^= v2, v2 = rol64(v2, 32) \ -) +#define PRND_SIPROUND(v0, v1, v2, v3) SIPHASH_PERMUTATION(v0, v1, v2, v3) -#define PRND_K0 (0x736f6d6570736575 ^ 0x6c7967656e657261) -#define PRND_K1 (0x646f72616e646f6d ^ 0x7465646279746573) +#define PRND_K0 (SIPHASH_CONST_0 ^ SIPHASH_CONST_2) +#define PRND_K1 (SIPHASH_CONST_1 ^ SIPHASH_CONST_3) #elif BITS_PER_LONG == 32 /* @@ -43,14 +39,9 @@ DECLARE_PER_CPU(unsigned long, net_rand_noise); * This is weaker, but 32-bit machines are not used for high-traffic * applications, so there is less output for an attacker to analyze. */ -#define PRND_SIPROUND(v0, v1, v2, v3) ( \ - v0 += v1, v1 = rol32(v1, 5), v2 += v3, v3 = rol32(v3, 8), \ - v1 ^= v0, v0 = rol32(v0, 16), v3 ^= v2, \ - v0 += v3, v3 = rol32(v3, 7), v2 += v1, v1 = rol32(v1, 13), \ - v3 ^= v0, v1 ^= v2, v2 = rol32(v2, 16) \ -) -#define PRND_K0 0x6c796765 -#define PRND_K1 0x74656462 +#define PRND_SIPROUND(v0, v1, v2, v3) HSIPHASH_PERMUTATION(v0, v1, v2, v3) +#define PRND_K0 (HSIPHASH_CONST_0 ^ HSIPHASH_CONST_2) +#define PRND_K1 (HSIPHASH_CONST_1 ^ HSIPHASH_CONST_3) #else #error Unsupported BITS_PER_LONG diff --git a/include/linux/siphash.h b/include/linux/siphash.h index 0cda61855d90..0bb5ecd507be 100644 --- a/include/linux/siphash.h +++ b/include/linux/siphash.h @@ -136,4 +136,32 @@ static inline u32 hsiphash(const void *data, size_t len, return ___hsiphash_aligned(data, len, key); } +/* + * These macros expose the raw SipHash and HalfSipHash permutations. + * Do not use them directly! If you think you have a use for them, + * be sure to CC the maintainer of this file explaining why. 
+ */ + +#define SIPHASH_PERMUTATION(a, b, c, d) ( \ + (a) += (b), (b) = rol64((b), 13), (b) ^= (a), (a) = rol64((a), 32), \ + (c) += (d), (d) = rol64((d), 16), (d) ^= (c), \ + (a) += (d), (d) = rol64((d), 21), (d) ^= (a), \ + (c) += (b), (b) = rol64((b), 17), (b) ^= (c), (c) = rol64((c), 32)) + +#define SIPHASH_CONST_0 0x736f6d6570736575ULL +#define SIPHASH_CONST_1 0x646f72616e646f6dULL +#define SIPHASH_CONST_2 0x6c7967656e657261ULL +#define SIPHASH_CONST_3 0x7465646279746573ULL + +#define HSIPHASH_PERMUTATION(a, b, c, d) ( \ + (a) += (b), (b) = rol32((b), 5), (b) ^= (a), (a) = rol32((a), 16), \ + (c) += (d), (d) = rol32((d), 8), (d) ^= (c), \ + (a) += (d), (d) = rol32((d), 7), (d) ^= (a), \ + (c) += (b), (b) = rol32((b), 13), (b) ^= (c), (c) = rol32((c), 16)) + +#define HSIPHASH_CONST_0 0U +#define HSIPHASH_CONST_1 0U +#define HSIPHASH_CONST_2 0x6c796765U +#define HSIPHASH_CONST_3 0x74656462U + #endif /* _LINUX_SIPHASH_H */ diff --git a/lib/siphash.c b/lib/siphash.c index 025f0cbf6d7a..b4055b1cc2f6 100644 --- a/lib/siphash.c +++ b/lib/siphash.c @@ -18,19 +18,13 @@ #include #endif -#define SIPROUND \ - do { \ - v0 += v1; v1 = rol64(v1, 13); v1 ^= v0; v0 = rol64(v0, 32); \ - v2 += v3; v3 = rol64(v3, 16); v3 ^= v2; \ - v0 += v3; v3 = rol64(v3, 21); v3 ^= v0; \ - v2 += v1; v1 = rol64(v1, 17); v1 ^= v2; v2 = rol64(v2, 32); \ - } while (0) +#define SIPROUND SIPHASH_PERMUTATION(v0, v1, v2, v3) #define PREAMBLE(len) \ - u64 v0 = 0x736f6d6570736575ULL; \ - u64 v1 = 0x646f72616e646f6dULL; \ - u64 v2 = 0x6c7967656e657261ULL; \ - u64 v3 = 0x7465646279746573ULL; \ + u64 v0 = SIPHASH_CONST_0; \ + u64 v1 = SIPHASH_CONST_1; \ + u64 v2 = SIPHASH_CONST_2; \ + u64 v3 = SIPHASH_CONST_3; \ u64 b = ((u64)(len)) << 56; \ v3 ^= key->key[1]; \ v2 ^= key->key[0]; \ @@ -389,19 +383,13 @@ u32 hsiphash_4u32(const u32 first, const u32 second, const u32 third, } EXPORT_SYMBOL(hsiphash_4u32); #else -#define HSIPROUND \ - do { \ - v0 += v1; v1 = rol32(v1, 5); v1 ^= v0; v0 = rol32(v0, 16); \ - v2 += v3; v3 = rol32(v3, 8); v3 ^= v2; \ - v0 += v3; v3 = rol32(v3, 7); v3 ^= v0; \ - v2 += v1; v1 = rol32(v1, 13); v1 ^= v2; v2 = rol32(v2, 16); \ - } while (0) +#define HSIPROUND HSIPHASH_PERMUTATION(v0, v1, v2, v3) #define HPREAMBLE(len) \ - u32 v0 = 0; \ - u32 v1 = 0; \ - u32 v2 = 0x6c796765U; \ - u32 v3 = 0x74656462U; \ + u32 v0 = HSIPHASH_CONST_0; \ + u32 v1 = HSIPHASH_CONST_1; \ + u32 v2 = HSIPHASH_CONST_2; \ + u32 v3 = HSIPHASH_CONST_3; \ u32 b = ((u32)(len)) << 24; \ v3 ^= key->key[1]; \ v2 ^= key->key[0]; \ From cef9010b78c4e54313a911313227a743b2443aeb Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 8 May 2022 13:20:30 +0200 Subject: [PATCH 0242/1842] random: use symbolic constants for crng_init states commit e3d2c5e79a999aa4e7d6f0127e16d3da5a4ff70d upstream. crng_init represents a state machine, with three states, and various rules for transitions. For the longest time, we've been managing these with "0", "1", and "2", and expecting people to figure it out. To make the code more obvious, replace these with proper enum values representing the transition, and then redocument what each of these states mean. Reviewed-by: Dominik Brodowski Cc: Joe Perches Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 7fe0c398be85..322c3dc7e790 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -70,16 +70,16 @@ *********************************************************************/ /* - * crng_init = 0 --> Uninitialized - * 1 --> Initialized - * 2 --> Initialized from input_pool - * * crng_init is protected by base_crng->lock, and only increases - * its value (from 0->1->2). + * its value (from empty->early->ready). */ -static int crng_init = 0; -#define crng_ready() (likely(crng_init > 1)) -/* Various types of waiters for crng_init->2 transition. */ +static enum { + CRNG_EMPTY = 0, /* Little to no entropy collected */ + CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */ + CRNG_READY = 2 /* Fully initialized with POOL_READY_BITS collected */ +} crng_init = CRNG_EMPTY; +#define crng_ready() (likely(crng_init >= CRNG_READY)) +/* Various types of waiters for crng_init->CRNG_READY transition. */ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); static struct fasync_struct *fasync; static DEFINE_SPINLOCK(random_ready_chain_lock); @@ -284,7 +284,7 @@ static void crng_reseed(void) WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); if (!crng_ready()) { - crng_init = 2; + crng_init = CRNG_READY; finalize_init = true; } spin_unlock_irqrestore(&base_crng.lock, flags); @@ -378,7 +378,7 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], * For the fast path, we check whether we're ready, unlocked first, and * then re-check once locked later. In the case where we're really not * ready, we do fast key erasure with the base_crng directly, extracting - * when crng_init==0. + * when crng_init is CRNG_EMPTY. */ if (!crng_ready()) { bool ready; @@ -386,7 +386,7 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], spin_lock_irqsave(&base_crng.lock, flags); ready = crng_ready(); if (!ready) { - if (crng_init == 0) + if (crng_init == CRNG_EMPTY) extract_entropy(base_crng.key, sizeof(base_crng.key)); crng_fast_key_erasure(base_crng.key, chacha_state, random_data, random_data_len); @@ -740,8 +740,8 @@ EXPORT_SYMBOL(get_random_bytes_arch); enum { POOL_BITS = BLAKE2S_HASH_SIZE * 8, - POOL_INIT_BITS = POOL_BITS, /* No point in settling for less. 
*/ - POOL_FAST_INIT_BITS = POOL_INIT_BITS / 2 + POOL_READY_BITS = POOL_BITS, /* When crng_init->CRNG_READY */ + POOL_EARLY_BITS = POOL_READY_BITS / 2 /* When crng_init->CRNG_EARLY */ }; static struct { @@ -836,13 +836,13 @@ static void credit_init_bits(size_t nbits) init_bits = min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.init_bits, orig, init_bits) != orig); - if (!crng_ready() && init_bits >= POOL_INIT_BITS) + if (!crng_ready() && init_bits >= POOL_READY_BITS) crng_reseed(); - else if (unlikely(crng_init == 0 && init_bits >= POOL_FAST_INIT_BITS)) { + else if (unlikely(crng_init == CRNG_EMPTY && init_bits >= POOL_EARLY_BITS)) { spin_lock_irqsave(&base_crng.lock, flags); - if (crng_init == 0) { + if (crng_init == CRNG_EMPTY) { extract_entropy(base_crng.key, sizeof(base_crng.key)); - crng_init = 1; + crng_init = CRNG_EARLY; } spin_unlock_irqrestore(&base_crng.lock, flags); } @@ -1517,7 +1517,7 @@ const struct file_operations urandom_fops = { * * - write_wakeup_threshold - the amount of entropy in the input pool * below which write polls to /dev/random will unblock, requesting - * more entropy, tied to the POOL_INIT_BITS constant. It is writable + * more entropy, tied to the POOL_READY_BITS constant. It is writable * to avoid breaking old userspaces, but writing to it does not * change any behavior of the RNG. * @@ -1532,7 +1532,7 @@ const struct file_operations urandom_fops = { #include static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ; -static int sysctl_random_write_wakeup_bits = POOL_INIT_BITS; +static int sysctl_random_write_wakeup_bits = POOL_READY_BITS; static int sysctl_poolsize = POOL_BITS; static u8 sysctl_bootid[UUID_SIZE]; From 4c4110c052e86c5e1f6debf51c9c8cc0e501f19e Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 9 May 2022 13:40:55 +0200 Subject: [PATCH 0243/1842] random: avoid initializing twice in credit race commit fed7ef061686cc813b1f3d8d0edc6c35b4d3537b upstream. Since all changes of crng_init now go through credit_init_bits(), we can fix a long standing race in which two concurrent callers of credit_init_bits() have the new bit count >= some threshold, but are doing so with crng_init as a lower threshold, checked outside of a lock, resulting in crng_reseed() or similar being called twice. In order to fix this, we can use the original cmpxchg value of the bit count, and only change crng_init when the bit count transitions from below a threshold to meeting the threshold. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 322c3dc7e790..609483f18d8c 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -823,7 +823,7 @@ static void extract_entropy(void *buf, size_t nbytes) static void credit_init_bits(size_t nbits) { - unsigned int init_bits, orig, add; + unsigned int new, orig, add; unsigned long flags; if (crng_ready() || !nbits) @@ -833,12 +833,12 @@ static void credit_init_bits(size_t nbits) do { orig = READ_ONCE(input_pool.init_bits); - init_bits = min_t(unsigned int, POOL_BITS, orig + add); - } while (cmpxchg(&input_pool.init_bits, orig, init_bits) != orig); + new = min_t(unsigned int, POOL_BITS, orig + add); + } while (cmpxchg(&input_pool.init_bits, orig, new) != orig); - if (!crng_ready() && init_bits >= POOL_READY_BITS) + if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) crng_reseed(); - else if (unlikely(crng_init == CRNG_EMPTY && init_bits >= POOL_EARLY_BITS)) { + else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) { spin_lock_irqsave(&base_crng.lock, flags); if (crng_init == CRNG_EMPTY) { extract_entropy(base_crng.key, sizeof(base_crng.key)); From b50f2830b3df0f5067ac6cc472e7ba6aff94647a Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 9 May 2022 13:53:24 +0200 Subject: [PATCH 0244/1842] random: move initialization out of reseeding hot path commit 68c9c8b192c6dae9be6278e98ee44029d5da2d31 upstream. Initialization happens once -- by way of credit_init_bits() -- and then it never happens again. Therefore, it doesn't need to be in crng_reseed(), which is a hot path that is called multiple times. It also doesn't make sense to have it there, as initialization activity is better associated with initialization routines. After the prior commit, crng_reseed() now won't be called by multiple concurrent callers, which means that we can safely move the "finalize_init" logic into credit_init_bits() unconditionally. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A.
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 42 +++++++++++++++++++----------------------- 1 file changed, 19 insertions(+), 23 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 609483f18d8c..de49ea25df98 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -266,7 +266,6 @@ static void crng_reseed(void) unsigned long flags; unsigned long next_gen; u8 key[CHACHA_KEY_SIZE]; - bool finalize_init = false; extract_entropy(key, sizeof(key)); @@ -283,28 +282,10 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - if (!crng_ready()) { + if (!crng_ready()) crng_init = CRNG_READY; - finalize_init = true; - } spin_unlock_irqrestore(&base_crng.lock, flags); memzero_explicit(key, sizeof(key)); - if (finalize_init) { - process_random_ready_list(); - wake_up_interruptible(&crng_init_wait); - kill_fasync(&fasync, SIGIO, POLL_IN); - pr_notice("crng init done\n"); - if (unseeded_warning.missed) { - pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", - unseeded_warning.missed); - unseeded_warning.missed = 0; - } - if (urandom_warning.missed) { - pr_notice("%d urandom warning(s) missed due to ratelimiting\n", - urandom_warning.missed); - urandom_warning.missed = 0; - } - } } /* @@ -836,10 +817,25 @@ static void credit_init_bits(size_t nbits) new = min_t(unsigned int, POOL_BITS, orig + add); } while (cmpxchg(&input_pool.init_bits, orig, new) != orig); - if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) - crng_reseed(); - else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) { + if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) { + crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */ + process_random_ready_list(); + wake_up_interruptible(&crng_init_wait); + kill_fasync(&fasync, SIGIO, POLL_IN); + pr_notice("crng init done\n"); + if (unseeded_warning.missed) { + pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", + unseeded_warning.missed); + unseeded_warning.missed = 0; + } + if (urandom_warning.missed) { + pr_notice("%d urandom warning(s) missed due to ratelimiting\n", + urandom_warning.missed); + urandom_warning.missed = 0; + } + } else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) { spin_lock_irqsave(&base_crng.lock, flags); + /* Check if crng_init is CRNG_EMPTY, to avoid race with crng_reseed(). */ if (crng_init == CRNG_EMPTY) { extract_entropy(base_crng.key, sizeof(base_crng.key)); crng_init = CRNG_EARLY; From 31ac294037be272967a4c8bc25d3178020580595 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 9 May 2022 16:13:18 +0200 Subject: [PATCH 0245/1842] random: remove ratelimiting for in-kernel unseeded randomness commit cc1e127bfa95b5fb2f9307e7168bf8b2b45b4c5e upstream. The CONFIG_WARN_ALL_UNSEEDED_RANDOM debug option controls whether the kernel warns about all unseeded randomness or just the first instance. There's some complicated rate limiting and comparison to the previous caller, such that even with CONFIG_WARN_ALL_UNSEEDED_RANDOM enabled, developers still don't see all the messages or even an accurate count of how many were missed. This is the result of basically parallel mechanisms aimed at accomplishing more or less the same thing, added at different points in random.c history, which sort of compete with the first-instance-only limiting we have now. It turns out, however, that nobody cares about the first unseeded randomness instance of in-kernel users. 
The same first user has been there for ages now, and nobody is doing anything about it. It isn't even clear that anybody _can_ do anything about it. Most places that can do something about it have switched over to using get_random_bytes_wait() or wait_for_random_bytes(), which is the right thing to do, but there is still much code that needs randomness sometimes during init, and as a general rule, if you're not using one of the _wait functions or the readiness notifier callback, you're bound to be doing it wrong just based on that fact alone. So warning about this same first user that can't easily change is simply not an effective mechanism for anything at all. Users can't do anything about it, as the Kconfig text points out -- the problem isn't in userspace code -- and kernel developers don't or more often can't react to it. Instead, show the warning for all instances when CONFIG_WARN_ALL_UNSEEDED_RANDOM is set, so that developers can debug things as need be, or if it isn't set, don't show a warning at all. At the same time, CONFIG_WARN_ALL_UNSEEDED_RANDOM now implies setting random.ratelimit_disable=1 by default, since if you care about one you probably care about the other too. And we can clean up usage around the related urandom_warning ratelimiter as well (whose behavior isn't changing), so that it properly counts missed messages after the 10 message threshold is reached. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 61 +++++++++++++------------------------------ lib/Kconfig.debug | 3 +-- 2 files changed, 19 insertions(+), 45 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index de49ea25df98..f1c60f486196 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -86,11 +86,10 @@ static DEFINE_SPINLOCK(random_ready_chain_lock); static RAW_NOTIFIER_HEAD(random_ready_chain); /* Control how we warn userspace.
*/ -static struct ratelimit_state unseeded_warning = - RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); static struct ratelimit_state urandom_warning = RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); -static int ratelimit_disable __read_mostly; +static int ratelimit_disable __read_mostly = + IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM); module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); @@ -183,27 +182,15 @@ static void process_random_ready_list(void) spin_unlock_irqrestore(&random_ready_chain_lock, flags); } -#define warn_unseeded_randomness(previous) \ - _warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous)) +#define warn_unseeded_randomness() \ + _warn_unseeded_randomness(__func__, (void *)_RET_IP_) -static void _warn_unseeded_randomness(const char *func_name, void *caller, void **previous) +static void _warn_unseeded_randomness(const char *func_name, void *caller) { -#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM - const bool print_once = false; -#else - static bool print_once __read_mostly; -#endif - - if (print_once || crng_ready() || - (previous && (caller == READ_ONCE(*previous)))) + if (!IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) || crng_ready()) return; - WRITE_ONCE(*previous, caller); -#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM - print_once = true; -#endif - if (__ratelimit(&unseeded_warning)) - printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", - func_name, caller, crng_init); + printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", + func_name, caller, crng_init); } @@ -456,9 +443,7 @@ static void _get_random_bytes(void *buf, size_t nbytes) */ void get_random_bytes(void *buf, size_t nbytes) { - static void *previous; - - warn_unseeded_randomness(&previous); + warn_unseeded_randomness(); _get_random_bytes(buf, nbytes); } EXPORT_SYMBOL(get_random_bytes); @@ -554,10 +539,9 @@ u64 get_random_u64(void) u64 ret; unsigned long flags; struct batched_entropy *batch; - static void *previous; unsigned long next_gen; - warn_unseeded_randomness(&previous); + warn_unseeded_randomness(); if (!crng_ready()) { _get_random_bytes(&ret, sizeof(ret)); @@ -593,10 +577,9 @@ u32 get_random_u32(void) u32 ret; unsigned long flags; struct batched_entropy *batch; - static void *previous; unsigned long next_gen; - warn_unseeded_randomness(&previous); + warn_unseeded_randomness(); if (!crng_ready()) { _get_random_bytes(&ret, sizeof(ret)); @@ -823,16 +806,9 @@ static void credit_init_bits(size_t nbits) wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); pr_notice("crng init done\n"); - if (unseeded_warning.missed) { - pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n", - unseeded_warning.missed); - unseeded_warning.missed = 0; - } - if (urandom_warning.missed) { + if (urandom_warning.missed) pr_notice("%d urandom warning(s) missed due to ratelimiting\n", urandom_warning.missed); - urandom_warning.missed = 0; - } } else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) { spin_lock_irqsave(&base_crng.lock, flags); /* Check if crng_init is CRNG_EMPTY, to avoid race with crng_reseed(). 
*/ @@ -945,10 +921,6 @@ int __init rand_initialize(void) else if (arch_init && trust_cpu) credit_init_bits(BLAKE2S_BLOCK_SIZE * 8); - if (ratelimit_disable) { - urandom_warning.interval = 0; - unseeded_warning.interval = 0; - } return 0; } @@ -1394,11 +1366,14 @@ static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, { static int maxwarn = 10; - if (!crng_ready() && maxwarn > 0) { - maxwarn--; - if (__ratelimit(&urandom_warning)) + if (!crng_ready()) { + if (!ratelimit_disable && maxwarn <= 0) + ++urandom_warning.missed; + else if (ratelimit_disable || __ratelimit(&urandom_warning)) { + --maxwarn; pr_notice("%s: uninitialized urandom read (%zd bytes read)\n", current->comm, nbytes); + } } return get_random_bytes_user(buf, nbytes); diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 95f909540587..3656fa883783 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -1426,8 +1426,7 @@ config WARN_ALL_UNSEEDED_RANDOM so architecture maintainers really need to do what they can to get the CRNG seeded sooner after the system is booted. However, since users cannot do anything actionable to - address this, by default the kernel will issue only a single - warning for the first use of unseeded randomness. + address this, by default this option is disabled. Say Y here if you want to receive warnings for all uses of unseeded randomness. This will be of use primarily for From 9320e087f2b64257f34106eecbfe5a43be5199b0 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 10 May 2022 15:20:42 +0200 Subject: [PATCH 0246/1842] random: use proper jiffies comparison macro commit 8a5b8a4a4ceb353b4dd5bafd09e2b15751bcdb51 upstream. This expands to exactly the same code that it replaces, but makes things consistent by using the same macro for jiffy comparisons throughout. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index f1c60f486196..40ad91e6e88e 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -326,7 +326,7 @@ static bool crng_has_old_seed(void) interval = max_t(unsigned int, CRNG_RESEED_START_INTERVAL, (unsigned int)uptime / 2 * HZ); } - return time_after(jiffies, READ_ONCE(base_crng.birth) + interval); + return time_is_before_jiffies(READ_ONCE(base_crng.birth) + interval); } /* From 5123cc61e27d6aaddf564fe9068e3bbdd193abff Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 5 May 2022 02:20:22 +0200 Subject: [PATCH 0247/1842] random: handle latent entropy and command line from random_init() commit 2f14062bb14b0fcfcc21e6dc7d5b5c0d25966164 upstream. Currently, start_kernel() adds latent entropy and the command line to the entropy pool *after* the RNG has been initialized, deferring when it's actually used by things like stack canaries until the next time the pool is seeded. This surely is not intended. Rather than splitting up which entropy gets added where and when between start_kernel() and random_init(), just do everything in random_init(), which should eliminate these kinds of bugs in the future. While we're at it, rename the awkwardly titled "rand_initialize()" to the more standard "random_init()" nomenclature. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A.
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 13 ++++++++----- include/linux/random.h | 22 ++++++++++------------ init/main.c | 10 +++------- 3 files changed, 21 insertions(+), 24 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 40ad91e6e88e..2960ce4c9231 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -888,12 +888,13 @@ early_param("random.trust_bootloader", parse_trust_bootloader); /* * The first collection of entropy occurs at system boot while interrupts - * are still turned off. Here we push in RDSEED, a timestamp, and utsname(). - * Depending on the above configuration knob, RDSEED may be considered - * sufficient for initialization. Note that much earlier setup may already - * have pushed entropy into the input pool by the time we get here. + * are still turned off. Here we push in latent entropy, RDSEED, a timestamp, + * utsname(), and the command line. Depending on the above configuration knob, + * RDSEED may be considered sufficient for initialization. Note that much + * earlier setup may already have pushed entropy into the input pool by the + * time we get here. */ -int __init rand_initialize(void) +int __init random_init(const char *command_line) { size_t i; ktime_t now = ktime_get_real(); @@ -915,6 +916,8 @@ int __init rand_initialize(void) } _mix_pool_bytes(&now, sizeof(now)); _mix_pool_bytes(utsname(), sizeof(*(utsname()))); + _mix_pool_bytes(command_line, strlen(command_line)); + add_latent_entropy(); if (crng_ready()) crng_reseed(); diff --git a/include/linux/random.h b/include/linux/random.h index d4abda9e2348..588e31dc10af 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -14,26 +14,24 @@ struct notifier_block; extern void add_device_randomness(const void *, size_t); extern void add_bootloader_randomness(const void *, size_t); - -#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) -static inline void add_latent_entropy(void) -{ - add_device_randomness((const void *)&latent_entropy, - sizeof(latent_entropy)); -} -#else -static inline void add_latent_entropy(void) {} -#endif - extern void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; extern void add_interrupt_randomness(int irq) __latent_entropy; extern void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy); +#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) +static inline void add_latent_entropy(void) +{ + add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy)); +} +#else +static inline void add_latent_entropy(void) {} +#endif + extern void get_random_bytes(void *buf, size_t nbytes); extern int wait_for_random_bytes(void); -extern int __init rand_initialize(void); +extern int __init random_init(const char *command_line); extern bool rng_is_initialized(void); extern int register_random_ready_notifier(struct notifier_block *nb); extern int unregister_random_ready_notifier(struct notifier_block *nb); diff --git a/init/main.c b/init/main.c index 5fd0da8629e5..d8bfe61b5a88 100644 --- a/init/main.c +++ b/init/main.c @@ -957,15 +957,11 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void) /* * For best initial stack canary entropy, prepare it after: * - setup_arch() for any UEFI RNG entropy and boot cmdline access - * - timekeeping_init() for ktime entropy used in rand_initialize() + * - timekeeping_init() for ktime entropy used in random_init() * - time_init() for making random_get_entropy() work on 
some platforms - * - rand_initialize() to get any arch-specific entropy like RDRAND - * - add_latent_entropy() to get any latent entropy - * - adding command line entropy + * - random_init() to initialize the RNG from from early entropy sources */ - rand_initialize(); - add_latent_entropy(); - add_device_randomness(command_line, strlen(command_line)); + random_init(command_line); boot_init_stack_canary(); perf_event_init(); From 04d61b96bd8a97755d416fbbe07b4ba8bffba564 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 12 May 2022 15:32:26 +0200 Subject: [PATCH 0248/1842] random: credit architectural init the exact amount commit 12e45a2a6308105469968951e6d563e8f4fea187 upstream. RDRAND and RDSEED can fail sometimes, which is fine. We currently initialize the RNG with 512 bits of RDRAND/RDSEED. We only need 256 bits of those to succeed in order to initialize the RNG. Instead of the current "all or nothing" approach, actually credit these contributions the amount that is actually contributed. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 2960ce4c9231..cd63bc7c5e9b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -896,9 +896,8 @@ early_param("random.trust_bootloader", parse_trust_bootloader); */ int __init random_init(const char *command_line) { - size_t i; ktime_t now = ktime_get_real(); - bool arch_init = true; + unsigned int i, arch_bytes; unsigned long rv; #if defined(LATENT_ENTROPY_PLUGIN) @@ -906,11 +905,12 @@ int __init random_init(const char *command_line) _mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed)); #endif - for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { + for (i = 0, arch_bytes = BLAKE2S_BLOCK_SIZE; + i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { if (!arch_get_random_seed_long_early(&rv) && !arch_get_random_long_early(&rv)) { rv = random_get_entropy(); - arch_init = false; + arch_bytes -= sizeof(rv); } _mix_pool_bytes(&rv, sizeof(rv)); } @@ -921,8 +921,8 @@ int __init random_init(const char *command_line) if (crng_ready()) crng_reseed(); - else if (arch_init && trust_cpu) - credit_init_bits(BLAKE2S_BLOCK_SIZE * 8); + else if (trust_cpu) + credit_init_bits(arch_bytes * 8); return 0; } From 811afd06e0f333d7bdd5c7debd90ad0c392b465c Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 3 May 2022 15:30:45 +0200 Subject: [PATCH 0249/1842] random: use static branch for crng_ready() commit f5bda35fba615ace70a656d4700423fa6c9bebee upstream. Since crng_ready() is only false briefly during initialization and then forever after becomes true, we don't need to evaluate it after, making it a prime candidate for a static branch. One complication, however, is that it changes state in a particular call to credit_init_bits(), which might be made from atomic context, which means we must kick off a workqueue to change the static key. Further complicating things, credit_init_bits() may be called sufficiently early on in system initialization such that system_wq is NULL. Fortunately, there exists the nice function execute_in_process_context(), which will immediately execute the function if !in_interrupt(), and otherwise defer it to a workqueue. During early init, before workqueues are available, in_interrupt() is always false, because interrupts haven't even been enabled yet, which means the function in that case executes immediately. 
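Reduced to its essentials, and leaving out the locking and the rest of credit_init_bits(), the pattern looks roughly like the sketch below; note_fully_seeded() is an invented name standing in for the real call site, while the static key, crng_set_ready(), and the execute_work item correspond to what the hunks that follow add.

	#include <linux/jump_label.h>
	#include <linux/workqueue.h>

	static DEFINE_STATIC_KEY_FALSE(crng_is_ready);

	/* May sleep, so it must run in process context. */
	static void crng_set_ready(struct work_struct *work)
	{
		static_branch_enable(&crng_is_ready);
	}

	/* Hypothetical caller; may run in atomic or early-boot context. */
	static void note_fully_seeded(void)
	{
		static struct execute_work set_ready;

		/*
		 * Runs crng_set_ready() right away when !in_interrupt()
		 * (always the case during early boot), otherwise queues it
		 * on system_wq.
		 */
		execute_in_process_context(crng_set_ready, &set_ready);
	}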
Later on, after workqueues are available, in_interrupt() might be true, but in that case, the work is queued in system_wq and all goes well. Cc: Theodore Ts'o Cc: Sultan Alsawaf Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index cd63bc7c5e9b..62938b56356a 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -77,8 +77,9 @@ static enum { CRNG_EMPTY = 0, /* Little to no entropy collected */ CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */ CRNG_READY = 2 /* Fully initialized with POOL_READY_BITS collected */ -} crng_init = CRNG_EMPTY; -#define crng_ready() (likely(crng_init >= CRNG_READY)) +} crng_init __read_mostly = CRNG_EMPTY; +static DEFINE_STATIC_KEY_FALSE(crng_is_ready); +#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY) /* Various types of waiters for crng_init->CRNG_READY transition. */ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); static struct fasync_struct *fasync; @@ -108,6 +109,11 @@ bool rng_is_initialized(void) } EXPORT_SYMBOL(rng_is_initialized); +static void crng_set_ready(struct work_struct *work) +{ + static_branch_enable(&crng_is_ready); +} + /* Used by wait_for_random_bytes(), and considered an entropy collector, below. */ static void try_to_generate_entropy(void); @@ -269,7 +275,7 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - if (!crng_ready()) + if (!static_branch_likely(&crng_is_ready)) crng_init = CRNG_READY; spin_unlock_irqrestore(&base_crng.lock, flags); memzero_explicit(key, sizeof(key)); @@ -787,6 +793,7 @@ static void extract_entropy(void *buf, size_t nbytes) static void credit_init_bits(size_t nbits) { + static struct execute_work set_ready; unsigned int new, orig, add; unsigned long flags; @@ -802,6 +809,7 @@ static void credit_init_bits(size_t nbits) if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) { crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */ + execute_in_process_context(crng_set_ready, &set_ready); process_random_ready_list(); wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); @@ -1311,7 +1319,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, if (count > INT_MAX) count = INT_MAX; - if (!(flags & GRND_INSECURE) && !crng_ready()) { + if (!crng_ready() && !(flags & GRND_INSECURE)) { int ret; if (flags & GRND_NONBLOCK) From 1fdd7eef2100790d372f58ffee2fed3b38214e6e Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 13 May 2022 12:29:38 +0200 Subject: [PATCH 0250/1842] random: remove extern from functions in header commit 7782cfeca7d420e8bb707613d4cfb0f7ff29bb3a upstream. According to the kernel style guide, having `extern` on functions in headers is old school and deprecated, and doesn't add anything. So remove them from random.h, and tidy up the file a little bit too. Signed-off-by: Jason A.
Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/linux/random.h | 71 +++++++++++++++++------------------------- 1 file changed, 28 insertions(+), 43 deletions(-) diff --git a/include/linux/random.h b/include/linux/random.h index 588e31dc10af..0c140a0847a2 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -12,13 +12,12 @@ struct notifier_block; -extern void add_device_randomness(const void *, size_t); -extern void add_bootloader_randomness(const void *, size_t); -extern void add_input_randomness(unsigned int type, unsigned int code, - unsigned int value) __latent_entropy; -extern void add_interrupt_randomness(int irq) __latent_entropy; -extern void add_hwgenerator_randomness(const void *buffer, size_t count, - size_t entropy); +void add_device_randomness(const void *, size_t); +void add_bootloader_randomness(const void *, size_t); +void add_input_randomness(unsigned int type, unsigned int code, + unsigned int value) __latent_entropy; +void add_interrupt_randomness(int irq) __latent_entropy; +void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy); #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) static inline void add_latent_entropy(void) @@ -26,21 +25,11 @@ static inline void add_latent_entropy(void) add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy)); } #else -static inline void add_latent_entropy(void) {} -#endif - -extern void get_random_bytes(void *buf, size_t nbytes); -extern int wait_for_random_bytes(void); -extern int __init random_init(const char *command_line); -extern bool rng_is_initialized(void); -extern int register_random_ready_notifier(struct notifier_block *nb); -extern int unregister_random_ready_notifier(struct notifier_block *nb); -extern size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); - -#ifndef MODULE -extern const struct file_operations random_fops, urandom_fops; +static inline void add_latent_entropy(void) { } #endif +void get_random_bytes(void *buf, size_t nbytes); +size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); u32 get_random_u32(void); u64 get_random_u64(void); static inline unsigned int get_random_int(void) @@ -72,11 +61,17 @@ static inline unsigned long get_random_long(void) static inline unsigned long get_random_canary(void) { - unsigned long val = get_random_long(); - - return val & CANARY_MASK; + return get_random_long() & CANARY_MASK; } +unsigned long randomize_page(unsigned long start, unsigned long range); + +int __init random_init(const char *command_line); +bool rng_is_initialized(void); +int wait_for_random_bytes(void); +int register_random_ready_notifier(struct notifier_block *nb); +int unregister_random_ready_notifier(struct notifier_block *nb); + /* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes). * Returns the result of the call to wait_for_random_bytes. 
*/ static inline int get_random_bytes_wait(void *buf, size_t nbytes) @@ -100,8 +95,6 @@ declare_get_random_var_wait(int) declare_get_random_var_wait(long) #undef declare_get_random_var -unsigned long randomize_page(unsigned long start, unsigned long range); - /* * This is designed to be standalone for just prandom * users, but for now we include it from @@ -112,22 +105,10 @@ unsigned long randomize_page(unsigned long start, unsigned long range); #ifdef CONFIG_ARCH_RANDOM # include #else -static inline bool __must_check arch_get_random_long(unsigned long *v) -{ - return false; -} -static inline bool __must_check arch_get_random_int(unsigned int *v) -{ - return false; -} -static inline bool __must_check arch_get_random_seed_long(unsigned long *v) -{ - return false; -} -static inline bool __must_check arch_get_random_seed_int(unsigned int *v) -{ - return false; -} +static inline bool __must_check arch_get_random_long(unsigned long *v) { return false; } +static inline bool __must_check arch_get_random_int(unsigned int *v) { return false; } +static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { return false; } +static inline bool __must_check arch_get_random_seed_int(unsigned int *v) { return false; } #endif /* @@ -151,8 +132,12 @@ static inline bool __init arch_get_random_long_early(unsigned long *v) #endif #ifdef CONFIG_SMP -extern int random_prepare_cpu(unsigned int cpu); -extern int random_online_cpu(unsigned int cpu); +int random_prepare_cpu(unsigned int cpu); +int random_online_cpu(unsigned int cpu); +#endif + +#ifndef MODULE +extern const struct file_operations random_fops, urandom_fops; #endif #endif /* _LINUX_RANDOM_H */ From 33783ca3556e8d3965fe44aa79ae41ab19897189 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 13 May 2022 12:32:23 +0200 Subject: [PATCH 0251/1842] random: use proper return types on get_random_{int,long}_wait() commit 7c3a8a1db5e03d02cc0abb3357a84b8b326dfac3 upstream. Before these were returning signed values, but the API is intended to be used with unsigned values. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- include/linux/random.h | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/include/linux/random.h b/include/linux/random.h index 0c140a0847a2..0416b393ca4d 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -81,18 +81,18 @@ static inline int get_random_bytes_wait(void *buf, size_t nbytes) return ret; } -#define declare_get_random_var_wait(var) \ - static inline int get_random_ ## var ## _wait(var *out) { \ +#define declare_get_random_var_wait(name, ret_type) \ + static inline int get_random_ ## name ## _wait(ret_type *out) { \ int ret = wait_for_random_bytes(); \ if (unlikely(ret)) \ return ret; \ - *out = get_random_ ## var(); \ + *out = get_random_ ## name(); \ return 0; \ } -declare_get_random_var_wait(u32) -declare_get_random_var_wait(u64) -declare_get_random_var_wait(int) -declare_get_random_var_wait(long) +declare_get_random_var_wait(u32, u32) +declare_get_random_var_wait(u64, u32) +declare_get_random_var_wait(int, unsigned int) +declare_get_random_var_wait(long, unsigned long) #undef declare_get_random_var /* From 02102b63bd962b724a98c8ee7ffc038ca2c920cc Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 13 May 2022 13:18:46 +0200 Subject: [PATCH 0252/1842] random: make consistent use of buf and len commit a19402634c435a4eae226df53c141cdbb9922e7b upstream. 
The current code was a mix of "nbytes", "count", "size", "buffer", "in", and so forth. Instead, let's clean this up by naming input parameters "buf" (or "ubuf") and "len", so that you always understand that you're reading this variety of function argument. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 193 ++++++++++++++++++++--------------------- include/linux/random.h | 10 +-- 2 files changed, 99 insertions(+), 104 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 62938b56356a..5473a5970912 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -210,7 +210,7 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller) * * There are a few exported interfaces for use by other drivers: * - * void get_random_bytes(void *buf, size_t nbytes) + * void get_random_bytes(void *buf, size_t len) * u32 get_random_u32() * u64 get_random_u64() * unsigned int get_random_int() @@ -251,7 +251,7 @@ static DEFINE_PER_CPU(struct crng, crngs) = { }; /* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */ -static void extract_entropy(void *buf, size_t nbytes); +static void extract_entropy(void *buf, size_t len); /* This extracts a new crng key from the input pool. */ static void crng_reseed(void) @@ -405,24 +405,24 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS], local_unlock_irqrestore(&crngs.lock, flags); } -static void _get_random_bytes(void *buf, size_t nbytes) +static void _get_random_bytes(void *buf, size_t len) { u32 chacha_state[CHACHA_STATE_WORDS]; u8 tmp[CHACHA_BLOCK_SIZE]; - size_t len; + size_t first_block_len; - if (!nbytes) + if (!len) return; - len = min_t(size_t, 32, nbytes); - crng_make_state(chacha_state, buf, len); - nbytes -= len; - buf += len; + first_block_len = min_t(size_t, 32, len); + crng_make_state(chacha_state, buf, first_block_len); + len -= first_block_len; + buf += first_block_len; - while (nbytes) { - if (nbytes < CHACHA_BLOCK_SIZE) { + while (len) { + if (len < CHACHA_BLOCK_SIZE) { chacha20_block(chacha_state, tmp); - memcpy(buf, tmp, nbytes); + memcpy(buf, tmp, len); memzero_explicit(tmp, sizeof(tmp)); break; } @@ -430,7 +430,7 @@ static void _get_random_bytes(void *buf, size_t nbytes) chacha20_block(chacha_state, buf); if (unlikely(chacha_state[12] == 0)) ++chacha_state[13]; - nbytes -= CHACHA_BLOCK_SIZE; + len -= CHACHA_BLOCK_SIZE; buf += CHACHA_BLOCK_SIZE; } @@ -447,20 +447,20 @@ static void _get_random_bytes(void *buf, size_t nbytes) * wait_for_random_bytes() should be called and return 0 at least once * at any point prior. */ -void get_random_bytes(void *buf, size_t nbytes) +void get_random_bytes(void *buf, size_t len) { warn_unseeded_randomness(); - _get_random_bytes(buf, nbytes); + _get_random_bytes(buf, len); } EXPORT_SYMBOL(get_random_bytes); -static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) +static ssize_t get_random_bytes_user(void __user *ubuf, size_t len) { - size_t len, left, ret = 0; + size_t block_len, left, ret = 0; u32 chacha_state[CHACHA_STATE_WORDS]; u8 output[CHACHA_BLOCK_SIZE]; - if (!nbytes) + if (!len) return 0; /* @@ -474,8 +474,8 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) * use chacha_state after, so we can simply return those bytes to * the user directly. 
*/ - if (nbytes <= CHACHA_KEY_SIZE) { - ret = nbytes - copy_to_user(buf, &chacha_state[4], nbytes); + if (len <= CHACHA_KEY_SIZE) { + ret = len - copy_to_user(ubuf, &chacha_state[4], len); goto out_zero_chacha; } @@ -484,17 +484,17 @@ static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes) if (unlikely(chacha_state[12] == 0)) ++chacha_state[13]; - len = min_t(size_t, nbytes, CHACHA_BLOCK_SIZE); - left = copy_to_user(buf, output, len); + block_len = min_t(size_t, len, CHACHA_BLOCK_SIZE); + left = copy_to_user(ubuf, output, block_len); if (left) { - ret += len - left; + ret += block_len - left; break; } - buf += len; - ret += len; - nbytes -= len; - if (!nbytes) + ubuf += block_len; + ret += block_len; + len -= block_len; + if (!len) break; BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0); @@ -668,24 +668,24 @@ unsigned long randomize_page(unsigned long start, unsigned long range) * use. Use get_random_bytes() instead. It returns the number of * bytes filled in. */ -size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes) +size_t __must_check get_random_bytes_arch(void *buf, size_t len) { - size_t left = nbytes; + size_t left = len; u8 *p = buf; while (left) { unsigned long v; - size_t chunk = min_t(size_t, left, sizeof(unsigned long)); + size_t block_len = min_t(size_t, left, sizeof(unsigned long)); if (!arch_get_random_long(&v)) break; - memcpy(p, &v, chunk); - p += chunk; - left -= chunk; + memcpy(p, &v, block_len); + p += block_len; + left -= block_len; } - return nbytes - left; + return len - left; } EXPORT_SYMBOL(get_random_bytes_arch); @@ -696,15 +696,15 @@ EXPORT_SYMBOL(get_random_bytes_arch); * * Callers may add entropy via: * - * static void mix_pool_bytes(const void *in, size_t nbytes) + * static void mix_pool_bytes(const void *buf, size_t len) * * After which, if added entropy should be credited: * - * static void credit_init_bits(size_t nbits) + * static void credit_init_bits(size_t bits) * * Finally, extract entropy via: * - * static void extract_entropy(void *buf, size_t nbytes) + * static void extract_entropy(void *buf, size_t len) * **********************************************************************/ @@ -726,9 +726,9 @@ static struct { .lock = __SPIN_LOCK_UNLOCKED(input_pool.lock), }; -static void _mix_pool_bytes(const void *in, size_t nbytes) +static void _mix_pool_bytes(const void *buf, size_t len) { - blake2s_update(&input_pool.hash, in, nbytes); + blake2s_update(&input_pool.hash, buf, len); } /* @@ -736,12 +736,12 @@ static void _mix_pool_bytes(const void *in, size_t nbytes) * update the initialization bit counter; the caller should call * credit_init_bits if this is appropriate. */ -static void mix_pool_bytes(const void *in, size_t nbytes) +static void mix_pool_bytes(const void *buf, size_t len) { unsigned long flags; spin_lock_irqsave(&input_pool.lock, flags); - _mix_pool_bytes(in, nbytes); + _mix_pool_bytes(buf, len); spin_unlock_irqrestore(&input_pool.lock, flags); } @@ -749,7 +749,7 @@ static void mix_pool_bytes(const void *in, size_t nbytes) * This is an HKDF-like construction for using the hashed collected entropy * as a PRF key, that's then expanded block-by-block. 
*/ -static void extract_entropy(void *buf, size_t nbytes) +static void extract_entropy(void *buf, size_t len) { unsigned long flags; u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE]; @@ -778,12 +778,12 @@ static void extract_entropy(void *buf, size_t nbytes) spin_unlock_irqrestore(&input_pool.lock, flags); memzero_explicit(next_key, sizeof(next_key)); - while (nbytes) { - i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE); + while (len) { + i = min_t(size_t, len, BLAKE2S_HASH_SIZE); /* output = HASHPRF(seed, RDSEED || ++counter) */ ++block.counter; blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed)); - nbytes -= i; + len -= i; buf += i; } @@ -791,16 +791,16 @@ static void extract_entropy(void *buf, size_t nbytes) memzero_explicit(&block, sizeof(block)); } -static void credit_init_bits(size_t nbits) +static void credit_init_bits(size_t bits) { static struct execute_work set_ready; unsigned int new, orig, add; unsigned long flags; - if (crng_ready() || !nbits) + if (crng_ready() || !bits) return; - add = min_t(size_t, nbits, POOL_BITS); + add = min_t(size_t, bits, POOL_BITS); do { orig = READ_ONCE(input_pool.init_bits); @@ -836,13 +836,11 @@ static void credit_init_bits(size_t nbits) * The following exported functions are used for pushing entropy into * the above entropy accumulation routines: * - * void add_device_randomness(const void *buf, size_t size); - * void add_hwgenerator_randomness(const void *buffer, size_t count, - * size_t entropy); - * void add_bootloader_randomness(const void *buf, size_t size); + * void add_device_randomness(const void *buf, size_t len); + * void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy); + * void add_bootloader_randomness(const void *buf, size_t len); * void add_interrupt_randomness(int irq); - * void add_input_randomness(unsigned int type, unsigned int code, - * unsigned int value); + * void add_input_randomness(unsigned int type, unsigned int code, unsigned int value); * void add_disk_randomness(struct gendisk *disk); * * add_device_randomness() adds data to the input pool that @@ -906,7 +904,7 @@ int __init random_init(const char *command_line) { ktime_t now = ktime_get_real(); unsigned int i, arch_bytes; - unsigned long rv; + unsigned long entropy; #if defined(LATENT_ENTROPY_PLUGIN) static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy; @@ -914,13 +912,13 @@ int __init random_init(const char *command_line) #endif for (i = 0, arch_bytes = BLAKE2S_BLOCK_SIZE; - i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) { - if (!arch_get_random_seed_long_early(&rv) && - !arch_get_random_long_early(&rv)) { - rv = random_get_entropy(); - arch_bytes -= sizeof(rv); + i < BLAKE2S_BLOCK_SIZE; i += sizeof(entropy)) { + if (!arch_get_random_seed_long_early(&entropy) && + !arch_get_random_long_early(&entropy)) { + entropy = random_get_entropy(); + arch_bytes -= sizeof(entropy); } - _mix_pool_bytes(&rv, sizeof(rv)); + _mix_pool_bytes(&entropy, sizeof(entropy)); } _mix_pool_bytes(&now, sizeof(now)); _mix_pool_bytes(utsname(), sizeof(*(utsname()))); @@ -943,14 +941,14 @@ int __init random_init(const char *command_line) * the entropy pool having similar initial state across largely * identical devices. 
*/ -void add_device_randomness(const void *buf, size_t size) +void add_device_randomness(const void *buf, size_t len) { unsigned long entropy = random_get_entropy(); unsigned long flags; spin_lock_irqsave(&input_pool.lock, flags); _mix_pool_bytes(&entropy, sizeof(entropy)); - _mix_pool_bytes(buf, size); + _mix_pool_bytes(buf, len); spin_unlock_irqrestore(&input_pool.lock, flags); } EXPORT_SYMBOL(add_device_randomness); @@ -960,10 +958,9 @@ EXPORT_SYMBOL(add_device_randomness); * Those devices may produce endless random bits and will be throttled * when our pool is full. */ -void add_hwgenerator_randomness(const void *buffer, size_t count, - size_t entropy) +void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy) { - mix_pool_bytes(buffer, count); + mix_pool_bytes(buf, len); credit_init_bits(entropy); /* @@ -979,11 +976,11 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); * Handle random seed passed by bootloader, and credit it if * CONFIG_RANDOM_TRUST_BOOTLOADER is set. */ -void add_bootloader_randomness(const void *buf, size_t size) +void add_bootloader_randomness(const void *buf, size_t len) { - mix_pool_bytes(buf, size); + mix_pool_bytes(buf, len); if (trust_bootloader) - credit_init_bits(size * 8); + credit_init_bits(len * 8); } EXPORT_SYMBOL_GPL(add_bootloader_randomness); @@ -1183,8 +1180,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned int nu credit_init_bits(bits); } -void add_input_randomness(unsigned int type, unsigned int code, - unsigned int value) +void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) { static unsigned char last_value; static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES }; @@ -1303,8 +1299,7 @@ static void try_to_generate_entropy(void) * **********************************************************************/ -SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, - flags) +SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags) { if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE)) return -EINVAL; @@ -1316,8 +1311,8 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM)) return -EINVAL; - if (count > INT_MAX) - count = INT_MAX; + if (len > INT_MAX) + len = INT_MAX; if (!crng_ready() && !(flags & GRND_INSECURE)) { int ret; @@ -1328,7 +1323,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int, if (unlikely(ret)) return ret; } - return get_random_bytes_user(buf, count); + return get_random_bytes_user(ubuf, len); } static __poll_t random_poll(struct file *file, poll_table *wait) @@ -1337,21 +1332,21 @@ static __poll_t random_poll(struct file *file, poll_table *wait) return crng_ready() ? 
EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM; } -static int write_pool(const char __user *ubuf, size_t count) +static int write_pool(const char __user *ubuf, size_t len) { - size_t len; + size_t block_len; int ret = 0; u8 block[BLAKE2S_BLOCK_SIZE]; - while (count) { - len = min(count, sizeof(block)); - if (copy_from_user(block, ubuf, len)) { + while (len) { + block_len = min(len, sizeof(block)); + if (copy_from_user(block, ubuf, block_len)) { ret = -EFAULT; goto out; } - count -= len; - ubuf += len; - mix_pool_bytes(block, len); + len -= block_len; + ubuf += block_len; + mix_pool_bytes(block, block_len); cond_resched(); } @@ -1360,20 +1355,20 @@ static int write_pool(const char __user *ubuf, size_t count) return ret; } -static ssize_t random_write(struct file *file, const char __user *buffer, - size_t count, loff_t *ppos) +static ssize_t random_write(struct file *file, const char __user *ubuf, + size_t len, loff_t *ppos) { int ret; - ret = write_pool(buffer, count); + ret = write_pool(ubuf, len); if (ret) return ret; - return (ssize_t)count; + return (ssize_t)len; } -static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, - loff_t *ppos) +static ssize_t urandom_read(struct file *file, char __user *ubuf, + size_t len, loff_t *ppos) { static int maxwarn = 10; @@ -1383,22 +1378,22 @@ static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes, else if (ratelimit_disable || __ratelimit(&urandom_warning)) { --maxwarn; pr_notice("%s: uninitialized urandom read (%zd bytes read)\n", - current->comm, nbytes); + current->comm, len); } } - return get_random_bytes_user(buf, nbytes); + return get_random_bytes_user(ubuf, len); } -static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes, - loff_t *ppos) +static ssize_t random_read(struct file *file, char __user *ubuf, + size_t len, loff_t *ppos) { int ret; ret = wait_for_random_bytes(); if (ret != 0) return ret; - return get_random_bytes_user(buf, nbytes); + return get_random_bytes_user(ubuf, len); } static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) @@ -1523,7 +1518,7 @@ static u8 sysctl_bootid[UUID_SIZE]; * UUID. The difference is in whether table->data is NULL; if it is, * then a new UUID is generated and returned to the user. */ -static int proc_do_uuid(struct ctl_table *table, int write, void *buffer, +static int proc_do_uuid(struct ctl_table *table, int write, void *buf, size_t *lenp, loff_t *ppos) { u8 tmp_uuid[UUID_SIZE], *uuid; @@ -1550,14 +1545,14 @@ static int proc_do_uuid(struct ctl_table *table, int write, void *buffer, } snprintf(uuid_string, sizeof(uuid_string), "%pU", uuid); - return proc_dostring(&fake_table, 0, buffer, lenp, ppos); + return proc_dostring(&fake_table, 0, buf, lenp, ppos); } /* The same as proc_dointvec, but writes don't change anything. */ -static int proc_do_rointvec(struct ctl_table *table, int write, void *buffer, +static int proc_do_rointvec(struct ctl_table *table, int write, void *buf, size_t *lenp, loff_t *ppos) { - return write ? 0 : proc_dointvec(table, 0, buffer, lenp, ppos); + return write ? 
0 : proc_dointvec(table, 0, buf, lenp, ppos); } extern struct ctl_table random_table[]; diff --git a/include/linux/random.h b/include/linux/random.h index 0416b393ca4d..d6ef6cf4346a 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -12,12 +12,12 @@ struct notifier_block; -void add_device_randomness(const void *, size_t); -void add_bootloader_randomness(const void *, size_t); +void add_device_randomness(const void *buf, size_t len); +void add_bootloader_randomness(const void *buf, size_t len); void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; void add_interrupt_randomness(int irq) __latent_entropy; -void add_hwgenerator_randomness(const void *buffer, size_t count, size_t entropy); +void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy); #if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__) static inline void add_latent_entropy(void) @@ -28,8 +28,8 @@ static inline void add_latent_entropy(void) static inline void add_latent_entropy(void) { } #endif -void get_random_bytes(void *buf, size_t nbytes); -size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes); +void get_random_bytes(void *buf, size_t len); +size_t __must_check get_random_bytes_arch(void *buf, size_t len); u32 get_random_u32(void); u64 get_random_u64(void); static inline unsigned int get_random_int(void) From 5f2a040b2fb47a76b11bc5f46dbdad55f6cdd753 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Fri, 13 May 2022 16:17:12 +0200 Subject: [PATCH 0253/1842] random: move initialization functions out of hot pages commit 560181c27b582557d633ecb608110075433383af upstream. Much of random.c is devoted to initializing the rng and accounting for when a sufficient amount of entropy has been added. In a perfect world, this would all happen during init, and so we could mark these functions as __init. But in reality, this isn't the case: sometimes the rng only finishes initializing some seconds after system init is finished. For this reason, at the moment, a whole host of functions that are only used relatively close to system init and then never again are intermixed with functions that are used in hot code all the time. This creates more cache misses than necessary. In order to pack the hot code closer together, this commit moves the initialization functions that can't be marked as __init into .text.unlikely by way of the __cold attribute. Of particular note is moving credit_init_bits() into a macro wrapper that inlines the crng_ready() static branch check. This avoids a function call to a nop+ret, and most notably prevents extra entropy arithmetic from being computed in mix_interrupt_randomness(). Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 40 ++++++++++++++++++---------------------- 1 file changed, 18 insertions(+), 22 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 5473a5970912..10d5c969c05b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -109,7 +109,7 @@ bool rng_is_initialized(void) } EXPORT_SYMBOL(rng_is_initialized); -static void crng_set_ready(struct work_struct *work) +static void __cold crng_set_ready(struct work_struct *work) { static_branch_enable(&crng_is_ready); } @@ -148,7 +148,7 @@ EXPORT_SYMBOL(wait_for_random_bytes); * returns: 0 if callback is successfully added * -EALREADY if pool is already initialised (callback not called) */ -int register_random_ready_notifier(struct notifier_block *nb) +int __cold register_random_ready_notifier(struct notifier_block *nb) { unsigned long flags; int ret = -EALREADY; @@ -167,7 +167,7 @@ EXPORT_SYMBOL(register_random_ready_notifier); /* * Delete a previously registered readiness callback function. */ -int unregister_random_ready_notifier(struct notifier_block *nb) +int __cold unregister_random_ready_notifier(struct notifier_block *nb) { unsigned long flags; int ret; @@ -179,7 +179,7 @@ int unregister_random_ready_notifier(struct notifier_block *nb) } EXPORT_SYMBOL(unregister_random_ready_notifier); -static void process_random_ready_list(void) +static void __cold process_random_ready_list(void) { unsigned long flags; @@ -189,15 +189,9 @@ static void process_random_ready_list(void) } #define warn_unseeded_randomness() \ - _warn_unseeded_randomness(__func__, (void *)_RET_IP_) - -static void _warn_unseeded_randomness(const char *func_name, void *caller) -{ - if (!IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) || crng_ready()) - return; - printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", - func_name, caller, crng_init); -} + if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \ + printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \ + __func__, (void *)_RET_IP_, crng_init) /********************************************************************* @@ -616,7 +610,7 @@ EXPORT_SYMBOL(get_random_u32); * This function is called when the CPU is coming up, with entry * CPUHP_RANDOM_PREPARE, which comes before CPUHP_WORKQUEUE_PREP. */ -int random_prepare_cpu(unsigned int cpu) +int __cold random_prepare_cpu(unsigned int cpu) { /* * When the cpu comes back online, immediately invalidate both @@ -791,13 +785,15 @@ static void extract_entropy(void *buf, size_t len) memzero_explicit(&block, sizeof(block)); } -static void credit_init_bits(size_t bits) +#define credit_init_bits(bits) if (!crng_ready()) _credit_init_bits(bits) + +static void __cold _credit_init_bits(size_t bits) { static struct execute_work set_ready; unsigned int new, orig, add; unsigned long flags; - if (crng_ready() || !bits) + if (!bits) return; add = min_t(size_t, bits, POOL_BITS); @@ -976,7 +972,7 @@ EXPORT_SYMBOL_GPL(add_hwgenerator_randomness); * Handle random seed passed by bootloader, and credit it if * CONFIG_RANDOM_TRUST_BOOTLOADER is set. 
*/ -void add_bootloader_randomness(const void *buf, size_t len) +void __cold add_bootloader_randomness(const void *buf, size_t len) { mix_pool_bytes(buf, len); if (trust_bootloader) @@ -1022,7 +1018,7 @@ static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2) * This function is called when the CPU has just come online, with * entry CPUHP_AP_RANDOM_ONLINE, just after CPUHP_AP_WORKQUEUE_ONLINE. */ -int random_online_cpu(unsigned int cpu) +int __cold random_online_cpu(unsigned int cpu) { /* * During CPU shutdown and before CPU onlining, add_interrupt_ @@ -1177,7 +1173,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned int nu if (in_irq()) this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1; else - credit_init_bits(bits); + _credit_init_bits(bits); } void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) @@ -1205,7 +1201,7 @@ void add_disk_randomness(struct gendisk *disk) } EXPORT_SYMBOL_GPL(add_disk_randomness); -void rand_initialize_disk(struct gendisk *disk) +void __cold rand_initialize_disk(struct gendisk *disk) { struct timer_rand_state *state; @@ -1234,7 +1230,7 @@ void rand_initialize_disk(struct gendisk *disk) * * So the re-arming always happens in the entropy loop itself. */ -static void entropy_timer(struct timer_list *t) +static void __cold entropy_timer(struct timer_list *t) { credit_init_bits(1); } @@ -1243,7 +1239,7 @@ static void entropy_timer(struct timer_list *t) * If we have an actual cycle counter, see if we can * generate enough entropy with timing noise */ -static void try_to_generate_entropy(void) +static void __cold try_to_generate_entropy(void) { struct { unsigned long entropy; From 552ae8e4841ba0550952063922d3b38bcd84ec6e Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 14 May 2022 13:59:30 +0200 Subject: [PATCH 0254/1842] random: move randomize_page() into mm where it belongs commit 5ad7dd882e45d7fe432c32e896e2aaa0b21746ea upstream. randomize_page is an mm function. It is documented like one. It contains the history of one. It has the naming convention of one. It looks just like another very similar function in mm, randomize_stack_top(). And it has always been maintained and updated by mm people. There is no need for it to be in random.c. In the "which shape does not look like the other ones" test, pointing to randomize_page() is correct. So move randomize_page() into mm/util.c, right next to the similar randomize_stack_top() function. This commit contains no actual code changes. Cc: Andrew Morton Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 32 -------------------------------- include/linux/mm.h | 1 + include/linux/random.h | 2 -- mm/util.c | 32 ++++++++++++++++++++++++++++++++ 4 files changed, 33 insertions(+), 34 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 10d5c969c05b..3461a10c11fa 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -624,38 +624,6 @@ int __cold random_prepare_cpu(unsigned int cpu) } #endif -/** - * randomize_page - Generate a random, page aligned address - * @start: The smallest acceptable address the caller will take. - * @range: The size of the area, starting at @start, within which the - * random address must fall. - * - * If @start + @range would overflow, @range is capped. - * - * NOTE: Historical use of randomize_range, which this replaces, presumed that - * @start was already page aligned. We now align it regardless. 
- * - * Return: A page aligned address within [start, start + range). On error, - * @start is returned. - */ -unsigned long randomize_page(unsigned long start, unsigned long range) -{ - if (!PAGE_ALIGNED(start)) { - range -= PAGE_ALIGN(start) - start; - start = PAGE_ALIGN(start); - } - - if (start > ULONG_MAX - range) - range = ULONG_MAX - start; - - range >>= PAGE_SHIFT; - - if (range == 0) - return start; - - return start + (get_random_long() % range << PAGE_SHIFT); -} - /* * This function will use the architecture-specific hardware random * number generator if it is available. It is not recommended for diff --git a/include/linux/mm.h b/include/linux/mm.h index 289c26f055cd..5b4d88faf114 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2585,6 +2585,7 @@ extern int install_special_mapping(struct mm_struct *mm, unsigned long flags, struct page **pages); unsigned long randomize_stack_top(unsigned long stack_top); +unsigned long randomize_page(unsigned long start, unsigned long range); extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long); diff --git a/include/linux/random.h b/include/linux/random.h index d6ef6cf4346a..917470c4490a 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -64,8 +64,6 @@ static inline unsigned long get_random_canary(void) return get_random_long() & CANARY_MASK; } -unsigned long randomize_page(unsigned long start, unsigned long range); - int __init random_init(const char *command_line); bool rng_is_initialized(void); int wait_for_random_bytes(void); diff --git a/mm/util.c b/mm/util.c index 890472760790..ba9643de689e 100644 --- a/mm/util.c +++ b/mm/util.c @@ -331,6 +331,38 @@ unsigned long randomize_stack_top(unsigned long stack_top) #endif } +/** + * randomize_page - Generate a random, page aligned address + * @start: The smallest acceptable address the caller will take. + * @range: The size of the area, starting at @start, within which the + * random address must fall. + * + * If @start + @range would overflow, @range is capped. + * + * NOTE: Historical use of randomize_range, which this replaces, presumed that + * @start was already page aligned. We now align it regardless. + * + * Return: A page aligned address within [start, start + range). On error, + * @start is returned. + */ +unsigned long randomize_page(unsigned long start, unsigned long range) +{ + if (!PAGE_ALIGNED(start)) { + range -= PAGE_ALIGN(start) - start; + start = PAGE_ALIGN(start); + } + + if (start > ULONG_MAX - range) + range = ULONG_MAX - start; + + range >>= PAGE_SHIFT; + + if (range == 0) + return start; + + return start + (get_random_long() % range << PAGE_SHIFT); +} + #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT unsigned long arch_randomize_brk(struct mm_struct *mm) { From 4bb374a1183b0d89ba8e0aa4510ce292bb3ed458 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 15 May 2022 00:22:05 +0200 Subject: [PATCH 0255/1842] random: unify batched entropy implementations commit 3092adcef3ffd2ef59634998297ca8358461ebce upstream. There are currently two separate batched entropy implementations, for u32 and u64, with nearly identical code, with the goal of avoiding unaligned memory accesses and letting the buffers be used more efficiently. Having to maintain these two functions independently is a bit of a hassle though, considering that they always need to be kept in sync. 
This commit factors them out into a type-generic macro, so that the expansion produces the same code as before, such that diffing the assembly shows no differences. This will also make it easier in the future to add u16 and u8 batches. This was initially tested using an always_inline function and letting gcc constant fold the type size in, but the code gen was less efficient, and in general it was more verbose and harder to follow. So this patch goes with the boring macro solution, similar to what's already done for the _wait functions in random.h. Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 145 ++++++++++++++++-------------------------- 1 file changed, 54 insertions(+), 91 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 3461a10c11fa..6b7c0f0c9f69 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -511,99 +511,62 @@ static ssize_t get_random_bytes_user(void __user *ubuf, size_t len) * provided by this function is okay, the function wait_for_random_bytes() * should be called and return 0 at least once at any point prior. */ -struct batched_entropy { - union { - /* - * We make this 1.5x a ChaCha block, so that we get the - * remaining 32 bytes from fast key erasure, plus one full - * block from the detached ChaCha state. We can increase - * the size of this later if needed so long as we keep the - * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. - */ - u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))]; - u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))]; - }; - local_lock_t lock; - unsigned long generation; - unsigned int position; -}; +#define DEFINE_BATCHED_ENTROPY(type) \ +struct batch_ ##type { \ + /* \ + * We make this 1.5x a ChaCha block, so that we get the \ + * remaining 32 bytes from fast key erasure, plus one full \ + * block from the detached ChaCha state. We can increase \ + * the size of this later if needed so long as we keep the \ + * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE. 
\ + */ \ + type entropy[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(type))]; \ + local_lock_t lock; \ + unsigned long generation; \ + unsigned int position; \ +}; \ + \ +static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = { \ + .lock = INIT_LOCAL_LOCK(batched_entropy_ ##type.lock), \ + .position = UINT_MAX \ +}; \ + \ +type get_random_ ##type(void) \ +{ \ + type ret; \ + unsigned long flags; \ + struct batch_ ##type *batch; \ + unsigned long next_gen; \ + \ + warn_unseeded_randomness(); \ + \ + if (!crng_ready()) { \ + _get_random_bytes(&ret, sizeof(ret)); \ + return ret; \ + } \ + \ + local_lock_irqsave(&batched_entropy_ ##type.lock, flags); \ + batch = raw_cpu_ptr(&batched_entropy_##type); \ + \ + next_gen = READ_ONCE(base_crng.generation); \ + if (batch->position >= ARRAY_SIZE(batch->entropy) || \ + next_gen != batch->generation) { \ + _get_random_bytes(batch->entropy, sizeof(batch->entropy)); \ + batch->position = 0; \ + batch->generation = next_gen; \ + } \ + \ + ret = batch->entropy[batch->position]; \ + batch->entropy[batch->position] = 0; \ + ++batch->position; \ + local_unlock_irqrestore(&batched_entropy_ ##type.lock, flags); \ + return ret; \ +} \ +EXPORT_SYMBOL(get_random_ ##type); -static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = { - .lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock), - .position = UINT_MAX -}; - -u64 get_random_u64(void) -{ - u64 ret; - unsigned long flags; - struct batched_entropy *batch; - unsigned long next_gen; - - warn_unseeded_randomness(); - - if (!crng_ready()) { - _get_random_bytes(&ret, sizeof(ret)); - return ret; - } - - local_lock_irqsave(&batched_entropy_u64.lock, flags); - batch = raw_cpu_ptr(&batched_entropy_u64); - - next_gen = READ_ONCE(base_crng.generation); - if (batch->position >= ARRAY_SIZE(batch->entropy_u64) || - next_gen != batch->generation) { - _get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64)); - batch->position = 0; - batch->generation = next_gen; - } - - ret = batch->entropy_u64[batch->position]; - batch->entropy_u64[batch->position] = 0; - ++batch->position; - local_unlock_irqrestore(&batched_entropy_u64.lock, flags); - return ret; -} -EXPORT_SYMBOL(get_random_u64); - -static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = { - .lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock), - .position = UINT_MAX -}; - -u32 get_random_u32(void) -{ - u32 ret; - unsigned long flags; - struct batched_entropy *batch; - unsigned long next_gen; - - warn_unseeded_randomness(); - - if (!crng_ready()) { - _get_random_bytes(&ret, sizeof(ret)); - return ret; - } - - local_lock_irqsave(&batched_entropy_u32.lock, flags); - batch = raw_cpu_ptr(&batched_entropy_u32); - - next_gen = READ_ONCE(base_crng.generation); - if (batch->position >= ARRAY_SIZE(batch->entropy_u32) || - next_gen != batch->generation) { - _get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32)); - batch->position = 0; - batch->generation = next_gen; - } - - ret = batch->entropy_u32[batch->position]; - batch->entropy_u32[batch->position] = 0; - ++batch->position; - local_unlock_irqrestore(&batched_entropy_u32.lock, flags); - return ret; -} -EXPORT_SYMBOL(get_random_u32); +DEFINE_BATCHED_ENTROPY(u64) +DEFINE_BATCHED_ENTROPY(u32) #ifdef CONFIG_SMP /* From affa1ae52219459af7a3bb1044360d31da1999fd Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Thu, 19 May 2022 17:31:36 -0600 Subject: [PATCH 0256/1842] random: convert to using fops->read_iter() commit 1b388e7765f2eaa137cf5d92b47ef5925ad83ced upstream. 
This is a pre-requisite to wiring up splice() again for the random and urandom drivers. It also allows us to remove the INT_MAX check in getrandom(), because import_single_range() applies capping internally. Signed-off-by: Jens Axboe [Jason: rewrote get_random_bytes_user() to simplify and also incorporate additional suggestions from Al.] Cc: Al Viro Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 66 ++++++++++++++++++++----------------------- 1 file changed, 30 insertions(+), 36 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 6b7c0f0c9f69..aa868f0983c6 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -52,6 +52,7 @@ #include #include #include +#include #include #include #include @@ -448,13 +449,13 @@ void get_random_bytes(void *buf, size_t len) } EXPORT_SYMBOL(get_random_bytes); -static ssize_t get_random_bytes_user(void __user *ubuf, size_t len) +static ssize_t get_random_bytes_user(struct iov_iter *iter) { - size_t block_len, left, ret = 0; u32 chacha_state[CHACHA_STATE_WORDS]; - u8 output[CHACHA_BLOCK_SIZE]; + u8 block[CHACHA_BLOCK_SIZE]; + size_t ret = 0, copied; - if (!len) + if (unlikely(!iov_iter_count(iter))) return 0; /* @@ -468,30 +469,22 @@ static ssize_t get_random_bytes_user(void __user *ubuf, size_t len) * use chacha_state after, so we can simply return those bytes to * the user directly. */ - if (len <= CHACHA_KEY_SIZE) { - ret = len - copy_to_user(ubuf, &chacha_state[4], len); + if (iov_iter_count(iter) <= CHACHA_KEY_SIZE) { + ret = copy_to_iter(&chacha_state[4], CHACHA_KEY_SIZE, iter); goto out_zero_chacha; } for (;;) { - chacha20_block(chacha_state, output); + chacha20_block(chacha_state, block); if (unlikely(chacha_state[12] == 0)) ++chacha_state[13]; - block_len = min_t(size_t, len, CHACHA_BLOCK_SIZE); - left = copy_to_user(ubuf, output, block_len); - if (left) { - ret += block_len - left; - break; - } - - ubuf += block_len; - ret += block_len; - len -= block_len; - if (!len) + copied = copy_to_iter(block, sizeof(block), iter); + ret += copied; + if (!iov_iter_count(iter) || copied != sizeof(block)) break; - BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0); + BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0); if (ret % PAGE_SIZE == 0) { if (signal_pending(current)) break; @@ -499,7 +492,7 @@ static ssize_t get_random_bytes_user(void __user *ubuf, size_t len) } } - memzero_explicit(output, sizeof(output)); + memzero_explicit(block, sizeof(block)); out_zero_chacha: memzero_explicit(chacha_state, sizeof(chacha_state)); return ret ? 
ret : -EFAULT; @@ -1228,6 +1221,10 @@ static void __cold try_to_generate_entropy(void) SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags) { + struct iov_iter iter; + struct iovec iov; + int ret; + if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE)) return -EINVAL; @@ -1238,19 +1235,18 @@ SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM)) return -EINVAL; - if (len > INT_MAX) - len = INT_MAX; - if (!crng_ready() && !(flags & GRND_INSECURE)) { - int ret; - if (flags & GRND_NONBLOCK) return -EAGAIN; ret = wait_for_random_bytes(); if (unlikely(ret)) return ret; } - return get_random_bytes_user(ubuf, len); + + ret = import_single_range(READ, ubuf, len, &iov, &iter); + if (unlikely(ret)) + return ret; + return get_random_bytes_user(&iter); } static __poll_t random_poll(struct file *file, poll_table *wait) @@ -1294,8 +1290,7 @@ static ssize_t random_write(struct file *file, const char __user *ubuf, return (ssize_t)len; } -static ssize_t urandom_read(struct file *file, char __user *ubuf, - size_t len, loff_t *ppos) +static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter) { static int maxwarn = 10; @@ -1304,23 +1299,22 @@ static ssize_t urandom_read(struct file *file, char __user *ubuf, ++urandom_warning.missed; else if (ratelimit_disable || __ratelimit(&urandom_warning)) { --maxwarn; - pr_notice("%s: uninitialized urandom read (%zd bytes read)\n", - current->comm, len); + pr_notice("%s: uninitialized urandom read (%zu bytes read)\n", + current->comm, iov_iter_count(iter)); } } - return get_random_bytes_user(ubuf, len); + return get_random_bytes_user(iter); } -static ssize_t random_read(struct file *file, char __user *ubuf, - size_t len, loff_t *ppos) +static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter) { int ret; ret = wait_for_random_bytes(); if (ret != 0) return ret; - return get_random_bytes_user(ubuf, len); + return get_random_bytes_user(iter); } static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) @@ -1382,7 +1376,7 @@ static int random_fasync(int fd, struct file *filp, int on) } const struct file_operations random_fops = { - .read = random_read, + .read_iter = random_read_iter, .write = random_write, .poll = random_poll, .unlocked_ioctl = random_ioctl, @@ -1392,7 +1386,7 @@ const struct file_operations random_fops = { }; const struct file_operations urandom_fops = { - .read = urandom_read, + .read_iter = urandom_read_iter, .write = random_write, .unlocked_ioctl = random_ioctl, .compat_ioctl = compat_ptr_ioctl, From cf8f8d37586f5e492ea2a6dd82f10628a31f03b4 Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Thu, 19 May 2022 17:43:15 -0600 Subject: [PATCH 0257/1842] random: convert to using fops->write_iter() commit 22b0a222af4df8ee9bb8e07013ab44da9511b047 upstream. Now that the read side has been converted to fix a regression with splice, convert the write side as well to have some symmetry in the interface used (and help deprecate ->write()). Signed-off-by: Jens Axboe [Jason: cleaned up random_ioctl a bit, require full writes in RNDADDENTROPY since it's crediting entropy, simplify control flow of write_pool(), and incorporate suggestions from Al.] Cc: Al Viro Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 67 ++++++++++++++++++++++--------------------- 1 file changed, 35 insertions(+), 32 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index aa868f0983c6..75d8942f4481 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1255,39 +1255,31 @@ static __poll_t random_poll(struct file *file, poll_table *wait) return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM; } -static int write_pool(const char __user *ubuf, size_t len) +static ssize_t write_pool(struct iov_iter *iter) { - size_t block_len; - int ret = 0; u8 block[BLAKE2S_BLOCK_SIZE]; + ssize_t ret = 0; + size_t copied; - while (len) { - block_len = min(len, sizeof(block)); - if (copy_from_user(block, ubuf, block_len)) { - ret = -EFAULT; - goto out; - } - len -= block_len; - ubuf += block_len; - mix_pool_bytes(block, block_len); + if (unlikely(!iov_iter_count(iter))) + return 0; + + for (;;) { + copied = copy_from_iter(block, sizeof(block), iter); + ret += copied; + mix_pool_bytes(block, copied); + if (!iov_iter_count(iter) || copied != sizeof(block)) + break; cond_resched(); } -out: memzero_explicit(block, sizeof(block)); - return ret; + return ret ? ret : -EFAULT; } -static ssize_t random_write(struct file *file, const char __user *ubuf, - size_t len, loff_t *ppos) +static ssize_t random_write_iter(struct kiocb *kiocb, struct iov_iter *iter) { - int ret; - - ret = write_pool(ubuf, len); - if (ret) - return ret; - - return (ssize_t)len; + return write_pool(iter); } static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter) @@ -1319,9 +1311,8 @@ static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter) static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) { - int size, ent_count; int __user *p = (int __user *)arg; - int retval; + int ent_count; switch (cmd) { case RNDGETENTCNT: @@ -1338,20 +1329,32 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) return -EINVAL; credit_init_bits(ent_count); return 0; - case RNDADDENTROPY: + case RNDADDENTROPY: { + struct iov_iter iter; + struct iovec iov; + ssize_t ret; + int len; + if (!capable(CAP_SYS_ADMIN)) return -EPERM; if (get_user(ent_count, p++)) return -EFAULT; if (ent_count < 0) return -EINVAL; - if (get_user(size, p++)) + if (get_user(len, p++)) + return -EFAULT; + ret = import_single_range(WRITE, p, len, &iov, &iter); + if (unlikely(ret)) + return ret; + ret = write_pool(&iter); + if (unlikely(ret < 0)) + return ret; + /* Since we're crediting, enforce that it was all written into the pool. */ + if (unlikely(ret != len)) return -EFAULT; - retval = write_pool((const char __user *)p, size); - if (retval < 0) - return retval; credit_init_bits(ent_count); return 0; + } case RNDZAPENTCNT: case RNDCLEARPOOL: /* No longer has any effect. 
*/ @@ -1377,7 +1380,7 @@ static int random_fasync(int fd, struct file *filp, int on) const struct file_operations random_fops = { .read_iter = random_read_iter, - .write = random_write, + .write_iter = random_write_iter, .poll = random_poll, .unlocked_ioctl = random_ioctl, .compat_ioctl = compat_ptr_ioctl, @@ -1387,7 +1390,7 @@ const struct file_operations random_fops = { const struct file_operations urandom_fops = { .read_iter = urandom_read_iter, - .write = random_write, + .write_iter = random_write_iter, .unlocked_ioctl = random_ioctl, .compat_ioctl = compat_ptr_ioctl, .fasync = random_fasync, From 18c261e9485a238129b857a8e53d3e2da8ca8246 Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Thu, 19 May 2022 17:31:37 -0600 Subject: [PATCH 0258/1842] random: wire up fops->splice_{read,write}_iter() commit 79025e727a846be6fd215ae9cdb654368ac3f9a6 upstream. Now that random/urandom is using {read,write}_iter, we can wire it up to using the generic splice handlers. Fixes: 36e2c7421f02 ("fs: don't allow splice read/write without explicit ops") Signed-off-by: Jens Axboe [Jason: added the splice_write path. Note that sendfile() and such still does not work for read, though it does for write, because of a file type restriction in splice_direct_to_actor(), which I'll address separately.] Cc: Al Viro Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index 75d8942f4481..b3ce4201a51f 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1386,6 +1386,8 @@ const struct file_operations random_fops = { .compat_ioctl = compat_ptr_ioctl, .fasync = random_fasync, .llseek = noop_llseek, + .splice_read = generic_file_splice_read, + .splice_write = iter_file_splice_write, }; const struct file_operations urandom_fops = { @@ -1395,6 +1397,8 @@ const struct file_operations urandom_fops = { .compat_ioctl = compat_ptr_ioctl, .fasync = random_fasync, .llseek = noop_llseek, + .splice_read = generic_file_splice_read, + .splice_write = iter_file_splice_write, }; From 514f587340010942e7d22f2a51b8c812399e4ce7 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 22 May 2022 22:25:41 +0200 Subject: [PATCH 0259/1842] random: check for signals after page of pool writes commit 1ce6c8d68f8ac587f54d0a271ac594d3d51f3efb upstream. get_random_bytes_user() checks for signals after producing a PAGE_SIZE worth of output, just like /dev/zero does. write_pool() is doing basically the same work (actually, slightly more expensive), and so should stop to check for signals in the same way. Let's also name it write_pool_user() to match get_random_bytes_user(), so this won't be misused in the future. Before this patch, massive writes to /dev/urandom would tie up the process for an extremely long time and make it unterminatable. After, it can be successfully interrupted. 
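For convenience, the reproducer described in the next paragraph is repeated here as a standalone program. The quoted listing does not spell out its header names, so the four includes below are inferred from the calls the program makes (signal/kill, open, getpid/fork/pause/write/close, printf) and should be treated as an assumption rather than part of the original listing.

#include <signal.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

/* ~4 GiB source buffer, so a single write() runs long enough to interrupt. */
static unsigned char x[~0U];

static void handle(int sig) { }

int main(int argc, char *argv[])
{
	pid_t pid = getpid(), child;
	int fd;

	signal(SIGUSR1, handle);
	if (!(child = fork())) {
		/* Child: keep signalling the parent while it writes. */
		for (;;)
			kill(pid, SIGUSR1);
	}
	fd = open("/dev/urandom", O_WRONLY);
	pause();
	printf("interrupted after writing %zd bytes\n",
	       write(fd, x, sizeof(x)));
	close(fd);
	kill(child, SIGTERM);
	return 0;
}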
The following test program can be used to see this works as intended: #include #include #include #include static unsigned char x[~0U]; static void handle(int) { } int main(int argc, char *argv[]) { pid_t pid = getpid(), child; int fd; signal(SIGUSR1, handle); if (!(child = fork())) { for (;;) kill(pid, SIGUSR1); } fd = open("/dev/urandom", O_WRONLY); pause(); printf("interrupted after writing %zd bytes\n", write(fd, x, sizeof(x))); close(fd); kill(child, SIGTERM); return 0; } Result before: "interrupted after writing 2147479552 bytes" Result after: "interrupted after writing 4096 bytes" Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index b3ce4201a51f..00b50ccc9fae 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1255,7 +1255,7 @@ static __poll_t random_poll(struct file *file, poll_table *wait) return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM; } -static ssize_t write_pool(struct iov_iter *iter) +static ssize_t write_pool_user(struct iov_iter *iter) { u8 block[BLAKE2S_BLOCK_SIZE]; ssize_t ret = 0; @@ -1270,7 +1270,13 @@ static ssize_t write_pool(struct iov_iter *iter) mix_pool_bytes(block, copied); if (!iov_iter_count(iter) || copied != sizeof(block)) break; - cond_resched(); + + BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0); + if (ret % PAGE_SIZE == 0) { + if (signal_pending(current)) + break; + cond_resched(); + } } memzero_explicit(block, sizeof(block)); @@ -1279,7 +1285,7 @@ static ssize_t write_pool(struct iov_iter *iter) static ssize_t random_write_iter(struct kiocb *kiocb, struct iov_iter *iter) { - return write_pool(iter); + return write_pool_user(iter); } static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter) @@ -1346,7 +1352,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) ret = import_single_range(WRITE, p, len, &iov, &iter); if (unlikely(ret)) return ret; - ret = write_pool(&iter); + ret = write_pool_user(&iter); if (unlikely(ret < 0)) return ret; /* Since we're crediting, enforce that it was all written into the pool. */ From 7c57f213498871972b0d84828d0d5dcd1893b36c Mon Sep 17 00:00:00 2001 From: Edward Matijevic Date: Fri, 20 May 2022 23:45:15 -0500 Subject: [PATCH 0260/1842] ALSA: ctxfi: Add SB046x PCI ID commit 1b073ebb174d0c7109b438e0a5eb4495137803ec upstream. Adds the PCI ID for X-Fi cards sold under the Platnum and XtremeMusic names Before: snd_ctxfi 0000:05:05.0: chip 20K1 model Unknown (1102:0021) is found After: snd_ctxfi 0000:05:05.0: chip 20K1 model SB046x (1102:0021) is found [ This is only about defining the model name string, and the rest is handled just like before, as a default unknown device. 
Edward confirmed that the stuff has been working fine -- tiwai ] Signed-off-by: Edward Matijevic Cc: Link: https://lore.kernel.org/r/cae7d1a4-8bd9-7dfe-7427-db7e766f7272@gmail.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/ctxfi/ctatc.c | 2 ++ sound/pci/ctxfi/cthardware.h | 3 ++- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/sound/pci/ctxfi/ctatc.c b/sound/pci/ctxfi/ctatc.c index f8ac96cf38a4..06775519dab0 100644 --- a/sound/pci/ctxfi/ctatc.c +++ b/sound/pci/ctxfi/ctatc.c @@ -36,6 +36,7 @@ | ((IEC958_AES3_CON_FS_48000) << 24)) static const struct snd_pci_quirk subsys_20k1_list[] = { + SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0021, "SB046x", CTSB046X), SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0022, "SB055x", CTSB055X), SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x002f, "SB055x", CTSB055X), SND_PCI_QUIRK(PCI_VENDOR_ID_CREATIVE, 0x0029, "SB073x", CTSB073X), @@ -64,6 +65,7 @@ static const struct snd_pci_quirk subsys_20k2_list[] = { static const char *ct_subsys_name[NUM_CTCARDS] = { /* 20k1 models */ + [CTSB046X] = "SB046x", [CTSB055X] = "SB055x", [CTSB073X] = "SB073x", [CTUAA] = "UAA", diff --git a/sound/pci/ctxfi/cthardware.h b/sound/pci/ctxfi/cthardware.h index 9e6b83bd432d..b50d61a08e28 100644 --- a/sound/pci/ctxfi/cthardware.h +++ b/sound/pci/ctxfi/cthardware.h @@ -26,8 +26,9 @@ enum CHIPTYP { enum CTCARDS { /* 20k1 models */ + CTSB046X, + CT20K1_MODEL_FIRST = CTSB046X, CTSB055X, - CT20K1_MODEL_FIRST = CTSB055X, CTSB073X, CTUAA, CT20K1_UNKNOWN, From 56c31ac1d8aadb706ea977c097c714762b3fcfdd Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Mon, 30 May 2022 09:33:46 +0200 Subject: [PATCH 0261/1842] Linux 5.10.119 Link: https://lore.kernel.org/r/20220527084828.156494029@linuxfoundation.org Tested-by: Guenter Roeck Tested-by: Salvatore Bonaccorso Tested-by: Linux Kernel Functional Testing Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index f9210e43121d..b442cc5bbfc3 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 118 +SUBLEVEL = 119 EXTRAVERSION = NAME = Dare mighty things From 75e35951d6ec28a3a1802ffd76fabe788aa8bb02 Mon Sep 17 00:00:00 2001 From: IotaHydrae Date: Wed, 4 May 2022 19:59:04 +0800 Subject: [PATCH 0262/1842] pinctrl: sunxi: fix f1c100s uart2 function [ Upstream commit fa8785e5931367e2b43f2c507f26bcf3e281c0ca ] Change suniv f1c100s pinctrl,PD14 multiplexing function lvds1 to uart2 When the pin PD13 and PD14 is setting up to uart2 function in dts, there's an error occurred: 1c20800.pinctrl: unsupported function uart2 on pin PD14 Because 'uart2' is not any one multiplexing option of PD14, and pinctrl don't know how to configure it. So change the pin PD14 lvds1 function to uart2. 
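To illustrate where the "unsupported function" message comes from, here is a simplified sketch of the name-based lookup a pinctrl driver performs when a dts node requests function = "uart2" for a pin. It is illustrative only, not the sunxi driver's actual code; the struct and function names are made up for the example.

#include <linux/errno.h>
#include <linux/string.h>

/* Hypothetical stand-in for one (mux value, function name) entry of a pin. */
struct pin_mux_option {
	unsigned char muxval;
	const char *func;
};

/*
 * Resolve a requested function name against a pin's mux table. Before
 * this patch PD14 listed "lvds1" for mux 0x3, so a dts request for
 * "uart2" found no match and was rejected as unsupported.
 */
static int pin_find_muxval(const struct pin_mux_option *opts, int nopts,
			   const char *wanted)
{
	int i;

	for (i = 0; i < nopts; i++)
		if (!strcmp(opts[i].func, wanted))
			return opts[i].muxval;

	return -EINVAL; /* "unsupported function <f> on pin <p>" */
}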
Signed-off-by: IotaHydrae Reviewed-by: Andre Przywara Link: https://lore.kernel.org/r/tencent_70C1308DDA794C81CAEF389049055BACEC09@qq.com Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin --- drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c b/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c index 2801ca706273..68a5b627fb9b 100644 --- a/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c +++ b/drivers/pinctrl/sunxi/pinctrl-suniv-f1c100s.c @@ -204,7 +204,7 @@ static const struct sunxi_desc_pin suniv_f1c100s_pins[] = { SUNXI_FUNCTION(0x0, "gpio_in"), SUNXI_FUNCTION(0x1, "gpio_out"), SUNXI_FUNCTION(0x2, "lcd"), /* D20 */ - SUNXI_FUNCTION(0x3, "lvds1"), /* RX */ + SUNXI_FUNCTION(0x3, "uart2"), /* RX */ SUNXI_FUNCTION_IRQ_BANK(0x6, 0, 14)), SUNXI_PIN(SUNXI_PINCTRL_PIN(D, 15), SUNXI_FUNCTION(0x0, "gpio_in"), From d007f49ab789bee8ed76021830b49745d5feaf61 Mon Sep 17 00:00:00 2001 From: Al Viro Date: Wed, 18 May 2022 02:13:40 -0400 Subject: [PATCH 0263/1842] percpu_ref_init(): clean ->percpu_count_ref on failure [ Upstream commit a91714312eb16f9ecd1f7f8b3efe1380075f28d4 ] That way percpu_ref_exit() is safe after failing percpu_ref_init(). At least one user (cgroup_create()) had a double-free that way; there might be other similar bugs. Easier to fix in percpu_ref_init(), rather than playing whack-a-mole in sloppy users... Usual symptoms look like a messed refcounting in one of subsystems that use percpu allocations (might be percpu-refcount, might be something else). Having refcounts for two different objects share memory is Not Nice(tm)... Reported-by: syzbot+5b1e53987f858500ec00@syzkaller.appspotmail.com Signed-off-by: Al Viro Signed-off-by: Sasha Levin --- lib/percpu-refcount.c | 1 + 1 file changed, 1 insertion(+) diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c index e59eda07305e..493093b97093 100644 --- a/lib/percpu-refcount.c +++ b/lib/percpu-refcount.c @@ -75,6 +75,7 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release, data = kzalloc(sizeof(*ref->data), gfp); if (!data) { free_percpu((void __percpu *)ref->percpu_count_ptr); + ref->percpu_count_ptr = 0; return -ENOMEM; } From ac8d5eb26c9edeb139af1e02e1d3743aa2e1fcd7 Mon Sep 17 00:00:00 2001 From: Thomas Bartschies Date: Wed, 18 May 2022 08:32:18 +0200 Subject: [PATCH 0264/1842] net: af_key: check encryption module availability consistency [ Upstream commit 015c44d7bff3f44d569716117becd570c179ca32 ] Since the recent introduction supporting the SM3 and SM4 hash algos for IPsec, the kernel produces invalid pfkey acquire messages, when these encryption modules are disabled. This happens because the availability of the algos wasn't checked in all necessary functions. This patch adds these checks. 
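The bug pattern here is a two-pass message builder whose sizing pass and emitting pass disagree: the count_*_combs() helpers reserved space for algorithms whose backing crypto module was not available, so the space reserved could exceed what was actually written into the acquire message. A minimal sketch of the idea follows, with a made-up algorithm descriptor type standing in for the real one.

#include <linux/types.h>
#include <linux/pfkeyv2.h>

/* Hypothetical stand-in for the algorithm descriptor used by af_key. */
struct example_alg {
	bool pfkey_supported;
	bool available;	/* backing crypto module is loaded */
};

/* One predicate, shared by the sizing pass and the emitting pass. */
static bool example_alg_usable(const struct example_alg *a)
{
	return a->pfkey_supported && a->available;
}

static size_t example_count_combs(const struct example_alg *algs, int n)
{
	size_t sz = sizeof(struct sadb_prop);
	int i;

	for (i = 0; i < n; i++)
		if (example_alg_usable(&algs[i]))
			sz += sizeof(struct sadb_comb);

	return sz;
}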
Signed-off-by: Thomas Bartschies Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- net/key/af_key.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/net/key/af_key.c b/net/key/af_key.c index 61505b0df57d..6b7ed5568c09 100644 --- a/net/key/af_key.c +++ b/net/key/af_key.c @@ -2904,7 +2904,7 @@ static int count_ah_combs(const struct xfrm_tmpl *t) break; if (!aalg->pfkey_supported) continue; - if (aalg_tmpl_set(t, aalg)) + if (aalg_tmpl_set(t, aalg) && aalg->available) sz += sizeof(struct sadb_comb); } return sz + sizeof(struct sadb_prop); @@ -2922,7 +2922,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t) if (!ealg->pfkey_supported) continue; - if (!(ealg_tmpl_set(t, ealg))) + if (!(ealg_tmpl_set(t, ealg) && ealg->available)) continue; for (k = 1; ; k++) { @@ -2933,7 +2933,7 @@ static int count_esp_combs(const struct xfrm_tmpl *t) if (!aalg->pfkey_supported) continue; - if (aalg_tmpl_set(t, aalg)) + if (aalg_tmpl_set(t, aalg) && aalg->available) sz += sizeof(struct sadb_comb); } } From 640397afdf6ebfac558ed8340bac4bfd05f06c53 Mon Sep 17 00:00:00 2001 From: Lin Ma Date: Wed, 18 May 2022 18:53:21 +0800 Subject: [PATCH 0265/1842] nfc: pn533: Fix buggy cleanup order [ Upstream commit b8cedb7093b2d1394cae9b86494cba4b62d3a30a ] When removing the pn533 device (i2c or USB), there is a logic error. The original code first cancels the worker (flush_delayed_work) and then destroys the workqueue (destroy_workqueue), leaving the timer the last one to be deleted (del_timer). This result in a possible race condition in a multi-core preempt-able kernel. That is, if the cleanup (pn53x_common_clean) is concurrently run with the timer handler (pn533_listen_mode_timer), the timer can queue the poll_work to the already destroyed workqueue, causing use-after-free. This patch reorder the cleanup: it uses the del_timer_sync to make sure the handler is finished before the routine will destroy the workqueue. Note that the timer cannot be activated by the worker again. static void pn533_wq_poll(struct work_struct *work) ... rc = pn533_send_poll_frame(dev); if (rc) return; if (cur_mod->len == 0 && dev->poll_mod_count > 1) mod_timer(&dev->listen_timer, ...); That is, the mod_timer can be called only when pn533_send_poll_frame() returns no error, which is impossible because the device is detaching and the lower driver should return ENODEV code. Signed-off-by: Lin Ma Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/nfc/pn533/pn533.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/nfc/pn533/pn533.c b/drivers/nfc/pn533/pn533.c index d2c011615775..8d7e29d953b7 100644 --- a/drivers/nfc/pn533/pn533.c +++ b/drivers/nfc/pn533/pn533.c @@ -2844,13 +2844,14 @@ void pn53x_common_clean(struct pn533 *priv) { struct pn533_cmd *cmd, *n; + /* delete the timer before cleanup the worker */ + del_timer_sync(&priv->listen_timer); + flush_delayed_work(&priv->poll_work); destroy_workqueue(priv->wq); skb_queue_purge(&priv->resp_q); - del_timer(&priv->listen_timer); - list_for_each_entry_safe(cmd, n, &priv->cmd_queue, queue) { list_del(&cmd->queue); kfree(cmd); From 828309eee5b639394c84dca27a712c8a714819d0 Mon Sep 17 00:00:00 2001 From: Joel Stanley Date: Tue, 17 May 2022 18:52:17 +0930 Subject: [PATCH 0266/1842] net: ftgmac100: Disable hardware checksum on AST2600 [ Upstream commit 6fd45e79e8b93b8d22fb8fe22c32fbad7e9190bd ] The AST2600 when using the i210 NIC over NC-SI has been observed to produce incorrect checksum results with specific MTU values. 
This was first observed when sending data across a long distance set of networks. On a local network, the following test was performed using a 1MB file of random data. On the receiver run this script: #!/bin/bash while [ 1 ]; do # Zero the stats nstat -r > /dev/null nc -l 9899 > test-file # Check for checksum errors TcpInCsumErrors=$(nstat | grep TcpInCsumErrors) if [ -z "$TcpInCsumErrors" ]; then echo No TcpInCsumErrors else echo TcpInCsumErrors = $TcpInCsumErrors fi done On an AST2600 system: # nc 9899 < test-file The test was repeated with various MTU values: # ip link set mtu 1410 dev eth0 The observed results: 1500 - good 1434 - bad 1400 - good 1410 - bad 1420 - good The test was repeated after disabling tx checksumming: # ethtool -K eth0 tx-checksumming off And all MTU values tested resulted in transfers without error. An issue with the driver cannot be ruled out, however there has been no bug discovered so far. David has done the work to take the original bug report of slow data transfer between long distance connections and triaged it down to this test case. The vendor suspects this this is a hardware issue when using NC-SI. The fixes line refers to the patch that introduced AST2600 support. Reported-by: David Wilder Reviewed-by: Dylan Hung Signed-off-by: Joel Stanley Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/faraday/ftgmac100.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c index 5bc11d1bb9df..eea4bd3116e8 100644 --- a/drivers/net/ethernet/faraday/ftgmac100.c +++ b/drivers/net/ethernet/faraday/ftgmac100.c @@ -1893,6 +1893,11 @@ static int ftgmac100_probe(struct platform_device *pdev) /* AST2400 doesn't have working HW checksum generation */ if (np && (of_device_is_compatible(np, "aspeed,ast2400-mac"))) netdev->hw_features &= ~NETIF_F_HW_CSUM; + + /* AST2600 tx checksum with NCSI is broken */ + if (priv->use_ncsi && of_device_is_compatible(np, "aspeed,ast2600-mac")) + netdev->hw_features &= ~NETIF_F_HW_CSUM; + if (np && of_get_property(np, "no-hw-checksum", NULL)) netdev->hw_features &= ~(NETIF_F_HW_CSUM | NETIF_F_RXCSUM); netdev->features |= netdev->hw_features; From f0749aecb20b2d8fbc600a4467f29c6572e4f434 Mon Sep 17 00:00:00 2001 From: Mika Westerberg Date: Wed, 27 Apr 2022 13:19:10 +0300 Subject: [PATCH 0267/1842] i2c: ismt: Provide a DMA buffer for Interrupt Cause Logging [ Upstream commit 17a0f3acdc6ec8b89ad40f6e22165a4beee25663 ] Before sending a MSI the hardware writes information pertinent to the interrupt cause to a memory location pointed by SMTICL register. This memory holds three double words where the least significant bit tells whether the interrupt cause of master/target/error is valid. The driver does not use this but we need to set it up because otherwise it will perform DMA write to the default address (0) and this will cause an IOMMU fault such as below: DMAR: DRHD: handling fault status reg 2 DMAR: [DMA Write] Request device [00:12.0] PASID ffffffff fault addr 0 [fault reason 05] PTE Write access is not set To prevent this from happening, provide a proper DMA buffer for this that then gets mapped by the IOMMU accordingly. 
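Condensed, the fix follows a common pattern for hardware that logs status into host memory: allocate a small coherent buffer with the device-managed DMA API at init time and program its bus address into the controller, so the device never performs its status writes to the default address 0. The sketch below is illustrative only and assumes a memory-mapped register that takes a 64-bit bus address; the helper and parameter names are made up.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/io.h>
#include <linux/types.h>

#define EXAMPLE_LOG_ENTRIES	3	/* one dword per interrupt cause */

/* Illustrative init-time setup; not the driver's exact code. */
static int example_setup_intr_log(struct device *dev,
				  void __iomem *log_addr_reg,
				  u32 **log, dma_addr_t *log_dma)
{
	/* Device-managed allocation: freed automatically on unbind. */
	*log = dmam_alloc_coherent(dev, EXAMPLE_LOG_ENTRIES * sizeof(u32),
				   log_dma, GFP_KERNEL);
	if (!*log)
		return -ENOMEM;

	/* Tell the controller where to DMA its interrupt-cause log. */
	writeq(*log_dma, log_addr_reg);

	return 0;
}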
Signed-off-by: Mika Westerberg Reviewed-by: From: Andy Shevchenko Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-ismt.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/drivers/i2c/busses/i2c-ismt.c b/drivers/i2c/busses/i2c-ismt.c index a35a27c320e7..3d2d92640651 100644 --- a/drivers/i2c/busses/i2c-ismt.c +++ b/drivers/i2c/busses/i2c-ismt.c @@ -82,6 +82,7 @@ #define ISMT_DESC_ENTRIES 2 /* number of descriptor entries */ #define ISMT_MAX_RETRIES 3 /* number of SMBus retries to attempt */ +#define ISMT_LOG_ENTRIES 3 /* number of interrupt cause log entries */ /* Hardware Descriptor Constants - Control Field */ #define ISMT_DESC_CWRL 0x01 /* Command/Write Length */ @@ -175,6 +176,8 @@ struct ismt_priv { u8 head; /* ring buffer head pointer */ struct completion cmp; /* interrupt completion */ u8 buffer[I2C_SMBUS_BLOCK_MAX + 16]; /* temp R/W data buffer */ + dma_addr_t log_dma; + u32 *log; }; static const struct pci_device_id ismt_ids[] = { @@ -409,6 +412,9 @@ static int ismt_access(struct i2c_adapter *adap, u16 addr, memset(desc, 0, sizeof(struct ismt_desc)); desc->tgtaddr_rw = ISMT_DESC_ADDR_RW(addr, read_write); + /* Always clear the log entries */ + memset(priv->log, 0, ISMT_LOG_ENTRIES * sizeof(u32)); + /* Initialize common control bits */ if (likely(pci_dev_msi_enabled(priv->pci_dev))) desc->control = ISMT_DESC_INT | ISMT_DESC_FAIR; @@ -693,6 +699,8 @@ static void ismt_hw_init(struct ismt_priv *priv) /* initialize the Master Descriptor Base Address (MDBA) */ writeq(priv->io_rng_dma, priv->smba + ISMT_MSTR_MDBA); + writeq(priv->log_dma, priv->smba + ISMT_GR_SMTICL); + /* initialize the Master Control Register (MCTRL) */ writel(ISMT_MCTRL_MEIE, priv->smba + ISMT_MSTR_MCTRL); @@ -780,6 +788,12 @@ static int ismt_dev_init(struct ismt_priv *priv) priv->head = 0; init_completion(&priv->cmp); + priv->log = dmam_alloc_coherent(&priv->pci_dev->dev, + ISMT_LOG_ENTRIES * sizeof(u32), + &priv->log_dma, GFP_KERNEL); + if (!priv->log) + return -ENOMEM; + return 0; } From 5525af175be2184e0c87e268bd17c81235662f7d Mon Sep 17 00:00:00 2001 From: Piyush Malgujar Date: Wed, 11 May 2022 06:36:59 -0700 Subject: [PATCH 0268/1842] drivers: i2c: thunderx: Allow driver to work with ACPI defined TWSI controllers [ Upstream commit 03a35bc856ddc09f2cc1f4701adecfbf3b464cb3 ] Due to i2c->adap.dev.fwnode not being set, ACPI_COMPANION() wasn't properly found for TWSI controllers. Signed-off-by: Szymon Balcerak Signed-off-by: Piyush Malgujar Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-thunderx-pcidrv.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/i2c/busses/i2c-thunderx-pcidrv.c b/drivers/i2c/busses/i2c-thunderx-pcidrv.c index 12c90aa0900e..a77cd86fe75e 100644 --- a/drivers/i2c/busses/i2c-thunderx-pcidrv.c +++ b/drivers/i2c/busses/i2c-thunderx-pcidrv.c @@ -213,6 +213,7 @@ static int thunder_i2c_probe_pci(struct pci_dev *pdev, i2c->adap.bus_recovery_info = &octeon_i2c_recovery_info; i2c->adap.dev.parent = dev; i2c->adap.dev.of_node = pdev->dev.of_node; + i2c->adap.dev.fwnode = dev->fwnode; snprintf(i2c->adap.name, sizeof(i2c->adap.name), "Cavium ThunderX i2c adapter at %s", dev_name(dev)); i2c_set_adapdata(&i2c->adap, i2c); From ea62d169b6e731e0b54abda1d692406f6bc6a696 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Wed, 25 May 2022 10:36:38 +0200 Subject: [PATCH 0269/1842] netfilter: nf_tables: disallow non-stateful expression in sets earlier commit 520778042ccca019f3ffa136dd0ca565c486cedd upstream. 
Since 3e135cd499bf ("netfilter: nft_dynset: dynamic stateful expression instantiation"), it is possible to attach stateful expressions to set elements. cd5125d8f518 ("netfilter: nf_tables: split set destruction in deactivate and destroy phase") introduces conditional destruction on the object to accomodate transaction semantics. nft_expr_init() calls expr->ops->init() first, then check for NFT_STATEFUL_EXPR, this stills allows to initialize a non-stateful lookup expressions which points to a set, which might lead to UAF since the set is not properly detached from the set->binding for this case. Anyway, this combination is non-sense from nf_tables perspective. This patch fixes this problem by checking for NFT_STATEFUL_EXPR before expr->ops->init() is called. The reporter provides a KASAN splat and a poc reproducer (similar to those autogenerated by syzbot to report use-after-free errors). It is unknown to me if they are using syzbot or if they use similar automated tool to locate the bug that they are reporting. For the record, this is the KASAN splat. [ 85.431824] ================================================================== [ 85.432901] BUG: KASAN: use-after-free in nf_tables_bind_set+0x81b/0xa20 [ 85.433825] Write of size 8 at addr ffff8880286f0e98 by task poc/776 [ 85.434756] [ 85.434999] CPU: 1 PID: 776 Comm: poc Tainted: G W 5.18.0+ #2 [ 85.436023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 Fixes: 0b2d8a7b638b ("netfilter: nf_tables: add helper functions for expression handling") Reported-and-tested-by: Aaron Adams Signed-off-by: Pablo Neira Ayuso Signed-off-by: Greg Kroah-Hartman --- net/netfilter/nf_tables_api.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index fdd1da9ecea9..c837ce4e5fe6 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -2679,27 +2679,31 @@ static struct nft_expr *nft_expr_init(const struct nft_ctx *ctx, err = nf_tables_expr_parse(ctx, nla, &info); if (err < 0) - goto err1; + goto err_expr_parse; + + err = -EOPNOTSUPP; + if (!(info.ops->type->flags & NFT_EXPR_STATEFUL)) + goto err_expr_stateful; err = -ENOMEM; expr = kzalloc(info.ops->size, GFP_KERNEL); if (expr == NULL) - goto err2; + goto err_expr_stateful; err = nf_tables_newexpr(ctx, &info, expr); if (err < 0) - goto err3; + goto err_expr_new; return expr; -err3: +err_expr_new: kfree(expr); -err2: +err_expr_stateful: owner = info.ops->type->owner; if (info.ops->type->release_ops) info.ops->type->release_ops(info.ops); module_put(owner); -err1: +err_expr_parse: return ERR_PTR(err); } @@ -5055,9 +5059,6 @@ struct nft_expr *nft_set_elem_expr_alloc(const struct nft_ctx *ctx, return expr; err = -EOPNOTSUPP; - if (!(expr->ops->type->flags & NFT_EXPR_STATEFUL)) - goto err_set_elem_expr; - if (expr->ops->type->flags & NFT_EXPR_GC) { if (set->flags & NFT_SET_TIMEOUT) goto err_set_elem_expr; From cd720fad8b574f449fef015514542bd7455abde5 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 29 Apr 2022 14:38:01 -0700 Subject: [PATCH 0270/1842] pipe: make poll_usage boolean and annotate its access commit f485922d8fe4e44f6d52a5bb95a603b7c65554bb upstream. Patch series "Fix data-races around epoll reported by KCSAN." This series suppresses a false positive KCSAN's message and fixes a real data-race. This patch (of 2): pipe_poll() runs locklessly and assigns 1 to poll_usage. Once poll_usage is set to 1, it never changes in other places. 
However, concurrent writes of a value trigger KCSAN, so let's make KCSAN happy. BUG: KCSAN: data-race in pipe_poll / pipe_poll write to 0xffff8880042f6678 of 4 bytes by task 174 on cpu 3: pipe_poll (fs/pipe.c:656) ep_item_poll.isra.0 (./include/linux/poll.h:88 fs/eventpoll.c:853) do_epoll_wait (fs/eventpoll.c:1692 fs/eventpoll.c:1806 fs/eventpoll.c:2234) __x64_sys_epoll_wait (fs/eventpoll.c:2246 fs/eventpoll.c:2241 fs/eventpoll.c:2241) do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:113) write to 0xffff8880042f6678 of 4 bytes by task 177 on cpu 1: pipe_poll (fs/pipe.c:656) ep_item_poll.isra.0 (./include/linux/poll.h:88 fs/eventpoll.c:853) do_epoll_wait (fs/eventpoll.c:1692 fs/eventpoll.c:1806 fs/eventpoll.c:2234) __x64_sys_epoll_wait (fs/eventpoll.c:2246 fs/eventpoll.c:2241 fs/eventpoll.c:2241) do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:113) Reported by Kernel Concurrency Sanitizer on: CPU: 1 PID: 177 Comm: epoll_race Not tainted 5.17.0-58927-gf443e374ae13 #6 Hardware name: Red Hat KVM, BIOS 1.11.0-2.amzn2 04/01/2014 Link: https://lkml.kernel.org/r/20220322002653.33865-1-kuniyu@amazon.co.jp Link: https://lkml.kernel.org/r/20220322002653.33865-2-kuniyu@amazon.co.jp Fixes: 3b844826b6c6 ("pipe: avoid unnecessary EPOLLET wakeups under normal loads") Signed-off-by: Kuniyuki Iwashima Cc: Alexander Duyck Cc: Al Viro Cc: Davidlohr Bueso Cc: Kuniyuki Iwashima Cc: "Soheil Hassas Yeganeh" Cc: "Sridhar Samudrala" Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- fs/pipe.c | 2 +- include/linux/pipe_fs_i.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/pipe.c b/fs/pipe.c index 9f2ca1b1c17a..f758ee946522 100644 --- a/fs/pipe.c +++ b/fs/pipe.c @@ -652,7 +652,7 @@ pipe_poll(struct file *filp, poll_table *wait) unsigned int head, tail; /* Epoll has some historical nasty semantics, this enables them */ - pipe->poll_usage = 1; + WRITE_ONCE(pipe->poll_usage, true); /* * Reading pipe state only -- no need for acquiring the semaphore. diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h index fc5642431b92..c0b6ec6bf65b 100644 --- a/include/linux/pipe_fs_i.h +++ b/include/linux/pipe_fs_i.h @@ -71,7 +71,7 @@ struct pipe_inode_info { unsigned int files; unsigned int r_counter; unsigned int w_counter; - unsigned int poll_usage; + bool poll_usage; struct page *tmp_page; struct fasync_struct *fasync_readers; struct fasync_struct *fasync_writers; From 8fbd54ab06c955d247c1a91d5d980cddc868f1e7 Mon Sep 17 00:00:00 2001 From: David Howells Date: Thu, 26 May 2022 07:34:52 +0100 Subject: [PATCH 0271/1842] pipe: Fix missing lock in pipe_resize_ring() commit 189b0ddc245139af81198d1a3637cac74f96e13a upstream. pipe_resize_ring() needs to take the pipe->rd_wait.lock spinlock to prevent post_one_notification() from trying to insert into the ring whilst the ring is being replaced. The occupancy check must be done after the lock is taken, and the lock must be taken after the new ring is allocated. The bug can lead to an oops looking something like: BUG: KASAN: use-after-free in post_one_notification.isra.0+0x62e/0x840 Read of size 4 at addr ffff88801cc72a70 by task poc/27196 ... 
Call Trace: post_one_notification.isra.0+0x62e/0x840 __post_watch_notification+0x3b7/0x650 key_create_or_update+0xb8b/0xd20 __do_sys_add_key+0x175/0x340 __x64_sys_add_key+0xbe/0x140 do_syscall_64+0x5c/0xc0 entry_SYSCALL_64_after_hwframe+0x44/0xae Reported by Selim Enes Karaduman @Enesdex working with Trend Micro Zero Day Initiative. Fixes: c73be61cede5 ("pipe: Add general notification queue support") Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-17291 Signed-off-by: David Howells Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- fs/pipe.c | 31 ++++++++++++++++++------------- 1 file changed, 18 insertions(+), 13 deletions(-) diff --git a/fs/pipe.c b/fs/pipe.c index f758ee946522..dbb090e1b026 100644 --- a/fs/pipe.c +++ b/fs/pipe.c @@ -1244,30 +1244,33 @@ unsigned int round_pipe_size(unsigned long size) /* * Resize the pipe ring to a number of slots. + * + * Note the pipe can be reduced in capacity, but only if the current + * occupancy doesn't exceed nr_slots; if it does, EBUSY will be + * returned instead. */ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots) { struct pipe_buffer *bufs; unsigned int head, tail, mask, n; - /* - * We can shrink the pipe, if arg is greater than the ring occupancy. - * Since we don't expect a lot of shrink+grow operations, just free and - * allocate again like we would do for growing. If the pipe currently - * contains more buffers than arg, then return busy. - */ - mask = pipe->ring_size - 1; - head = pipe->head; - tail = pipe->tail; - n = pipe_occupancy(pipe->head, pipe->tail); - if (nr_slots < n) - return -EBUSY; - bufs = kcalloc(nr_slots, sizeof(*bufs), GFP_KERNEL_ACCOUNT | __GFP_NOWARN); if (unlikely(!bufs)) return -ENOMEM; + spin_lock_irq(&pipe->rd_wait.lock); + mask = pipe->ring_size - 1; + head = pipe->head; + tail = pipe->tail; + + n = pipe_occupancy(head, tail); + if (nr_slots < n) { + spin_unlock_irq(&pipe->rd_wait.lock); + kfree(bufs); + return -EBUSY; + } + /* * The pipe array wraps around, so just start the new one at zero * and adjust the indices. @@ -1299,6 +1302,8 @@ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots) pipe->tail = tail; pipe->head = head; + spin_unlock_irq(&pipe->rd_wait.lock); + /* This might have made more room for writers */ wake_up_interruptible(&pipe->wr_wait); return 0; From b96b4aa65bbc0364ea44807f3a32b8d862008aa6 Mon Sep 17 00:00:00 2001 From: Miri Korenblit Date: Fri, 18 Jun 2021 13:41:46 +0300 Subject: [PATCH 0272/1842] cfg80211: set custom regdomain after wiphy registration commit 1b7b3ac8ff3317cdcf07a1c413de9bdb68019c2b upstream. We used to set regulatory info before the registration of the device and then the regulatory info didn't get set, because the device isn't registered so there isn't a device to set the regulatory info for. So set the regulatory info after the device registration. Call reg_process_self_managed_hints() once again after the device registration because it does nothing before it. 
Signed-off-by: Miri Korenblit Signed-off-by: Luca Coelho Link: https://lore.kernel.org/r/iwlwifi.20210618133832.c96eadcffe80.I86799c2c866b5610b4cf91115c21d8ceb525c5aa@changeid Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman --- net/wireless/core.c | 8 ++++---- net/wireless/reg.c | 1 + 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/net/wireless/core.c b/net/wireless/core.c index 3f4554723761..3b25b78896a2 100644 --- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -5,7 +5,7 @@ * Copyright 2006-2010 Johannes Berg * Copyright 2013-2014 Intel Mobile Communications GmbH * Copyright 2015-2017 Intel Deutschland GmbH - * Copyright (C) 2018-2020 Intel Corporation + * Copyright (C) 2018-2021 Intel Corporation */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt @@ -918,9 +918,6 @@ int wiphy_register(struct wiphy *wiphy) return res; } - /* set up regulatory info */ - wiphy_regulatory_register(wiphy); - list_add_rcu(&rdev->list, &cfg80211_rdev_list); cfg80211_rdev_list_generation++; @@ -931,6 +928,9 @@ int wiphy_register(struct wiphy *wiphy) cfg80211_debugfs_rdev_add(rdev); nl80211_notify_wiphy(rdev, NL80211_CMD_NEW_WIPHY); + /* set up regulatory info */ + wiphy_regulatory_register(wiphy); + if (wiphy->regulatory_flags & REGULATORY_CUSTOM_REG) { struct regulatory_request request; diff --git a/net/wireless/reg.c b/net/wireless/reg.c index a04fdfb35f07..6b3386e1d93a 100644 --- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -4001,6 +4001,7 @@ void wiphy_regulatory_register(struct wiphy *wiphy) wiphy_update_regulatory(wiphy, lr->initiator); wiphy_all_share_dfs_chan_state(wiphy); + reg_process_self_managed_hints(); } void wiphy_regulatory_deregister(struct wiphy *wiphy) From 6029f86740c92c182ff29b34b3c40bb5462050a1 Mon Sep 17 00:00:00 2001 From: Stephen Brennan Date: Thu, 19 May 2022 09:50:30 +0100 Subject: [PATCH 0273/1842] assoc_array: Fix BUG_ON during garbage collect commit d1dc87763f406d4e67caf16dbe438a5647692395 upstream. A rare BUG_ON triggered in assoc_array_gc: [3430308.818153] kernel BUG at lib/assoc_array.c:1609! Which corresponded to the statement currently at line 1593 upstream: BUG_ON(assoc_array_ptr_is_meta(p)); Using the data from the core dump, I was able to generate a userspace reproducer[1] and determine the cause of the bug. [1]: https://github.com/brenns10/kernel_stuff/tree/master/assoc_array_gc After running the iterator on the entire branch, an internal tree node looked like the following: NODE (nr_leaves_on_branch: 3) SLOT [0] NODE (2 leaves) SLOT [1] NODE (1 leaf) SLOT [2..f] NODE (empty) In the userspace reproducer, the pr_devel output when compressing this node was: -- compress node 0x5607cc089380 -- free=0, leaves=0 [0] retain node 2/1 [nx 0] [1] fold node 1/1 [nx 0] [2] fold node 0/1 [nx 2] [3] fold node 0/2 [nx 2] [4] fold node 0/3 [nx 2] [5] fold node 0/4 [nx 2] [6] fold node 0/5 [nx 2] [7] fold node 0/6 [nx 2] [8] fold node 0/7 [nx 2] [9] fold node 0/8 [nx 2] [10] fold node 0/9 [nx 2] [11] fold node 0/10 [nx 2] [12] fold node 0/11 [nx 2] [13] fold node 0/12 [nx 2] [14] fold node 0/13 [nx 2] [15] fold node 0/14 [nx 2] after: 3 At slot 0, an internal node with 2 leaves could not be folded into the node, because there was only one available slot (slot 0). Thus, the internal node was retained. At slot 1, the node had one leaf, and was able to be folded in successfully. The remaining nodes had no leaves, and so were removed. By the end of the compression stage, there were 14 free slots, and only 3 leaf nodes. 
The tree was ascended and then its parent node was compressed. When this node was seen, it could not be folded, due to the internal node it contained. The invariant for compression in this function is: whenever nr_leaves_on_branch < ASSOC_ARRAY_FAN_OUT, the node should contain all leaf nodes. The compression step currently cannot guarantee this, given the corner case shown above. To fix this issue, retry compression whenever we have retained a node, and yet nr_leaves_on_branch < ASSOC_ARRAY_FAN_OUT. This second compression will then allow the node in slot 1 to be folded in, satisfying the invariant. Below is the output of the reproducer once the fix is applied: -- compress node 0x560e9c562380 -- free=0, leaves=0 [0] retain node 2/1 [nx 0] [1] fold node 1/1 [nx 0] [2] fold node 0/1 [nx 2] [3] fold node 0/2 [nx 2] [4] fold node 0/3 [nx 2] [5] fold node 0/4 [nx 2] [6] fold node 0/5 [nx 2] [7] fold node 0/6 [nx 2] [8] fold node 0/7 [nx 2] [9] fold node 0/8 [nx 2] [10] fold node 0/9 [nx 2] [11] fold node 0/10 [nx 2] [12] fold node 0/11 [nx 2] [13] fold node 0/12 [nx 2] [14] fold node 0/13 [nx 2] [15] fold node 0/14 [nx 2] internal nodes remain despite enough space, retrying -- compress node 0x560e9c562380 -- free=14, leaves=1 [0] fold node 2/15 [nx 0] after: 3 Changes ======= DH: - Use false instead of 0. - Reorder the inserted lines in a couple of places to put retained before next_slot. ver #2) - Fix typo in pr_devel, correct comparison to "<=" Fixes: 3cb989501c26 ("Add a generic associative array implementation.") Cc: Signed-off-by: Stephen Brennan Signed-off-by: David Howells cc: Andrew Morton cc: keyrings@vger.kernel.org Link: https://lore.kernel.org/r/20220511225517.407935-1-stephen.s.brennan@oracle.com/ # v1 Link: https://lore.kernel.org/r/20220512215045.489140-1-stephen.s.brennan@oracle.com/ # v2 Reviewed-by: Jarkko Sakkinen Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- lib/assoc_array.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/lib/assoc_array.c b/lib/assoc_array.c index 6f4bcf524554..b537a83678e1 100644 --- a/lib/assoc_array.c +++ b/lib/assoc_array.c @@ -1462,6 +1462,7 @@ int assoc_array_gc(struct assoc_array *array, struct assoc_array_ptr *cursor, *ptr; struct assoc_array_ptr *new_root, *new_parent, **new_ptr_pp; unsigned long nr_leaves_on_tree; + bool retained; int keylen, slot, nr_free, next_slot, i; pr_devel("-->%s()\n", __func__); @@ -1538,6 +1539,7 @@ int assoc_array_gc(struct assoc_array *array, goto descend; } +retry_compress: pr_devel("-- compress node %p --\n", new_n); /* Count up the number of empty slots in this node and work out the @@ -1555,6 +1557,7 @@ int assoc_array_gc(struct assoc_array *array, pr_devel("free=%d, leaves=%lu\n", nr_free, new_n->nr_leaves_on_branch); /* See what we can fold in */ + retained = false; next_slot = 0; for (slot = 0; slot < ASSOC_ARRAY_FAN_OUT; slot++) { struct assoc_array_shortcut *s; @@ -1604,9 +1607,14 @@ int assoc_array_gc(struct assoc_array *array, pr_devel("[%d] retain node %lu/%d [nx %d]\n", slot, child->nr_leaves_on_branch, nr_free + 1, next_slot); + retained = true; } } + if (retained && new_n->nr_leaves_on_branch <= ASSOC_ARRAY_FAN_OUT) { + pr_devel("internal nodes remain despite enough space, retrying\n"); + goto retry_compress; + } pr_devel("after: %lu\n", new_n->nr_leaves_on_branch); nr_leaves_on_tree = new_n->nr_leaves_on_branch; From 57d01bcae7041cfb86553091718d12bf36c082aa Mon Sep 17 00:00:00 2001 From: Pavel Begunkov Date: Fri, 3 Jun 2022 13:17:04 +0100 Subject: [PATCH 0274/1842] 
io_uring: don't re-import iovecs from callbacks We can't re-import or modify iterators from iocb callbacks, it's not safe as it might be reverted and/or reexpanded while unwinding the stack. It's also not safe to resubmit, as the io-wq thread will race with stack unwinding for the iterator and other data. Disallow resubmission from callbacks; it can fail some cases that were handled before, but the possibility of such a failure was a part of the API from the beginning and so it should be fine. Signed-off-by: Pavel Begunkov Signed-off-by: Greg Kroah-Hartman --- fs/io_uring.c | 39 --------------------------------------- 1 file changed, 39 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index 3ecf71151fb1..0b9724c0a2b8 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -2579,45 +2579,6 @@ static void io_complete_rw_common(struct kiocb *kiocb, long res, #ifdef CONFIG_BLOCK static bool io_resubmit_prep(struct io_kiocb *req, int error) { - struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; - ssize_t ret = -ECANCELED; - struct iov_iter iter; - int rw; - - if (error) { - ret = error; - goto end_req; - } - - switch (req->opcode) { - case IORING_OP_READV: - case IORING_OP_READ_FIXED: - case IORING_OP_READ: - rw = READ; - break; - case IORING_OP_WRITEV: - case IORING_OP_WRITE_FIXED: - case IORING_OP_WRITE: - rw = WRITE; - break; - default: - printk_once(KERN_WARNING "io_uring: bad opcode in resubmit %d\n", - req->opcode); - goto end_req; - } - - if (!req->async_data) { - ret = io_import_iovec(rw, req, &iovec, &iter, false); - if (ret < 0) - goto end_req; - ret = io_setup_async_rw(req, iovec, inline_vecs, &iter, false); - if (!ret) - return true; - kfree(iovec); - } else { - return true; - } -end_req: req_set_fail_links(req); return false; } From 8adb751d294ed3b668f1c7e41bd7ebe49002a744 Mon Sep 17 00:00:00 2001 From: Pavel Begunkov Date: Fri, 3 Jun 2022 13:17:05 +0100 Subject: [PATCH 0275/1842] io_uring: fix using under-expanded iters [ upstream commit cd65869512ab5668a5d16f789bc4da1319c435c4 ] The issue was first described and addressed in 89c2b3b7491820 ("io_uring: reexpand under-reexpanded iters"), but shortly after reimplemented in cd65869512ab56 ("io_uring: use iov_iter state save/restore helpers"). Here we follow the approach from the second patch but without in-callback resubmissions or fixups for short read retries (which 5.10 does not support yet), and with iov_iter_state replaced by plain iter copies so as not to pull in even more dependencies, and because it's just much simpler.
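A minimal sketch of the save-and-restore idea described above, with a plain cursor structure standing in for struct iov_iter: take a full copy before the import/issue step and, when the request has to be retried, assign the copy back instead of computing how far to revert. Only the snapshot/restore pattern comes from the patch; the cursor type and the fake partially-consuming operation are invented for the example.

/* toy illustration of "save a copy, restore on retry"; not io_uring code */
#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct cursor {
	const char *pos;
	size_t remaining;
};

/* pretend I/O step: consumes part of the input and then asks to be retried,
 * much like a submission that eats iovec bytes before returning -EAGAIN */
static int try_consume(struct cursor *c, size_t want)
{
	size_t got = want / 2;

	c->pos += got;
	c->remaining -= got;
	return -EAGAIN;
}

int main(void)
{
	char buf[16] = "payload";
	struct cursor c = { buf, sizeof(buf) };
	struct cursor saved = c;	/* plain struct copy, like iter_cp in the diff */

	if (try_consume(&c, 8) == -EAGAIN)
		c = saved;		/* restore wholesale instead of reverting */

	printf("remaining after restore: %zu\n", c.remaining);
	return 0;
}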
Signed-off-by: Pavel Begunkov Signed-off-by: Greg Kroah-Hartman --- fs/io_uring.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index 0b9724c0a2b8..871475d3fca2 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -3389,6 +3389,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock, struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; struct kiocb *kiocb = &req->rw.kiocb; struct iov_iter __iter, *iter = &__iter; + struct iov_iter iter_cp; struct io_async_rw *rw = req->async_data; ssize_t io_size, ret, ret2; bool no_async; @@ -3399,6 +3400,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock, ret = io_import_iovec(READ, req, &iovec, iter, !force_nonblock); if (ret < 0) return ret; + iter_cp = *iter; io_size = iov_iter_count(iter); req->result = io_size; ret = 0; @@ -3434,7 +3436,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock, if (req->file->f_flags & O_NONBLOCK) goto done; /* some cases will consume bytes even on error returns */ - iov_iter_revert(iter, io_size - iov_iter_count(iter)); + *iter = iter_cp; ret = 0; goto copy_iov; } else if (ret < 0) { @@ -3517,6 +3519,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock, struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; struct kiocb *kiocb = &req->rw.kiocb; struct iov_iter __iter, *iter = &__iter; + struct iov_iter iter_cp; struct io_async_rw *rw = req->async_data; ssize_t ret, ret2, io_size; @@ -3526,6 +3529,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock, ret = io_import_iovec(WRITE, req, &iovec, iter, !force_nonblock); if (ret < 0) return ret; + iter_cp = *iter; io_size = iov_iter_count(iter); req->result = io_size; @@ -3587,7 +3591,7 @@ static int io_write(struct io_kiocb *req, bool force_nonblock, } else { copy_iov: /* some cases will consume bytes even on error returns */ - iov_iter_revert(iter, io_size - iov_iter_count(iter)); + *iter = iter_cp; ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false); if (!ret) return -EAGAIN; From ffc8d613876f0225ac3cfe047fd0ab31623825cf Mon Sep 17 00:00:00 2001 From: Alex Elder Date: Thu, 21 Apr 2022 13:53:33 -0500 Subject: [PATCH 0276/1842] net: ipa: compute proper aggregation limit commit c5794097b269f15961ed78f7f27b50e51766dec9 upstream. The aggregation byte limit for an endpoint is currently computed based on the endpoint's receive buffer size. However, some bytes at the front of each receive buffer are reserved on the assumption that--as with SKBs--it might be useful to insert data (such as headers) before what lands in the buffer. The aggregation byte limit currently doesn't take into account that reserved space, and as a result, aggregation could require space past that which is available in the buffer. Fix this by reducing the size used to compute the aggregation byte limit by the NET_SKB_PAD offset reserved for each receive buffer. Signed-off-by: Alex Elder Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- drivers/net/ipa/ipa_endpoint.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c index 621648ce750b..eb25a13042ea 100644 --- a/drivers/net/ipa/ipa_endpoint.c +++ b/drivers/net/ipa/ipa_endpoint.c @@ -610,12 +610,14 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint) if (endpoint->data->aggregation) { if (!endpoint->toward_ipa) { + u32 buffer_size; u32 limit; val |= u32_encode_bits(IPA_ENABLE_AGGR, AGGR_EN_FMASK); val |= u32_encode_bits(IPA_GENERIC, AGGR_TYPE_FMASK); - limit = ipa_aggr_size_kb(IPA_RX_BUFFER_SIZE); + buffer_size = IPA_RX_BUFFER_SIZE - NET_SKB_PAD; + limit = ipa_aggr_size_kb(buffer_size); val |= u32_encode_bits(limit, AGGR_BYTE_LIMIT_FMASK); limit = IPA_AGGR_TIME_LIMIT_DEFAULT; From f20e67b455e425a0d3d03f27bda5fdd32dc2c324 Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Fri, 27 May 2022 16:02:15 +0300 Subject: [PATCH 0277/1842] xfs: detect overflows in bmbt records commit acf104c2331c1ba2a667e65dd36139d1555b1432 upstream. Detect file block mappings with a blockcount that's either so large that integer overflows occur or are zero, because neither are valid in the filesystem. Worse yet, attempting directory modifications causes the iext code to trip over the bmbt key handling and takes the filesystem down. We can fix most of this by preventing the bad metadata from entering the incore structures in the first place. Found by setting blockcount=0 in a directory data fork mapping and watching the fireworks. Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/libxfs/xfs_bmap.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index d9a692484eae..de9c27ef68d8 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -6229,6 +6229,11 @@ xfs_bmap_validate_extent( xfs_fsblock_t endfsb; bool isrt; + if (irec->br_startblock + irec->br_blockcount <= irec->br_startblock) + return __this_address; + if (irec->br_startoff + irec->br_blockcount <= irec->br_startoff) + return __this_address; + isrt = XFS_IS_REALTIME_INODE(ip); endfsb = irec->br_startblock + irec->br_blockcount - 1; if (isrt && whichfork == XFS_DATA_FORK) { From 45d97f70da4d47f303a5202dba124c1d0e9120c3 Mon Sep 17 00:00:00 2001 From: Kaixu Xia Date: Fri, 27 May 2022 16:02:16 +0300 Subject: [PATCH 0278/1842] xfs: show the proper user quota options commit 237d7887ae723af7d978e8b9a385fdff416f357b upstream. The quota option 'usrquota' should be shown if both the XFS_UQUOTA_ACCT and XFS_UQUOTA_ENFD flags are set. The option 'uqnoenforce' should be shown when only the XFS_UQUOTA_ACCT flag is set. The current code logic seems wrong, Fix it and show proper options. Signed-off-by: Kaixu Xia Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. 
Wong Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_super.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index e3e229e52512..5ebd6cdc44a7 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -199,10 +199,12 @@ xfs_fs_show_options( seq_printf(m, ",swidth=%d", (int)XFS_FSB_TO_BB(mp, mp->m_swidth)); - if (mp->m_qflags & (XFS_UQUOTA_ACCT|XFS_UQUOTA_ENFD)) - seq_puts(m, ",usrquota"); - else if (mp->m_qflags & XFS_UQUOTA_ACCT) - seq_puts(m, ",uqnoenforce"); + if (mp->m_qflags & XFS_UQUOTA_ACCT) { + if (mp->m_qflags & XFS_UQUOTA_ENFD) + seq_puts(m, ",usrquota"); + else + seq_puts(m, ",uqnoenforce"); + } if (mp->m_qflags & XFS_PQUOTA_ACCT) { if (mp->m_qflags & XFS_PQUOTA_ENFD) From 72464fd2b4b76fe924fd8c3d5bfe9b10a61aaa1c Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Fri, 27 May 2022 16:02:17 +0300 Subject: [PATCH 0279/1842] xfs: fix the forward progress assertion in xfs_iwalk_run_callbacks commit a5336d6bb2d02d0e9d4d3c8be04b80b8b68d56c8 upstream. In commit 27c14b5daa82 we started tracking the last inode seen during an inode walk to avoid infinite loops if a corrupt inobt record happens to have a lower ir_startino than the record preceeding it. Unfortunately, the assertion trips over the case where there are completely empty inobt records (which can happen quite easily on 64k page filesystems) because we advance the tracking cursor without actually putting the empty record into the processing buffer. Fix the assert to allow for this case. Reported-by: zlang@redhat.com Fixes: 27c14b5daa82 ("xfs: ensure inobt record walks always make forward progress") Signed-off-by: Darrick J. Wong Reviewed-by: Zorro Lang Reviewed-by: Dave Chinner Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_iwalk.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/xfs/xfs_iwalk.c b/fs/xfs/xfs_iwalk.c index 2a45138831e3..eae3aff9bc97 100644 --- a/fs/xfs/xfs_iwalk.c +++ b/fs/xfs/xfs_iwalk.c @@ -363,7 +363,7 @@ xfs_iwalk_run_callbacks( /* Delete cursor but remember the last record we cached... */ xfs_iwalk_del_inobt(tp, curpp, agi_bpp, 0); irec = &iwag->recs[iwag->nr_recs - 1]; - ASSERT(next_agino == irec->ir_startino + XFS_INODES_PER_CHUNK); + ASSERT(next_agino >= irec->ir_startino + XFS_INODES_PER_CHUNK); error = xfs_iwalk_ag_recs(iwag); if (error) From a9e7f19a5577894242e09dab6dcfc8064e634a1c Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Fri, 27 May 2022 16:02:18 +0300 Subject: [PATCH 0280/1842] xfs: fix an ABBA deadlock in xfs_rename commit 6da1b4b1ab36d80a3994fd4811c8381de10af604 upstream. When overlayfs is running on top of xfs and the user unlinks a file in the overlay, overlayfs will create a whiteout inode and ask xfs to "rename" the whiteout file atop the one being unlinked. If the file being unlinked loses its one nlink, we then have to put the inode on the unlinked list. This requires us to grab the AGI buffer of the whiteout inode to take it off the unlinked list (which is where whiteouts are created) and to grab the AGI buffer of the file being deleted. If the whiteout was created in a higher numbered AG than the file being deleted, we'll lock the AGIs in the wrong order and deadlock. Therefore, grab all the AGI locks we think we'll need ahead of time, and in order of increasing AG number per the locking rules. 
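A toy rendering of the locking rule this patch enforces, not the XFS code itself: work out every per-AG AGI lock the operation might need, then take them in ascending AG order before anything later in the transaction grabs AGF locks. In xfs_rename() the inode array is already sorted, so the new loop naturally reads the AGIs lowest AG first; the explicit qsort() below exists only because the toy has no pre-sorted array.

/* toy sketch of "take multi-AG locks in ascending order"; names invented */
#include <pthread.h>
#include <stdlib.h>

#define NR_AG 4

static pthread_mutex_t agi_lock[NR_AG] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

static int cmp_uint(const void *a, const void *b)
{
	unsigned int x = *(const unsigned int *)a, y = *(const unsigned int *)b;

	return x < y ? -1 : x > y;
}

/* lock the AGIs backing the inodes involved, lowest AG number first */
static void lock_agis(unsigned int *agnos, int n)
{
	int i;

	qsort(agnos, n, sizeof(*agnos), cmp_uint);
	for (i = 0; i < n; i++) {
		if (i > 0 && agnos[i] == agnos[i - 1])
			continue;	/* same AG, lock already held */
		pthread_mutex_lock(&agi_lock[agnos[i]]);
	}
}

int main(void)
{
	unsigned int agnos[2] = { 3, 1 };	/* e.g. whiteout in AG 3, target in AG 1 */

	lock_agis(agnos, 2);
	/* ... directory modifications that may also take AGF locks go here ... */
	pthread_mutex_unlock(&agi_lock[3]);
	pthread_mutex_unlock(&agi_lock[1]);
	return 0;
}

Because every path takes AGI 1 before AGI 3, two concurrent operations touching the same pair of AGs can no longer end up waiting on each other's locks.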
Reported-by: wenli xie Fixes: 93597ae8dac0 ("xfs: Fix deadlock between AGI and AGF when target_ip exists in xfs_rename()") Signed-off-by: Darrick J. Wong Reviewed-by: Brian Foster Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/libxfs/xfs_dir2.h | 2 -- fs/xfs/libxfs/xfs_dir2_sf.c | 2 +- fs/xfs/xfs_inode.c | 42 ++++++++++++++++++++++--------------- 3 files changed, 26 insertions(+), 20 deletions(-) diff --git a/fs/xfs/libxfs/xfs_dir2.h b/fs/xfs/libxfs/xfs_dir2.h index e55378640b05..d03e6098ded9 100644 --- a/fs/xfs/libxfs/xfs_dir2.h +++ b/fs/xfs/libxfs/xfs_dir2.h @@ -47,8 +47,6 @@ extern int xfs_dir_lookup(struct xfs_trans *tp, struct xfs_inode *dp, extern int xfs_dir_removename(struct xfs_trans *tp, struct xfs_inode *dp, struct xfs_name *name, xfs_ino_t ino, xfs_extlen_t tot); -extern bool xfs_dir2_sf_replace_needblock(struct xfs_inode *dp, - xfs_ino_t inum); extern int xfs_dir_replace(struct xfs_trans *tp, struct xfs_inode *dp, struct xfs_name *name, xfs_ino_t inum, xfs_extlen_t tot); diff --git a/fs/xfs/libxfs/xfs_dir2_sf.c b/fs/xfs/libxfs/xfs_dir2_sf.c index 2463b5d73447..8c4f76bba88b 100644 --- a/fs/xfs/libxfs/xfs_dir2_sf.c +++ b/fs/xfs/libxfs/xfs_dir2_sf.c @@ -1018,7 +1018,7 @@ xfs_dir2_sf_removename( /* * Check whether the sf dir replace operation need more blocks. */ -bool +static bool xfs_dir2_sf_replace_needblock( struct xfs_inode *dp, xfs_ino_t inum) diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 2bfbcf28b1bd..e958b1c74561 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -3152,7 +3152,7 @@ xfs_rename( struct xfs_trans *tp; struct xfs_inode *wip = NULL; /* whiteout inode */ struct xfs_inode *inodes[__XFS_SORT_INODES]; - struct xfs_buf *agibp; + int i; int num_inodes = __XFS_SORT_INODES; bool new_parent = (src_dp != target_dp); bool src_is_directory = S_ISDIR(VFS_I(src_ip)->i_mode); @@ -3265,6 +3265,30 @@ xfs_rename( } } + /* + * Lock the AGI buffers we need to handle bumping the nlink of the + * whiteout inode off the unlinked list and to handle dropping the + * nlink of the target inode. Per locking order rules, do this in + * increasing AG order and before directory block allocation tries to + * grab AGFs because we grab AGIs before AGFs. + * + * The (vfs) caller must ensure that if src is a directory then + * target_ip is either null or an empty directory. + */ + for (i = 0; i < num_inodes && inodes[i] != NULL; i++) { + if (inodes[i] == wip || + (inodes[i] == target_ip && + (VFS_I(target_ip)->i_nlink == 1 || src_is_directory))) { + struct xfs_buf *bp; + xfs_agnumber_t agno; + + agno = XFS_INO_TO_AGNO(mp, inodes[i]->i_ino); + error = xfs_read_agi(mp, tp, agno, &bp); + if (error) + goto out_trans_cancel; + } + } + /* * Directory entry creation below may acquire the AGF. Remove * the whiteout from the unlinked list first to preserve correct @@ -3317,22 +3341,6 @@ xfs_rename( * In case there is already an entry with the same * name at the destination directory, remove it first. */ - - /* - * Check whether the replace operation will need to allocate - * blocks. This happens when the shortform directory lacks - * space and we have to convert it to a block format directory. - * When more blocks are necessary, we must lock the AGI first - * to preserve locking order (AGI -> AGF). 
- */ - if (xfs_dir2_sf_replace_needblock(target_dp, src_ip->i_ino)) { - error = xfs_read_agi(mp, tp, - XFS_INO_TO_AGNO(mp, target_ip->i_ino), - &agibp); - if (error) - goto out_trans_cancel; - } - error = xfs_dir_replace(tp, target_dp, target_name, src_ip->i_ino, spaceres); if (error) From 2728d95c6c952ccf2c9a4b769b5852dfb36268af Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Fri, 27 May 2022 16:02:19 +0300 Subject: [PATCH 0281/1842] xfs: Fix CIL throttle hang when CIL space used going backwards commit 19f4e7cc819771812a7f527d7897c2deffbf7a00 upstream. A hang with tasks stuck on the CIL hard throttle was reported and largely diagnosed by Donald Buczek, who discovered that it was a result of the CIL context space usage decrementing in committed transactions once the hard throttle limit had been hit and processes were already blocked. This resulted in the CIL push not waking up those waiters because the CIL context was no longer over the hard throttle limit. The surprising aspect of this was the CIL space usage going backwards regularly enough to trigger this situation. Assumptions had been made in design that the relogging process would only increase the size of the objects in the CIL, and so that space would only increase. This change and commit message fixes the issue and documents the result of an audit of the triggers that can cause the CIL space to go backwards, how large the backwards steps tend to be, the frequency with which they occur, and what the impact on the CIL accounting code is. Even though the CIL ctx->space_used can go backwards, it will only do so if the log item is already logged to the CIL and contains a space reservation for its entire logged state. This is tracked by the shadow buffer state on the log item. If the item is not previously logged in the CIL it has no shadow buffer nor log vector, and hence the entire size of the logged item copied to the log vector is accounted to the CIL space usage. i.e. it will always go up in this case. If the item has a log vector (i.e. already in the CIL) and the size decreases, then the existing log vector will be overwritten and the space usage will go down. This is the only condition where the space usage reduces, and it can only occur when an item is already tracked in the CIL. Hence we are safe from CIL space usage underruns as a result of log items decreasing in size when they are relogged. Typically this reduction in CIL usage occurs from metadata blocks being freed, such as when a btree block merge occurs or a directory entry/xattr entry is removed and the da-tree is reduced in size. This generally results in a reduction in size of around a single block in the CIL, but also tends to increase the number of log vectors because the parent and sibling nodes in the tree need to be updated when a btree block is removed. If a multi-level merge occurs, then we see a reduction in size of 2+ blocks, but again the log vector count goes up. The other vector is inode fork size changes, which only log the current size of the fork and ignore the previously logged size when the fork is relogged. Hence if we are removing items from the inode fork (dir/xattr removal in shortform, extent record removal in extent form, etc) the relogged size of the inode fork can decrease. No other log items can decrease in size either because they are a fixed size (e.g. dquots) or they cannot be relogged (e.g. relogging an intent actually creates a new intent log item and doesn't relog the old item at all.)
Hence the only two vectors for CIL context size reduction are relogging inode forks and marking buffers active in the CIL as stale. Long story short: the majority of the code does the right thing and handles the reduction in log item size correctly, and only the CIL hard throttle implementation is problematic and needs fixing. This patch makes that fix, as well as adds comments in the log item code that result in items shrinking in size when they are relogged as a clear reminder that this can and does happen frequently. The throttle fix is based upon the change Donald proposed, though it goes further to ensure that once the throttle is activated, it captures all tasks until the CIL push issues a wakeup, regardless of whether the CIL space used has gone back under the throttle threshold. This ensures that we prevent tasks reducing the CIL slightly under the throttle threshold and then making more changes that push it well over the throttle limit. This is acheived by checking if the throttle wait queue is already active as a condition of throttling. Hence once we start throttling, we continue to apply the throttle until the CIL context push wakes everything on the wait queue. We can use waitqueue_active() for the waitqueue manipulations and checks as they are all done under the ctx->xc_push_lock. Hence the waitqueue has external serialisation and we can safely peek inside the wait queue without holding the internal waitqueue locks. Many thanks to Donald for his diagnostic and analysis work to isolate the cause of this hang. Reported-and-tested-by: Donald Buczek Signed-off-by: Dave Chinner Reviewed-by: Brian Foster Reviewed-by: Chandan Babu R Reviewed-by: Darrick J. Wong Reviewed-by: Allison Henderson Signed-off-by: Darrick J. Wong Signed-off-by: Christoph Hellwig Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_buf_item.c | 37 ++++++++++++++++++------------------- fs/xfs/xfs_inode_item.c | 14 ++++++++++++++ fs/xfs/xfs_log_cil.c | 22 +++++++++++++++++----- 3 files changed, 49 insertions(+), 24 deletions(-) diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c index 0356f2e340a1..8c6e26d62ef2 100644 --- a/fs/xfs/xfs_buf_item.c +++ b/fs/xfs/xfs_buf_item.c @@ -56,14 +56,12 @@ xfs_buf_log_format_size( } /* - * This returns the number of log iovecs needed to log the - * given buf log item. + * Return the number of log iovecs and space needed to log the given buf log + * item segment. * - * It calculates this as 1 iovec for the buf log format structure - * and 1 for each stretch of non-contiguous chunks to be logged. - * Contiguous chunks are logged in a single iovec. - * - * If the XFS_BLI_STALE flag has been set, then log nothing. + * It calculates this as 1 iovec for the buf log format structure and 1 for each + * stretch of non-contiguous chunks to be logged. Contiguous chunks are logged + * in a single iovec. */ STATIC void xfs_buf_item_size_segment( @@ -119,11 +117,8 @@ xfs_buf_item_size_segment( } /* - * This returns the number of log iovecs needed to log the given buf log item. - * - * It calculates this as 1 iovec for the buf log format structure and 1 for each - * stretch of non-contiguous chunks to be logged. Contiguous chunks are logged - * in a single iovec. + * Return the number of log iovecs and space needed to log the given buf log + * item. * * Discontiguous buffers need a format structure per region that is being * logged. This makes the changes in the buffer appear to log recovery as though @@ -133,7 +128,11 @@ xfs_buf_item_size_segment( * what ends up on disk. 
* * If the XFS_BLI_STALE flag has been set, then log nothing but the buf log - * format structures. + * format structures. If the item has previously been logged and has dirty + * regions, we do not relog them in stale buffers. This has the effect of + * reducing the size of the relogged item by the amount of dirty data tracked + * by the log item. This can result in the committing transaction reducing the + * amount of space being consumed by the CIL. */ STATIC void xfs_buf_item_size( @@ -147,9 +146,9 @@ xfs_buf_item_size( ASSERT(atomic_read(&bip->bli_refcount) > 0); if (bip->bli_flags & XFS_BLI_STALE) { /* - * The buffer is stale, so all we need to log - * is the buf log format structure with the - * cancel flag in it. + * The buffer is stale, so all we need to log is the buf log + * format structure with the cancel flag in it as we are never + * going to replay the changes tracked in the log item. */ trace_xfs_buf_item_size_stale(bip); ASSERT(bip->__bli_format.blf_flags & XFS_BLF_CANCEL); @@ -164,9 +163,9 @@ xfs_buf_item_size( if (bip->bli_flags & XFS_BLI_ORDERED) { /* - * The buffer has been logged just to order it. - * It is not being included in the transaction - * commit, so no vectors are used at all. + * The buffer has been logged just to order it. It is not being + * included in the transaction commit, so no vectors are used at + * all. */ trace_xfs_buf_item_size_ordered(bip); *nvecs = XFS_LOG_VEC_ORDERED; diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c index 17e20a6d8b4e..6ff91e5bf3cd 100644 --- a/fs/xfs/xfs_inode_item.c +++ b/fs/xfs/xfs_inode_item.c @@ -28,6 +28,20 @@ static inline struct xfs_inode_log_item *INODE_ITEM(struct xfs_log_item *lip) return container_of(lip, struct xfs_inode_log_item, ili_item); } +/* + * The logged size of an inode fork is always the current size of the inode + * fork. This means that when an inode fork is relogged, the size of the logged + * region is determined by the current state, not the combination of the + * previously logged state + the current state. This is different relogging + * behaviour to most other log items which will retain the size of the + * previously logged changes when smaller regions are relogged. + * + * Hence operations that remove data from the inode fork (e.g. shortform + * dir/attr remove, extent form extent removal, etc), the size of the relogged + * inode gets -smaller- rather than stays the same size as the previously logged + * size and this can result in the committing transaction reducing the amount of + * space being consumed by the CIL. + */ STATIC void xfs_inode_item_data_fork_size( struct xfs_inode_log_item *iip, diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c index b0ef071b3cb5..cd5c04dabe2e 100644 --- a/fs/xfs/xfs_log_cil.c +++ b/fs/xfs/xfs_log_cil.c @@ -668,9 +668,14 @@ xlog_cil_push_work( ASSERT(push_seq <= ctx->sequence); /* - * Wake up any background push waiters now this context is being pushed. + * As we are about to switch to a new, empty CIL context, we no longer + * need to throttle tasks on CIL space overruns. Wake any waiters that + * the hard push throttle may have caught so they can start committing + * to the new context. The ctx->xc_push_lock provides the serialisation + * necessary for safely using the lockless waitqueue_active() check in + * this context. 
*/ - if (ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log)) + if (waitqueue_active(&cil->xc_push_wait)) wake_up_all(&cil->xc_push_wait); /* @@ -907,7 +912,7 @@ xlog_cil_push_background( ASSERT(!list_empty(&cil->xc_cil)); /* - * don't do a background push if we haven't used up all the + * Don't do a background push if we haven't used up all the * space available yet. */ if (cil->xc_ctx->space_used < XLOG_CIL_SPACE_LIMIT(log)) { @@ -931,9 +936,16 @@ xlog_cil_push_background( /* * If we are well over the space limit, throttle the work that is being - * done until the push work on this context has begun. + * done until the push work on this context has begun. Enforce the hard + * throttle on all transaction commits once it has been activated, even + * if the committing transactions have resulted in the space usage + * dipping back down under the hard limit. + * + * The ctx->xc_push_lock provides the serialisation necessary for safely + * using the lockless waitqueue_active() check in this context. */ - if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log)) { + if (cil->xc_ctx->space_used >= XLOG_CIL_BLOCKING_SPACE_LIMIT(log) || + waitqueue_active(&cil->xc_push_wait)) { trace_xfs_log_cil_wait(log, cil->xc_ctx->ticket); ASSERT(cil->xc_ctx->space_used < log->l_logsize); xlog_wait(&cil->xc_push_wait, &cil->xc_push_lock); From 1f0681f3bd5665080bde3c5b9568cc27df765ce0 Mon Sep 17 00:00:00 2001 From: "Gustavo A. R. Silva" Date: Wed, 27 Apr 2022 17:47:14 -0500 Subject: [PATCH 0282/1842] drm/i915: Fix -Wstringop-overflow warning in call to intel_read_wm_latency() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 336feb502a715909a8136eb6a62a83d7268a353b upstream. Fix the following -Wstringop-overflow warnings when building with GCC-11: drivers/gpu/drm/i915/intel_pm.c:3106:9: warning: ‘intel_read_wm_latency’ accessing 16 bytes in a region of size 10 [-Wstringop-overflow=] 3106 | intel_read_wm_latency(dev_priv, dev_priv->wm.pri_latency); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/gpu/drm/i915/intel_pm.c:3106:9: note: referencing argument 2 of type ‘u16 *’ {aka ‘short unsigned int *’} drivers/gpu/drm/i915/intel_pm.c:2861:13: note: in a call to function ‘intel_read_wm_latency’ 2861 | static void intel_read_wm_latency(struct drm_i915_private *dev_priv, | ^~~~~~~~~~~~~~~~~~~~~ by removing the over-specified array size from the argument declarations. It seems that this code is actually safe because the size of the array depends on the hardware generation, and the function checks for that. Notice that wm can be an array of 5 elements: drivers/gpu/drm/i915/intel_pm.c:3109: intel_read_wm_latency(dev_priv, dev_priv->wm.pri_latency); or an array of 8 elements: drivers/gpu/drm/i915/intel_pm.c:3131: intel_read_wm_latency(dev_priv, dev_priv->wm.skl_latency); and the compiler legitimately complains about that. This helps with the ongoing efforts to globally enable -Wstringop-overflow. Link: https://github.com/KSPP/linux/issues/181 Signed-off-by: Gustavo A. R. 
Silva Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/i915/intel_pm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index 472aaea75ef8..2f2dc029668b 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -2846,7 +2846,7 @@ static void ilk_compute_wm_level(const struct drm_i915_private *dev_priv, } static void intel_read_wm_latency(struct drm_i915_private *dev_priv, - u16 wm[8]) + u16 wm[]) { struct intel_uncore *uncore = &dev_priv->uncore; From 82f723b8a5adf497f9e34c702a30ca7298615654 Mon Sep 17 00:00:00 2001 From: Tadeusz Struk Date: Tue, 17 May 2022 08:13:08 +0900 Subject: [PATCH 0283/1842] exfat: check if cluster num is valid commit 64ba4b15e5c045f8b746c6da5fc9be9a6b00b61d upstream. Syzbot reported slab-out-of-bounds read in exfat_clear_bitmap. This was triggered by a reproducer calling truncate with size 0, which causes the following trace: BUG: KASAN: slab-out-of-bounds in exfat_clear_bitmap+0x147/0x490 fs/exfat/balloc.c:174 Read of size 8 at addr ffff888115aa9508 by task syz-executor251/365 Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack_lvl+0x1e2/0x24b lib/dump_stack.c:118 print_address_description+0x81/0x3c0 mm/kasan/report.c:233 __kasan_report mm/kasan/report.c:419 [inline] kasan_report+0x1a4/0x1f0 mm/kasan/report.c:436 __asan_report_load8_noabort+0x14/0x20 mm/kasan/report_generic.c:309 exfat_clear_bitmap+0x147/0x490 fs/exfat/balloc.c:174 exfat_free_cluster+0x25a/0x4a0 fs/exfat/fatent.c:181 __exfat_truncate+0x99e/0xe00 fs/exfat/file.c:217 exfat_truncate+0x11b/0x4f0 fs/exfat/file.c:243 exfat_setattr+0xa03/0xd40 fs/exfat/file.c:339 notify_change+0xb76/0xe10 fs/attr.c:336 do_truncate+0x1ea/0x2d0 fs/open.c:65 Move the is_valid_cluster() helper from fatent.c to a common header to make it reusable in other *.c files. And add is_valid_cluster() to validate that a cluster number is within the valid range in exfat_clear_bitmap() and exfat_set_bitmap().
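The added check itself is small enough to show standalone. The sketch below assumes the usual exFAT convention that the first data cluster is number 2; the toy superblock-info struct and the cluster count are made up, but the comparison matches the is_valid_cluster() helper moved in the diff that follows.

/* standalone rendering of the range check; not the in-kernel types */
#include <stdbool.h>
#include <stdio.h>

#define EXFAT_FIRST_CLUSTER 2

struct toy_sb_info {
	unsigned int num_clusters;
};

static bool is_valid_cluster(struct toy_sb_info *sbi, unsigned int clus)
{
	if (clus < EXFAT_FIRST_CLUSTER || sbi->num_clusters <= clus)
		return false;
	return true;
}

int main(void)
{
	struct toy_sb_info sbi = { .num_clusters = 100 };

	/* the crafted truncate ends up passing an out-of-range cluster number
	 * down to the bitmap helpers; the check rejects it before it is used
	 * to index the allocation bitmap */
	printf("%d %d %d\n", is_valid_cluster(&sbi, 0),
	       is_valid_cluster(&sbi, 2), is_valid_cluster(&sbi, 100));
	return 0;
}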
Link: https://syzkaller.appspot.com/bug?id=50381fc73821ecae743b8cf24b4c9a04776f767c Reported-by: syzbot+a4087e40b9c13aad7892@syzkaller.appspotmail.com Fixes: 1e49a94cf707 ("exfat: add bitmap operations") Cc: stable@vger.kernel.org # v5.7+ Signed-off-by: Tadeusz Struk Reviewed-by: Sungjong Seo Signed-off-by: Namjae Jeon Signed-off-by: Greg Kroah-Hartman --- fs/exfat/balloc.c | 8 ++++++-- fs/exfat/exfat_fs.h | 8 ++++++++ fs/exfat/fatent.c | 8 -------- 3 files changed, 14 insertions(+), 10 deletions(-) diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c index 579c10f57c2b..258b6bb5762a 100644 --- a/fs/exfat/balloc.c +++ b/fs/exfat/balloc.c @@ -148,7 +148,9 @@ int exfat_set_bitmap(struct inode *inode, unsigned int clu) struct super_block *sb = inode->i_sb; struct exfat_sb_info *sbi = EXFAT_SB(sb); - WARN_ON(clu < EXFAT_FIRST_CLUSTER); + if (!is_valid_cluster(sbi, clu)) + return -EINVAL; + ent_idx = CLUSTER_TO_BITMAP_ENT(clu); i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx); b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx); @@ -166,7 +168,9 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu) struct exfat_sb_info *sbi = EXFAT_SB(sb); struct exfat_mount_options *opts = &sbi->options; - WARN_ON(clu < EXFAT_FIRST_CLUSTER); + if (!is_valid_cluster(sbi, clu)) + return; + ent_idx = CLUSTER_TO_BITMAP_ENT(clu); i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx); b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx); diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h index b8f0e829ecbd..0d139c7d150d 100644 --- a/fs/exfat/exfat_fs.h +++ b/fs/exfat/exfat_fs.h @@ -380,6 +380,14 @@ static inline int exfat_sector_to_cluster(struct exfat_sb_info *sbi, EXFAT_RESERVED_CLUSTERS; } +static inline bool is_valid_cluster(struct exfat_sb_info *sbi, + unsigned int clus) +{ + if (clus < EXFAT_FIRST_CLUSTER || sbi->num_clusters <= clus) + return false; + return true; +} + /* super.c */ int exfat_set_volume_dirty(struct super_block *sb); int exfat_clear_volume_dirty(struct super_block *sb); diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c index c3c9afee7418..a1481e47a761 100644 --- a/fs/exfat/fatent.c +++ b/fs/exfat/fatent.c @@ -81,14 +81,6 @@ int exfat_ent_set(struct super_block *sb, unsigned int loc, return 0; } -static inline bool is_valid_cluster(struct exfat_sb_info *sbi, - unsigned int clus) -{ - if (clus < EXFAT_FIRST_CLUSTER || sbi->num_clusters <= clus) - return false; - return true; -} - int exfat_ent_get(struct super_block *sb, unsigned int loc, unsigned int *content) { From 630192aa45233acfe3d17952862ca215f6d31f09 Mon Sep 17 00:00:00 2001 From: "Justin M. Forbes" Date: Thu, 2 Jun 2022 22:22:28 +0200 Subject: [PATCH 0284/1842] lib/crypto: add prompts back to crypto libraries commit e56e18985596617ae426ed5997fb2e737cffb58b upstream. Commit 6048fdcc5f269 ("lib/crypto: blake2s: include as built-in") took away a number of prompt texts from other crypto libraries. This makes values flip from built-in to module when oldconfig runs, and causes problems when these crypto libs need to be built in for thingslike BIG_KEYS. Fixes: 6048fdcc5f269 ("lib/crypto: blake2s: include as built-in") Cc: Herbert Xu Cc: linux-crypto@vger.kernel.org Signed-off-by: Justin M. Forbes [Jason: - moved menu into submenu of lib/ instead of root menu - fixed chacha sub-dependencies for CONFIG_CRYPTO] Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- crypto/Kconfig | 2 -- lib/Kconfig | 2 ++ lib/crypto/Kconfig | 17 ++++++++++++----- 3 files changed, 14 insertions(+), 7 deletions(-) diff --git a/crypto/Kconfig b/crypto/Kconfig index 0dee9242491c..c15bfc0e3723 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1941,5 +1941,3 @@ source "crypto/asymmetric_keys/Kconfig" source "certs/Kconfig" endif # if CRYPTO - -source "lib/crypto/Kconfig" diff --git a/lib/Kconfig b/lib/Kconfig index 9216e24e5164..258e1ec7d592 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -101,6 +101,8 @@ config INDIRECT_PIO When in doubt, say N. +source "lib/crypto/Kconfig" + config CRC_CCITT tristate "CRC-CCITT functions" help diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index af3da5a8bde8..9856e291f414 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 +menu "Crypto library routines" + config CRYPTO_LIB_AES tristate @@ -31,7 +33,7 @@ config CRYPTO_ARCH_HAVE_LIB_CHACHA config CRYPTO_LIB_CHACHA_GENERIC tristate - select CRYPTO_ALGAPI + select XOR_BLOCKS help This symbol can be depended upon by arch implementations of the ChaCha library interface that require the generic code as a @@ -40,7 +42,8 @@ config CRYPTO_LIB_CHACHA_GENERIC of CRYPTO_LIB_CHACHA. config CRYPTO_LIB_CHACHA - tristate + tristate "ChaCha library interface" + depends on CRYPTO depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA select CRYPTO_LIB_CHACHA_GENERIC if CRYPTO_ARCH_HAVE_LIB_CHACHA=n help @@ -65,7 +68,7 @@ config CRYPTO_LIB_CURVE25519_GENERIC of CRYPTO_LIB_CURVE25519. config CRYPTO_LIB_CURVE25519 - tristate + tristate "Curve25519 scalar multiplication library" depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519 select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n help @@ -100,7 +103,7 @@ config CRYPTO_LIB_POLY1305_GENERIC of CRYPTO_LIB_POLY1305. config CRYPTO_LIB_POLY1305 - tristate + tristate "Poly1305 library interface" depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305 select CRYPTO_LIB_POLY1305_GENERIC if CRYPTO_ARCH_HAVE_LIB_POLY1305=n help @@ -109,11 +112,15 @@ config CRYPTO_LIB_POLY1305 is available and enabled. config CRYPTO_LIB_CHACHA20POLY1305 - tristate + tristate "ChaCha20-Poly1305 AEAD support (8-byte nonce library version)" depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305 + depends on CRYPTO select CRYPTO_LIB_CHACHA select CRYPTO_LIB_POLY1305 + select CRYPTO_ALGAPI config CRYPTO_LIB_SHA256 tristate + +endmenu From b2bef5500e0d2000c40c361720b0788db2abca5e Mon Sep 17 00:00:00 2001 From: Nicolai Stange Date: Thu, 2 Jun 2022 22:22:29 +0200 Subject: [PATCH 0285/1842] crypto: drbg - prepare for more fine-grained tracking of seeding state MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit ce8ce31b2c5c8b18667784b8c515650c65d57b4e upstream. There are two different randomness sources the DRBGs are getting seeded from, namely the jitterentropy source (if enabled) and get_random_bytes(). At initial DRBG seeding time during boot, the latter might not have collected sufficient entropy for seeding itself yet and thus, the DRBG implementation schedules a reseed work from a random_ready_callback once that has happened. 
This is particularly important for the !->pr DRBG instances, for which (almost) no further reseeds are getting triggered during their lifetime. Because collecting data from the jitterentropy source is a rather expensive operation, the aforementioned asynchronously scheduled reseed work restricts itself to get_random_bytes() only. That is, it in some sense amends the initial DRBG seed derived from jitterentropy output at full (estimated) entropy with fresh randomness obtained from get_random_bytes() once that has been seeded with sufficient entropy itself. With the advent of rng_is_initialized(), there is no real need for doing the reseed operation from an asynchronously scheduled work anymore and a subsequent patch will make it synchronous by moving it next to related logic already present in drbg_generate(). However, for tracking whether a full reseed including the jitterentropy source is required or a "partial" reseed involving only get_random_bytes() would be sufficient already, the boolean struct drbg_state's ->seeded member must become a tristate value. Prepare for this by introducing the new enum drbg_seed_state and change struct drbg_state's ->seeded member's type from bool to that type. For facilitating review, enum drbg_seed_state is made to only contain two members corresponding to the former ->seeded values of false and true resp. at this point: DRBG_SEED_STATE_UNSEEDED and DRBG_SEED_STATE_FULL. A third one for tracking the intermediate state of "seeded from jitterentropy only" will be introduced with a subsequent patch. There is no change in behaviour at this point. Signed-off-by: Nicolai Stange Reviewed-by: Stephan Müller Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- crypto/drbg.c | 19 ++++++++++--------- include/crypto/drbg.h | 7 ++++++- 2 files changed, 16 insertions(+), 10 deletions(-) diff --git a/crypto/drbg.c b/crypto/drbg.c index 19ea8d6628ff..87f97da5d0ba 100644 --- a/crypto/drbg.c +++ b/crypto/drbg.c @@ -1042,7 +1042,7 @@ static inline int __drbg_seed(struct drbg_state *drbg, struct list_head *seed, if (ret) return ret; - drbg->seeded = true; + drbg->seeded = DRBG_SEED_STATE_FULL; /* 10.1.1.2 / 10.1.1.3 step 5 */ drbg->reseed_ctr = 1; @@ -1087,14 +1087,14 @@ static void drbg_async_seed(struct work_struct *work) if (ret) goto unlock; - /* Set seeded to false so that if __drbg_seed fails the - * next generate call will trigger a reseed. + /* Reset ->seeded so that if __drbg_seed fails the next + * generate call will trigger a reseed. */ - drbg->seeded = false; + drbg->seeded = DRBG_SEED_STATE_UNSEEDED; __drbg_seed(drbg, &seedlist, true); - if (drbg->seeded) + if (drbg->seeded == DRBG_SEED_STATE_FULL) drbg->reseed_threshold = drbg_max_requests(drbg); unlock: @@ -1385,13 +1385,14 @@ static int drbg_generate(struct drbg_state *drbg, * here. The spec is a bit convoluted here, we make it simpler. */ if (drbg->reseed_threshold < drbg->reseed_ctr) - drbg->seeded = false; + drbg->seeded = DRBG_SEED_STATE_UNSEEDED; - if (drbg->pr || !drbg->seeded) { + if (drbg->pr || drbg->seeded == DRBG_SEED_STATE_UNSEEDED) { pr_devel("DRBG: reseeding before generation (prediction " "resistance: %s, state %s)\n", drbg->pr ? "true" : "false", - drbg->seeded ? "seeded" : "unseeded"); + (drbg->seeded == DRBG_SEED_STATE_FULL ? 
+ "seeded" : "unseeded")); /* 9.3.1 steps 7.1 through 7.3 */ len = drbg_seed(drbg, addtl, true); if (len) @@ -1576,7 +1577,7 @@ static int drbg_instantiate(struct drbg_state *drbg, struct drbg_string *pers, if (!drbg->core) { drbg->core = &drbg_cores[coreref]; drbg->pr = pr; - drbg->seeded = false; + drbg->seeded = DRBG_SEED_STATE_UNSEEDED; drbg->reseed_threshold = drbg_max_requests(drbg); ret = drbg_alloc_state(drbg); diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h index 88e4d145f7cd..2db72121d568 100644 --- a/include/crypto/drbg.h +++ b/include/crypto/drbg.h @@ -105,6 +105,11 @@ struct drbg_test_data { struct drbg_string *testentropy; /* TEST PARAMETER: test entropy */ }; +enum drbg_seed_state { + DRBG_SEED_STATE_UNSEEDED, + DRBG_SEED_STATE_FULL, +}; + struct drbg_state { struct mutex drbg_mutex; /* lock around DRBG */ unsigned char *V; /* internal state 10.1.1.1 1a) */ @@ -127,7 +132,7 @@ struct drbg_state { struct crypto_wait ctr_wait; /* CTR mode async wait obj */ struct scatterlist sg_in, sg_out; /* CTR mode SGLs */ - bool seeded; /* DRBG fully seeded? */ + enum drbg_seed_state seeded; /* DRBG fully seeded? */ bool pr; /* Prediction resistance enabled? */ bool fips_primed; /* Continuous test primed? */ unsigned char *prev; /* FIPS 140-2 continuous test value */ From 54700e82a7a75d0f2b9126b7ff8bdd26efad738a Mon Sep 17 00:00:00 2001 From: Nicolai Stange Date: Thu, 2 Jun 2022 22:22:30 +0200 Subject: [PATCH 0286/1842] crypto: drbg - track whether DRBG was seeded with !rng_is_initialized() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 2bcd25443868aa8863779a6ebc6c9319633025d2 upstream. Currently, the DRBG implementation schedules asynchronous works from random_ready_callbacks for reseeding the DRBG instances with output from get_random_bytes() once the latter has sufficient entropy available. However, as the get_random_bytes() initialization state can get queried by means of rng_is_initialized() now, there is no real need for this asynchronous reseeding logic anymore and it's better to keep things simple by doing it synchronously when needed instead, i.e. from drbg_generate() once rng_is_initialized() has flipped to true. Of course, for this to work, drbg_generate() would need some means by which it can tell whether or not rng_is_initialized() has flipped to true since the last seeding from get_random_bytes(). Or equivalently, whether or not the last seed from get_random_bytes() has happened when rng_is_initialized() was still evaluating to false. As it currently stands, enum drbg_seed_state allows for the representation of two different DRBG seeding states: DRBG_SEED_STATE_UNSEEDED and DRBG_SEED_STATE_FULL. The former makes drbg_generate() to invoke a full reseeding operation involving both, the rather expensive jitterentropy as well as the get_random_bytes() randomness sources. The DRBG_SEED_STATE_FULL state on the other hand implies that no reseeding at all is required for a !->pr DRBG variant. Introduce the new DRBG_SEED_STATE_PARTIAL state to enum drbg_seed_state for representing the condition that a DRBG was being seeded when rng_is_initialized() had still been false. In particular, this new state implies that - the given DRBG instance has been fully seeded from the jitterentropy source (if enabled) - and drbg_generate() is supposed to reseed from get_random_bytes() *only* once rng_is_initialized() turns to true. 
Up to now, the __drbg_seed() helper used to set the given DRBG instance's ->seeded state to constant DRBG_SEED_STATE_FULL. Introduce a new argument allowing for the specification of the to be written ->seeded value instead. Make the first of its two callers, drbg_seed(), determine the appropriate value based on rng_is_initialized(). The remaining caller, drbg_async_seed(), is known to get invoked only once rng_is_initialized() is true, hence let it pass constant DRBG_SEED_STATE_FULL for the new argument to __drbg_seed(). There is no change in behaviour, except for that the pr_devel() in drbg_generate() would now report "unseeded" for ->pr DRBG instances which had last been seeded when rng_is_initialized() was still evaluating to false. Signed-off-by: Nicolai Stange Reviewed-by: Stephan Müller Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- crypto/drbg.c | 12 ++++++++---- include/crypto/drbg.h | 1 + 2 files changed, 9 insertions(+), 4 deletions(-) diff --git a/crypto/drbg.c b/crypto/drbg.c index 87f97da5d0ba..7723d6e494aa 100644 --- a/crypto/drbg.c +++ b/crypto/drbg.c @@ -1035,14 +1035,14 @@ static const struct drbg_state_ops drbg_hash_ops = { ******************************************************************/ static inline int __drbg_seed(struct drbg_state *drbg, struct list_head *seed, - int reseed) + int reseed, enum drbg_seed_state new_seed_state) { int ret = drbg->d_ops->update(drbg, seed, reseed); if (ret) return ret; - drbg->seeded = DRBG_SEED_STATE_FULL; + drbg->seeded = new_seed_state; /* 10.1.1.2 / 10.1.1.3 step 5 */ drbg->reseed_ctr = 1; @@ -1092,7 +1092,7 @@ static void drbg_async_seed(struct work_struct *work) */ drbg->seeded = DRBG_SEED_STATE_UNSEEDED; - __drbg_seed(drbg, &seedlist, true); + __drbg_seed(drbg, &seedlist, true, DRBG_SEED_STATE_FULL); if (drbg->seeded == DRBG_SEED_STATE_FULL) drbg->reseed_threshold = drbg_max_requests(drbg); @@ -1122,6 +1122,7 @@ static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers, unsigned int entropylen = drbg_sec_strength(drbg->core->flags); struct drbg_string data1; LIST_HEAD(seedlist); + enum drbg_seed_state new_seed_state = DRBG_SEED_STATE_FULL; /* 9.1 / 9.2 / 9.3.1 step 3 */ if (pers && pers->len > (drbg_max_addtl(drbg))) { @@ -1149,6 +1150,9 @@ static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers, BUG_ON((entropylen * 2) > sizeof(entropy)); /* Get seed from in-kernel /dev/urandom */ + if (!rng_is_initialized()) + new_seed_state = DRBG_SEED_STATE_PARTIAL; + ret = drbg_get_random_bytes(drbg, entropy, entropylen); if (ret) goto out; @@ -1205,7 +1209,7 @@ static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers, memset(drbg->C, 0, drbg_statelen(drbg)); } - ret = __drbg_seed(drbg, &seedlist, reseed); + ret = __drbg_seed(drbg, &seedlist, reseed, new_seed_state); out: memzero_explicit(entropy, entropylen * 2); diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h index 2db72121d568..01caab5e65de 100644 --- a/include/crypto/drbg.h +++ b/include/crypto/drbg.h @@ -107,6 +107,7 @@ struct drbg_test_data { enum drbg_seed_state { DRBG_SEED_STATE_UNSEEDED, + DRBG_SEED_STATE_PARTIAL, /* Seeded with !rng_is_initialized() */ DRBG_SEED_STATE_FULL, }; From e744e34a3c35644b5af2b45053fbd178a15bf73f Mon Sep 17 00:00:00 2001 From: Nicolai Stange Date: Thu, 2 Jun 2022 22:22:31 +0200 Subject: [PATCH 0287/1842] crypto: drbg - move dynamic ->reseed_threshold adjustments to __drbg_seed() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 
Content-Transfer-Encoding: 8bit commit 262d83a4290c331cd4f617a457408bdb82fbb738 upstream. Since commit 42ea507fae1a ("crypto: drbg - reseed often if seedsource is degraded"), the maximum seed lifetime represented by ->reseed_threshold gets temporarily lowered if the get_random_bytes() source cannot provide sufficient entropy yet, as is common during boot, and restored back to the original value again once that has changed. More specifically, if the add_random_ready_callback() invoked from drbg_prepare_hrng() in the course of DRBG instantiation does not return -EALREADY, that is, if get_random_bytes() has not been fully initialized at this point yet, drbg_prepare_hrng() will lower ->reseed_threshold to a value of 50. The drbg_async_seed() scheduled from said random_ready_callback will eventually restore the original value. A future patch will replace the random_ready_callback based notification mechanism and thus, there will be no add_random_ready_callback() return value anymore which could get compared to -EALREADY. However, there's __drbg_seed() which gets invoked in the course of both, the DRBG instantiation as well as the eventual reseeding from get_random_bytes() in aforementioned drbg_async_seed(), if any. Moreover, it knows about the get_random_bytes() initialization state by the time the seed data had been obtained from it: the new_seed_state argument introduced with the previous patch would get set to DRBG_SEED_STATE_PARTIAL in case get_random_bytes() had not been fully initialized yet and to DRBG_SEED_STATE_FULL otherwise. Thus, __drbg_seed() provides a convenient alternative for managing that ->reseed_threshold lowering and restoring at a central place. Move all ->reseed_threshold adjustment code from drbg_prepare_hrng() and drbg_async_seed() respectively to __drbg_seed(). Make __drbg_seed() lower the ->reseed_threshold to 50 in case its new_seed_state argument equals DRBG_SEED_STATE_PARTIAL and let it restore the original value otherwise. There is no change in behaviour. Signed-off-by: Nicolai Stange Reviewed-by: Stephan Müller Signed-off-by: Herbert Xu Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- crypto/drbg.c | 30 +++++++++++++++++++++--------- 1 file changed, 21 insertions(+), 9 deletions(-) diff --git a/crypto/drbg.c b/crypto/drbg.c index 7723d6e494aa..bec9dd3fc761 100644 --- a/crypto/drbg.c +++ b/crypto/drbg.c @@ -1046,6 +1046,27 @@ static inline int __drbg_seed(struct drbg_state *drbg, struct list_head *seed, /* 10.1.1.2 / 10.1.1.3 step 5 */ drbg->reseed_ctr = 1; + switch (drbg->seeded) { + case DRBG_SEED_STATE_UNSEEDED: + /* Impossible, but handle it to silence compiler warnings. */ + fallthrough; + case DRBG_SEED_STATE_PARTIAL: + /* + * Require frequent reseeds until the seed source is + * fully initialized. + */ + drbg->reseed_threshold = 50; + break; + + case DRBG_SEED_STATE_FULL: + /* + * Seed source has become fully initialized, frequent + * reseeds no longer required. + */ + drbg->reseed_threshold = drbg_max_requests(drbg); + break; + } + return ret; } @@ -1094,9 +1115,6 @@ static void drbg_async_seed(struct work_struct *work) __drbg_seed(drbg, &seedlist, true, DRBG_SEED_STATE_FULL); - if (drbg->seeded == DRBG_SEED_STATE_FULL) - drbg->reseed_threshold = drbg_max_requests(drbg); - unlock: mutex_unlock(&drbg->drbg_mutex); @@ -1532,12 +1550,6 @@ static int drbg_prepare_hrng(struct drbg_state *drbg) return err; } - /* - * Require frequent reseeds until the seed source is fully - * initialized. 
- */ - drbg->reseed_threshold = 50; - return err; } From 44f1ce55308d914e911184d3df30a1a6d78253e7 Mon Sep 17 00:00:00 2001 From: Nicolai Stange Date: Thu, 2 Jun 2022 22:22:32 +0200 Subject: [PATCH 0288/1842] crypto: drbg - make reseeding from get_random_bytes() synchronous commit 074bcd4000e0d812bc253f86fedc40f81ed59ccc upstream. get_random_bytes() usually hasn't full entropy available by the time DRBG instances are first getting seeded from it during boot. Thus, the DRBG implementation registers random_ready_callbacks which would in turn schedule some work for reseeding the DRBGs once get_random_bytes() has sufficient entropy available. For reference, the relevant history around handling DRBG (re)seeding in the context of a not yet fully seeded get_random_bytes() is: commit 16b369a91d0d ("random: Blocking API for accessing nonblocking_pool") commit 4c7879907edd ("crypto: drbg - add async seeding operation") commit 205a525c3342 ("random: Add callback API for random pool readiness") commit 57225e679788 ("crypto: drbg - Use callback API for random readiness") commit c2719503f5e1 ("random: Remove kernel blocking API") However, some time later, the initialization state of get_random_bytes() has been made queryable via rng_is_initialized() introduced with commit 9a47249d444d ("random: Make crng state queryable"). This primitive now allows for streamlining the DRBG reseeding from get_random_bytes() by replacing that aforementioned asynchronous work scheduling from random_ready_callbacks with some simpler, synchronous code in drbg_generate() next to the related logic already present therein. Apart from improving overall code readability, this change will also enable DRBG users to rely on wait_for_random_bytes() for ensuring that the initial seeding has completed, if desired. The previous patches already laid the grounds by making drbg_seed() to record at each DRBG instance whether it was being seeded at a time when rng_is_initialized() still had been false as indicated by ->seeded == DRBG_SEED_STATE_PARTIAL. All that remains to be done now is to make drbg_generate() check for this condition, determine whether rng_is_initialized() has flipped to true in the meanwhile and invoke a reseed from get_random_bytes() if so. Make this move: - rename the former drbg_async_seed() work handler, i.e. the one in charge of reseeding a DRBG instance from get_random_bytes(), to "drbg_seed_from_random()", - change its signature as appropriate, i.e. make it take a struct drbg_state rather than a work_struct and change its return type from "void" to "int" in order to allow for passing error information from e.g. its __drbg_seed() invocation onwards to callers, - make drbg_generate() invoke this drbg_seed_from_random() once it encounters a DRBG instance with ->seeded == DRBG_SEED_STATE_PARTIAL by the time rng_is_initialized() has flipped to true and - prune everything related to the former, random_ready_callback based mechanism. As drbg_seed_from_random() is now getting invoked from drbg_generate() with the ->drbg_mutex being held, it must not attempt to recursively grab it once again. Remove the corresponding mutex operations from what is now drbg_seed_from_random(). Furthermore, as drbg_seed_from_random() can now report errors directly to its caller, there's no need for it to temporarily switch the DRBG's ->seeded state to DRBG_SEED_STATE_UNSEEDED so that a failure of the subsequently invoked __drbg_seed() will get signaled to drbg_generate(). Don't do it then. 
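Taken together, the last three patches turn DRBG seeding into a small state machine. The toy model below is not crypto/drbg.c: rng_ready stands in for rng_is_initialized(), the large threshold is a placeholder for drbg_max_requests(), and locking and error handling are omitted. It only shows the flow the series describes: a boot-time seed leaves the instance PARTIAL with an aggressive reseed threshold of 50, and the first generate call after the RNG has become ready reseeds synchronously and promotes the instance to FULL.

/* toy model of the seeding states after this series; not the DRBG code */
#include <stdbool.h>
#include <stdio.h>

enum seed_state { SEED_UNSEEDED, SEED_PARTIAL, SEED_FULL };

struct toy_drbg {
	enum seed_state seeded;
	unsigned long reseed_ctr, reseed_threshold;
};

static bool rng_ready;			/* stands in for rng_is_initialized() */

static void toy_seed(struct toy_drbg *d)
{
	/* seeding while urandom is not ready yet only gets us to PARTIAL */
	d->seeded = rng_ready ? SEED_FULL : SEED_PARTIAL;
	d->reseed_ctr = 1;
	/* 1UL << 20 is a placeholder for drbg_max_requests() */
	d->reseed_threshold = (d->seeded == SEED_FULL) ? 1UL << 20 : 50;
}

static void toy_generate(struct toy_drbg *d)
{
	if (d->reseed_ctr > d->reseed_threshold)
		d->seeded = SEED_UNSEEDED;

	if (d->seeded == SEED_UNSEEDED)
		toy_seed(d);				/* full reseed path */
	else if (rng_ready && d->seeded == SEED_PARTIAL)
		toy_seed(d);				/* synchronous catch-up reseed */

	d->reseed_ctr++;
	printf("state=%d ctr=%lu thresh=%lu\n",
	       d->seeded, d->reseed_ctr, d->reseed_threshold);
}

int main(void)
{
	struct toy_drbg d = { .seeded = SEED_UNSEEDED, .reseed_threshold = 1 };

	toy_generate(&d);	/* boot: RNG not ready, ends up PARTIAL with threshold 50 */
	rng_ready = true;
	toy_generate(&d);	/* RNG ready: catch-up reseed, ends up FULL */
	return 0;
}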
Signed-off-by: Nicolai Stange Signed-off-by: Herbert Xu [Jason: for stable, undid the modifications for the backport of 5acd3548.] Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- crypto/drbg.c | 61 ++++++++----------------------------------- drivers/char/random.c | 2 -- include/crypto/drbg.h | 2 -- 3 files changed, 11 insertions(+), 54 deletions(-) diff --git a/crypto/drbg.c b/crypto/drbg.c index bec9dd3fc761..a4b5d6dbe99d 100644 --- a/crypto/drbg.c +++ b/crypto/drbg.c @@ -1086,12 +1086,10 @@ static inline int drbg_get_random_bytes(struct drbg_state *drbg, return 0; } -static void drbg_async_seed(struct work_struct *work) +static int drbg_seed_from_random(struct drbg_state *drbg) { struct drbg_string data; LIST_HEAD(seedlist); - struct drbg_state *drbg = container_of(work, struct drbg_state, - seed_work); unsigned int entropylen = drbg_sec_strength(drbg->core->flags); unsigned char entropy[32]; int ret; @@ -1102,23 +1100,15 @@ static void drbg_async_seed(struct work_struct *work) drbg_string_fill(&data, entropy, entropylen); list_add_tail(&data.list, &seedlist); - mutex_lock(&drbg->drbg_mutex); - ret = drbg_get_random_bytes(drbg, entropy, entropylen); if (ret) - goto unlock; + goto out; - /* Reset ->seeded so that if __drbg_seed fails the next - * generate call will trigger a reseed. - */ - drbg->seeded = DRBG_SEED_STATE_UNSEEDED; - - __drbg_seed(drbg, &seedlist, true, DRBG_SEED_STATE_FULL); - -unlock: - mutex_unlock(&drbg->drbg_mutex); + ret = __drbg_seed(drbg, &seedlist, true, DRBG_SEED_STATE_FULL); +out: memzero_explicit(entropy, entropylen); + return ret; } /* @@ -1421,6 +1411,11 @@ static int drbg_generate(struct drbg_state *drbg, goto err; /* 9.3.1 step 7.4 */ addtl = NULL; + } else if (rng_is_initialized() && + drbg->seeded == DRBG_SEED_STATE_PARTIAL) { + len = drbg_seed_from_random(drbg); + if (len) + goto err; } if (addtl && 0 < addtl->len) @@ -1513,44 +1508,15 @@ static int drbg_generate_long(struct drbg_state *drbg, return 0; } -static int drbg_schedule_async_seed(struct notifier_block *nb, unsigned long action, void *data) -{ - struct drbg_state *drbg = container_of(nb, struct drbg_state, - random_ready); - - schedule_work(&drbg->seed_work); - return 0; -} - static int drbg_prepare_hrng(struct drbg_state *drbg) { - int err; - /* We do not need an HRNG in test mode. 
*/ if (list_empty(&drbg->test_data.list)) return 0; drbg->jent = crypto_alloc_rng("jitterentropy_rng", 0, 0); - INIT_WORK(&drbg->seed_work, drbg_async_seed); - - drbg->random_ready.notifier_call = drbg_schedule_async_seed; - err = register_random_ready_notifier(&drbg->random_ready); - - switch (err) { - case 0: - break; - - case -EALREADY: - err = 0; - fallthrough; - - default: - drbg->random_ready.notifier_call = NULL; - return err; - } - - return err; + return 0; } /* @@ -1644,11 +1610,6 @@ static int drbg_instantiate(struct drbg_state *drbg, struct drbg_string *pers, */ static int drbg_uninstantiate(struct drbg_state *drbg) { - if (drbg->random_ready.notifier_call) { - unregister_random_ready_notifier(&drbg->random_ready); - cancel_work_sync(&drbg->seed_work); - } - if (!IS_ERR_OR_NULL(drbg->jent)) crypto_free_rng(drbg->jent); drbg->jent = NULL; diff --git a/drivers/char/random.c b/drivers/char/random.c index 00b50ccc9fae..c206db96f60a 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -163,7 +163,6 @@ int __cold register_random_ready_notifier(struct notifier_block *nb) spin_unlock_irqrestore(&random_ready_chain_lock, flags); return ret; } -EXPORT_SYMBOL(register_random_ready_notifier); /* * Delete a previously registered readiness callback function. @@ -178,7 +177,6 @@ int __cold unregister_random_ready_notifier(struct notifier_block *nb) spin_unlock_irqrestore(&random_ready_chain_lock, flags); return ret; } -EXPORT_SYMBOL(unregister_random_ready_notifier); static void __cold process_random_ready_list(void) { diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h index 01caab5e65de..a6c3b8e7deb6 100644 --- a/include/crypto/drbg.h +++ b/include/crypto/drbg.h @@ -137,12 +137,10 @@ struct drbg_state { bool pr; /* Prediction resistance enabled? */ bool fips_primed; /* Continuous test primed? */ unsigned char *prev; /* FIPS 140-2 continuous test value */ - struct work_struct seed_work; /* asynchronous seeding support */ struct crypto_rng *jent; const struct drbg_state_ops *d_ops; const struct drbg_core *core; struct drbg_string test_data; - struct notifier_block random_ready; }; static inline __u8 drbg_statelen(struct drbg_state *drbg) From c0aff1faf66b6b7a19103f83e6a5d0fdc64b9048 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Fri, 27 May 2022 09:56:18 +0200 Subject: [PATCH 0289/1842] netfilter: nf_tables: sanitize nft_set_desc_concat_parse() commit fecf31ee395b0295f2d7260aa29946b7605f7c85 upstream. Add several sanity checks for nft_set_desc_concat_parse(): - validate desc->field_count not larger than desc->field_len array. - field length cannot be larger than desc->field_len (ie. U8_MAX) - total length of the concatenation cannot be larger than register array. Joint work with Florian Westphal. 
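The three rules listed above, collected into a small stand-alone C sketch. TOY_FIELD_MAX and TOY_REG32_COUNT are placeholder limits, not the real nf_tables values, and the helper names are illustrative; the actual kernel change follows in the diff below.

    #include <stdint.h>
    #include <limits.h>

    #define TOY_FIELD_MAX   16      /* placeholder for ARRAY_SIZE(desc->field_len) */
    #define TOY_REG32_COUNT 20      /* placeholder for NFT_REG32_COUNT */

    struct toy_desc {
            uint8_t field_len[TOY_FIELD_MAX];
            unsigned int field_count;
    };

    /* one concatenated field: bound the count and the per-field length */
    static int toy_add_field(struct toy_desc *desc, uint32_t len)
    {
            if (desc->field_count >= TOY_FIELD_MAX)
                    return -1;                      /* -E2BIG in the kernel */
            if (len == 0 || len > UINT8_MAX)
                    return -1;                      /* -EINVAL in the kernel */
            desc->field_len[desc->field_count++] = (uint8_t)len;
            return 0;
    }

    /* whole concatenation: total register usage must fit the register file */
    static int toy_check_total(const struct toy_desc *desc)
    {
            unsigned int i, num_regs = 0;

            for (i = 0; i < desc->field_count; i++)
                    num_regs += (desc->field_len[i] + 3) / 4;   /* DIV_ROUND_UP(len, 4) */

            return num_regs > TOY_REG32_COUNT ? -1 : 0;
    }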
Fixes: f3a2181e16f1 ("netfilter: nf_tables: Support for sets with multiple ranged fields") Reported-by: Reviewed-by: Stefano Brivio Signed-off-by: Florian Westphal Signed-off-by: Pablo Neira Ayuso Signed-off-by: Greg Kroah-Hartman --- net/netfilter/nf_tables_api.c | 17 +++++++++++++---- 1 file changed, 13 insertions(+), 4 deletions(-) diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index c837ce4e5fe6..ea162e36e0e4 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -4051,6 +4051,9 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr, u32 len; int err; + if (desc->field_count >= ARRAY_SIZE(desc->field_len)) + return -E2BIG; + err = nla_parse_nested_deprecated(tb, NFTA_SET_FIELD_MAX, attr, nft_concat_policy, NULL); if (err < 0) @@ -4060,9 +4063,8 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr, return -EINVAL; len = ntohl(nla_get_be32(tb[NFTA_SET_FIELD_LEN])); - - if (len * BITS_PER_BYTE / 32 > NFT_REG32_COUNT) - return -E2BIG; + if (!len || len > U8_MAX) + return -EINVAL; desc->field_len[desc->field_count++] = len; @@ -4073,7 +4075,8 @@ static int nft_set_desc_concat(struct nft_set_desc *desc, const struct nlattr *nla) { struct nlattr *attr; - int rem, err; + u32 num_regs = 0; + int rem, err, i; nla_for_each_nested(attr, nla, rem) { if (nla_type(attr) != NFTA_LIST_ELEM) @@ -4084,6 +4087,12 @@ static int nft_set_desc_concat(struct nft_set_desc *desc, return err; } + for (i = 0; i < desc->field_count; i++) + num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32)); + + if (num_regs > NFT_REG32_COUNT) + return -E2BIG; + return 0; } From 91a36ec160ec1a0c8f5352b772dffcbb0b6023e3 Mon Sep 17 00:00:00 2001 From: Florian Westphal Date: Fri, 20 May 2022 00:02:04 +0200 Subject: [PATCH 0290/1842] netfilter: conntrack: re-fetch conntrack after insertion commit 56b14ecec97f39118bf85c9ac2438c5a949509ed upstream. In case the conntrack is clashing, insertion can free skb->_nfct and set skb->_nfct to the already-confirmed entry. This wasn't found before because the conntrack entry and the extension space used to free'd after an rcu grace period, plus the race needs events enabled to trigger. Reported-by: Fixes: 71d8c47fc653 ("netfilter: conntrack: introduce clash resolution on insertion race") Fixes: 2ad9d7747c10 ("netfilter: conntrack: free extension area immediately") Signed-off-by: Florian Westphal Signed-off-by: Pablo Neira Ayuso Signed-off-by: Greg Kroah-Hartman --- include/net/netfilter/nf_conntrack_core.h | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/include/net/netfilter/nf_conntrack_core.h b/include/net/netfilter/nf_conntrack_core.h index 09f2efea0b97..5805fe4947f3 100644 --- a/include/net/netfilter/nf_conntrack_core.h +++ b/include/net/netfilter/nf_conntrack_core.h @@ -59,8 +59,13 @@ static inline int nf_conntrack_confirm(struct sk_buff *skb) int ret = NF_ACCEPT; if (ct) { - if (!nf_ct_is_confirmed(ct)) + if (!nf_ct_is_confirmed(ct)) { ret = __nf_conntrack_confirm(skb); + + if (ret == NF_ACCEPT) + ct = (struct nf_conn *)skb_nfct(skb); + } + if (likely(ret == NF_ACCEPT)) nf_ct_deliver_cached_events(ct); } From 4c4a11c74adac284534f3db927c726bd419bbacb Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Thu, 14 Apr 2022 14:21:03 +0800 Subject: [PATCH 0291/1842] KVM: PPC: Book3S HV: fix incorrect NULL check on list iterator commit 300981abddcb13f8f06ad58f52358b53a8096775 upstream. 
The bug is here: if (!p) return ret; The list iterator value 'p' will *always* be set and non-NULL by list_for_each_entry(), so it is incorrect to assume that the iterator value will be NULL if the list is empty or no element is found. To fix the bug, Use a new value 'iter' as the list iterator, while use the old value 'p' as a dedicated variable to point to the found element. Fixes: dfaa973ae960 ("KVM: PPC: Book3S HV: In H_SVM_INIT_DONE, migrate remaining normal-GFNs to secure-GFNs") Cc: stable@vger.kernel.org # v5.9+ Signed-off-by: Xiaomeng Tong Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220414062103.8153-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/kvm/book3s_hv_uvmem.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c index 84e5a2dc8be5..3dd58b4ee33e 100644 --- a/arch/powerpc/kvm/book3s_hv_uvmem.c +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c @@ -359,13 +359,15 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm, static bool kvmppc_next_nontransitioned_gfn(const struct kvm_memory_slot *memslot, struct kvm *kvm, unsigned long *gfn) { - struct kvmppc_uvmem_slot *p; + struct kvmppc_uvmem_slot *p = NULL, *iter; bool ret = false; unsigned long i; - list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) - if (*gfn >= p->base_pfn && *gfn < p->base_pfn + p->nr_pfns) + list_for_each_entry(iter, &kvm->arch.uvmem_pfns, list) + if (*gfn >= iter->base_pfn && *gfn < iter->base_pfn + iter->nr_pfns) { + p = iter; break; + } if (!p) return ret; /* From 4a9f3a9c28a6966c699b4264b6a3c5aaed21ea3e Mon Sep 17 00:00:00 2001 From: Sean Christopherson Date: Thu, 19 May 2022 07:57:11 -0700 Subject: [PATCH 0292/1842] x86/kvm: Alloc dummy async #PF token outside of raw spinlock commit 0547758a6de3cc71a0cfdd031a3621a30db6a68b upstream. Drop the raw spinlock in kvm_async_pf_task_wake() before allocating the the dummy async #PF token, the allocator is preemptible on PREEMPT_RT kernels and must not be called from truly atomic contexts. Opportunistically document why it's ok to loop on allocation failure, i.e. why the function won't get stuck in an infinite loop. Reported-by: Yajun Deng Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/kvm.c | 43 ++++++++++++++++++++++++++++--------------- 1 file changed, 28 insertions(+), 15 deletions(-) diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 6c3d38b5a8ad..c2398509aebf 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -188,7 +188,7 @@ void kvm_async_pf_task_wake(u32 token) { u32 key = hash_32(token, KVM_TASK_SLEEP_HASHBITS); struct kvm_task_sleep_head *b = &async_pf_sleepers[key]; - struct kvm_task_sleep_node *n; + struct kvm_task_sleep_node *n, *dummy = NULL; if (token == ~0) { apf_task_wake_all(); @@ -200,28 +200,41 @@ void kvm_async_pf_task_wake(u32 token) n = _find_apf_task(b, token); if (!n) { /* - * async PF was not yet handled. - * Add dummy entry for the token. + * Async #PF not yet handled, add a dummy entry for the token. + * Allocating the token must be down outside of the raw lock + * as the allocator is preemptible on PREEMPT_RT kernels. */ - n = kzalloc(sizeof(*n), GFP_ATOMIC); - if (!n) { - /* - * Allocation failed! Busy wait while other cpu - * handles async PF. 
- */ + if (!dummy) { raw_spin_unlock(&b->lock); - cpu_relax(); + dummy = kzalloc(sizeof(*dummy), GFP_KERNEL); + + /* + * Continue looping on allocation failure, eventually + * the async #PF will be handled and allocating a new + * node will be unnecessary. + */ + if (!dummy) + cpu_relax(); + + /* + * Recheck for async #PF completion before enqueueing + * the dummy token to avoid duplicate list entries. + */ goto again; } - n->token = token; - n->cpu = smp_processor_id(); - init_swait_queue_head(&n->wq); - hlist_add_head(&n->link, &b->list); + dummy->token = token; + dummy->cpu = smp_processor_id(); + init_swait_queue_head(&dummy->wq); + hlist_add_head(&dummy->link, &b->list); + dummy = NULL; } else { apf_task_wake_one(n); } raw_spin_unlock(&b->lock); - return; + + /* A dummy token might be allocated and ultimately not used. */ + if (dummy) + kfree(dummy); } EXPORT_SYMBOL_GPL(kvm_async_pf_task_wake); From a2a3fa5b616a1b4caaf1c352051e169471296d4b Mon Sep 17 00:00:00 2001 From: Paolo Bonzini Date: Tue, 24 May 2022 09:43:31 -0400 Subject: [PATCH 0293/1842] x86, kvm: use correct GFP flags for preemption disabled commit baec4f5a018fe2d708fc1022330dba04b38b5fe3 upstream. Commit ddd7ed842627 ("x86/kvm: Alloc dummy async #PF token outside of raw spinlock") leads to the following Smatch static checker warning: arch/x86/kernel/kvm.c:212 kvm_async_pf_task_wake() warn: sleeping in atomic context arch/x86/kernel/kvm.c 202 raw_spin_lock(&b->lock); 203 n = _find_apf_task(b, token); 204 if (!n) { 205 /* 206 * Async #PF not yet handled, add a dummy entry for the token. 207 * Allocating the token must be down outside of the raw lock 208 * as the allocator is preemptible on PREEMPT_RT kernels. 209 */ 210 if (!dummy) { 211 raw_spin_unlock(&b->lock); --> 212 dummy = kzalloc(sizeof(*dummy), GFP_KERNEL); ^^^^^^^^^^ Smatch thinks the caller has preempt disabled. The `smdb.py preempt kvm_async_pf_task_wake` output call tree is: sysvec_kvm_asyncpf_interrupt() <- disables preempt -> __sysvec_kvm_asyncpf_interrupt() -> kvm_async_pf_task_wake() The caller is this: arch/x86/kernel/kvm.c 290 DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_asyncpf_interrupt) 291 { 292 struct pt_regs *old_regs = set_irq_regs(regs); 293 u32 token; 294 295 ack_APIC_irq(); 296 297 inc_irq_stat(irq_hv_callback_count); 298 299 if (__this_cpu_read(apf_reason.enabled)) { 300 token = __this_cpu_read(apf_reason.token); 301 kvm_async_pf_task_wake(token); 302 __this_cpu_write(apf_reason.token, 0); 303 wrmsrl(MSR_KVM_ASYNC_PF_ACK, 1); 304 } 305 306 set_irq_regs(old_regs); 307 } The DEFINE_IDTENTRY_SYSVEC() is a wrapper that calls this function from the call_on_irqstack_cond(). It's inside the call_on_irqstack_cond() where preempt is disabled (unless it's already disabled). The irq_enter/exit_rcu() functions disable/enable preempt. 
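The rule being applied here, as a hedged kernel-style sketch (a hypothetical helper, not code from this patch): once preemption is disabled, only non-sleeping allocations are legal, so GFP_KERNEL has to give way to GFP_ATOMIC.

    #include <linux/preempt.h>
    #include <linux/slab.h>

    static void *toy_alloc_in_atomic_section(size_t size)
    {
            void *p;

            preempt_disable();
            /*
             * GFP_KERNEL may sleep and must not be used here;
             * GFP_ATOMIC never sleeps (but can fail under memory pressure).
             */
            p = kzalloc(size, GFP_ATOMIC);
            preempt_enable();

            return p;
    }

In the kvm_async_pf_task_wake() case the IDT entry path has already disabled preemption for the caller, which is why the GFP_KERNEL allocation introduced by the previous patch is switched to GFP_ATOMIC below.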
Reported-by: Dan Carpenter Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/kvm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index c2398509aebf..971609fb15c5 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -206,7 +206,7 @@ void kvm_async_pf_task_wake(u32 token) */ if (!dummy) { raw_spin_unlock(&b->lock); - dummy = kzalloc(sizeof(*dummy), GFP_KERNEL); + dummy = kzalloc(sizeof(*dummy), GFP_ATOMIC); /* * Continue looping on allocation failure, eventually From 3d8fc6e28f321d753ab727e3c3e740daf36a8fa3 Mon Sep 17 00:00:00 2001 From: Sean Christopherson Date: Fri, 11 Mar 2022 03:27:41 +0000 Subject: [PATCH 0294/1842] KVM: x86: avoid calling x86 emulator without a decoded instruction commit fee060cd52d69c114b62d1a2948ea9648b5131f9 upstream. Whenever x86_decode_emulated_instruction() detects a breakpoint, it returns the value that kvm_vcpu_check_breakpoint() writes into its pass-by-reference second argument. Unfortunately this is completely bogus because the expected outcome of x86_decode_emulated_instruction is an EMULATION_* value. Then, if kvm_vcpu_check_breakpoint() does "*r = 0" (corresponding to a KVM_EXIT_DEBUG userspace exit), it is misunderstood as EMULATION_OK and x86_emulate_instruction() is called without having decoded the instruction. This causes various havoc from running with a stale emulation context. The fix is to move the call to kvm_vcpu_check_breakpoint() where it was before commit 4aa2691dcbd3 ("KVM: x86: Factor out x86 instruction emulation with decoding") introduced x86_decode_emulated_instruction(). The other caller of the function does not need breakpoint checks, because it is invoked as part of a vmexit and the processor has already checked those before executing the instruction that #GP'd. This fixes CVE-2022-1852. Reported-by: Qiuhao Li Reported-by: Gaoning Pan Reported-by: Yongkang Jia Fixes: 4aa2691dcbd3 ("KVM: x86: Factor out x86 instruction emulation with decoding") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson Message-Id: <20220311032801.3467418-2-seanjc@google.com> [Rewrote commit message according to Qiuhao's report, since a patch already existed to fix the bug. - Paolo] Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/x86.c | 31 +++++++++++++++++++------------ 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ae18062c26a6..d9cec5daa1ff 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7295,7 +7295,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu) } EXPORT_SYMBOL_GPL(kvm_skip_emulated_instruction); -static bool kvm_vcpu_check_breakpoint(struct kvm_vcpu *vcpu, int *r) +static bool kvm_vcpu_check_code_breakpoint(struct kvm_vcpu *vcpu, int *r) { if (unlikely(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) && (vcpu->arch.guest_debug_dr7 & DR7_BP_EN_MASK)) { @@ -7364,25 +7364,23 @@ static bool is_vmware_backdoor_opcode(struct x86_emulate_ctxt *ctxt) } /* - * Decode to be emulated instruction. Return EMULATION_OK if success. + * Decode an instruction for emulation. The caller is responsible for handling + * code breakpoints. Note, manually detecting code breakpoints is unnecessary + * (and wrong) when emulating on an intercepted fault-like exception[*], as + * code breakpoints have higher priority and thus have already been done by + * hardware. 
+ * + * [*] Except #MC, which is higher priority, but KVM should never emulate in + * response to a machine check. */ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type, void *insn, int insn_len) { - int r = EMULATION_OK; struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt; + int r; init_emulate_ctxt(vcpu); - /* - * We will reenter on the same instruction since we do not set - * complete_userspace_io. This does not handle watchpoints yet, - * those would be handled in the emulate_ops. - */ - if (!(emulation_type & EMULTYPE_SKIP) && - kvm_vcpu_check_breakpoint(vcpu, &r)) - return r; - ctxt->ud = emulation_type & EMULTYPE_TRAP_UD; r = x86_decode_insn(ctxt, insn, insn_len); @@ -7417,6 +7415,15 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, if (!(emulation_type & EMULTYPE_NO_DECODE)) { kvm_clear_exception_queue(vcpu); + /* + * Return immediately if RIP hits a code breakpoint, such #DBs + * are fault-like and are higher priority than any faults on + * the code fetch itself. + */ + if (!(emulation_type & EMULTYPE_SKIP) && + kvm_vcpu_check_code_breakpoint(vcpu, &r)) + return r; + r = x86_decode_emulated_instruction(vcpu, emulation_type, insn, insn_len); if (r != EMULATION_OK) { From c013f7d1cd92d945398c63a7d6a8b0dd99c23679 Mon Sep 17 00:00:00 2001 From: Fabio Estevam Date: Wed, 20 Apr 2022 09:06:01 -0300 Subject: [PATCH 0295/1842] crypto: caam - fix i.MX6SX entropy delay value MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 4ee4cdad368a26de3967f2975806a9ee2fa245df upstream. Since commit 358ba762d9f1 ("crypto: caam - enable prediction resistance in HRWNG") the following CAAM errors can be seen on i.MX6SX: caam_jr 2101000.jr: 20003c5b: CCB: desc idx 60: RNG: Hardware error hwrng: no data available This error is due to an incorrect entropy delay for i.MX6SX. Fix it by increasing the minimum entropy delay for i.MX6SX as done in U-Boot: https://patchwork.ozlabs.org/project/uboot/patch/20220415111049.2565744-1-gaurav.jain@nxp.com/ As explained in the U-Boot patch: "RNG self tests are run to determine the correct entropy delay. Such tests are executed with different voltages and temperatures to identify the worst case value for the entropy delay. For i.MX6SX, it was determined that after adding a margin value of 1000 the minimum entropy delay should be at least 12000." Cc: Fixes: 358ba762d9f1 ("crypto: caam - enable prediction resistance in HRWNG") Signed-off-by: Fabio Estevam Reviewed-by: Horia Geantă Reviewed-by: Vabhav Sharma Reviewed-by: Gaurav Jain Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman --- drivers/crypto/caam/ctrl.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c index ca0361b2dbb0..f87aa2169e5f 100644 --- a/drivers/crypto/caam/ctrl.c +++ b/drivers/crypto/caam/ctrl.c @@ -609,6 +609,13 @@ static bool check_version(struct fsl_mc_version *mc_version, u32 major, } #endif +static bool needs_entropy_delay_adjustment(void) +{ + if (of_machine_is_compatible("fsl,imx6sx")) + return true; + return false; +} + /* Probe routine for CAAM top (controller) level */ static int caam_probe(struct platform_device *pdev) { @@ -855,6 +862,8 @@ static int caam_probe(struct platform_device *pdev) * Also, if a handle was instantiated, do not change * the TRNG parameters. 
*/ + if (needs_entropy_delay_adjustment()) + ent_delay = 12000; if (!(ctrlpriv->rng4_sh_init || inst_handles)) { dev_info(dev, "Entropy delay = %u\n", @@ -871,6 +880,15 @@ static int caam_probe(struct platform_device *pdev) */ ret = instantiate_rng(dev, inst_handles, gen_sk); + /* + * Entropy delay is determined via TRNG characterization. + * TRNG characterization is run across different voltages + * and temperatures. + * If worst case value for ent_dly is identified, + * the loop can be skipped for that platform. + */ + if (needs_entropy_delay_adjustment()) + break; if (ret == -EAGAIN) /* * if here, the loop will rerun, From 6a1cc25494056e6b8dff243f8b3d9c57259535f6 Mon Sep 17 00:00:00 2001 From: Vitaly Chikunov Date: Thu, 21 Apr 2022 20:25:10 +0300 Subject: [PATCH 0296/1842] crypto: ecrdsa - Fix incorrect use of vli_cmp commit 7cc7ab73f83ee6d50dc9536bc3355495d8600fad upstream. Correctly compare values that shall be greater-or-equal and not just greater. Fixes: 0d7a78643f69 ("crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithm") Cc: Signed-off-by: Vitaly Chikunov Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman --- crypto/ecrdsa.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/crypto/ecrdsa.c b/crypto/ecrdsa.c index 6a3fd09057d0..f7ed43020672 100644 --- a/crypto/ecrdsa.c +++ b/crypto/ecrdsa.c @@ -113,15 +113,15 @@ static int ecrdsa_verify(struct akcipher_request *req) /* Step 1: verify that 0 < r < q, 0 < s < q */ if (vli_is_zero(r, ndigits) || - vli_cmp(r, ctx->curve->n, ndigits) == 1 || + vli_cmp(r, ctx->curve->n, ndigits) >= 0 || vli_is_zero(s, ndigits) || - vli_cmp(s, ctx->curve->n, ndigits) == 1) + vli_cmp(s, ctx->curve->n, ndigits) >= 0) return -EKEYREJECTED; /* Step 2: calculate hash (h) of the message (passed as input) */ /* Step 3: calculate e = h \mod q */ vli_from_le64(e, digest, ndigits); - if (vli_cmp(e, ctx->curve->n, ndigits) == 1) + if (vli_cmp(e, ctx->curve->n, ndigits) >= 0) vli_sub(e, e, ctx->curve->n, ndigits); if (vli_is_zero(e, ndigits)) e[0] = 1; @@ -137,7 +137,7 @@ static int ecrdsa_verify(struct akcipher_request *req) /* Step 6: calculate point C = z_1P + z_2Q, and R = x_c \mod q */ ecc_point_mult_shamir(&cc, z1, &ctx->curve->g, z2, &ctx->pub_key, ctx->curve); - if (vli_cmp(cc.x, ctx->curve->n, ndigits) == 1) + if (vli_cmp(cc.x, ctx->curve->n, ndigits) >= 0) vli_sub(cc.x, cc.x, ctx->curve->n, ndigits); /* Step 7: if R == r signature is valid */ From fae05b2314b147a78fbed1dc4c645d9a66313758 Mon Sep 17 00:00:00 2001 From: Sultan Alsawaf Date: Fri, 13 May 2022 15:11:26 -0700 Subject: [PATCH 0297/1842] zsmalloc: fix races between asynchronous zspage free and page migration commit 2505a981114dcb715f8977b8433f7540854851d8 upstream. The asynchronous zspage free worker tries to lock a zspage's entire page list without defending against page migration. Since pages which haven't yet been locked can concurrently migrate off the zspage page list while lock_zspage() churns away, lock_zspage() can suffer from a few different lethal races. It can lock a page which no longer belongs to the zspage and unsafely dereference page_private(), it can unsafely dereference a torn pointer to the next page (since there's a data race), and it can observe a spurious NULL pointer to the next page and thus not lock all of the zspage's pages (since a single page migration will reconstruct the entire page list, and create_page_chain() unconditionally zeroes out each list pointer in the process). 
Fix the races by using migrate_read_lock() in lock_zspage() to synchronize with page migration. Link: https://lkml.kernel.org/r/20220509024703.243847-1-sultan@kerneltoast.com Fixes: 77ff465799c602 ("zsmalloc: zs_page_migrate: skip unnecessary loops but not return -EBUSY if zspage is not inuse") Signed-off-by: Sultan Alsawaf Acked-by: Minchan Kim Cc: Nitin Gupta Cc: Sergey Senozhatsky Cc: Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++---- 1 file changed, 33 insertions(+), 4 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 73cd50735df2..c18dc8e61d35 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -1748,11 +1748,40 @@ static enum fullness_group putback_zspage(struct size_class *class, */ static void lock_zspage(struct zspage *zspage) { - struct page *page = get_first_page(zspage); + struct page *curr_page, *page; - do { - lock_page(page); - } while ((page = get_next_page(page)) != NULL); + /* + * Pages we haven't locked yet can be migrated off the list while we're + * trying to lock them, so we need to be careful and only attempt to + * lock each page under migrate_read_lock(). Otherwise, the page we lock + * may no longer belong to the zspage. This means that we may wait for + * the wrong page to unlock, so we must take a reference to the page + * prior to waiting for it to unlock outside migrate_read_lock(). + */ + while (1) { + migrate_read_lock(zspage); + page = get_first_page(zspage); + if (trylock_page(page)) + break; + get_page(page); + migrate_read_unlock(zspage); + wait_on_page_locked(page); + put_page(page); + } + + curr_page = page; + while ((page = get_next_page(curr_page))) { + if (trylock_page(page)) { + curr_page = page; + } else { + get_page(page); + migrate_read_unlock(zspage); + wait_on_page_locked(page); + put_page(page); + migrate_read_lock(zspage); + } + } + migrate_read_unlock(zspage); } static int zs_init_fs_context(struct fs_context *fc) From 4989bb03342941f2b730b37dfa38bce27b543661 Mon Sep 17 00:00:00 2001 From: Steven Rostedt Date: Tue, 5 Apr 2022 10:02:00 -0400 Subject: [PATCH 0298/1842] Bluetooth: hci_qca: Use del_timer_sync() before freeing commit 72ef98445aca568a81c2da050532500a8345ad3a upstream. While looking at a crash report on a timer list being corrupted, which usually happens when a timer is freed while still active. This is commonly triggered by code calling del_timer() instead of del_timer_sync() just before freeing. One possible culprit is the hci_qca driver, which does exactly that. Eric mentioned that wake_retrans_timer could be rearmed via the work queue, so also move the destruction of the work queue before del_timer_sync(). 
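The teardown ordering the hci_qca patch establishes, as a kernel-style sketch for a hypothetical driver (the toy_dev names are illustrative, not the hci_qca structures): destroy the workqueue whose work can re-arm the timer first, then synchronously delete the timer, and only then free the memory the timer handler touches.

    #include <linux/slab.h>
    #include <linux/timer.h>
    #include <linux/workqueue.h>

    struct toy_dev {
            struct timer_list idle_timer;
            struct workqueue_struct *wq;    /* queued work may re-arm idle_timer */
    };

    static void toy_dev_close(struct toy_dev *dev)
    {
            /* after this, no work can run, so nothing can re-arm the timer */
            destroy_workqueue(dev->wq);

            /* waits for a concurrently running timer handler to finish */
            del_timer_sync(&dev->idle_timer);

            /* now it is safe to free the object the timer dereferences */
            kfree(dev);
    }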
Cc: Eric Dumazet Cc: stable@vger.kernel.org Fixes: 0ff252c1976da ("Bluetooth: hciuart: Add support QCA chipset for UART") Signed-off-by: Steven Rostedt (Google) Signed-off-by: Marcel Holtmann Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/hci_qca.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c index dc7ee5dd2eec..eea18aed17f8 100644 --- a/drivers/bluetooth/hci_qca.c +++ b/drivers/bluetooth/hci_qca.c @@ -689,9 +689,9 @@ static int qca_close(struct hci_uart *hu) skb_queue_purge(&qca->tx_wait_q); skb_queue_purge(&qca->txq); skb_queue_purge(&qca->rx_memdump_q); - del_timer(&qca->tx_idle_timer); - del_timer(&qca->wake_retrans_timer); destroy_workqueue(qca->workqueue); + del_timer_sync(&qca->tx_idle_timer); + del_timer_sync(&qca->wake_retrans_timer); qca->hu = NULL; kfree_skb(qca->rx_skb); From 8845027e55fc8b977607b4576ca6efd5d8d4566d Mon Sep 17 00:00:00 2001 From: Jonathan Bakker Date: Sun, 27 Mar 2022 11:08:51 -0700 Subject: [PATCH 0299/1842] ARM: dts: s5pv210: Correct interrupt name for bluetooth in Aries commit 3f5e3d3a8b895c8a11da8b0063ba2022dd9e2045 upstream. Correct the name of the bluetooth interrupt from host-wake to host-wakeup. Fixes: 1c65b6184441b ("ARM: dts: s5pv210: Correct BCM4329 bluetooth node") Cc: Signed-off-by: Jonathan Bakker Link: https://lore.kernel.org/r/CY4PR04MB0567495CFCBDC8D408D44199CB1C9@CY4PR04MB0567.namprd04.prod.outlook.com Signed-off-by: Krzysztof Kozlowski Signed-off-by: Greg Kroah-Hartman --- arch/arm/boot/dts/s5pv210-aries.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi index bd4450dbdcb6..986fa0b1a877 100644 --- a/arch/arm/boot/dts/s5pv210-aries.dtsi +++ b/arch/arm/boot/dts/s5pv210-aries.dtsi @@ -896,7 +896,7 @@ bluetooth { device-wakeup-gpios = <&gpg3 4 GPIO_ACTIVE_HIGH>; interrupt-parent = <&gph2>; interrupts = <5 IRQ_TYPE_LEVEL_HIGH>; - interrupt-names = "host-wake"; + interrupt-names = "host-wakeup"; }; }; From bb64957c472adc90eb7dbb45db95019d7a574088 Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Mon, 25 Apr 2022 14:56:48 +0300 Subject: [PATCH 0300/1842] dm integrity: fix error code in dm_integrity_ctr() commit d3f2a14b8906df913cb04a706367b012db94a6e8 upstream. The "r" variable shadows an earlier "r" that has function scope. It means that we accidentally return success instead of an error code. Smatch has a warning for this: drivers/md/dm-integrity.c:4503 dm_integrity_ctr() warn: missing error code 'r' Fixes: 7eada909bfd7 ("dm: add integrity target") Cc: stable@vger.kernel.org Signed-off-by: Dan Carpenter Reviewed-by: Mikulas Patocka Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/dm-integrity.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c index 6f085e96c3f3..835b1f3464d0 100644 --- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -4327,8 +4327,6 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv) } if (should_write_sb) { - int r; - init_journal(ic, 0, ic->journal_sections, 0); r = dm_integrity_failed(ic); if (unlikely(r)) { From 4617778417d0a8c59f309b5eea21d943877f3c74 Mon Sep 17 00:00:00 2001 From: Mikulas Patocka Date: Mon, 25 Apr 2022 08:53:29 -0400 Subject: [PATCH 0301/1842] dm crypt: make printing of the key constant-time commit 567dd8f34560fa221a6343729474536aa7ede4fd upstream. 
The device mapper dm-crypt target is using scnprintf("%02x", cc->key[i]) to report the current key to userspace. However, this is not a constant-time operation and it may leak information about the key via timing, via cache access patterns or via the branch predictor. Change dm-crypt's key printing to use "%c" instead of "%02x". Also introduce hex2asc() that carefully avoids any branching or memory accesses when converting a number in the range 0 ... 15 to an ascii character. Cc: stable@vger.kernel.org Signed-off-by: Mikulas Patocka Tested-by: Milan Broz Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/dm-crypt.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c index b9677f701b6a..3d975db86434 100644 --- a/drivers/md/dm-crypt.c +++ b/drivers/md/dm-crypt.c @@ -3404,6 +3404,11 @@ static int crypt_map(struct dm_target *ti, struct bio *bio) return DM_MAPIO_SUBMITTED; } +static char hex2asc(unsigned char c) +{ + return c + '0' + ((unsigned)(9 - c) >> 4 & 0x27); +} + static void crypt_status(struct dm_target *ti, status_type_t type, unsigned status_flags, char *result, unsigned maxlen) { @@ -3422,9 +3427,12 @@ static void crypt_status(struct dm_target *ti, status_type_t type, if (cc->key_size > 0) { if (cc->key_string) DMEMIT(":%u:%s", cc->key_size, cc->key_string); - else - for (i = 0; i < cc->key_size; i++) - DMEMIT("%02x", cc->key[i]); + else { + for (i = 0; i < cc->key_size; i++) { + DMEMIT("%c%c", hex2asc(cc->key[i] >> 4), + hex2asc(cc->key[i] & 0xf)); + } + } } else DMEMIT("-"); From e39b536d70edc5f622187cf787db94287e389c50 Mon Sep 17 00:00:00 2001 From: Mikulas Patocka Date: Sun, 24 Apr 2022 16:43:00 -0400 Subject: [PATCH 0302/1842] dm stats: add cond_resched when looping over entries commit bfe2b0146c4d0230b68f5c71a64380ff8d361f8b upstream. dm-stats can be used with a very large number of entries (it is only limited by 1/4 of total system memory), so add rescheduling points to the loops that iterate over the entries. 
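The pattern dm-stats adopts, shown as a minimal kernel-style sketch (a hypothetical walk over a large array, not the dm-stats code): dropping a cond_resched() into each iteration lets the scheduler preempt an otherwise very long kernel-side loop.

    #include <linux/sched.h>

    static void toy_clear_entries(unsigned long *table, unsigned long n_entries)
    {
            unsigned long i;

            for (i = 0; i < n_entries; i++) {
                    table[i] = 0;           /* per-entry work */
                    cond_resched();         /* yield if a reschedule is pending */
            }
    }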
Cc: stable@vger.kernel.org Signed-off-by: Mikulas Patocka Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/dm-stats.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c index 35d368c418d0..55443a6598fa 100644 --- a/drivers/md/dm-stats.c +++ b/drivers/md/dm-stats.c @@ -224,6 +224,7 @@ void dm_stats_cleanup(struct dm_stats *stats) atomic_read(&shared->in_flight[READ]), atomic_read(&shared->in_flight[WRITE])); } + cond_resched(); } dm_stat_free(&s->rcu_head); } @@ -313,6 +314,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end, for (ni = 0; ni < n_entries; ni++) { atomic_set(&s->stat_shared[ni].in_flight[READ], 0); atomic_set(&s->stat_shared[ni].in_flight[WRITE], 0); + cond_resched(); } if (s->n_histogram_entries) { @@ -325,6 +327,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end, for (ni = 0; ni < n_entries; ni++) { s->stat_shared[ni].tmp.histogram = hi; hi += s->n_histogram_entries + 1; + cond_resched(); } } @@ -345,6 +348,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end, for (ni = 0; ni < n_entries; ni++) { p[ni].histogram = hi; hi += s->n_histogram_entries + 1; + cond_resched(); } } } @@ -474,6 +478,7 @@ static int dm_stats_list(struct dm_stats *stats, const char *program, } DMEMIT("\n"); } + cond_resched(); } mutex_unlock(&stats->mutex); @@ -750,6 +755,7 @@ static void __dm_stat_clear(struct dm_stat *s, size_t idx_start, size_t idx_end, local_irq_enable(); } } + cond_resched(); } } @@ -865,6 +871,8 @@ static int dm_stats_print(struct dm_stats *stats, int id, if (unlikely(sz + 1 >= maxlen)) goto buffer_overflow; + + cond_resched(); } if (clear) From 8df42bcd364cc3b41105215d841792aea787b133 Mon Sep 17 00:00:00 2001 From: Sarthak Kukreti Date: Tue, 31 May 2022 15:56:40 -0400 Subject: [PATCH 0303/1842] dm verity: set DM_TARGET_IMMUTABLE feature flag commit 4caae58406f8ceb741603eee460d79bacca9b1b5 upstream. The device-mapper framework provides a mechanism to mark targets as immutable (and hence fail table reloads that try to change the target type). Add the DM_TARGET_IMMUTABLE flag to the dm-verity target's feature flags to prevent switching the verity target with a different target type. Fixes: a4ffc152198e ("dm: add verity target") Cc: stable@vger.kernel.org Signed-off-by: Sarthak Kukreti Reviewed-by: Kees Cook Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/dm-verity-target.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 808a98ef624c..c801f6b93b7b 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -1242,6 +1242,7 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv) static struct target_type verity_target = { .name = "verity", + .features = DM_TARGET_IMMUTABLE, .version = {1, 7, 0}, .module = THIS_MODULE, .ctr = verity_ctr, From 0f03885059c1f2a5fb690d21578d0cad55a98b1f Mon Sep 17 00:00:00 2001 From: Mariusz Tkaczyk Date: Tue, 22 Mar 2022 16:23:39 +0100 Subject: [PATCH 0304/1842] raid5: introduce MD_BROKEN commit 57668f0a4cc4083a120cc8c517ca0055c4543b59 upstream. Raid456 module had allowed to achieve failed state. It was fixed by fb73b357fb9 ("raid5: block failing device if raid will be failed"). This fix introduces a bug, now if raid5 fails during IO, it may result with a hung task without completion. 
Faulty flag on the device is necessary to process all requests and is checked many times, mainly in analyze_stripe(). Allow to set faulty on drive again and set MD_BROKEN if raid is failed. As a result, this level is allowed to achieve failed state again, but communication with userspace (via -EBUSY status) will be preserved. This restores possibility to fail array via #mdadm --set-faulty command and will be fixed by additional verification on mdadm side. Reproduction steps: mdadm -CR imsm -e imsm -n 3 /dev/nvme[0-2]n1 mdadm -CR r5 -e imsm -l5 -n3 /dev/nvme[0-2]n1 --assume-clean mkfs.xfs /dev/md126 -f mount /dev/md126 /mnt/root/ fio --filename=/mnt/root/file --size=5GB --direct=1 --rw=randrw --bs=64k --ioengine=libaio --iodepth=64 --runtime=240 --numjobs=4 --time_based --group_reporting --name=throughput-test-job --eta-newline=1 & echo 1 > /sys/block/nvme2n1/device/device/remove echo 1 > /sys/block/nvme1n1/device/device/remove [ 1475.787779] Call Trace: [ 1475.793111] __schedule+0x2a6/0x700 [ 1475.799460] schedule+0x38/0xa0 [ 1475.805454] raid5_get_active_stripe+0x469/0x5f0 [raid456] [ 1475.813856] ? finish_wait+0x80/0x80 [ 1475.820332] raid5_make_request+0x180/0xb40 [raid456] [ 1475.828281] ? finish_wait+0x80/0x80 [ 1475.834727] ? finish_wait+0x80/0x80 [ 1475.841127] ? finish_wait+0x80/0x80 [ 1475.847480] md_handle_request+0x119/0x190 [ 1475.854390] md_make_request+0x8a/0x190 [ 1475.861041] generic_make_request+0xcf/0x310 [ 1475.868145] submit_bio+0x3c/0x160 [ 1475.874355] iomap_dio_submit_bio.isra.20+0x51/0x60 [ 1475.882070] iomap_dio_bio_actor+0x175/0x390 [ 1475.889149] iomap_apply+0xff/0x310 [ 1475.895447] ? iomap_dio_bio_actor+0x390/0x390 [ 1475.902736] ? iomap_dio_bio_actor+0x390/0x390 [ 1475.909974] iomap_dio_rw+0x2f2/0x490 [ 1475.916415] ? iomap_dio_bio_actor+0x390/0x390 [ 1475.923680] ? atime_needs_update+0x77/0xe0 [ 1475.930674] ? xfs_file_dio_aio_read+0x6b/0xe0 [xfs] [ 1475.938455] xfs_file_dio_aio_read+0x6b/0xe0 [xfs] [ 1475.946084] xfs_file_read_iter+0xba/0xd0 [xfs] [ 1475.953403] aio_read+0xd5/0x180 [ 1475.959395] ? _cond_resched+0x15/0x30 [ 1475.965907] io_submit_one+0x20b/0x3c0 [ 1475.972398] __x64_sys_io_submit+0xa2/0x180 [ 1475.979335] ? do_io_getevents+0x7c/0xc0 [ 1475.986009] do_syscall_64+0x5b/0x1a0 [ 1475.992419] entry_SYSCALL_64_after_hwframe+0x65/0xca [ 1476.000255] RIP: 0033:0x7f11fc27978d [ 1476.006631] Code: Bad RIP value. [ 1476.073251] INFO: task fio:3877 blocked for more than 120 seconds. 
Cc: stable@vger.kernel.org Fixes: fb73b357fb9 ("raid5: block failing device if raid will be failed") Reviewd-by: Xiao Ni Signed-off-by: Mariusz Tkaczyk Signed-off-by: Song Liu Signed-off-by: Greg Kroah-Hartman --- drivers/md/raid5.c | 49 ++++++++++++++++++++++------------------------ 1 file changed, 23 insertions(+), 26 deletions(-) diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index c82953a3299e..02767866b9ff 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -686,17 +686,17 @@ int raid5_calc_degraded(struct r5conf *conf) return degraded; } -static int has_failed(struct r5conf *conf) +static bool has_failed(struct r5conf *conf) { - int degraded; + int degraded = conf->mddev->degraded; - if (conf->mddev->reshape_position == MaxSector) - return conf->mddev->degraded > conf->max_degraded; + if (test_bit(MD_BROKEN, &conf->mddev->flags)) + return true; - degraded = raid5_calc_degraded(conf); - if (degraded > conf->max_degraded) - return 1; - return 0; + if (conf->mddev->reshape_position != MaxSector) + degraded = raid5_calc_degraded(conf); + + return degraded > conf->max_degraded; } struct stripe_head * @@ -2877,34 +2877,31 @@ static void raid5_error(struct mddev *mddev, struct md_rdev *rdev) unsigned long flags; pr_debug("raid456: error called\n"); + pr_crit("md/raid:%s: Disk failure on %s, disabling device.\n", + mdname(mddev), bdevname(rdev->bdev, b)); + spin_lock_irqsave(&conf->device_lock, flags); - - if (test_bit(In_sync, &rdev->flags) && - mddev->degraded == conf->max_degraded) { - /* - * Don't allow to achieve failed state - * Don't try to recover this device - */ - conf->recovery_disabled = mddev->recovery_disabled; - spin_unlock_irqrestore(&conf->device_lock, flags); - return; - } - set_bit(Faulty, &rdev->flags); clear_bit(In_sync, &rdev->flags); mddev->degraded = raid5_calc_degraded(conf); + + if (has_failed(conf)) { + set_bit(MD_BROKEN, &conf->mddev->flags); + conf->recovery_disabled = mddev->recovery_disabled; + + pr_crit("md/raid:%s: Cannot continue operation (%d/%d failed).\n", + mdname(mddev), mddev->degraded, conf->raid_disks); + } else { + pr_crit("md/raid:%s: Operation continuing on %d devices.\n", + mdname(mddev), conf->raid_disks - mddev->degraded); + } + spin_unlock_irqrestore(&conf->device_lock, flags); set_bit(MD_RECOVERY_INTR, &mddev->recovery); set_bit(Blocked, &rdev->flags); set_mask_bits(&mddev->sb_flags, 0, BIT(MD_SB_CHANGE_DEVS) | BIT(MD_SB_CHANGE_PENDING)); - pr_crit("md/raid:%s: Disk failure on %s, disabling device.\n" - "md/raid:%s: Operation continuing on %d devices.\n", - mdname(mddev), - bdevname(rdev->bdev, b), - mdname(mddev), - conf->raid_disks - mddev->degraded); r5c_update_on_rdev_error(mddev, rdev); } From d6822d82c0e8d025fbc157755cab17252ad7092b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Marek=20Ma=C5=9Blanka?= Date: Tue, 5 Apr 2022 17:04:07 +0200 Subject: [PATCH 0305/1842] HID: multitouch: Add support for Google Whiskers Touchpad commit 1d07cef7fd7599450b3d03e1915efc2a96e1f03f upstream. The Google Whiskers touchpad does not work properly with the default multitouch configuration. Instead, use the same configuration as Google Rose. 
Signed-off-by: Marek Maslanka Acked-by: Benjamin Tissoires Signed-off-by: Jiri Kosina Signed-off-by: Greg Kroah-Hartman --- drivers/hid/hid-multitouch.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c index e5a3704b9fe8..9e1ec0e01582 100644 --- a/drivers/hid/hid-multitouch.c +++ b/drivers/hid/hid-multitouch.c @@ -2129,6 +2129,9 @@ static const struct hid_device_id mt_devices[] = { { .driver_data = MT_CLS_GOOGLE, HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY, USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_TOUCH_ROSE) }, + { .driver_data = MT_CLS_GOOGLE, + HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, USB_VENDOR_ID_GOOGLE, + USB_DEVICE_ID_GOOGLE_WHISKERS) }, /* Generic MT device */ { HID_DEVICE(HID_BUS_ANY, HID_GROUP_MULTITOUCH, HID_ANY_ID, HID_ANY_ID) }, From 0c56e5d0e65531747c437c608d610a2fa8ecd9fe Mon Sep 17 00:00:00 2001 From: Tao Jin Date: Sun, 3 Apr 2022 12:57:44 -0400 Subject: [PATCH 0306/1842] HID: multitouch: add quirks to enable Lenovo X12 trackpoint commit 95cd2cdc88c755dcd0a58b951faeb77742c733a4 upstream. This applies the similar quirks used by previous generation devices such as X1 tablet for X12 tablet, so that the trackpoint and buttons can work. This patch was applied and tested working on 5.17.1 . Cc: stable@vger.kernel.org # 5.8+ given that it relies on 40d5bb87377a Signed-off-by: Tao Jin Signed-off-by: Benjamin Tissoires Link: https://lore.kernel.org/r/CO6PR03MB6241CB276FCDC7F4CEDC34F6E1E29@CO6PR03MB6241.namprd03.prod.outlook.com Signed-off-by: Greg Kroah-Hartman --- drivers/hid/hid-ids.h | 1 + drivers/hid/hid-multitouch.c | 6 ++++++ 2 files changed, 7 insertions(+) diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index d2e4f9f5507d..3744c3db5140 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -743,6 +743,7 @@ #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085 #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3 #define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5 +#define USB_DEVICE_ID_LENOVO_X12_TAB 0x60fe #define USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E 0x600e #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019 0x6019 diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c index 9e1ec0e01582..d686917cc3b1 100644 --- a/drivers/hid/hid-multitouch.c +++ b/drivers/hid/hid-multitouch.c @@ -1990,6 +1990,12 @@ static const struct hid_device_id mt_devices[] = { USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB3) }, + /* Lenovo X12 TAB Gen 1 */ + { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT, + HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, + USB_VENDOR_ID_LENOVO, + USB_DEVICE_ID_LENOVO_X12_TAB) }, + /* MosArt panels */ { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE, MT_USB_DEVICE(USB_VENDOR_ID_ASUS, From 5933a191ac3d6724833d87bd99bda1d1904cb800 Mon Sep 17 00:00:00 2001 From: Stefan Mahnke-Hartmann Date: Fri, 13 May 2022 15:41:51 +0200 Subject: [PATCH 0307/1842] tpm: Fix buffer access in tpm2_get_tpm_pt() commit e57b2523bd37e6434f4e64c7a685e3715ad21e9a upstream. Under certain conditions uninitialized memory will be accessed. As described by TCG Trusted Platform Module Library Specification, rev. 1.59 (Part 3: Commands), if a TPM2_GetCapability is received, requesting a capability, the TPM in field upgrade mode may return a zero length list. Check the property count in tpm2_get_tpm_pt(). 
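The defensive parse described above, as a small stand-alone sketch with a simplified response layout. The struct and helper names are illustrative, not the kernel's tpm2_get_cap_out, and ntohl() stands in for be32_to_cpu(); the real change follows in the diff below.

    #include <stdint.h>
    #include <arpa/inet.h>          /* ntohl(), standing in for be32_to_cpu() */

    struct toy_get_cap_out {
            uint32_t property_cnt;  /* big-endian on the wire */
            uint32_t value;         /* big-endian on the wire */
    };

    static int toy_read_property(const struct toy_get_cap_out *out, uint32_t *value)
    {
            /* a zero-length list is legal in field upgrade mode: report "no data" */
            if (ntohl(out->property_cnt) == 0)
                    return -1;      /* the patch returns -ENODATA here */

            *value = ntohl(out->value);
            return 0;
    }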
Fixes: 2ab3241161b3 ("tpm: migrate tpm2_get_tpm_pt() to use struct tpm_buf") Cc: stable@vger.kernel.org Signed-off-by: Stefan Mahnke-Hartmann Reviewed-by: Jarkko Sakkinen Signed-off-by: Jarkko Sakkinen Signed-off-by: Greg Kroah-Hartman --- drivers/char/tpm/tpm2-cmd.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c index c84d23951219..d0e11d7a3c08 100644 --- a/drivers/char/tpm/tpm2-cmd.c +++ b/drivers/char/tpm/tpm2-cmd.c @@ -400,7 +400,16 @@ ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id, u32 *value, if (!rc) { out = (struct tpm2_get_cap_out *) &buf.data[TPM_HEADER_SIZE]; - *value = be32_to_cpu(out->value); + /* + * To prevent failing boot up of some systems, Infineon TPM2.0 + * returns SUCCESS on TPM2_Startup in field upgrade mode. Also + * the TPM2_Getcapability command returns a zero length list + * in field upgrade mode. + */ + if (be32_to_cpu(out->property_cnt) > 0) + *value = be32_to_cpu(out->value); + else + rc = -ENODATA; } tpm_buf_destroy(&buf); return rc; From ebbbffae71e2e0f322bf9e3fadb62d2bee0c33b3 Mon Sep 17 00:00:00 2001 From: Xiu Jianfeng Date: Fri, 18 Mar 2022 14:02:01 +0800 Subject: [PATCH 0308/1842] tpm: ibmvtpm: Correct the return value in tpm_ibmvtpm_probe() commit d0dc1a7100f19121f6e7450f9cdda11926aa3838 upstream. Currently it returns zero when CRQ response timed out, it should return an error code instead. Fixes: d8d74ea3c002 ("tpm: ibmvtpm: Wait for buffer to be set before proceeding") Signed-off-by: Xiu Jianfeng Reviewed-by: Stefan Berger Acked-by: Jarkko Sakkinen Signed-off-by: Jarkko Sakkinen Signed-off-by: Greg Kroah-Hartman --- drivers/char/tpm/tpm_ibmvtpm.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c index 3ca7528322f5..a1ec722d62a7 100644 --- a/drivers/char/tpm/tpm_ibmvtpm.c +++ b/drivers/char/tpm/tpm_ibmvtpm.c @@ -683,6 +683,7 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev, if (!wait_event_timeout(ibmvtpm->crq_queue.wq, ibmvtpm->rtce_buf != NULL, HZ)) { + rc = -ENODEV; dev_err(dev, "CRQ response timed out\n"); goto init_irq_cleanup; } From 1d100fcc1da7a5baaf29d81d1bfb8e106fc3c297 Mon Sep 17 00:00:00 2001 From: Akira Yokosawa Date: Wed, 27 Apr 2022 18:28:39 +0900 Subject: [PATCH 0309/1842] docs: submitting-patches: Fix crossref to 'The canonical patch format' commit 6d5aa418b3bd42cdccc36e94ee199af423ef7c84 upstream. The reference to `explicit_in_reply_to` is pointless as when the reference was added in the form of "#15" [1], Section 15) was "The canonical patch format". The reference of "#15" had not been properly updated in a couple of reorganizations during the plain-text SubmittingPatches era. Fix it by using `the_canonical_patch_format`. 
[1]: 2ae19acaa50a ("Documentation: Add "how to write a good patch summary" to SubmittingPatches") Signed-off-by: Akira Yokosawa Fixes: 5903019b2a5e ("Documentation/SubmittingPatches: convert it to ReST markup") Fixes: 9b2c76777acc ("Documentation/SubmittingPatches: enrich the Sphinx output") Cc: Jonathan Corbet Cc: Mauro Carvalho Chehab Cc: stable@vger.kernel.org # v4.9+ Link: https://lore.kernel.org/r/64e105a5-50be-23f2-6cae-903a2ea98e18@gmail.com Signed-off-by: Jonathan Corbet Signed-off-by: Greg Kroah-Hartman --- Documentation/process/submitting-patches.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Documentation/process/submitting-patches.rst b/Documentation/process/submitting-patches.rst index 5a267f5d1a50..edd263e0992d 100644 --- a/Documentation/process/submitting-patches.rst +++ b/Documentation/process/submitting-patches.rst @@ -71,7 +71,7 @@ as you intend it to. The maintainer will thank you if you write your patch description in a form which can be easily pulled into Linux's source code management -system, ``git``, as a "commit log". See :ref:`explicit_in_reply_to`. +system, ``git``, as a "commit log". See :ref:`the_canonical_patch_format`. Solve only one problem per patch. If your description starts to get long, that's a sign that you probably need to split up your patch. From 78a62e09d88537150ffb31451d07efdc8a1c9b78 Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Sat, 14 May 2022 10:08:10 -0400 Subject: [PATCH 0310/1842] NFS: Memory allocation failures are not server fatal errors commit 452284407c18d8a522c3039339b1860afa0025a8 upstream. We need to filter out ENOMEM in nfs_error_is_fatal_on_server(), because running out of memory on our client is not a server error. Reported-by: Olga Kornievskaia Fixes: 2dc23afffbca ("NFS: ENOMEM should also be a fatal error.") Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Greg Kroah-Hartman --- fs/nfs/internal.h | 1 + 1 file changed, 1 insertion(+) diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index 7009a8dddd45..a7e0970b5bfe 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -832,6 +832,7 @@ static inline bool nfs_error_is_fatal_on_server(int err) case 0: case -ERESTARTSYS: case -EINTR: + case -ENOMEM: return false; } return nfs_error_is_fatal(err); From 3097f38e91266c7132c3fdb7e778fac858c00670 Mon Sep 17 00:00:00 2001 From: Chuck Lever Date: Sat, 21 May 2022 19:06:13 -0400 Subject: [PATCH 0311/1842] NFSD: Fix possible sleep during nfsd4_release_lockowner() commit ce3c4ad7f4ce5db7b4f08a1e237d8dd94b39180b upstream. nfsd4_release_lockowner() holds clp->cl_lock when it calls check_for_locks(). However, check_for_locks() calls nfsd_file_get() / nfsd_file_put() to access the backing inode's flc_posix list, and nfsd_file_put() can sleep if the inode was recently removed. Let's instead rely on the stateowner's reference count to gate whether the release is permitted. This should be a reliable indication of locks-in-use since file lock operations and ->lm_get_owner take appropriate references, which are released appropriately when file locks are removed. 
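The gating idea in isolation, as a hedged kernel-style sketch (toy_owner is an illustrative structure, not the nfsd4 types): every active lock holds a reference on its owner, so a count above the caller's own single reference means locks are still held and the release must be refused without ever touching the backing file.

    #include <linux/atomic.h>
    #include <linux/errno.h>

    struct toy_owner {
            atomic_t count;         /* one ref per active lock, plus the caller's */
    };

    static int toy_release_owner(struct toy_owner *owner)
    {
            /* anything beyond our own reference means locks are still in use */
            if (atomic_read(&owner->count) != 1)
                    return -EBUSY;  /* nfserr_locks_held in the NFSD code */

            /* no lock holds a reference any more; tear the owner down */
            return 0;
    }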
Reported-by: Dai Ngo Signed-off-by: Chuck Lever Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman --- fs/nfsd/nfs4state.c | 12 ++++-------- 1 file changed, 4 insertions(+), 8 deletions(-) diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c index 84dd68091f42..f1b503bec222 100644 --- a/fs/nfsd/nfs4state.c +++ b/fs/nfsd/nfs4state.c @@ -7122,16 +7122,12 @@ nfsd4_release_lockowner(struct svc_rqst *rqstp, if (sop->so_is_open_owner || !same_owner_str(sop, owner)) continue; - /* see if there are still any locks associated with it */ - lo = lockowner(sop); - list_for_each_entry(stp, &sop->so_stateids, st_perstateowner) { - if (check_for_locks(stp->st_stid.sc_file, lo)) { - status = nfserr_locks_held; - spin_unlock(&clp->cl_lock); - return status; - } + if (atomic_read(&sop->so_count) != 1) { + spin_unlock(&clp->cl_lock); + return nfserr_locks_held; } + lo = lockowner(sop); nfs4_get_stateowner(sop); break; } From 7f845de2863334bed4f362e95853f5e7bc323737 Mon Sep 17 00:00:00 2001 From: Yuntao Wang Date: Sat, 30 Apr 2022 21:08:03 +0800 Subject: [PATCH 0312/1842] bpf: Fix potential array overflow in bpf_trampoline_get_progs() commit a2aa95b71c9bbec793b5c5fa50f0a80d882b3e8d upstream. The cnt value in the 'cnt >= BPF_MAX_TRAMP_PROGS' check does not include BPF_TRAMP_MODIFY_RETURN bpf programs, so the number of the attached BPF_TRAMP_MODIFY_RETURN bpf programs in a trampoline can exceed BPF_MAX_TRAMP_PROGS. When this happens, the assignment '*progs++ = aux->prog' in bpf_trampoline_get_progs() will cause progs array overflow as the progs field in the bpf_tramp_progs struct can only hold at most BPF_MAX_TRAMP_PROGS bpf programs. Fixes: 88fd9e5352fe ("bpf: Refactor trampoline update code") Signed-off-by: Yuntao Wang Link: https://lore.kernel.org/r/20220430130803.210624-1-ytcoode@gmail.com Signed-off-by: Alexei Starovoitov Signed-off-by: Greg Kroah-Hartman --- kernel/bpf/trampoline.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c index 986dabc3d11f..87becf77cc75 100644 --- a/kernel/bpf/trampoline.c +++ b/kernel/bpf/trampoline.c @@ -378,7 +378,7 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr) { enum bpf_tramp_prog_type kind; int err = 0; - int cnt; + int cnt = 0, i; kind = bpf_attach_type_to_tramp(prog); mutex_lock(&tr->mutex); @@ -389,7 +389,10 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr) err = -EBUSY; goto out; } - cnt = tr->progs_cnt[BPF_TRAMP_FENTRY] + tr->progs_cnt[BPF_TRAMP_FEXIT]; + + for (i = 0; i < BPF_TRAMP_MAX; i++) + cnt += tr->progs_cnt[i]; + if (kind == BPF_TRAMP_REPLACE) { /* Cannot attach extension if fentry/fexit are in use. */ if (cnt) { @@ -467,16 +470,19 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key, void bpf_trampoline_put(struct bpf_trampoline *tr) { + int i; + if (!tr) return; mutex_lock(&trampoline_mutex); if (!refcount_dec_and_test(&tr->refcnt)) goto out; WARN_ON_ONCE(mutex_is_locked(&tr->mutex)); - if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FENTRY]))) - goto out; - if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT]))) - goto out; + + for (i = 0; i < BPF_TRAMP_MAX; i++) + if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[i]))) + goto out; + /* This code will be executed even when the last bpf_tramp_image * is alive. 
All progs are detached from the trampoline and the * trampoline image is patched with jmp into epilogue to skip From 886eeb046096fec4f7e43ed8fc94974564b868d4 Mon Sep 17 00:00:00 2001 From: Liu Jian Date: Sat, 16 Apr 2022 18:57:59 +0800 Subject: [PATCH 0313/1842] bpf: Enlarge offset check value to INT_MAX in bpf_skb_{load,store}_bytes commit 45969b4152c1752089351cd6836a42a566d49bcf upstream. The data length of skb frags + frag_list may be greater than 0xffff, and skb_header_pointer can not handle negative offset. So, here INT_MAX is used to check the validity of offset. Add the same change to the related function skb_store_bytes. Fixes: 05c74e5e53f6 ("bpf: add bpf_skb_load_bytes helper") Signed-off-by: Liu Jian Signed-off-by: Daniel Borkmann Acked-by: Song Liu Link: https://lore.kernel.org/bpf/20220416105801.88708-2-liujian56@huawei.com Signed-off-by: Greg Kroah-Hartman --- net/core/filter.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/core/filter.c b/net/core/filter.c index ddf9792c0cb2..d348f1d3fb8f 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -1687,7 +1687,7 @@ BPF_CALL_5(bpf_skb_store_bytes, struct sk_buff *, skb, u32, offset, if (unlikely(flags & ~(BPF_F_RECOMPUTE_CSUM | BPF_F_INVALIDATE_HASH))) return -EINVAL; - if (unlikely(offset > 0xffff)) + if (unlikely(offset > INT_MAX)) return -EFAULT; if (unlikely(bpf_try_make_writable(skb, offset + len))) return -EFAULT; @@ -1722,7 +1722,7 @@ BPF_CALL_4(bpf_skb_load_bytes, const struct sk_buff *, skb, u32, offset, { void *ptr; - if (unlikely(offset > 0xffff)) + if (unlikely(offset > INT_MAX)) goto err_clear; ptr = skb_header_pointer(skb, offset, len, to); From 70dd2d169d08f059ff25a41278ab7c658b1d2af8 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Mon, 6 Jun 2022 08:42:45 +0200 Subject: [PATCH 0314/1842] Linux 5.10.120 Link: https://lore.kernel.org/r/20220603173818.716010877@linuxfoundation.org Tested-by: Sudip Mukherjee Tested-by: Linux Kernel Functional Testing Tested-by: Guenter Roeck Tested-by: Salvatore Bonaccorso Tested-by: Fox Chen Tested-by: Hulk Robot Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index b442cc5bbfc3..fdd2ac273f42 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 119 +SUBLEVEL = 120 EXTRAVERSION = NAME = Dare mighty things From 9cef71eceaa8895dfdab828f1e076bc201b261f0 Mon Sep 17 00:00:00 2001 From: Niklas Cassel Date: Thu, 14 Apr 2022 11:10:18 +0200 Subject: [PATCH 0315/1842] binfmt_flat: do not stop relocating GOT entries prematurely on riscv commit 6045ab5fea4c849153ebeb0acb532da5f29d69c4 upstream. bFLT binaries are usually created using elf2flt. The linker script used by elf2flt has defined the .data section like the following for the last 19 years: .data : { _sdata = . ; __data_start = . ; data_start = . ; *(.got.plt) *(.got) FILL(0) ; . = ALIGN(0x20) ; LONG(-1) . = ALIGN(0x20) ; ... } It places the .got.plt input section before the .got input section. The same is true for the default linker script (ld --verbose) on most architectures except x86/x86-64. The binfmt_flat loader should relocate all GOT entries until it encounters a -1 (the LONG(-1) in the linker script). The problem is that the .got.plt input section starts with a GOTPLT header (which has size 16 bytes on elf64-riscv and 8 bytes on elf32-riscv), where the first word is set to -1. See the binutils implementation for riscv [1]. 
This causes the binfmt_flat loader to stop relocating GOT entries prematurely and thus causes the application to crash when running. Fix this by skipping the whole GOTPLT header, since the whole GOTPLT header is reserved for the dynamic linker. The GOTPLT header will only be skipped for bFLT binaries with flag FLAT_FLAG_GOTPIC set. This flag is unconditionally set by elf2flt if the supplied ELF binary has the symbol _GLOBAL_OFFSET_TABLE_ defined. ELF binaries without a .got input section should thus remain unaffected. Tested on RISC-V Canaan Kendryte K210 and RISC-V QEMU nommu_virt_defconfig. [1] https://sourceware.org/git/?p=binutils-gdb.git;a=blob;f=bfd/elfnn-riscv.c;hb=binutils-2_38#l3275 Cc: Signed-off-by: Niklas Cassel Reviewed-by: Damien Le Moal Link: https://lore.kernel.org/r/20220414091018.896737-1-niklas.cassel@wdc.com Fixed-by: kernel test robot Link: https://lore.kernel.org/lkml/202204182333.OIUOotK8-lkp@intel.com Signed-off-by: Kees Cook Signed-off-by: Greg Kroah-Hartman --- fs/binfmt_flat.c | 27 ++++++++++++++++++++++++++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c index b9c658e0548e..69f4db05191a 100644 --- a/fs/binfmt_flat.c +++ b/fs/binfmt_flat.c @@ -427,6 +427,30 @@ static void old_reloc(unsigned long rl) /****************************************************************************/ +static inline u32 __user *skip_got_header(u32 __user *rp) +{ + if (IS_ENABLED(CONFIG_RISCV)) { + /* + * RISC-V has a 16 byte GOT PLT header for elf64-riscv + * and 8 byte GOT PLT header for elf32-riscv. + * Skip the whole GOT PLT header, since it is reserved + * for the dynamic linker (ld.so). + */ + u32 rp_val0, rp_val1; + + if (get_user(rp_val0, rp)) + return rp; + if (get_user(rp_val1, rp + 1)) + return rp; + + if (rp_val0 == 0xffffffff && rp_val1 == 0xffffffff) + rp += 4; + else if (rp_val0 == 0xffffffff) + rp += 2; + } + return rp; +} + static int load_flat_file(struct linux_binprm *bprm, struct lib_info *libinfo, int id, unsigned long *extra_stack) { @@ -774,7 +798,8 @@ static int load_flat_file(struct linux_binprm *bprm, * image. */ if (flags & FLAT_FLAG_GOTPIC) { - for (rp = (u32 __user *)datapos; ; rp++) { + rp = skip_got_header((u32 __user *) datapos); + for (; ; rp++) { u32 addr, rp_val; if (get_user(rp_val, rp)) return -EFAULT; From 6b45437959dcfa177df32ac63318d3e1d7377523 Mon Sep 17 00:00:00 2001 From: Helge Deller Date: Thu, 2 Jun 2022 13:50:44 +0200 Subject: [PATCH 0316/1842] parisc/stifb: Implement fb_is_primary_device() commit cf936af790a3ef5f41ff687ec91bfbffee141278 upstream. Implement fb_is_primary_device() function, so that fbcon detects if this framebuffer belongs to the default graphics card which was used to start the system. 
Signed-off-by: Helge Deller Cc: stable@vger.kernel.org # v5.10+ Signed-off-by: Greg Kroah-Hartman --- arch/parisc/include/asm/fb.h | 4 ++++ drivers/video/console/sticore.c | 17 +++++++++++++++++ drivers/video/fbdev/stifb.c | 4 ++-- 3 files changed, 23 insertions(+), 2 deletions(-) diff --git a/arch/parisc/include/asm/fb.h b/arch/parisc/include/asm/fb.h index c4cd6360f996..d63a2acb91f2 100644 --- a/arch/parisc/include/asm/fb.h +++ b/arch/parisc/include/asm/fb.h @@ -12,9 +12,13 @@ static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma, pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE; } +#if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI) +int fb_is_primary_device(struct fb_info *info); +#else static inline int fb_is_primary_device(struct fb_info *info) { return 0; } +#endif #endif /* _ASM_FB_H_ */ diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c index 6a26a364f9bd..680a573af00b 100644 --- a/drivers/video/console/sticore.c +++ b/drivers/video/console/sticore.c @@ -30,6 +30,7 @@ #include #include #include +#include #include "../fbdev/sticore.h" @@ -1127,6 +1128,22 @@ int sti_call(const struct sti_struct *sti, unsigned long func, return ret; } +/* check if given fb_info is the primary device */ +int fb_is_primary_device(struct fb_info *info) +{ + struct sti_struct *sti; + + sti = sti_get_rom(0); + + /* if no built-in graphics card found, allow any fb driver as default */ + if (!sti) + return true; + + /* return true if it's the default built-in framebuffer driver */ + return (sti->info == info); +} +EXPORT_SYMBOL(fb_is_primary_device); + MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer"); MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines"); MODULE_LICENSE("GPL v2"); diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c index 265865610edc..002f265d8db5 100644 --- a/drivers/video/fbdev/stifb.c +++ b/drivers/video/fbdev/stifb.c @@ -1317,11 +1317,11 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref) goto out_err3; } + /* save for primary gfx device detection & unregister_framebuffer() */ + sti->info = info; if (register_framebuffer(&fb->info) < 0) goto out_err4; - sti->info = info; /* save for unregister_framebuffer() */ - fb_info(&fb->info, "%s %dx%d-%d frame buffer device, %s, id: %04x, mmio: 0x%04lx\n", fix->id, var->xres, From 4a5c7a61ff506a7cc385cd952038183a2d095880 Mon Sep 17 00:00:00 2001 From: Alexandre Ghiti Date: Mon, 6 Dec 2021 11:46:56 +0100 Subject: [PATCH 0317/1842] riscv: Initialize thread pointer before calling C functions commit 35d33c76d68dfacc330a8eb477b51cc647c5a847 upstream. Because of the stack canary feature that reads from the current task structure the stack canary value, the thread pointer register "tp" must be set before calling any C function from head.S: by chance, setup_vm and all the functions that it calls does not seem to be part of the functions where the canary check is done, but in the following commits, some functions will. 
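To illustrate why the ordering matters, here is a hedged userspace sketch (invented names, not the compiler-generated code or the real kernel structures): with a per-task stack protector, every instrumented C function loads and re-checks the canary through the current-task pointer, which on riscv is the tp register, so that pointer must be valid before any such function runs.

#include <stdio.h>
#include <stdlib.h>

struct fake_task {
	unsigned long stack_canary;
};

static struct fake_task init_task_sketch = { .stack_canary = 0xdeadbeefUL };
static struct fake_task *tp;	/* stands in for the riscv "tp" register */

static void instrumented_function(void)
{
	unsigned long canary = tp->stack_canary;	/* prologue: load canary via tp */

	/* ... function body would run here ... */

	if (canary != tp->stack_canary) {		/* epilogue: verify canary */
		fprintf(stderr, "stack smashing detected\n");
		abort();
	}
}

int main(void)
{
	tp = &init_task_sketch;		/* analogous to "la tp, init_task" in head.S */
	instrumented_function();	/* only safe once tp points at a valid task */
	return 0;
}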
Fixes: f2c9699f65557a31 ("riscv: Add STACKPROTECTOR supported") Signed-off-by: Alexandre Ghiti Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt Signed-off-by: Greg Kroah-Hartman --- arch/riscv/kernel/head.S | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S index 1a819c18bede..47d1411db0a9 100644 --- a/arch/riscv/kernel/head.S +++ b/arch/riscv/kernel/head.S @@ -261,6 +261,7 @@ clear_bss_done: REG_S a0, (a2) /* Initialize page tables and relocate to virtual addresses */ + la tp, init_task la sp, init_thread_union + THREAD_SIZE mv a0, s1 call setup_vm From d2f3acde3d52b3b351db09e2e2a5e5812eda2735 Mon Sep 17 00:00:00 2001 From: Samuel Holland Date: Fri, 29 Apr 2022 22:00:23 -0500 Subject: [PATCH 0318/1842] riscv: Fix irq_work when SMP is disabled commit 2273272823db6f67d57761df8116ae32e7f05bed upstream. irq_work is triggered via an IPI, but the IPI infrastructure is not included in uniprocessor kernels. As a result, irq_work never runs. Fall back to the tick-based irq_work implementation on uniprocessor configurations. Fixes: 298447928bb1 ("riscv: Support irq_work via self IPIs") Signed-off-by: Samuel Holland Reviewed-by: Heiko Stuebner Link: https://lore.kernel.org/r/20220430030025.58405-1-samuel@sholland.org Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt Signed-off-by: Greg Kroah-Hartman --- arch/riscv/include/asm/irq_work.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/riscv/include/asm/irq_work.h b/arch/riscv/include/asm/irq_work.h index d6c277992f76..b53891964ae0 100644 --- a/arch/riscv/include/asm/irq_work.h +++ b/arch/riscv/include/asm/irq_work.h @@ -4,7 +4,7 @@ static inline bool arch_irq_work_has_interrupt(void) { - return true; + return IS_ENABLED(CONFIG_SMP); } extern void arch_irq_work_raise(void); #endif /* _ASM_RISCV_IRQ_WORK_H */ From 223368eaf60cfedbe8b51895be5d82d0f4ea5b67 Mon Sep 17 00:00:00 2001 From: Rik van der Kemp Date: Fri, 27 May 2022 14:07:26 +0200 Subject: [PATCH 0319/1842] ALSA: hda/realtek: Enable 4-speaker output for Dell XPS 15 9520 laptop commit 15dad62f4bdb5dc0f0efde8181d680db9963544c upstream. The 2022-model XPS 15 appears to use the same 4-speakers-on-ALC289 audio setup as the Dell XPS 15 9510, so requires the same quirk to enable woofer output. Tested on my own 9520. 
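For readers unfamiliar with the quirk table this entry lands in, a toy model of an SSID-keyed lookup (illustrative C only; the real matching is done by ALSA's snd_pci_quirk helpers, and the fixup ID below is a stand-in for ALC289_FIXUP_DUAL_SPK):

#include <stdio.h>

struct ssid_quirk {
	unsigned short subvendor;	/* PCI subsystem vendor, e.g. 0x1028 for Dell */
	unsigned short subdevice;	/* PCI subsystem device (SSID), e.g. 0x0b19 */
	const char *name;
	int fixup_id;
};

enum { FIXUP_NONE = 0, FIXUP_DUAL_SPK_SKETCH };	/* stand-ins, not ALSA's enums */

static const struct ssid_quirk quirk_tbl[] = {
	{ 0x1028, 0x0b19, "Dell XPS 15 9520", FIXUP_DUAL_SPK_SKETCH },
	{ 0, 0, NULL, FIXUP_NONE },			/* table terminator */
};

static int lookup_fixup(unsigned short subvendor, unsigned short subdevice)
{
	for (const struct ssid_quirk *q = quirk_tbl; q->subvendor; q++)
		if (q->subvendor == subvendor && q->subdevice == subdevice)
			return q->fixup_id;
	return FIXUP_NONE;
}

int main(void)
{
	printf("fixup for 1028:0b19 -> %d\n", lookup_fixup(0x1028, 0x0b19));
	return 0;
}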
[ Move the entry to the right position in the SSID order -- tiwai ] BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=216035 Cc: Signed-off-by: Rik van der Kemp Link: https://lore.kernel.org/r/181056a137b.d14baf90133058.8425453735588429828@upto11.nl Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index f630f9257ad1..82279c31a0b4 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -8651,6 +8651,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1028, 0x0a62, "Dell Precision 5560", ALC289_FIXUP_DUAL_SPK), SND_PCI_QUIRK(0x1028, 0x0a9d, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x0a9e, "Dell Latitude 5430", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1028, 0x0b19, "Dell XPS 15 9520", ALC289_FIXUP_DUAL_SPK), SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), From 1cf70d5c15bcab4411f826fd2e83a9aabfe5166d Mon Sep 17 00:00:00 2001 From: Marios Levogiannis Date: Mon, 30 May 2022 10:41:31 +0300 Subject: [PATCH 0320/1842] ALSA: hda/realtek - Fix microphone noise on ASUS TUF B550M-PLUS commit 9bfa7b36343c7d84370bc61c9ed774635b05e4eb upstream. Set microphone pins 0x18 (rear) and 0x19 (front) to VREF_50 to fix the microphone noise on ASUS TUF B550M-PLUS which uses the ALCS1200A codec. The initial value was VREF_80. The same issue is also present on Windows using both the default Windows driver and all tested Realtek drivers before version 6.0.9049.1. Comparing Realtek driver 6.0.9049.1 (the first one without the microphone noise) to Realtek driver 6.0.9047.1 (the last one with the microphone noise) revealed that the fix is the result of setting pins 0x18 and 0x19 to VREF_50. This fix may also work for other boards that have been reported to have the same microphone issue and use the ALC1150 and ALCS1200A codecs, since these codecs are similar and the fix in the Realtek driver on Windows is common for both. However, it is currently enabled only for ASUS TUF B550M-PLUS as this is the only board that could be tested. 
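Mechanically, an HDA_FIXUP_PINCTLS fixup is a table of (pin NID, pin-control value) pairs that the driver writes to the codec. A hedged sketch of applying such a table follows; the VREF constant's value and the apply function are placeholders, not ALSA's actual encoding or helper.

#include <stdio.h>

#define PIN_VREF50_SKETCH 0x01		/* placeholder; not ALSA's real PIN_VREF50 value */

struct pintbl_entry {
	unsigned char nid;		/* codec pin widget NID */
	unsigned int val;		/* pin-control value to program */
};

static const struct pintbl_entry mic_vref_pins[] = {
	{ 0x18, PIN_VREF50_SKETCH },	/* rear mic */
	{ 0x19, PIN_VREF50_SKETCH },	/* front mic */
	{ 0, 0 },			/* terminator */
};

/* Stand-in for the verb write that would reach the codec. */
static void set_pin_ctl(unsigned char nid, unsigned int val)
{
	printf("pin 0x%02x <- pinctl 0x%02x\n", (unsigned int)nid, val);
}

int main(void)
{
	for (const struct pintbl_entry *p = mic_vref_pins; p->nid; p++)
		set_pin_ctl(p->nid, p->val);
	return 0;
}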
Signed-off-by: Marios Levogiannis Cc: Link: https://lore.kernel.org/r/20220530074131.12258-1-marios.levogiannis@gmail.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 82279c31a0b4..71a9462e8f6e 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -1990,6 +1990,7 @@ enum { ALC1220_FIXUP_CLEVO_PB51ED_PINS, ALC887_FIXUP_ASUS_AUDIO, ALC887_FIXUP_ASUS_HMIC, + ALCS1200A_FIXUP_MIC_VREF, }; static void alc889_fixup_coef(struct hda_codec *codec, @@ -2535,6 +2536,14 @@ static const struct hda_fixup alc882_fixups[] = { .chained = true, .chain_id = ALC887_FIXUP_ASUS_AUDIO, }, + [ALCS1200A_FIXUP_MIC_VREF] = { + .type = HDA_FIXUP_PINCTLS, + .v.pins = (const struct hda_pintbl[]) { + { 0x18, PIN_VREF50 }, /* rear mic */ + { 0x19, PIN_VREF50 }, /* front mic */ + {} + } + }, }; static const struct snd_pci_quirk alc882_fixup_tbl[] = { @@ -2572,6 +2581,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = { SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601), SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS), SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3), + SND_PCI_QUIRK(0x1043, 0x8797, "ASUS TUF B550M-PLUS", ALCS1200A_FIXUP_MIC_VREF), SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP), SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP), SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT), From 40bdb5ec957aca5c5c1924602bef6b0ab18e22d3 Mon Sep 17 00:00:00 2001 From: Takashi Iwai Date: Wed, 25 May 2022 15:12:03 +0200 Subject: [PATCH 0321/1842] ALSA: usb-audio: Cancel pending work at closing a MIDI substream commit 0125de38122f0f66bf61336158d12a1aabfe6425 upstream. At closing a USB MIDI output substream, there might be still a pending work, which would eventually access the rawmidi runtime object that is being released. For fixing the race, make sure to cancel the pending work at closing. Reported-by: syzbot+6912c9592caca7ca0e7d@syzkaller.appspotmail.com Cc: Link: https://lore.kernel.org/r/000000000000e7e75005dfd07cf6@google.com Link: https://lore.kernel.org/r/20220525131203.11299-1-tiwai@suse.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/usb/midi.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/sound/usb/midi.c b/sound/usb/midi.c index 84676a8fb60d..93fee6e365a6 100644 --- a/sound/usb/midi.c +++ b/sound/usb/midi.c @@ -1161,6 +1161,9 @@ static int snd_usbmidi_output_open(struct snd_rawmidi_substream *substream) static int snd_usbmidi_output_close(struct snd_rawmidi_substream *substream) { + struct usbmidi_out_port *port = substream->runtime->private_data; + + cancel_work_sync(&port->ep->work); return substream_open(substream, 0, 0); } From 19b3fe8a7cb17d16c467756113079ff53548530f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Carl=20Yin=28=E6=AE=B7=E5=BC=A0=E6=88=90=29?= Date: Thu, 19 May 2022 02:34:43 +0000 Subject: [PATCH 0322/1842] USB: serial: option: add Quectel BG95 modem commit 33b7af2f459df453feb0d44628d820c47fefe7a8 upstream. 
The BG95 modem has 3 USB configurations that are configurable via the AT command AT+QCFGEXT="usbnet",["ecm"|"modem"|"rmnet"] which make the modem enumerate with the following interfaces, respectively: "modem": Diag + GNSS + Modem + Modem "ecm" : Diag + GNSS + Modem + ECM "rmnet": Diag + GNSS + Modem + QMI Don't support Full QMI messages (e.g WDS_START_NETWORK_INTERFACE) A detailed description of the USB configuration for each mode follows: +QCFGEXT: "usbnet","modem" -------------------------- T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 3 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=2c7c ProdID=0700 Rev= 0.00 S: Manufacturer=Quectel, Incorporated S: Product=Quectel LPWA Module S: SerialNumber=884328a2 C:* #Ifs= 4 Cfg#= 1 Atr=e0 MxPwr=500mA I:* If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=83(I) Atr=03(Int.) MxPS= 64 Ivl=2ms E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=fe Prot=ff Driver=option E: Ad=85(I) Atr=03(Int.) MxPS= 64 Ivl=2ms E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms +QCFGEXT: "usbnet","ecm" ------------------------ T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 4 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=2c7c ProdID=0700 Rev= 0.00 S: Manufacturer=Quectel, Incorporated S: Product=Quectel LPWA Module S: SerialNumber=884328a2 C:* #Ifs= 5 Cfg#= 1 Atr=e0 MxPwr=500mA A: FirstIf#= 3 IfCount= 2 Cls=02(comm.) Sub=00 Prot=00 I:* If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=83(I) Atr=03(Int.) MxPS= 64 Ivl=2ms E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 3 Alt= 0 #EPs= 1 Cls=02(comm.) Sub=06 Prot=00 Driver=cdc_ether E: Ad=85(I) Atr=03(Int.) MxPS= 64 Ivl=2ms I: If#= 4 Alt= 0 #EPs= 0 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_ether I:* If#= 4 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_ether E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms +QCFGEXT: "usbnet","rmnet" -------------------------- T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 4 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1 P: Vendor=2c7c ProdID=0700 Rev= 0.00 S: Manufacturer=Quectel, Incorporated S: Product=Quectel LPWA Module S: SerialNumber=884328a2 C:* #Ifs= 4 Cfg#= 1 Atr=e0 MxPwr=500mA I:* If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=83(I) Atr=03(Int.) 
MxPS= 64 Ivl=2ms E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=qmi_wwan E: Ad=85(I) Atr=03(Int.) MxPS= 64 Ivl=2ms E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms Signed-off-by: Carl Yin Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman --- drivers/usb/serial/option.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 2eb4083c5b45..a40c0f3b85c2 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1137,6 +1137,8 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0xff, 0x30) }, /* EM160R-GL */ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0, 0) }, + { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0700, 0xff), /* BG95 */ .driver_info = RSVD(3) | ZLP }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x30) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10), From 0420275d643e4886b127785454e981b261e653f5 Mon Sep 17 00:00:00 2001 From: Monish Kumar R Date: Fri, 20 May 2022 18:30:44 +0530 Subject: [PATCH 0323/1842] USB: new quirk for Dell Gen 2 devices commit 97fa5887cf283bb75ffff5f6b2c0e71794c02400 upstream. Add USB_QUIRK_NO_LPM and USB_QUIRK_RESET_RESUME quirks for the Dell USB Gen 2 device so that it does not fail during enumeration. Found this bug during my own testing. Signed-off-by: Monish Kumar R Cc: stable Link: https://lore.kernel.org/r/20220520130044.17303-1-monish.kumar.r@intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/core/quirks.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c index 7e5fd6afd9f4..f03ee889ecc7 100644 --- a/drivers/usb/core/quirks.c +++ b/drivers/usb/core/quirks.c @@ -511,6 +511,9 @@ static const struct usb_device_id usb_quirk_list[] = { /* DJI CineSSD */ { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM }, + /* DELL USB GEN2 */ + { USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME }, + /* VCOM device */ { USB_DEVICE(0x4296, 0x7570), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS }, From a2532c441705c406f07f55afe586af54e9cf4041 Mon Sep 17 00:00:00 2001 From: Albert Wang Date: Wed, 18 May 2022 14:13:15 +0800 Subject: [PATCH 0324/1842] usb: dwc3: gadget: Move null pointer check to proper place commit 3c5880745b4439ac64eccdb040e37fc1cc4c5406 upstream. When dwc3_gadget_ep_cleanup_completed_requests() calls dwc3_gadget_giveback(), where the dwc3 lock is released, another thread is able to execute. In this situation, usb_ep_disable() gets the chance to clear the endpoint descriptor pointer, which leads to the null pointer dereference problem. So we need to move the null pointer check to a proper place. Example call stack: Thread#1: dwc3_thread_interrupt() spin_lock -> dwc3_process_event_buf() -> dwc3_process_event_entry() -> dwc3_endpoint_interrupt() -> dwc3_gadget_endpoint_trbs_complete() -> dwc3_gadget_ep_cleanup_completed_requests() ...
-> dwc3_giveback() spin_unlock Thread#2 executes Thread#2: configfs_composite_disconnect() -> __composite_disconnect() -> ffs_func_disable() -> ffs_func_set_alt() -> ffs_func_eps_disable() -> usb_ep_disable() wait for dwc3 spin_lock Thread#1 released lock clear endpoint.desc Fixes: 26288448120b ("usb: dwc3: gadget: Fix null pointer exception") Cc: stable Signed-off-by: Albert Wang Link: https://lore.kernel.org/r/20220518061315.3359198-1-albertccwang@google.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/dwc3/gadget.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 1f0503dc96ee..05fe6ded66a5 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -2960,14 +2960,14 @@ static bool dwc3_gadget_endpoint_trbs_complete(struct dwc3_ep *dep, struct dwc3 *dwc = dep->dwc; bool no_started_trb = true; - if (!dep->endpoint.desc) - return no_started_trb; - dwc3_gadget_ep_cleanup_completed_requests(dep, event, status); if (dep->flags & DWC3_EP_END_TRANSFER_PENDING) goto out; + if (!dep->endpoint.desc) + return no_started_trb; + if (usb_endpoint_xfer_isoc(dep->endpoint.desc) && list_empty(&dep->started_list) && (list_empty(&dep->pending_list) || status == -EXDEV)) From ce4627f09e666f1c8de880c19786a449725862ef Mon Sep 17 00:00:00 2001 From: Kishon Vijay Abraham I Date: Tue, 10 May 2022 14:46:29 +0530 Subject: [PATCH 0325/1842] usb: core: hcd: Add support for deferring roothub registration commit a44623d9279086c89f631201d993aa332f7c9e66 upstream. It has been observed with certain PCIe USB cards (like Inateck connected to AM64 EVM or J7200 EVM) that as soon as the primary roothub is registered, port status change is handled even before the xHC is running, leading to cold plug USB devices not being detected. For such cases, registering both the root hubs along with the second HCD is required. Add support for deferring roothub registration in usb_add_hcd(), so that both primary and secondary roothubs are registered along with the second HCD. This patch has been added and reverted earlier as it triggered a race in usb device enumeration.
That race is now fixed in 5.16-rc3, and in stable back to 5.4 commit 6cca13de26ee ("usb: hub: Fix locking issues with address0_mutex") commit 6ae6dc22d2d1 ("usb: hub: Fix usb enumeration issue due to address0 race") CC: stable@vger.kernel.org # 5.4+ Suggested-by: Mathias Nyman Tested-by: Chris Chiu Acked-by: Alan Stern Signed-off-by: Kishon Vijay Abraham I Link: https://lore.kernel.org/r/20220510091630.16564-2-kishon@ti.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/core/hcd.c | 31 ++++++++++++++++++++++++------- include/linux/usb/hcd.h | 2 ++ 2 files changed, 26 insertions(+), 7 deletions(-) diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c index ddd1d3eef912..bf5e37667697 100644 --- a/drivers/usb/core/hcd.c +++ b/drivers/usb/core/hcd.c @@ -2661,6 +2661,7 @@ int usb_add_hcd(struct usb_hcd *hcd, { int retval; struct usb_device *rhdev; + struct usb_hcd *shared_hcd; if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) { hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev); @@ -2817,13 +2818,26 @@ int usb_add_hcd(struct usb_hcd *hcd, goto err_hcd_driver_start; } - /* starting here, usbcore will pay attention to this root hub */ - retval = register_root_hub(hcd); - if (retval != 0) - goto err_register_root_hub; + /* starting here, usbcore will pay attention to the shared HCD roothub */ + shared_hcd = hcd->shared_hcd; + if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) { + retval = register_root_hub(shared_hcd); + if (retval != 0) + goto err_register_root_hub; - if (hcd->uses_new_polling && HCD_POLL_RH(hcd)) - usb_hcd_poll_rh_status(hcd); + if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd)) + usb_hcd_poll_rh_status(shared_hcd); + } + + /* starting here, usbcore will pay attention to this root hub */ + if (!HCD_DEFER_RH_REGISTER(hcd)) { + retval = register_root_hub(hcd); + if (retval != 0) + goto err_register_root_hub; + + if (hcd->uses_new_polling && HCD_POLL_RH(hcd)) + usb_hcd_poll_rh_status(hcd); + } return retval; @@ -2866,6 +2880,7 @@ EXPORT_SYMBOL_GPL(usb_add_hcd); void usb_remove_hcd(struct usb_hcd *hcd) { struct usb_device *rhdev = hcd->self.root_hub; + bool rh_registered; dev_info(hcd->self.controller, "remove, state %x\n", hcd->state); @@ -2876,6 +2891,7 @@ void usb_remove_hcd(struct usb_hcd *hcd) dev_dbg(hcd->self.controller, "roothub graceful disconnect\n"); spin_lock_irq (&hcd_root_hub_lock); + rh_registered = hcd->rh_registered; hcd->rh_registered = 0; spin_unlock_irq (&hcd_root_hub_lock); @@ -2885,7 +2901,8 @@ void usb_remove_hcd(struct usb_hcd *hcd) cancel_work_sync(&hcd->died_work); mutex_lock(&usb_bus_idr_lock); - usb_disconnect(&rhdev); /* Sets rhdev to NULL */ + if (rh_registered) + usb_disconnect(&rhdev); /* Sets rhdev to NULL */ mutex_unlock(&usb_bus_idr_lock); /* diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h index 3dbb42c637c1..9f05016d823f 100644 --- a/include/linux/usb/hcd.h +++ b/include/linux/usb/hcd.h @@ -124,6 +124,7 @@ struct usb_hcd { #define HCD_FLAG_RH_RUNNING 5 /* root hub is running? */ #define HCD_FLAG_DEAD 6 /* controller has died? */ #define HCD_FLAG_INTF_AUTHORIZED 7 /* authorize interfaces? */ +#define HCD_FLAG_DEFER_RH_REGISTER 8 /* Defer roothub registration */ /* The flags can be tested using these macros; they are likely to * be slightly faster than test_bit(). 
@@ -134,6 +135,7 @@ struct usb_hcd { #define HCD_WAKEUP_PENDING(hcd) ((hcd)->flags & (1U << HCD_FLAG_WAKEUP_PENDING)) #define HCD_RH_RUNNING(hcd) ((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING)) #define HCD_DEAD(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEAD)) +#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER)) /* * Specifies if interfaces are authorized by default From c9ac773715fcfd15f2f9e61682ba1e1c82290ab5 Mon Sep 17 00:00:00 2001 From: Ronnie Sahlberg Date: Wed, 1 Jun 2022 08:48:38 +1000 Subject: [PATCH 0326/1842] cifs: when extending a file with falloc we should make files not-sparse commit f66f8b94e7f2f4ac9fffe710be231ca8f25c5057 upstream. as this is the only way to make sure the region is allocated. Fix the conditional that was wrong and only tried to make already non-sparse files non-sparse. Cc: stable@vger.kernel.org Signed-off-by: Ronnie Sahlberg Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman --- fs/cifs/smb2ops.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index c758ff41b638..7fea94ebda57 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -3632,7 +3632,7 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon, if (rc) goto out; - if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) == 0) + if (cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) smb2_set_sparse(xid, tcon, cfile, inode, false); eof = cpu_to_le64(off + len); From c5b9b7fb123dbf560a4a19e26fab3d507825ee75 Mon Sep 17 00:00:00 2001 From: Mathias Nyman Date: Thu, 12 May 2022 01:04:50 +0300 Subject: [PATCH 0327/1842] xhci: Allow host runtime PM as default for Intel Alder Lake N xHCI commit 74f55a62c4c354f43a6d75f77dd184c4f57b9a26 upstream. Alder Lake N TCSS xHCI needs to be runtime suspended whenever possible to allow the TCSS hardware block to enter D3 and thus save energy Cc: stable@kernel.org Suggested-by: Gopal Vamshi Krishna Signed-off-by: Mathias Nyman Link: https://lore.kernel.org/r/20220511220450.85367-10-mathias.nyman@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/host/xhci-pci.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index 48e2c0474189..886279755804 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -59,6 +59,7 @@ #define PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI 0x9a13 #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138 #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e +#define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI 0x464e #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 @@ -263,6 +264,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) pdev->device == PCI_DEVICE_ID_INTEL_TIGER_LAKE_XHCI || pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI || pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI || + pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI || pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI)) xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; From 78e008dca2255bba0ffa890203049f6eb4bdbc28 Mon Sep 17 00:00:00 2001 From: Peilin Ye Date: Wed, 28 Oct 2020 06:56:47 -0400 Subject: [PATCH 0328/1842] Fonts: Make font size unsigned in font_desc commit 7cb415003468d41aecd6877ae088c38f6c0fc174 upstream. `width` and `height` are defined as unsigned in our UAPI font descriptor `struct console_font`. Make them unsigned in our kernel font descriptor `struct font_desc`, too. 
Also, change the corresponding printk() format identifiers from `%d` to `%u`, in sti_select_fbfont(). Signed-off-by: Peilin Ye Signed-off-by: Daniel Vetter Link: https://patchwork.freedesktop.org/patch/msgid/20201028105647.1210161-1-yepeilin.cs@gmail.com Cc: Helge Deller Signed-off-by: Greg Kroah-Hartman --- drivers/video/console/sticore.c | 2 +- include/linux/font.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c index 680a573af00b..53ffd5898e85 100644 --- a/drivers/video/console/sticore.c +++ b/drivers/video/console/sticore.c @@ -503,7 +503,7 @@ sti_select_fbfont(struct sti_cooked_rom *cooked_rom, const char *fbfont_name) if (!fbfont) return NULL; - pr_info("STI selected %dx%d framebuffer font %s for sticon\n", + pr_info("STI selected %ux%u framebuffer font %s for sticon\n", fbfont->width, fbfont->height, fbfont->name); bpc = ((fbfont->width+7)/8) * fbfont->height; diff --git a/include/linux/font.h b/include/linux/font.h index b5b312c19e46..4f50d736ea72 100644 --- a/include/linux/font.h +++ b/include/linux/font.h @@ -16,7 +16,7 @@ struct font_desc { int idx; const char *name; - int width, height; + unsigned int width, height; const void *data; int pref; }; From 860e44f21f26d037041bf57b907a26b12bcef851 Mon Sep 17 00:00:00 2001 From: Helge Deller Date: Thu, 2 Jun 2022 13:55:26 +0200 Subject: [PATCH 0329/1842] parisc/stifb: Keep track of hardware path of graphics card commit b046f984814af7985f444150ec28716d42d00d9a upstream. Keep the pa_path (hardware path) of the graphics card in sti_struct and use this info to give more useful info which card is currently being used. Signed-off-by: Helge Deller Cc: stable@vger.kernel.org # v5.10+ Signed-off-by: Greg Kroah-Hartman --- drivers/video/console/sticon.c | 5 ++++- drivers/video/console/sticore.c | 15 +++++++-------- drivers/video/fbdev/sticore.h | 3 +++ 3 files changed, 14 insertions(+), 9 deletions(-) diff --git a/drivers/video/console/sticon.c b/drivers/video/console/sticon.c index 40496e9e9b43..f304163e87e9 100644 --- a/drivers/video/console/sticon.c +++ b/drivers/video/console/sticon.c @@ -46,6 +46,7 @@ #include #include #include +#include #include @@ -392,7 +393,9 @@ static int __init sticonsole_init(void) for (i = 0; i < MAX_NR_CONSOLES; i++) font_data[i] = STI_DEF_FONT; - pr_info("sticon: Initializing STI text console.\n"); + pr_info("sticon: Initializing STI text console on %s at [%s]\n", + sticon_sti->sti_data->inq_outptr.dev_name, + sticon_sti->pa_path); console_lock(); err = do_take_over_console(&sti_con, 0, MAX_NR_CONSOLES - 1, PAGE0->mem_cons.cl_class != CL_DUPLEX); diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c index 53ffd5898e85..77622ef401d8 100644 --- a/drivers/video/console/sticore.c +++ b/drivers/video/console/sticore.c @@ -34,7 +34,7 @@ #include "../fbdev/sticore.h" -#define STI_DRIVERVERSION "Version 0.9b" +#define STI_DRIVERVERSION "Version 0.9c" static struct sti_struct *default_sti __read_mostly; @@ -503,7 +503,7 @@ sti_select_fbfont(struct sti_cooked_rom *cooked_rom, const char *fbfont_name) if (!fbfont) return NULL; - pr_info("STI selected %ux%u framebuffer font %s for sticon\n", + pr_info(" using %ux%u framebuffer font %s\n", fbfont->width, fbfont->height, fbfont->name); bpc = ((fbfont->width+7)/8) * fbfont->height; @@ -947,6 +947,7 @@ static struct sti_struct *sti_try_rom_generic(unsigned long address, static void sticore_check_for_default_sti(struct sti_struct *sti, char *path) { + pr_info(" located 
at [%s]\n", sti->pa_path); if (strcmp (path, default_sti_path) == 0) default_sti = sti; } @@ -958,7 +959,6 @@ static void sticore_check_for_default_sti(struct sti_struct *sti, char *path) */ static int __init sticore_pa_init(struct parisc_device *dev) { - char pa_path[21]; struct sti_struct *sti = NULL; int hpa = dev->hpa.start; @@ -971,8 +971,8 @@ static int __init sticore_pa_init(struct parisc_device *dev) if (!sti) return 1; - print_pa_hwpath(dev, pa_path); - sticore_check_for_default_sti(sti, pa_path); + print_pa_hwpath(dev, sti->pa_path); + sticore_check_for_default_sti(sti, sti->pa_path); return 0; } @@ -1008,9 +1008,8 @@ static int sticore_pci_init(struct pci_dev *pd, const struct pci_device_id *ent) sti = sti_try_rom_generic(rom_base, fb_base, pd); if (sti) { - char pa_path[30]; - print_pci_hwpath(pd, pa_path); - sticore_check_for_default_sti(sti, pa_path); + print_pci_hwpath(pd, sti->pa_path); + sticore_check_for_default_sti(sti, sti->pa_path); } if (!sti) { diff --git a/drivers/video/fbdev/sticore.h b/drivers/video/fbdev/sticore.h index c338f7848ae2..0ebdd28a0b81 100644 --- a/drivers/video/fbdev/sticore.h +++ b/drivers/video/fbdev/sticore.h @@ -370,6 +370,9 @@ struct sti_struct { /* pointer to all internal data */ struct sti_all_data *sti_data; + + /* pa_path of this device */ + char pa_path[24]; }; From b4acb8e7f1594607bc9017ef0aacb40b24a003d6 Mon Sep 17 00:00:00 2001 From: Ammar Faizi Date: Tue, 29 Mar 2022 17:47:05 +0700 Subject: [PATCH 0330/1842] x86/MCE/AMD: Fix memory leak when threshold_create_bank() fails commit e5f28623ceb103e13fc3d7bd45edf9818b227fd0 upstream. In mce_threshold_create_device(), if threshold_create_bank() fails, the previously allocated threshold banks array @bp will be leaked because the call to mce_threshold_remove_device() will not free it. This happens because mce_threshold_remove_device() fetches the pointer through the threshold_banks per-CPU variable but bp is written there only after the bank creation is successful, and not before, when threshold_create_bank() fails. Add a helper which unwinds all the bank creation work previously done and pass into it the previously allocated threshold banks array for freeing. [ bp: Massage. 
] Fixes: 6458de97fc15 ("x86/mce/amd: Straighten CPU hotplug path") Co-developed-by: Alviro Iskandar Setiawan Signed-off-by: Alviro Iskandar Setiawan Co-developed-by: Yazen Ghannam Signed-off-by: Yazen Ghannam Signed-off-by: Ammar Faizi Signed-off-by: Borislav Petkov Cc: Link: https://lore.kernel.org/r/20220329104705.65256-3-ammarfaizi2@gnuweeb.org Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/mce/amd.c | 32 +++++++++++++++++++------------- 1 file changed, 19 insertions(+), 13 deletions(-) diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c index f73f1184b1c1..09f7c652346a 100644 --- a/arch/x86/kernel/cpu/mce/amd.c +++ b/arch/x86/kernel/cpu/mce/amd.c @@ -1457,10 +1457,23 @@ static void threshold_remove_bank(struct threshold_bank *bank) kfree(bank); } +static void __threshold_remove_device(struct threshold_bank **bp) +{ + unsigned int bank, numbanks = this_cpu_read(mce_num_banks); + + for (bank = 0; bank < numbanks; bank++) { + if (!bp[bank]) + continue; + + threshold_remove_bank(bp[bank]); + bp[bank] = NULL; + } + kfree(bp); +} + int mce_threshold_remove_device(unsigned int cpu) { struct threshold_bank **bp = this_cpu_read(threshold_banks); - unsigned int bank, numbanks = this_cpu_read(mce_num_banks); if (!bp) return 0; @@ -1471,13 +1484,7 @@ int mce_threshold_remove_device(unsigned int cpu) */ this_cpu_write(threshold_banks, NULL); - for (bank = 0; bank < numbanks; bank++) { - if (bp[bank]) { - threshold_remove_bank(bp[bank]); - bp[bank] = NULL; - } - } - kfree(bp); + __threshold_remove_device(bp); return 0; } @@ -1514,15 +1521,14 @@ int mce_threshold_create_device(unsigned int cpu) if (!(this_cpu_read(bank_map) & (1 << bank))) continue; err = threshold_create_bank(bp, cpu, bank); - if (err) - goto out_err; + if (err) { + __threshold_remove_device(bp); + return err; + } } this_cpu_write(threshold_banks, bp); if (thresholding_irq_en) mce_threshold_vector = amd_threshold_interrupt; return 0; -out_err: - mce_threshold_remove_device(cpu); - return err; } From 92fb46536aecaba2b4b8a5078e21b1a8e122a68d Mon Sep 17 00:00:00 2001 From: Kan Liang Date: Wed, 25 May 2022 06:39:52 -0700 Subject: [PATCH 0331/1842] perf/x86/intel: Fix event constraints for ICL commit 86dca369075b3e310c3c0adb0f81e513c562b5e4 upstream. According to the latest event list, the event encoding 0x55 INST_DECODED.DECODERS and 0x56 UOPS_DECODED.DEC0 are only available on the first 4 counters. Add them into the event constraints table. 
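As a toy model of what the widened range means (illustrative only, not the perf core code; the counter-mask handling is simplified), an event code maps to the set of general-purpose counters it may use, and 0x55/0x56 now fall inside the range restricted to the first four counters:

#include <stdio.h>

/* Toy model: 0xf restricts an event to GP counters 0-3, mirroring the
 * constraint table entry in the diff below; other codes are left
 * unrestricted in this sketch. */
static unsigned int icl_counter_mask_sketch(unsigned int event_code)
{
	if (event_code >= 0x48 && event_code <= 0x56)	/* was 0x48..0x54 before the fix */
		return 0xf;
	return ~0u;					/* no range restriction modelled */
}

int main(void)
{
	printf("INST_DECODED.DECODERS (0x55) -> counter mask 0x%x\n",
	       icl_counter_mask_sketch(0x55));
	printf("UOPS_DECODED.DEC0     (0x56) -> counter mask 0x%x\n",
	       icl_counter_mask_sketch(0x56));
	return 0;
}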
Fixes: 6017608936c1 ("perf/x86/intel: Add Icelake support") Signed-off-by: Kan Liang Signed-off-by: Ingo Molnar Acked-by: Peter Zijlstra Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220525133952.1660658-1-kan.liang@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- arch/x86/events/intel/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c index 5ba13b00e3a7..f6eadf9320a1 100644 --- a/arch/x86/events/intel/core.c +++ b/arch/x86/events/intel/core.c @@ -254,7 +254,7 @@ static struct event_constraint intel_icl_event_constraints[] = { INTEL_EVENT_CONSTRAINT_RANGE(0x03, 0x0a, 0xf), INTEL_EVENT_CONSTRAINT_RANGE(0x1f, 0x28, 0xf), INTEL_EVENT_CONSTRAINT(0x32, 0xf), /* SW_PREFETCH_ACCESS.* */ - INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x54, 0xf), + INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x56, 0xf), INTEL_EVENT_CONSTRAINT_RANGE(0x60, 0x8b, 0xf), INTEL_UEVENT_CONSTRAINT(0x04a3, 0xff), /* CYCLE_ACTIVITY.STALLS_TOTAL */ INTEL_UEVENT_CONSTRAINT(0x10a3, 0xff), /* CYCLE_ACTIVITY.CYCLES_MEM_ANY */ From 6580673b17e0d302a78b31c7b1b4c7155d0a6011 Mon Sep 17 00:00:00 2001 From: "Eric W. Biederman" Date: Tue, 26 Apr 2022 16:30:17 -0500 Subject: [PATCH 0332/1842] ptrace/um: Replace PT_DTRACE with TIF_SINGLESTEP commit c200e4bb44e80b343c09841e7caaaca0aac5e5fa upstream. User mode linux is the last user of the PT_DTRACE flag. Using the flag to indicate single stepping is a little confusing and, worse, changing tsk->ptrace without locking could potentially cause problems. So use a thread info flag with a better name instead of a flag in tsk->ptrace. Remove the definition PT_DTRACE as uml is the last user. Cc: stable@vger.kernel.org Acked-by: Johannes Berg Tested-by: Kees Cook Reviewed-by: Oleg Nesterov Link: https://lkml.kernel.org/r/20220505182645.497868-3-ebiederm@xmission.com Signed-off-by: "Eric W.
Biederman" Signed-off-by: Greg Kroah-Hartman --- arch/um/include/asm/thread_info.h | 2 ++ arch/um/kernel/exec.c | 2 +- arch/um/kernel/process.c | 2 +- arch/um/kernel/ptrace.c | 8 ++++---- arch/um/kernel/signal.c | 4 ++-- include/linux/ptrace.h | 1 - 6 files changed, 10 insertions(+), 9 deletions(-) diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h index 4c19ce4c49f1..66ab6a07330b 100644 --- a/arch/um/include/asm/thread_info.h +++ b/arch/um/include/asm/thread_info.h @@ -63,6 +63,7 @@ static inline struct thread_info *current_thread_info(void) #define TIF_RESTORE_SIGMASK 7 #define TIF_NOTIFY_RESUME 8 #define TIF_SECCOMP 9 /* secure computing */ +#define TIF_SINGLESTEP 10 /* single stepping userspace */ #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) @@ -70,5 +71,6 @@ static inline struct thread_info *current_thread_info(void) #define _TIF_MEMDIE (1 << TIF_MEMDIE) #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) #define _TIF_SECCOMP (1 << TIF_SECCOMP) +#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) #endif diff --git a/arch/um/kernel/exec.c b/arch/um/kernel/exec.c index e8fd5d540b05..7f7a74c82abb 100644 --- a/arch/um/kernel/exec.c +++ b/arch/um/kernel/exec.c @@ -44,7 +44,7 @@ void start_thread(struct pt_regs *regs, unsigned long eip, unsigned long esp) { PT_REGS_IP(regs) = eip; PT_REGS_SP(regs) = esp; - current->ptrace &= ~PT_DTRACE; + clear_thread_flag(TIF_SINGLESTEP); #ifdef SUBARCH_EXECVE1 SUBARCH_EXECVE1(regs->regs); #endif diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c index 9505a7e87396..8eb8b736abc1 100644 --- a/arch/um/kernel/process.c +++ b/arch/um/kernel/process.c @@ -341,7 +341,7 @@ int singlestepping(void * t) { struct task_struct *task = t ? 
t : current; - if (!(task->ptrace & PT_DTRACE)) + if (!test_thread_flag(TIF_SINGLESTEP)) return 0; if (task->thread.singlestep_syscall) diff --git a/arch/um/kernel/ptrace.c b/arch/um/kernel/ptrace.c index b425f47bddbb..d37802ced563 100644 --- a/arch/um/kernel/ptrace.c +++ b/arch/um/kernel/ptrace.c @@ -12,7 +12,7 @@ void user_enable_single_step(struct task_struct *child) { - child->ptrace |= PT_DTRACE; + set_tsk_thread_flag(child, TIF_SINGLESTEP); child->thread.singlestep_syscall = 0; #ifdef SUBARCH_SET_SINGLESTEPPING @@ -22,7 +22,7 @@ void user_enable_single_step(struct task_struct *child) void user_disable_single_step(struct task_struct *child) { - child->ptrace &= ~PT_DTRACE; + clear_tsk_thread_flag(child, TIF_SINGLESTEP); child->thread.singlestep_syscall = 0; #ifdef SUBARCH_SET_SINGLESTEPPING @@ -121,7 +121,7 @@ static void send_sigtrap(struct uml_pt_regs *regs, int error_code) } /* - * XXX Check PT_DTRACE vs TIF_SINGLESTEP for singlestepping check and + * XXX Check TIF_SINGLESTEP for singlestepping check and * PT_PTRACED vs TIF_SYSCALL_TRACE for syscall tracing check */ int syscall_trace_enter(struct pt_regs *regs) @@ -145,7 +145,7 @@ void syscall_trace_leave(struct pt_regs *regs) audit_syscall_exit(regs); /* Fake a debug trap */ - if (ptraced & PT_DTRACE) + if (test_thread_flag(TIF_SINGLESTEP)) send_sigtrap(&regs->regs, 0); if (!test_thread_flag(TIF_SYSCALL_TRACE)) diff --git a/arch/um/kernel/signal.c b/arch/um/kernel/signal.c index 88cd9b5c1b74..ae4658f576ab 100644 --- a/arch/um/kernel/signal.c +++ b/arch/um/kernel/signal.c @@ -53,7 +53,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) unsigned long sp; int err; - if ((current->ptrace & PT_DTRACE) && (current->ptrace & PT_PTRACED)) + if (test_thread_flag(TIF_SINGLESTEP) && (current->ptrace & PT_PTRACED)) singlestep = 1; /* Did we come from a system call? */ @@ -128,7 +128,7 @@ void do_signal(struct pt_regs *regs) * on the host. The tracing thread will check this flag and * PTRACE_SYSCALL if necessary. */ - if (current->ptrace & PT_DTRACE) + if (test_thread_flag(TIF_SINGLESTEP)) current->thread.singlestep_syscall = is_syscall(PT_REGS_IP(&current->thread.regs)); diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h index 2a9df80ea887..468bb073c993 100644 --- a/include/linux/ptrace.h +++ b/include/linux/ptrace.h @@ -30,7 +30,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr, #define PT_SEIZED 0x00010000 /* SEIZE used, enable new behavior */ #define PT_PTRACED 0x00000001 -#define PT_DTRACE 0x00000002 /* delayed trace (used on m68k, i386) */ #define PT_OPT_FLAG_SHIFT 3 /* PT_TRACE_* event enable flags */ From f8ef79687b2e156841b5758cc96e63796fd53542 Mon Sep 17 00:00:00 2001 From: "Eric W. Biederman" Date: Tue, 26 Apr 2022 16:45:37 -0500 Subject: [PATCH 0333/1842] ptrace/xtensa: Replace PT_SINGLESTEP with TIF_SINGLESTEP commit 4a3d2717d140401df7501a95e454180831a0c5af upstream. xtensa is the last user of the PT_SINGLESTEP flag. Changing tsk->ptrace in user_enable_single_step and user_disable_single_step without locking could potentially cause problems. So use a thread info flag instead of a flag in tsk->ptrace. Use TIF_SINGLESTEP that xtensa already had defined but unused. Remove the definitions of PT_SINGLESTEP and PT_BLOCKSTEP as they have no more users. Cc: stable@vger.kernel.org Acked-by: Max Filippov Tested-by: Kees Cook Reviewed-by: Oleg Nesterov Link: https://lkml.kernel.org/r/20220505182645.497868-4-ebiederm@xmission.com Signed-off-by: "Eric W.
Biederman" Signed-off-by: Greg Kroah-Hartman --- arch/xtensa/kernel/ptrace.c | 4 ++-- arch/xtensa/kernel/signal.c | 4 ++-- include/linux/ptrace.h | 6 ------ 3 files changed, 4 insertions(+), 10 deletions(-) diff --git a/arch/xtensa/kernel/ptrace.c b/arch/xtensa/kernel/ptrace.c index bb3f4797d212..db6cdea471d8 100644 --- a/arch/xtensa/kernel/ptrace.c +++ b/arch/xtensa/kernel/ptrace.c @@ -226,12 +226,12 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task) void user_enable_single_step(struct task_struct *child) { - child->ptrace |= PT_SINGLESTEP; + set_tsk_thread_flag(child, TIF_SINGLESTEP); } void user_disable_single_step(struct task_struct *child) { - child->ptrace &= ~PT_SINGLESTEP; + clear_tsk_thread_flag(child, TIF_SINGLESTEP); } /* diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c index 1fb1047f905c..1cb230fafdf2 100644 --- a/arch/xtensa/kernel/signal.c +++ b/arch/xtensa/kernel/signal.c @@ -465,7 +465,7 @@ static void do_signal(struct pt_regs *regs) /* Set up the stack frame */ ret = setup_frame(&ksig, sigmask_to_save(), regs); signal_setup_done(ret, &ksig, 0); - if (current->ptrace & PT_SINGLESTEP) + if (test_thread_flag(TIF_SINGLESTEP)) task_pt_regs(current)->icountlevel = 1; return; @@ -491,7 +491,7 @@ static void do_signal(struct pt_regs *regs) /* If there's no signal to deliver, we just restore the saved mask. */ restore_saved_sigmask(); - if (current->ptrace & PT_SINGLESTEP) + if (test_thread_flag(TIF_SINGLESTEP)) task_pt_regs(current)->icountlevel = 1; return; } diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h index 468bb073c993..ae7dbdfa3d83 100644 --- a/include/linux/ptrace.h +++ b/include/linux/ptrace.h @@ -46,12 +46,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr, #define PT_EXITKILL (PTRACE_O_EXITKILL << PT_OPT_FLAG_SHIFT) #define PT_SUSPEND_SECCOMP (PTRACE_O_SUSPEND_SECCOMP << PT_OPT_FLAG_SHIFT) -/* single stepping state bits (used on ARM and PA-RISC) */ -#define PT_SINGLESTEP_BIT 31 -#define PT_SINGLESTEP (1<<PT_SINGLESTEP_BIT) -#define PT_BLOCKSTEP_BIT 30 -#define PT_BLOCKSTEP (1<<PT_BLOCKSTEP_BIT) From: "Eric W. Biederman" Date: Fri, 29 Apr 2022 09:23:55 -0500 Subject: [PATCH 0334/1842] ptrace: Reimplement PTRACE_KILL by always sending SIGKILL commit 6a2d90ba027adba528509ffa27097cffd3879257 upstream. The current implementation of PTRACE_KILL is buggy and has been for many years as it assumes its target has stopped in ptrace_stop. At a quick skim it looks like this assumption has existed since ptrace support was added in linux v1.0. While PTRACE_KILL has been deprecated we cannot remove it as a quick search with google code search reveals many existing programs calling it. When the ptracee is not stopped at ptrace_stop some fields would be set that are ignored except in ptrace_stop, making the userspace-visible behavior of PTRACE_KILL a noop in those cases. As the usual rules are not obeyed it is not clear what the consequences are of calling PTRACE_KILL on a running process. Presumably userspace does not do this as it achieves nothing. Replace the implementation of PTRACE_KILL with a simple send_sig_info(SIGKILL) followed by a return 0. This changes the observable user space behavior only in that PTRACE_KILL on a process not stopped in ptrace_stop will also kill it. As that has always been the intent of the code this seems like a reasonable change. Cc: stable@vger.kernel.org Reported-by: Al Viro Suggested-by: Al Viro Tested-by: Kees Cook Reviewed-by: Oleg Nesterov Link: https://lkml.kernel.org/r/20220505182645.497868-7-ebiederm@xmission.com Signed-off-by: "Eric W.
Biederman" Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/step.c | 3 +-- kernel/ptrace.c | 5 ++--- 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c index 60d2c3798ba2..2f97d1a1032f 100644 --- a/arch/x86/kernel/step.c +++ b/arch/x86/kernel/step.c @@ -175,8 +175,7 @@ void set_task_blockstep(struct task_struct *task, bool on) * * NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if * task is current or it can't be running, otherwise we can race - * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but - * PTRACE_KILL is not safe. + * with __switch_to_xtra(). We rely on ptrace_freeze_traced(). */ local_irq_disable(); debugctl = get_debugctlmsr(); diff --git a/kernel/ptrace.c b/kernel/ptrace.c index d99f73f83bf5..aab480e24bd6 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c @@ -1219,9 +1219,8 @@ int ptrace_request(struct task_struct *child, long request, return ptrace_resume(child, request, data); case PTRACE_KILL: - if (child->exit_state) /* already dead */ - return 0; - return ptrace_resume(child, request, SIGKILL); + send_sig_info(SIGKILL, SEND_SIG_NOINFO, child); + return 0; #ifdef CONFIG_HAVE_ARCH_TRACEHOOK case PTRACE_GETREGSET: From 4093eea47d9c2cc0cad47d39e6dd066338ee80f8 Mon Sep 17 00:00:00 2001 From: Qu Wenruo Date: Tue, 10 May 2022 15:10:18 +0800 Subject: [PATCH 0335/1842] btrfs: add "0x" prefix for unsupported optional features commit d5321a0fa8bc49f11bea0b470800962c17d92d8f upstream. The following error message obviously lacks the "0x" prefix: cannot mount because of unsupported optional features (4000) Add the prefix to make it less confusing. This can happen on older kernels that try to mount a filesystem with newer features so it makes sense to backport to older trees. CC: stable@vger.kernel.org # 4.14+ Reviewed-by: Nikolay Borisov Signed-off-by: Qu Wenruo Reviewed-by: David Sterba Signed-off-by: David Sterba Signed-off-by: Greg Kroah-Hartman --- fs/btrfs/disk-io.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 87e55b024ac2..35acdab56a1c 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -3061,7 +3061,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device ~BTRFS_FEATURE_INCOMPAT_SUPP; if (features) { btrfs_err(fs_info, - "cannot mount because of unsupported optional features (%llx)", + "cannot mount because of unsupported optional features (0x%llx)", features); err = -EINVAL; goto fail_alloc; @@ -3099,7 +3099,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device ~BTRFS_FEATURE_COMPAT_RO_SUPP; if (!sb_rdonly(sb) && features) { btrfs_err(fs_info, - "cannot mount read-write because of unsupported optional features (%llx)", + "cannot mount read-write because of unsupported optional features (0x%llx)", features); err = -EINVAL; goto fail_alloc; From 509e9710b80215892d84c5b7122a9c19d1eb0df6 Mon Sep 17 00:00:00 2001 From: Qu Wenruo Date: Mon, 28 Feb 2022 15:05:53 +0800 Subject: [PATCH 0336/1842] btrfs: repair super block num_devices automatically MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit d201238ccd2f30b9bfcfadaeae0972e3a486a176 upstream. [BUG] There is a report that a btrfs filesystem has a bad super block num_devices. This makes btrfs reject the fs completely.
BTRFS error (device sdd3): super_num_devices 3 mismatch with num_devices 2 found here BTRFS error (device sdd3): failed to read chunk tree: -22 BTRFS error (device sdd3): open_ctree failed [CAUSE] During btrfs device removal, chunk tree and super block num devs are updated in two different transactions: btrfs_rm_device() |- btrfs_rm_dev_item(device) | |- trans = btrfs_start_transaction() | | Now we got transaction X | | | |- btrfs_del_item() | | Now device item is removed from chunk tree | | | |- btrfs_commit_transaction() | Transaction X got committed, super num devs untouched, | but device item removed from chunk tree. | (AKA, super num devs is already incorrect) | |- cur_devices->num_devices--; |- cur_devices->total_devices--; |- btrfs_set_super_num_devices() All those operations are not in transaction X, thus it will only be written back to disk in next transaction. So after the transaction X in btrfs_rm_dev_item() committed, but before transaction X+1 (which can be minutes away), a power loss happen, then we got the super num mismatch. This has been fixed by commit bbac58698a55 ("btrfs: remove device item and update super block in the same transaction"). [FIX] Make the super_num_devices check less strict, converting it from a hard error to a warning, and reset the value to a correct one for the current or next transaction commit. As the number of device items is the critical information where the super block num_devices is only a cached value (and also useful for cross checking), it's safe to automatically update it. Other device related problems like missing device are handled after that and may require other means to resolve, like degraded mount. With this fix, potentially affected filesystems won't fail mount and require the manual repair by btrfs check. Reported-by: Luca Béla Palkovics Link: https://lore.kernel.org/linux-btrfs/CA+8xDSpvdm_U0QLBAnrH=zqDq_cWCOH5TiV46CKmp3igr44okQ@mail.gmail.com/ CC: stable@vger.kernel.org # 4.14+ Signed-off-by: Qu Wenruo Reviewed-by: David Sterba Signed-off-by: David Sterba Signed-off-by: Greg Kroah-Hartman --- fs/btrfs/volumes.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index 366d04763864..2fdf178aa76f 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -7191,12 +7191,12 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info) * do another round of validation checks. 
*/ if (total_dev != fs_info->fs_devices->total_devices) { - btrfs_err(fs_info, - "super_num_devices %llu mismatch with num_devices %llu found here", + btrfs_warn(fs_info, +"super block num_devices %llu mismatch with DEV_ITEM count %llu, will be repaired on next transaction commit", btrfs_super_num_devices(fs_info->super_copy), total_dev); - ret = -EINVAL; - goto error; + fs_info->fs_devices->total_devices = total_dev; + btrfs_set_super_num_devices(fs_info->super_copy, total_dev); } if (btrfs_super_total_bytes(fs_info->super_copy) < fs_info->fs_devices->total_rw_bytes) { From c6380d9d2d699659633ec1ac0ad9359498fd17fe Mon Sep 17 00:00:00 2001 From: Tejas Upadhyay Date: Wed, 2 Mar 2022 10:02:56 +0530 Subject: [PATCH 0337/1842] iommu/vt-d: Add RPLS to quirk list to skip TE disabling [ Upstream commit 0a967f5bfd9134b89681cae58deb222e20840e76 ] The VT-d spec requires (10.4.4 Global Command Register, TE field) that: Hardware implementations supporting DMA draining must drain any in-flight DMA read/write requests queued within the Root-Complex before completing the translation enable command and reflecting the status of the command through the TES field in the Global Status register. Unfortunately, some integrated graphics devices fail to do so after some kind of power state transition. As a result, the system might get stuck in iommu_disable_translation(), waiting for the completion of the TE transition. This adds RPLS to a quirk list for those devices and skips TE disabling if the quirk hits. Link: https://gitlab.freedesktop.org/drm/intel/-/issues/4898 Tested-by: Raviteja Goud Talla Cc: Rodrigo Vivi Acked-by: Lu Baolu Signed-off-by: Tejas Upadhyay Reviewed-by: Rodrigo Vivi Signed-off-by: Rodrigo Vivi Link: https://patchwork.freedesktop.org/patch/msgid/20220302043256.191529-1-tejaskumarx.surendrakumar.upadhyay@intel.com Signed-off-by: Sasha Levin --- drivers/iommu/intel/iommu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c index 21749859ad45..477dde39823c 100644 --- a/drivers/iommu/intel/iommu.c +++ b/drivers/iommu/intel/iommu.c @@ -6296,7 +6296,7 @@ static void quirk_igfx_skip_te_disable(struct pci_dev *dev) ver = (dev->device >> 8) & 0xff; if (ver != 0x45 && ver != 0x46 && ver != 0x4c && ver != 0x4e && ver != 0x8a && ver != 0x98 && - ver != 0x9a) + ver != 0x9a && ver != 0xa7) return; if (risky_device(dev)) From fadc626cae99aaa1325094edc6a9e2b883f3e562 Mon Sep 17 00:00:00 2001 From: Liu Zixian Date: Tue, 22 Mar 2022 17:17:30 +0800 Subject: [PATCH 0338/1842] drm/virtio: fix NULL pointer dereference in virtio_gpu_conn_get_modes [ Upstream commit 194d250cdc4a40ccbd179afd522a9e9846957402 ] drm_cvt_mode may return NULL and we should check it. This bug is found by syzkaller: FAULT_INJECTION stacktrace: [ 168.567394] FAULT_INJECTION: forcing a failure.
name failslab, interval 1, probability 0, space 0, times 1 [ 168.567403] CPU: 1 PID: 6425 Comm: syz Kdump: loaded Not tainted 4.19.90-vhulk2201.1.0.h1035.kasan.eulerosv2r10.aarch64 #1 [ 168.567406] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 [ 168.567408] Call trace: [ 168.567414] dump_backtrace+0x0/0x310 [ 168.567418] show_stack+0x28/0x38 [ 168.567423] dump_stack+0xec/0x15c [ 168.567427] should_fail+0x3ac/0x3d0 [ 168.567437] __should_failslab+0xb8/0x120 [ 168.567441] should_failslab+0x28/0xc0 [ 168.567445] kmem_cache_alloc_trace+0x50/0x640 [ 168.567454] drm_mode_create+0x40/0x90 [ 168.567458] drm_cvt_mode+0x48/0xc78 [ 168.567477] virtio_gpu_conn_get_modes+0xa8/0x140 [virtio_gpu] [ 168.567485] drm_helper_probe_single_connector_modes+0x3a4/0xd80 [ 168.567492] drm_mode_getconnector+0x2e0/0xa70 [ 168.567496] drm_ioctl_kernel+0x11c/0x1d8 [ 168.567514] drm_ioctl+0x558/0x6d0 [ 168.567522] do_vfs_ioctl+0x160/0xf30 [ 168.567525] ksys_ioctl+0x98/0xd8 [ 168.567530] __arm64_sys_ioctl+0x50/0xc8 [ 168.567536] el0_svc_common+0xc8/0x320 [ 168.567540] el0_svc_handler+0xf8/0x160 [ 168.567544] el0_svc+0x10/0x218 KASAN stacktrace: [ 168.567561] BUG: KASAN: null-ptr-deref in virtio_gpu_conn_get_modes+0xb4/0x140 [virtio_gpu] [ 168.567565] Read of size 4 at addr 0000000000000054 by task syz/6425 [ 168.567566] [ 168.567571] CPU: 1 PID: 6425 Comm: syz Kdump: loaded Not tainted 4.19.90-vhulk2201.1.0.h1035.kasan.eulerosv2r10.aarch64 #1 [ 168.567573] Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 [ 168.567575] Call trace: [ 168.567578] dump_backtrace+0x0/0x310 [ 168.567582] show_stack+0x28/0x38 [ 168.567586] dump_stack+0xec/0x15c [ 168.567591] kasan_report+0x244/0x2f0 [ 168.567594] __asan_load4+0x58/0xb0 [ 168.567607] virtio_gpu_conn_get_modes+0xb4/0x140 [virtio_gpu] [ 168.567612] drm_helper_probe_single_connector_modes+0x3a4/0xd80 [ 168.567617] drm_mode_getconnector+0x2e0/0xa70 [ 168.567621] drm_ioctl_kernel+0x11c/0x1d8 [ 168.567624] drm_ioctl+0x558/0x6d0 [ 168.567628] do_vfs_ioctl+0x160/0xf30 [ 168.567632] ksys_ioctl+0x98/0xd8 [ 168.567636] __arm64_sys_ioctl+0x50/0xc8 [ 168.567641] el0_svc_common+0xc8/0x320 [ 168.567645] el0_svc_handler+0xf8/0x160 [ 168.567649] el0_svc+0x10/0x218 Signed-off-by: Liu Zixian Link: http://patchwork.freedesktop.org/patch/msgid/20220322091730.1653-1-liuzixian4@huawei.com Signed-off-by: Gerd Hoffmann Signed-off-by: Sasha Levin --- drivers/gpu/drm/virtio/virtgpu_display.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c index f84b7e61311b..9b2b99e85342 100644 --- a/drivers/gpu/drm/virtio/virtgpu_display.c +++ b/drivers/gpu/drm/virtio/virtgpu_display.c @@ -177,6 +177,8 @@ static int virtio_gpu_conn_get_modes(struct drm_connector *connector) DRM_DEBUG("add mode: %dx%d\n", width, height); mode = drm_cvt_mode(connector->dev, width, height, 60, false, false, false); + if (!mode) + return count; mode->type |= DRM_MODE_TYPE_PREFERRED; drm_mode_probed_add(connector, mode); count++; From 74ad0d7450206fb22a853c3c0a3a17d8734ecd2b Mon Sep 17 00:00:00 2001 From: Niels Dossche Date: Mon, 21 Mar 2022 23:55:16 +0100 Subject: [PATCH 0339/1842] mwifiex: add mutex lock for call in mwifiex_dfs_chan_sw_work_queue [ Upstream commit 3e12968f6d12a34b540c39cbd696a760cc4616f0 ] cfg80211_ch_switch_notify uses ASSERT_WDEV_LOCK to assert that net_device->ieee80211_ptr->mtx (which is the same as priv->wdev.mtx) is held during the function's execution. 
mwifiex_dfs_chan_sw_work_queue is one of its callers, which does not hold that lock, therefore violating the assertion. Add a lock around the call. Disclaimer: I am currently working on a static analyser to detect missing locks. This was a reported case. I manually verified the report by looking at the code, so that I do not send wrong information or patches. After concluding that this seems to be a true positive, I created this patch. However, as I do not in fact have this particular hardware, I was unable to test it. Reviewed-by: Brian Norris Signed-off-by: Niels Dossche Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220321225515.32113-1-dossche.niels@gmail.com Signed-off-by: Sasha Levin --- drivers/net/wireless/marvell/mwifiex/11h.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/net/wireless/marvell/mwifiex/11h.c b/drivers/net/wireless/marvell/mwifiex/11h.c index d2ee6469e67b..3fa25cd64cda 100644 --- a/drivers/net/wireless/marvell/mwifiex/11h.c +++ b/drivers/net/wireless/marvell/mwifiex/11h.c @@ -303,5 +303,7 @@ void mwifiex_dfs_chan_sw_work_queue(struct work_struct *work) mwifiex_dbg(priv->adapter, MSG, "indicating channel switch completion to kernel\n"); + mutex_lock(&priv->wdev.mtx); cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef); + mutex_unlock(&priv->wdev.mtx); } From 4ae5a2ccf5da143cb210ce2f82f5f74a822bfd8e Mon Sep 17 00:00:00 2001 From: Haowen Bai Date: Fri, 25 Mar 2022 18:17:13 +0800 Subject: [PATCH 0340/1842] b43legacy: Fix assigning negative value to unsigned variable [ Upstream commit 3f6b867559b3d43a7ce1b4799b755e812fc0d503 ] fix warning reported by smatch: drivers/net/wireless/broadcom/b43legacy/phy.c:1181 b43legacy_phy_lo_b_measure() warn: assigning (-772) to unsigned variable 'fval' Signed-off-by: Haowen Bai Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/1648203433-8736-1-git-send-email-baihaowen@meizu.com Signed-off-by: Sasha Levin --- drivers/net/wireless/broadcom/b43legacy/phy.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/wireless/broadcom/b43legacy/phy.c b/drivers/net/wireless/broadcom/b43legacy/phy.c index 05404fbd1e70..c1395e622759 100644 --- a/drivers/net/wireless/broadcom/b43legacy/phy.c +++ b/drivers/net/wireless/broadcom/b43legacy/phy.c @@ -1123,7 +1123,7 @@ void b43legacy_phy_lo_b_measure(struct b43legacy_wldev *dev) struct b43legacy_phy *phy = &dev->phy; u16 regstack[12] = { 0 }; u16 mls; - u16 fval; + s16 fval; int i; int j; From cc575b855809b9cf12b8e111b3c5f1bc02b9424a Mon Sep 17 00:00:00 2001 From: Haowen Bai Date: Fri, 25 Mar 2022 18:15:15 +0800 Subject: [PATCH 0341/1842] b43: Fix assigning negative value to unsigned variable [ Upstream commit 11800d893b38e0e12d636c170c1abc19c43c730c ] fix warning reported by smatch: drivers/net/wireless/broadcom/b43/phy_n.c:585 b43_nphy_adjust_lna_gain_table() warn: assigning (-2) to unsigned variable '*(lna_gain[0])' Signed-off-by: Haowen Bai Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/1648203315-28093-1-git-send-email-baihaowen@meizu.com Signed-off-by: Sasha Levin --- drivers/net/wireless/broadcom/b43/phy_n.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/wireless/broadcom/b43/phy_n.c b/drivers/net/wireless/broadcom/b43/phy_n.c index 665b737fbb0d..39975b7d1a16 100644 --- a/drivers/net/wireless/broadcom/b43/phy_n.c +++ b/drivers/net/wireless/broadcom/b43/phy_n.c @@ -582,7 +582,7 @@ static void b43_nphy_adjust_lna_gain_table(struct b43_wldev *dev) u16 data[4]; s16 gain[2]; u16 minmax[2]; - 
static const u16 lna_gain[4] = { -2, 10, 19, 25 }; + static const s16 lna_gain[4] = { -2, 10, 19, 25 }; if (nphy->hang_avoid) b43_nphy_stay_in_carrier_search(dev, 1); From 98d1dc32f890642476dbb78ed3437a456bf421b0 Mon Sep 17 00:00:00 2001 From: Haowen Bai Date: Fri, 1 Apr 2022 15:10:54 +0800 Subject: [PATCH 0342/1842] ipw2x00: Fix potential NULL dereference in libipw_xmit() [ Upstream commit e8366bbabe1d207cf7c5b11ae50e223ae6fc278b ] crypt and crypt->ops could be null, so we need to checking null before dereference Signed-off-by: Haowen Bai Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/1648797055-25730-1-git-send-email-baihaowen@meizu.com Signed-off-by: Sasha Levin --- drivers/net/wireless/intel/ipw2x00/libipw_tx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c index d9baa2fa603b..e4c60caa6543 100644 --- a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c +++ b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c @@ -383,7 +383,7 @@ netdev_tx_t libipw_xmit(struct sk_buff *skb, struct net_device *dev) /* Each fragment may need to have room for encryption * pre/postfix */ - if (host_encrypt) + if (host_encrypt && crypt && crypt->ops) bytes_per_frag -= crypt->ops->extra_mpdu_prefix_len + crypt->ops->extra_mpdu_postfix_len; From 5bfb65e92ff36cba3bf52e24a04af80ffb41bcf5 Mon Sep 17 00:00:00 2001 From: Niels Dossche Date: Mon, 4 Apr 2022 01:15:24 +0200 Subject: [PATCH 0343/1842] ipv6: fix locking issues with loops over idev->addr_list [ Upstream commit 51454ea42c1ab4e0c2828bb0d4d53957976980de ] idev->addr_list needs to be protected by idev->lock. However, it is not always possible to do so while iterating and performing actions on inet6_ifaddr instances. For example, multiple functions (like addrconf_{join,leave}_anycast) eventually call down to other functions that acquire the idev->lock. The current code temporarily unlocked the idev->lock during the loops, which can cause race conditions. Moving the locks up is also not an appropriate solution as the ordering of lock acquisition will be inconsistent with for example mc_lock. This solution adds an additional field to inet6_ifaddr that is used to temporarily add the instances to a temporary list while holding idev->lock. The temporary list can then be traversed without holding idev->lock. This change was done in two places. In addrconf_ifdown, the list_for_each_entry_safe variant of the list loop is also no longer necessary as there is no deletion within that specific loop. Suggested-by: Paolo Abeni Signed-off-by: Niels Dossche Acked-by: Paolo Abeni Link: https://lore.kernel.org/r/20220403231523.45843-1-dossche.niels@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- include/net/if_inet6.h | 8 ++++++++ net/ipv6/addrconf.c | 30 ++++++++++++++++++++++++------ 2 files changed, 32 insertions(+), 6 deletions(-) diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h index 8bf5906073bc..e03ba8e80781 100644 --- a/include/net/if_inet6.h +++ b/include/net/if_inet6.h @@ -64,6 +64,14 @@ struct inet6_ifaddr { struct hlist_node addr_lst; struct list_head if_list; + /* + * Used to safely traverse idev->addr_list in process context + * if the idev->lock needed to protect idev->addr_list cannot be held. + * In that case, add the items to this list temporarily and iterate + * without holding idev->lock. + * See addrconf_ifdown and dev_forward_change. 
+ */ + struct list_head if_list_aux; struct list_head tmp_list; struct inet6_ifaddr *ifpub; diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 86bcb1825698..4584bb50960b 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -789,6 +789,7 @@ static void dev_forward_change(struct inet6_dev *idev) { struct net_device *dev; struct inet6_ifaddr *ifa; + LIST_HEAD(tmp_addr_list); if (!idev) return; @@ -807,14 +808,24 @@ static void dev_forward_change(struct inet6_dev *idev) } } + read_lock_bh(&idev->lock); list_for_each_entry(ifa, &idev->addr_list, if_list) { if (ifa->flags&IFA_F_TENTATIVE) continue; + list_add_tail(&ifa->if_list_aux, &tmp_addr_list); + } + read_unlock_bh(&idev->lock); + + while (!list_empty(&tmp_addr_list)) { + ifa = list_first_entry(&tmp_addr_list, + struct inet6_ifaddr, if_list_aux); + list_del(&ifa->if_list_aux); if (idev->cnf.forwarding) addrconf_join_anycast(ifa); else addrconf_leave_anycast(ifa); } + inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF, NETCONFA_FORWARDING, dev->ifindex, &idev->cnf); @@ -3710,7 +3721,8 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister) unsigned long event = unregister ? NETDEV_UNREGISTER : NETDEV_DOWN; struct net *net = dev_net(dev); struct inet6_dev *idev; - struct inet6_ifaddr *ifa, *tmp; + struct inet6_ifaddr *ifa; + LIST_HEAD(tmp_addr_list); bool keep_addr = false; bool was_ready; int state, i; @@ -3802,16 +3814,23 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister) write_lock_bh(&idev->lock); } - list_for_each_entry_safe(ifa, tmp, &idev->addr_list, if_list) { + list_for_each_entry(ifa, &idev->addr_list, if_list) + list_add_tail(&ifa->if_list_aux, &tmp_addr_list); + write_unlock_bh(&idev->lock); + + while (!list_empty(&tmp_addr_list)) { struct fib6_info *rt = NULL; bool keep; + ifa = list_first_entry(&tmp_addr_list, + struct inet6_ifaddr, if_list_aux); + list_del(&ifa->if_list_aux); + addrconf_del_dad_work(ifa); keep = keep_addr && (ifa->flags & IFA_F_PERMANENT) && !addr_is_local(&ifa->addr); - write_unlock_bh(&idev->lock); spin_lock_bh(&ifa->lock); if (keep) { @@ -3842,15 +3861,14 @@ static int addrconf_ifdown(struct net_device *dev, bool unregister) addrconf_leave_solict(ifa->idev, &ifa->addr); } - write_lock_bh(&idev->lock); if (!keep) { + write_lock_bh(&idev->lock); list_del_rcu(&ifa->if_list); + write_unlock_bh(&idev->lock); in6_ifa_put(ifa); } } - write_unlock_bh(&idev->lock); - /* Step 5: Discard anycast and multicast list */ if (unregister) { ipv6_ac_destroy_dev(idev); From 59dd1a07eecfde63cfdbacfca965292ad7d3601b Mon Sep 17 00:00:00 2001 From: Daniel Vetter Date: Tue, 5 Apr 2022 23:03:31 +0200 Subject: [PATCH 0344/1842] fbcon: Consistently protect deferred_takeover with console_lock() [ Upstream commit 43553559121ca90965b572cf8a1d6d0fd618b449 ] This shouldn't be a problem in practice since until we've actually taken over the console there's nothing we've registered with the console/vt subsystem, so the exit/unbind path that check this can't do the wrong thing. But it's confusing, so fix it by moving it a tad later. 
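A minimal sketch of the locking pattern the fbcon change adopts: the takeover state is only read or written while console_lock() is held, so the deferred work flips it inside the same critical section it uses to register with the console layer. The flag and function names below are illustrative, not the driver's own.

#include <linux/console.h>
#include <linux/workqueue.h>

static bool takeover_deferred = true;	/* guarded by console_lock() */

static void deferred_takeover_work_fn(struct work_struct *work)
{
	console_lock();
	takeover_deferred = false;	/* cleared under the same lock ... */
	/* ... that the register/unbind paths hold when they test it */
	console_unlock();
}

static void unbind_path(void)
{
	console_lock();
	if (!takeover_deferred) {
		/* takeover already happened; safe to unregister here */
	}
	console_unlock();
}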
Acked-by: Sam Ravnborg Signed-off-by: Daniel Vetter Cc: Daniel Vetter Cc: Du Cheng Cc: Tetsuo Handa Cc: Claudio Suarez Cc: Thomas Zimmermann Link: https://patchwork.freedesktop.org/patch/msgid/20220405210335.3434130-14-daniel.vetter@ffwll.ch Signed-off-by: Sasha Levin --- drivers/video/fbdev/core/fbcon.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c index f102519ccefb..13de2bebb09a 100644 --- a/drivers/video/fbdev/core/fbcon.c +++ b/drivers/video/fbdev/core/fbcon.c @@ -3300,6 +3300,9 @@ static void fbcon_register_existing_fbs(struct work_struct *work) console_lock(); + deferred_takeover = false; + logo_shown = FBCON_LOGO_DONTSHOW; + for_each_registered_fb(i) fbcon_fb_registered(registered_fb[i]); @@ -3317,8 +3320,6 @@ static int fbcon_output_notifier(struct notifier_block *nb, pr_info("fbcon: Taking over console\n"); dummycon_unregister_output_notifier(&fbcon_output_nb); - deferred_takeover = false; - logo_shown = FBCON_LOGO_DONTSHOW; /* We may get called in atomic context */ schedule_work(&fbcon_deferred_takeover_work); From 29cb802966c796f38454947d1fcad0275e97321c Mon Sep 17 00:00:00 2001 From: Mike Travis Date: Wed, 6 Apr 2022 14:51:48 -0500 Subject: [PATCH 0345/1842] x86/platform/uv: Update TSC sync state for UV5 [ Upstream commit bb3ab81bdbd53f88f26ffabc9fb15bd8466486ec ] The UV5 platform synchronizes the TSCs among all chassis, and will not proceed to OS boot without achieving synchronization. Previous UV platforms provided a register indicating successful synchronization. This is no longer available on UV5. On this platform TSC_ADJUST should not be reset by the kernel. Signed-off-by: Mike Travis Signed-off-by: Steve Wahl Signed-off-by: Borislav Petkov Reviewed-by: Dimitri Sivanich Acked-by: Thomas Gleixner Link: https://lore.kernel.org/r/20220406195149.228164-3-steve.wahl@hpe.com Signed-off-by: Sasha Levin --- arch/x86/kernel/apic/x2apic_uv_x.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c index 40f466de8924..9c283562dfd4 100644 --- a/arch/x86/kernel/apic/x2apic_uv_x.c +++ b/arch/x86/kernel/apic/x2apic_uv_x.c @@ -199,7 +199,13 @@ static void __init uv_tsc_check_sync(void) int mmr_shift; char *state; - /* Different returns from different UV BIOS versions */ + /* UV5 guarantees synced TSCs; do not zero TSC_ADJUST */ + if (!is_uv(UV2|UV3|UV4)) { + mark_tsc_async_resets("UV5+"); + return; + } + + /* UV2,3,4, UV BIOS TSC sync state available */ mmr = uv_early_read_mmr(UVH_TSC_SYNC_MMR); mmr_shift = is_uv2_hub() ? UVH_TSC_SYNC_SHIFT_UV2K : UVH_TSC_SYNC_SHIFT; From cd97a481ea892cb94a4bb4dbb708770aece89776 Mon Sep 17 00:00:00 2001 From: "Kirill A. Shutemov" Date: Wed, 6 Apr 2022 02:29:38 +0300 Subject: [PATCH 0346/1842] ACPICA: Avoid cache flush inside virtual machines [ Upstream commit e2efb6359e620521d1e13f69b2257de8ceaa9475 ] While running inside virtual machine, the kernel can bypass cache flushing. Changing sleep state in a virtual machine doesn't affect the host system sleep state and cannot lead to data loss. Before entering sleep states, the ACPI code flushes caches to prevent data loss using the WBINVD instruction. This mechanism is required on bare metal. But, any use WBINVD inside of a guest is worthless. Changing sleep state in a virtual machine doesn't affect the host system sleep state and cannot lead to data loss, so most hypervisors simply ignore it. 
Despite this, the ACPI code calls WBINVD unconditionally anyway. It's useless, but also normally harmless. In TDX guests, though, WBINVD stops being harmless; it triggers a virtualization exception (#VE). If the ACPI cache-flushing WBINVD were left in place, TDX guests would need handling to recover from the exception. Avoid using WBINVD whenever running under a hypervisor. This both removes the useless WBINVDs and saves TDX from implementing WBINVD handling. Signed-off-by: Kirill A. Shutemov Signed-off-by: Dave Hansen Reviewed-by: Dave Hansen Reviewed-by: Dan Williams Reviewed-by: Thomas Gleixner Link: https://lkml.kernel.org/r/20220405232939.73860-30-kirill.shutemov@linux.intel.com Signed-off-by: Sasha Levin --- arch/x86/include/asm/acenv.h | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/acenv.h b/arch/x86/include/asm/acenv.h index 9aff97f0de7f..d937c55e717e 100644 --- a/arch/x86/include/asm/acenv.h +++ b/arch/x86/include/asm/acenv.h @@ -13,7 +13,19 @@ /* Asm macros */ -#define ACPI_FLUSH_CPU_CACHE() wbinvd() +/* + * ACPI_FLUSH_CPU_CACHE() flushes caches on entering sleep states. + * It is required to prevent data loss. + * + * While running inside virtual machine, the kernel can bypass cache flushing. + * Changing sleep state in a virtual machine doesn't affect the host system + * sleep state and cannot lead to data loss. + */ +#define ACPI_FLUSH_CPU_CACHE() \ +do { \ + if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) \ + wbinvd(); \ +} while (0) int __acpi_acquire_global_lock(unsigned int *lock); int __acpi_release_global_lock(unsigned int *lock); From c977d63b8cc45a1ca4ce438c072af43af6f4aa6b Mon Sep 17 00:00:00 2001 From: Liviu Dudau Date: Thu, 2 Dec 2021 17:00:33 +0000 Subject: [PATCH 0347/1842] drm/komeda: return early if drm_universal_plane_init() fails. [ Upstream commit c8f76c37cc3668ee45e081e76a15f24a352ebbdd ] If drm_universal_plane_init() fails early we jump to the common cleanup code that calls komeda_plane_destroy() which in turn could access the uninitalised drm_plane and crash. Return early if an error is detected without going through the common code. Reported-by: Steven Price Reviewed-by: Steven Price Signed-off-by: Liviu Dudau Link: https://lore.kernel.org/dri-devel/20211203100946.2706922-1-liviu.dudau@arm.com Signed-off-by: Sasha Levin --- drivers/gpu/drm/arm/display/komeda/komeda_plane.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c index 98e915e325dd..a5f57b38d193 100644 --- a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c +++ b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c @@ -274,8 +274,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms, komeda_put_fourcc_list(formats); - if (err) - goto cleanup; + if (err) { + kfree(kplane); + return err; + } drm_plane_helper_add(plane, &komeda_plane_helper_funcs); From 1c6c3f2336642fb3074593911f5176565f47ec41 Mon Sep 17 00:00:00 2001 From: Padmanabha Srinivasaiah Date: Thu, 17 Feb 2022 16:25:19 +0100 Subject: [PATCH 0348/1842] rcu-tasks: Fix race in schedule and flush work [ Upstream commit f75fd4b9221d93177c50dcfde671b2e907f53e86 ] While booting secondary CPUs, cpus_read_[lock/unlock] is not keeping online cpumask stable. The transient online mask results in below calltrace. 
[ 0.324121] CPU1: Booted secondary processor 0x0000000001 [0x410fd083] [ 0.346652] Detected PIPT I-cache on CPU2 [ 0.347212] CPU2: Booted secondary processor 0x0000000002 [0x410fd083] [ 0.377255] Detected PIPT I-cache on CPU3 [ 0.377823] CPU3: Booted secondary processor 0x0000000003 [0x410fd083] [ 0.379040] ------------[ cut here ]------------ [ 0.383662] WARNING: CPU: 0 PID: 10 at kernel/workqueue.c:3084 __flush_work+0x12c/0x138 [ 0.384850] Modules linked in: [ 0.385403] CPU: 0 PID: 10 Comm: rcu_tasks_rude_ Not tainted 5.17.0-rc3-v8+ #13 [ 0.386473] Hardware name: Raspberry Pi 4 Model B Rev 1.4 (DT) [ 0.387289] pstate: 20000005 (nzCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 0.388308] pc : __flush_work+0x12c/0x138 [ 0.388970] lr : __flush_work+0x80/0x138 [ 0.389620] sp : ffffffc00aaf3c60 [ 0.390139] x29: ffffffc00aaf3d20 x28: ffffffc009c16af0 x27: ffffff80f761df48 [ 0.391316] x26: 0000000000000004 x25: 0000000000000003 x24: 0000000000000100 [ 0.392493] x23: ffffffffffffffff x22: ffffffc009c16b10 x21: ffffffc009c16b28 [ 0.393668] x20: ffffffc009e53861 x19: ffffff80f77fbf40 x18: 00000000d744fcc9 [ 0.394842] x17: 000000000000000b x16: 00000000000001c2 x15: ffffffc009e57550 [ 0.396016] x14: 0000000000000000 x13: ffffffffffffffff x12: 0000000100000000 [ 0.397190] x11: 0000000000000462 x10: ffffff8040258008 x9 : 0000000100000000 [ 0.398364] x8 : 0000000000000000 x7 : ffffffc0093c8bf4 x6 : 0000000000000000 [ 0.399538] x5 : 0000000000000000 x4 : ffffffc00a976e40 x3 : ffffffc00810444c [ 0.400711] x2 : 0000000000000004 x1 : 0000000000000000 x0 : 0000000000000000 [ 0.401886] Call trace: [ 0.402309] __flush_work+0x12c/0x138 [ 0.402941] schedule_on_each_cpu+0x228/0x278 [ 0.403693] rcu_tasks_rude_wait_gp+0x130/0x144 [ 0.404502] rcu_tasks_kthread+0x220/0x254 [ 0.405264] kthread+0x174/0x1ac [ 0.405837] ret_from_fork+0x10/0x20 [ 0.406456] irq event stamp: 102 [ 0.406966] hardirqs last enabled at (101): [] _raw_spin_unlock_irq+0x78/0xb4 [ 0.408304] hardirqs last disabled at (102): [] el1_dbg+0x24/0x5c [ 0.409410] softirqs last enabled at (54): [] local_bh_enable+0xc/0x2c [ 0.410645] softirqs last disabled at (50): [] local_bh_disable+0xc/0x2c [ 0.411890] ---[ end trace 0000000000000000 ]--- [ 0.413000] smp: Brought up 1 node, 4 CPUs [ 0.413762] SMP: Total of 4 processors activated. [ 0.414566] CPU features: detected: 32-bit EL0 Support [ 0.415414] CPU features: detected: 32-bit EL1 Support [ 0.416278] CPU features: detected: CRC32 instructions [ 0.447021] Callback from call_rcu_tasks_rude() invoked. [ 0.506693] Callback from call_rcu_tasks() invoked. This commit therefore fixes this issue by applying a single-CPU optimization to the RCU Tasks Rude grace-period process. The key point here is that the purpose of this RCU flavor is to force a schedule on each online CPU since some past event. But the rcu_tasks_rude_wait_gp() function runs in the context of the RCU Tasks Rude's grace-period kthread, so there must already have been a context switch on the current CPU since the call to either synchronize_rcu_tasks_rude() or call_rcu_tasks_rude(). So if there is only a single CPU online, RCU Tasks Rude's grace-period kthread does not need to anything at all. It turns out that the rcu_tasks_rude_wait_gp() function's call to schedule_on_each_cpu() causes problems during early boot. During that time, there is only one online CPU, namely the boot CPU. Therefore, applying this single-CPU optimization fixes early-boot instances of this problem. 
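A minimal sketch of the single-CPU fastpath described above, assuming a rude-grace-period waiter built on schedule_on_each_cpu(); the function names are illustrative rather than the kernel's exact symbols.

#include <linux/cpumask.h>
#include <linux/workqueue.h>

static void rude_noop_work(struct work_struct *work)
{
	/* running this work on a CPU is itself the required context switch */
}

static void rude_wait_gp_sketch(void)
{
	/*
	 * The grace-period kthread is already running here, so on a
	 * single-CPU system a context switch has necessarily happened on
	 * that CPU; skipping schedule_on_each_cpu() also avoids flushing
	 * work on CPUs whose online state is still settling at early boot.
	 */
	if (num_online_cpus() <= 1)
		return;

	schedule_on_each_cpu(rude_noop_work);
}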
Link: https://lore.kernel.org/lkml/20220210184319.25009-1-treasure4paddy@gmail.com/T/ Suggested-by: Paul E. McKenney Signed-off-by: Padmanabha Srinivasaiah Signed-off-by: Paul E. McKenney Signed-off-by: Sasha Levin --- kernel/rcu/tasks.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h index 7c05c5ab7865..14af29fe1377 100644 --- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -620,6 +620,9 @@ static void rcu_tasks_be_rude(struct work_struct *work) // Wait for one rude RCU-tasks grace period. static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp) { + if (num_online_cpus() <= 1) + return; // Fastpath for only one CPU. + rtp->n_ipis += cpumask_weight(cpu_online_mask); schedule_on_each_cpu(rcu_tasks_be_rude); } From 10f30cba8f6c4bcbc5c17443fd6a9999d3991ae3 Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Thu, 17 Mar 2022 09:30:10 -0700 Subject: [PATCH 0349/1842] rcu: Make TASKS_RUDE_RCU select IRQ_WORK [ Upstream commit 46e861be589881e0905b9ade3d8439883858721c ] The TASKS_RUDE_RCU does not select IRQ_WORK, which can result in build failures for kernels that do not otherwise select IRQ_WORK. This commit therefore causes the TASKS_RUDE_RCU Kconfig option to select IRQ_WORK. Reported-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Paul E. McKenney Signed-off-by: Sasha Levin --- kernel/rcu/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig index b71e21f73c40..cd6e11403f1b 100644 --- a/kernel/rcu/Kconfig +++ b/kernel/rcu/Kconfig @@ -86,6 +86,7 @@ config TASKS_RCU config TASKS_RUDE_RCU def_bool 0 + select IRQ_WORK help This option enables a task-based RCU implementation that uses only context switch (including preemption) and user-mode From 005990e30d14b1d70eceaaf712c413046be3b2d6 Mon Sep 17 00:00:00 2001 From: Haowen Bai Date: Mon, 11 Apr 2022 09:32:37 +0800 Subject: [PATCH 0350/1842] sfc: ef10: Fix assigning negative value to unsigned variable [ Upstream commit b8ff3395fbdf3b79a99d0ef410fc34c51044121e ] fix warning reported by smatch: 251 drivers/net/ethernet/sfc/ef10.c:2259 efx_ef10_tx_tso_desc() warn: assigning (-208) to unsigned variable 'ip_tot_len' Signed-off-by: Haowen Bai Acked-by: Edward Cree Link: https://lore.kernel.org/r/1649640757-30041-1-git-send-email-baihaowen@meizu.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/sfc/ef10.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c index 6f950979d25e..fa1a872c4bc8 100644 --- a/drivers/net/ethernet/sfc/ef10.c +++ b/drivers/net/ethernet/sfc/ef10.c @@ -2240,7 +2240,7 @@ int efx_ef10_tx_tso_desc(struct efx_tx_queue *tx_queue, struct sk_buff *skb, * guaranteed to satisfy the second as we only attempt TSO if * inner_network_header <= 208. */ - ip_tot_len = -EFX_TSO2_MAX_HDRLEN; + ip_tot_len = 0x10000 - EFX_TSO2_MAX_HDRLEN; EFX_WARN_ON_ONCE_PARANOID(mss + EFX_TSO2_MAX_HDRLEN + (tcp->doff << 2u) > ip_tot_len); From e2b8681769f6e205382f026b907d28aa5ec9d59a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Amadeusz=20S=C5=82awi=C5=84ski?= Date: Tue, 12 Apr 2022 11:16:28 +0200 Subject: [PATCH 0351/1842] ALSA: jack: Access input_dev under mutex MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 1b6a6fc5280e97559287b61eade2d4b363e836f2 ] It is possible when using ASoC that input_dev is unregistered while calling snd_jack_report, which causes NULL pointer dereference. 
In order to prevent this serialize access to input_dev using mutex lock. Signed-off-by: Amadeusz Sławiński Reviewed-by: Cezary Rojewski Link: https://lore.kernel.org/r/20220412091628.3056922-1-amadeuszx.slawinski@linux.intel.com Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin --- include/sound/jack.h | 1 + sound/core/jack.c | 34 +++++++++++++++++++++++++++------- 2 files changed, 28 insertions(+), 7 deletions(-) diff --git a/include/sound/jack.h b/include/sound/jack.h index 9eb2b5ec1ec4..78f3619f3de9 100644 --- a/include/sound/jack.h +++ b/include/sound/jack.h @@ -62,6 +62,7 @@ struct snd_jack { const char *id; #ifdef CONFIG_SND_JACK_INPUT_DEV struct input_dev *input_dev; + struct mutex input_dev_lock; int registered; int type; char name[100]; diff --git a/sound/core/jack.c b/sound/core/jack.c index dc2e06ae2414..45e28db6ea38 100644 --- a/sound/core/jack.c +++ b/sound/core/jack.c @@ -34,8 +34,11 @@ static int snd_jack_dev_disconnect(struct snd_device *device) #ifdef CONFIG_SND_JACK_INPUT_DEV struct snd_jack *jack = device->device_data; - if (!jack->input_dev) + mutex_lock(&jack->input_dev_lock); + if (!jack->input_dev) { + mutex_unlock(&jack->input_dev_lock); return 0; + } /* If the input device is registered with the input subsystem * then we need to use a different deallocator. */ @@ -44,6 +47,7 @@ static int snd_jack_dev_disconnect(struct snd_device *device) else input_free_device(jack->input_dev); jack->input_dev = NULL; + mutex_unlock(&jack->input_dev_lock); #endif /* CONFIG_SND_JACK_INPUT_DEV */ return 0; } @@ -82,8 +86,11 @@ static int snd_jack_dev_register(struct snd_device *device) snprintf(jack->name, sizeof(jack->name), "%s %s", card->shortname, jack->id); - if (!jack->input_dev) + mutex_lock(&jack->input_dev_lock); + if (!jack->input_dev) { + mutex_unlock(&jack->input_dev_lock); return 0; + } jack->input_dev->name = jack->name; @@ -108,6 +115,7 @@ static int snd_jack_dev_register(struct snd_device *device) if (err == 0) jack->registered = 1; + mutex_unlock(&jack->input_dev_lock); return err; } #endif /* CONFIG_SND_JACK_INPUT_DEV */ @@ -228,9 +236,11 @@ int snd_jack_new(struct snd_card *card, const char *id, int type, return -ENOMEM; } - /* don't creat input device for phantom jack */ - if (!phantom_jack) { #ifdef CONFIG_SND_JACK_INPUT_DEV + mutex_init(&jack->input_dev_lock); + + /* don't create input device for phantom jack */ + if (!phantom_jack) { int i; jack->input_dev = input_allocate_device(); @@ -248,8 +258,8 @@ int snd_jack_new(struct snd_card *card, const char *id, int type, input_set_capability(jack->input_dev, EV_SW, jack_switch_types[i]); -#endif /* CONFIG_SND_JACK_INPUT_DEV */ } +#endif /* CONFIG_SND_JACK_INPUT_DEV */ err = snd_device_new(card, SNDRV_DEV_JACK, jack, &ops); if (err < 0) @@ -289,10 +299,14 @@ EXPORT_SYMBOL(snd_jack_new); void snd_jack_set_parent(struct snd_jack *jack, struct device *parent) { WARN_ON(jack->registered); - if (!jack->input_dev) + mutex_lock(&jack->input_dev_lock); + if (!jack->input_dev) { + mutex_unlock(&jack->input_dev_lock); return; + } jack->input_dev->dev.parent = parent; + mutex_unlock(&jack->input_dev_lock); } EXPORT_SYMBOL(snd_jack_set_parent); @@ -340,6 +354,8 @@ EXPORT_SYMBOL(snd_jack_set_key); /** * snd_jack_report - Report the current status of a jack + * Note: This function uses mutexes and should be called from a + * context which can sleep (such as a workqueue). 
* * @jack: The jack to report status for * @status: The current status of the jack @@ -359,8 +375,11 @@ void snd_jack_report(struct snd_jack *jack, int status) status & jack_kctl->mask_bits); #ifdef CONFIG_SND_JACK_INPUT_DEV - if (!jack->input_dev) + mutex_lock(&jack->input_dev_lock); + if (!jack->input_dev) { + mutex_unlock(&jack->input_dev_lock); return; + } for (i = 0; i < ARRAY_SIZE(jack->key); i++) { int testbit = SND_JACK_BTN_0 >> i; @@ -379,6 +398,7 @@ void snd_jack_report(struct snd_jack *jack, int status) } input_sync(jack->input_dev); + mutex_unlock(&jack->input_dev_lock); #endif /* CONFIG_SND_JACK_INPUT_DEV */ } EXPORT_SYMBOL(snd_jack_report); From fbfeb9bc9479cbd719f4c0cdf389d68d1a03a93b Mon Sep 17 00:00:00 2001 From: Biju Das Date: Mon, 11 Apr 2022 18:31:15 +0100 Subject: [PATCH 0352/1842] spi: spi-rspi: Remove setting {src,dst}_{addr,addr_width} based on DMA direction [ Upstream commit 6f381481a5b236cb53d6de2c49c6ef83a4d0f432 ] The direction field in the DMA config is deprecated. The rspi driver sets {src,dst}_{addr,addr_width} based on the DMA direction and it results in dmaengine_slave_config() failure as RZ DMAC driver validates {src,dst}_addr_width values independent of DMA direction. This patch fixes the issue by passing both {src,dst}_{addr,addr_width} values independent of DMA direction. Signed-off-by: Biju Das Suggested-by: Vinod Koul Reviewed-by: Vinod Koul Reviewed-by: Geert Uytterhoeven Tested-by: Geert Uytterhoeven Link: https://lore.kernel.org/r/20220411173115.6619-1-biju.das.jz@bp.renesas.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/spi/spi-rspi.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c index e39fd38f5180..ea03cc589e61 100644 --- a/drivers/spi/spi-rspi.c +++ b/drivers/spi/spi-rspi.c @@ -1107,14 +1107,11 @@ static struct dma_chan *rspi_request_dma_chan(struct device *dev, } memset(&cfg, 0, sizeof(cfg)); + cfg.dst_addr = port_addr + RSPI_SPDR; + cfg.src_addr = port_addr + RSPI_SPDR; + cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; + cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; cfg.direction = dir; - if (dir == DMA_MEM_TO_DEV) { - cfg.dst_addr = port_addr; - cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; - } else { - cfg.src_addr = port_addr; - cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; - } ret = dmaengine_slave_config(chan, &cfg); if (ret) { @@ -1145,12 +1142,12 @@ static int rspi_request_dma(struct device *dev, struct spi_controller *ctlr, } ctlr->dma_tx = rspi_request_dma_chan(dev, DMA_MEM_TO_DEV, dma_tx_id, - res->start + RSPI_SPDR); + res->start); if (!ctlr->dma_tx) return -ENODEV; ctlr->dma_rx = rspi_request_dma_chan(dev, DMA_DEV_TO_MEM, dma_rx_id, - res->start + RSPI_SPDR); + res->start); if (!ctlr->dma_rx) { dma_release_channel(ctlr->dma_tx); ctlr->dma_tx = NULL; From 3102e9d7e51936d20c362b688c8fc8d4384dc95d Mon Sep 17 00:00:00 2001 From: Len Brown Date: Thu, 10 Feb 2022 21:06:56 -0500 Subject: [PATCH 0353/1842] tools/power turbostat: fix ICX DRAM power numbers [ Upstream commit 6397b6418935773a34b533b3348b03f4ce3d7050 ] ICX (and its duplicates) require special hard-coded DRAM RAPL units, rather than using the generic RAPL energy units. 
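For reference, a hedged sketch of how the DRAM unit selection looks once ICX joins the existing special cases: the model constants come from the kernel's intel-family.h, the 15.3 microjoule figure is the value used in the patch, and the helper shape below is illustrative rather than turbostat's exact function.

/* illustrative: pick DRAM RAPL energy units by CPU model */
double dram_units_for_model(int model, double generic_rapl_units)
{
	switch (model) {
	case INTEL_FAM6_HASWELL_X:
	case INTEL_FAM6_BROADWELL_X:
	case INTEL_FAM6_XEON_PHI_KNL:
	case INTEL_FAM6_ICELAKE_X:
		return 15.3 / 1000000;		/* fixed 15.3 uJ per count */
	default:
		return generic_rapl_units;	/* from MSR_RAPL_POWER_UNIT */
	}
}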
Reported-by: Srinivas Pandruvada Signed-off-by: Len Brown Signed-off-by: Sasha Levin --- tools/power/x86/turbostat/turbostat.c | 1 + 1 file changed, 1 insertion(+) diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c index 424ed19a9d54..ef65f7eed1ec 100644 --- a/tools/power/x86/turbostat/turbostat.c +++ b/tools/power/x86/turbostat/turbostat.c @@ -4189,6 +4189,7 @@ rapl_dram_energy_units_probe(int model, double rapl_energy_units) case INTEL_FAM6_HASWELL_X: /* HSX */ case INTEL_FAM6_BROADWELL_X: /* BDX */ case INTEL_FAM6_XEON_PHI_KNL: /* KNL */ + case INTEL_FAM6_ICELAKE_X: /* ICX */ return (rapl_dram_energy_units = 15.3 / 1000000); default: return (rapl_energy_units); From ca1ce206894dd976275c78ee38dbc19873f22de9 Mon Sep 17 00:00:00 2001 From: Keita Suzuki Date: Tue, 19 Apr 2022 10:37:19 +0000 Subject: [PATCH 0354/1842] drm/amd/pm: fix double free in si_parse_power_table() [ Upstream commit f3fa2becf2fc25b6ac7cf8d8b1a2e4a86b3b72bd ] In function si_parse_power_table(), array adev->pm.dpm.ps and its member is allocated. If the allocation of each member fails, the array itself is freed and returned with an error code. However, the array is later freed again in si_dpm_fini() function which is called when the function returns an error. This leads to potential double free of the array adev->pm.dpm.ps, as well as leak of its array members, since the members are not freed in the allocation function and the array is not nulled when freed. In addition adev->pm.dpm.num_ps, which keeps track of the allocated array member, is not updated until the member allocation is successfully finished, this could also lead to either use after free, or uninitialized variable access in si_dpm_fini(). Fix this by postponing the free of the array until si_dpm_fini() and increment adev->pm.dpm.num_ps everytime the array member is allocated. 
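The general shape of the fix is worth spelling out: keep the element count in step with what has actually been allocated and let a single teardown path do all the freeing, so a mid-loop allocation failure can neither double-free the array nor leak the members. A minimal sketch with hypothetical names (not the amdgpu structures):

#include <linux/types.h>
#include <linux/slab.h>

struct item { u32 data; };

struct table {
	struct item **slots;
	int nr;				/* counts only successfully allocated slots */
};

static int table_fill(struct table *t, int want)
{
	int i;

	t->slots = kcalloc(want, sizeof(*t->slots), GFP_KERNEL);
	if (!t->slots)
		return -ENOMEM;

	for (t->nr = 0, i = 0; i < want; i++) {
		t->slots[i] = kzalloc(sizeof(*t->slots[i]), GFP_KERNEL);
		if (!t->slots[i])
			return -ENOMEM;	/* no freeing here; fini owns that */
		t->nr++;
	}
	return 0;
}

static void table_fini(struct table *t)	/* the only place that frees */
{
	int i;

	for (i = 0; i < t->nr; i++)
		kfree(t->slots[i]);
	kfree(t->slots);
	t->slots = NULL;
	t->nr = 0;
}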
Signed-off-by: Keita Suzuki Signed-off-by: Alex Deucher Signed-off-by: Sasha Levin --- drivers/gpu/drm/amd/pm/powerplay/si_dpm.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c index a1e7ba5995c5..d6544a6dabc7 100644 --- a/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c +++ b/drivers/gpu/drm/amd/pm/powerplay/si_dpm.c @@ -7250,17 +7250,15 @@ static int si_parse_power_table(struct amdgpu_device *adev) if (!adev->pm.dpm.ps) return -ENOMEM; power_state_offset = (u8 *)state_array->states; - for (i = 0; i < state_array->ucNumEntries; i++) { + for (adev->pm.dpm.num_ps = 0, i = 0; i < state_array->ucNumEntries; i++) { u8 *idx; power_state = (union pplib_power_state *)power_state_offset; non_clock_array_index = power_state->v2.nonClockInfoIndex; non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *) &non_clock_info_array->nonClockInfo[non_clock_array_index]; ps = kzalloc(sizeof(struct si_ps), GFP_KERNEL); - if (ps == NULL) { - kfree(adev->pm.dpm.ps); + if (ps == NULL) return -ENOMEM; - } adev->pm.dpm.ps[i].ps_priv = ps; si_parse_pplib_non_clock_info(adev, &adev->pm.dpm.ps[i], non_clock_info, @@ -7282,8 +7280,8 @@ static int si_parse_power_table(struct amdgpu_device *adev) k++; } power_state_offset += 2 + power_state->v2.ucNumDPMLevels; + adev->pm.dpm.num_ps++; } - adev->pm.dpm.num_ps = state_array->ucNumEntries; /* fill in the vce power states */ for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) { From e68270a78681605b6b8dfd2cb1086e367248f605 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Thibaut=20VAR=C3=88NE?= Date: Sun, 17 Apr 2022 16:51:45 +0200 Subject: [PATCH 0355/1842] ath9k: fix QCA9561 PA bias level MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit e999a5da28a0e0f7de242d841ef7d5e48f4646ae ] This patch fixes an invalid TX PA DC bias level on QCA9561, which results in a very low output power and very low throughput as devices are further away from the AP (compared to other 2.4GHz APs). This patch was suggested by Felix Fietkau, who noted[1]: "The value written to that register is wrong, because while the mask definition AR_CH0_TOP2_XPABIASLVL uses a different value for 9561, the shift definition AR_CH0_TOP2_XPABIASLVL_S is hardcoded to 12, which is wrong for 9561." In real life testing, without this patch the 2.4GHz throughput on Yuncore XD3200 is around 10Mbps sitting next to the AP, and closer to practical maximum with the patch applied. [1] https://lore.kernel.org/all/91c58969-c60e-2f41-00ac-737786d435ae@nbd.name Signed-off-by: Thibaut VARÈNE Acked-by: Felix Fietkau Acked-by: Toke Høiland-Jørgensen Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220417145145.1847-1-hacks+kernel@slashdirt.org Signed-off-by: Sasha Levin --- drivers/net/wireless/ath/ath9k/ar9003_phy.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h index a171dbb29fbb..ad949eb02f3d 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h +++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h @@ -720,7 +720,7 @@ #define AR_CH0_TOP2 (AR_SREV_9300(ah) ? 0x1628c : \ (AR_SREV_9462(ah) ? 0x16290 : 0x16284)) #define AR_CH0_TOP2_XPABIASLVL (AR_SREV_9561(ah) ? 0x1e00 : 0xf000) -#define AR_CH0_TOP2_XPABIASLVL_S 12 +#define AR_CH0_TOP2_XPABIASLVL_S (AR_SREV_9561(ah) ? 9 : 12) #define AR_CH0_XTAL (AR_SREV_9300(ah) ? 
0x16294 : \ ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 0x16298 : \ From 27ad46da44177a78a4a0cae6fe03906888c61aa1 Mon Sep 17 00:00:00 2001 From: Luca Weiss Date: Fri, 14 Jan 2022 11:02:26 +0000 Subject: [PATCH 0356/1842] media: venus: hfi: avoid null dereference in deinit [ Upstream commit 86594f6af867b5165d2ba7b5a71fae3a5961e56c ] If venus_probe fails at pm_runtime_put_sync the error handling first calls hfi_destroy and afterwards hfi_core_deinit. As hfi_destroy sets core->ops to NULL, hfi_core_deinit cannot call the core_deinit function anymore. Avoid this null pointer derefence by skipping the call when necessary. Signed-off-by: Luca Weiss Signed-off-by: Stanimir Varbanov Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/qcom/venus/hfi.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/media/platform/qcom/venus/hfi.c b/drivers/media/platform/qcom/venus/hfi.c index a59022adb14c..966b4d9b57a9 100644 --- a/drivers/media/platform/qcom/venus/hfi.c +++ b/drivers/media/platform/qcom/venus/hfi.c @@ -104,6 +104,9 @@ int hfi_core_deinit(struct venus_core *core, bool blocking) mutex_lock(&core->lock); } + if (!core->ops) + goto unlock; + ret = core->ops->core_deinit(core); if (!ret) From ca17e7a532d1a55466cc007b3f4d319541a27493 Mon Sep 17 00:00:00 2001 From: Zheyu Ma Date: Sun, 10 Apr 2022 08:34:41 +0100 Subject: [PATCH 0357/1842] media: pci: cx23885: Fix the error handling in cx23885_initdev() [ Upstream commit e8123311cf06d7dae71e8c5fe78e0510d20cd30b ] When the driver fails to call the dma_set_mask(), the driver will get the following splat: [ 55.853884] BUG: KASAN: use-after-free in __process_removed_driver+0x3c/0x240 [ 55.854486] Read of size 8 at addr ffff88810de60408 by task modprobe/590 [ 55.856822] Call Trace: [ 55.860327] __process_removed_driver+0x3c/0x240 [ 55.861347] bus_for_each_dev+0x102/0x160 [ 55.861681] i2c_del_driver+0x2f/0x50 This is because the driver has initialized the i2c related resources in cx23885_dev_setup() but not released them in error handling, fix this bug by modifying the error path that jumps after failing to call the dma_set_mask(). 
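The cx23885 change follows the standard probe() unwind rule: each error label undoes only what was successfully set up before the failure, in reverse order of acquisition. A minimal sketch, where setup_device(), teardown_device() and demo_irq() are hypothetical stand-ins for the driver-specific calls:

#include <linux/pci.h>
#include <linux/interrupt.h>

static int setup_device(struct pci_dev *pci_dev);	/* i2c buses, core state, ... */
static void teardown_device(struct pci_dev *pci_dev);	/* mirrors setup_device() */
static irqreturn_t demo_irq(int irq, void *dev_id);

static int demo_probe(struct pci_dev *pci_dev)
{
	int err;

	err = setup_device(pci_dev);
	if (err)
		return err;

	err = pci_set_dma_mask(pci_dev, 0xffffffff);
	if (err)
		goto fail_dev;		/* only setup_device() to undo */

	err = request_irq(pci_dev->irq, demo_irq, IRQF_SHARED, "demo", pci_dev);
	if (err)
		goto fail_dev;		/* still nothing past setup to undo */

	return 0;

fail_dev:
	teardown_device(pci_dev);
	return err;
}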
Signed-off-by: Zheyu Ma Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/pci/cx23885/cx23885-core.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/media/pci/cx23885/cx23885-core.c b/drivers/media/pci/cx23885/cx23885-core.c index 4e8132d4b2df..a23c025595a0 100644 --- a/drivers/media/pci/cx23885/cx23885-core.c +++ b/drivers/media/pci/cx23885/cx23885-core.c @@ -2154,7 +2154,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev, err = pci_set_dma_mask(pci_dev, 0xffffffff); if (err) { pr_err("%s/0: Oops: no 32bit PCI DMA ???\n", dev->name); - goto fail_ctrl; + goto fail_dma_set_mask; } err = request_irq(pci_dev->irq, cx23885_irq, @@ -2162,7 +2162,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev, if (err < 0) { pr_err("%s: can't get IRQ %d\n", dev->name, pci_dev->irq); - goto fail_irq; + goto fail_dma_set_mask; } switch (dev->board) { @@ -2184,7 +2184,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev, return 0; -fail_irq: +fail_dma_set_mask: cx23885_dev_unregister(dev); fail_ctrl: v4l2_ctrl_handler_free(hdl); From 3f94169affa33c9db4a439d88f09cb2ed3a33332 Mon Sep 17 00:00:00 2001 From: Zheyu Ma Date: Sun, 10 Apr 2022 08:44:09 +0100 Subject: [PATCH 0358/1842] media: cx25821: Fix the warning when removing the module [ Upstream commit 2203436a4d24302871617373a7eb21bc17e38762 ] When removing the module, we will get the following warning: [ 14.746697] remove_proc_entry: removing non-empty directory 'irq/21', leaking at least 'cx25821[1]' [ 14.747449] WARNING: CPU: 4 PID: 368 at fs/proc/generic.c:717 remove_proc_entry+0x389/0x3f0 [ 14.751611] RIP: 0010:remove_proc_entry+0x389/0x3f0 [ 14.759589] Call Trace: [ 14.759792] [ 14.759975] unregister_irq_proc+0x14c/0x170 [ 14.760340] irq_free_descs+0x94/0xe0 [ 14.760640] mp_unmap_irq+0xb6/0x100 [ 14.760937] acpi_unregister_gsi_ioapic+0x27/0x40 [ 14.761334] acpi_pci_irq_disable+0x1d3/0x320 [ 14.761688] pci_disable_device+0x1ad/0x380 [ 14.762027] ? _raw_spin_unlock_irqrestore+0x2d/0x60 [ 14.762442] ? cx25821_shutdown+0x20/0x9f0 [cx25821] [ 14.762848] cx25821_finidev+0x48/0xc0 [cx25821] [ 14.763242] pci_device_remove+0x92/0x240 Fix this by freeing the irq before call pci_disable_device(). Signed-off-by: Zheyu Ma Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/pci/cx25821/cx25821-core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/media/pci/cx25821/cx25821-core.c b/drivers/media/pci/cx25821/cx25821-core.c index 285047b32c44..a3d45287a534 100644 --- a/drivers/media/pci/cx25821/cx25821-core.c +++ b/drivers/media/pci/cx25821/cx25821-core.c @@ -1340,11 +1340,11 @@ static void cx25821_finidev(struct pci_dev *pci_dev) struct cx25821_dev *dev = get_cx25821(v4l2_dev); cx25821_shutdown(dev); - pci_disable_device(pci_dev); /* unregister stuff */ if (pci_dev->irq) free_irq(pci_dev->irq, dev); + pci_disable_device(pci_dev); cx25821_dev_unregister(dev); v4l2_device_unregister(v4l2_dev); From e69e93120f6219b9cc4fba3b515b6ababd8548aa Mon Sep 17 00:00:00 2001 From: Heming Zhao Date: Fri, 1 Apr 2022 10:13:16 +0800 Subject: [PATCH 0359/1842] md/bitmap: don't set sb values if can't pass sanity check [ Upstream commit e68cb83a57a458b01c9739e2ad9cb70b04d1e6d2 ] If bitmap area contains invalid data, kernel will crash then mdadm triggers "Segmentation fault". This is cluster-md speical bug. In non-clustered env, mdadm will handle broken metadata case. 
In clustered array, only kernel space handles bitmap slot info. But even this bug only happened in clustered env, current sanity check is wrong, the code should be changed. How to trigger: (faulty injection) dd if=/dev/zero bs=1M count=1 oflag=direct of=/dev/sda dd if=/dev/zero bs=1M count=1 oflag=direct of=/dev/sdb mdadm -C /dev/md0 -b clustered -e 1.2 -n 2 -l mirror /dev/sda /dev/sdb mdadm -Ss echo aaa > magic.txt == below modifying slot 2 bitmap data == dd if=magic.txt of=/dev/sda seek=16384 bs=1 count=3 <== destroy magic dd if=/dev/zero of=/dev/sda seek=16436 bs=1 count=4 <== ZERO chunksize mdadm -A /dev/md0 /dev/sda /dev/sdb == kernel crashes. mdadm outputs "Segmentation fault" == Reason of kernel crash: In md_bitmap_read_sb (called by md_bitmap_create), bad bitmap magic didn't block chunksize assignment, and zero value made DIV_ROUND_UP_SECTOR_T() trigger "divide error". Crash log: kernel: md: md0 stopped. kernel: md/raid1:md0: not clean -- starting background reconstruction kernel: md/raid1:md0: active with 2 out of 2 mirrors kernel: dlm: ... ... kernel: md-cluster: Joined cluster 44810aba-38bb-e6b8-daca-bc97a0b254aa slot 1 kernel: md0: invalid bitmap file superblock: bad magic kernel: md_bitmap_copy_from_slot can't get bitmap from slot 2 kernel: md-cluster: Could not gather bitmaps from slot 2 kernel: divide error: 0000 [#1] SMP NOPTI kernel: CPU: 0 PID: 1603 Comm: mdadm Not tainted 5.14.6-1-default kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996) kernel: RIP: 0010:md_bitmap_create+0x1d1/0x850 [md_mod] kernel: RSP: 0018:ffffc22ac0843ba0 EFLAGS: 00010246 kernel: ... ... kernel: Call Trace: kernel: ? dlm_lock_sync+0xd0/0xd0 [md_cluster 77fe..7a0] kernel: md_bitmap_copy_from_slot+0x2c/0x290 [md_mod 24ea..d3a] kernel: load_bitmaps+0xec/0x210 [md_cluster 77fe..7a0] kernel: md_bitmap_load+0x81/0x1e0 [md_mod 24ea..d3a] kernel: do_md_run+0x30/0x100 [md_mod 24ea..d3a] kernel: md_ioctl+0x1290/0x15a0 [md_mod 24ea....d3a] kernel: ? mddev_unlock+0xaa/0x130 [md_mod 24ea..d3a] kernel: ? blkdev_ioctl+0xb1/0x2b0 kernel: block_ioctl+0x3b/0x40 kernel: __x64_sys_ioctl+0x7f/0xb0 kernel: do_syscall_64+0x59/0x80 kernel: ? exit_to_user_mode_prepare+0x1ab/0x230 kernel: ? syscall_exit_to_user_mode+0x18/0x40 kernel: ? do_syscall_64+0x69/0x80 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xae kernel: RIP: 0033:0x7f4a15fa722b kernel: ... ... 
kernel: ---[ end trace 8afa7612f559c868 ]--- kernel: RIP: 0010:md_bitmap_create+0x1d1/0x850 [md_mod] Reported-by: kernel test robot Reported-by: Dan Carpenter Acked-by: Guoqing Jiang Signed-off-by: Heming Zhao Signed-off-by: Song Liu Signed-off-by: Sasha Levin --- drivers/md/md-bitmap.c | 44 ++++++++++++++++++++++-------------------- 1 file changed, 23 insertions(+), 21 deletions(-) diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c index ea3130e11680..d377ea060925 100644 --- a/drivers/md/md-bitmap.c +++ b/drivers/md/md-bitmap.c @@ -639,14 +639,6 @@ static int md_bitmap_read_sb(struct bitmap *bitmap) daemon_sleep = le32_to_cpu(sb->daemon_sleep) * HZ; write_behind = le32_to_cpu(sb->write_behind); sectors_reserved = le32_to_cpu(sb->sectors_reserved); - /* Setup nodes/clustername only if bitmap version is - * cluster-compatible - */ - if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) { - nodes = le32_to_cpu(sb->nodes); - strlcpy(bitmap->mddev->bitmap_info.cluster_name, - sb->cluster_name, 64); - } /* verify that the bitmap-specific fields are valid */ if (sb->magic != cpu_to_le32(BITMAP_MAGIC)) @@ -668,6 +660,16 @@ static int md_bitmap_read_sb(struct bitmap *bitmap) goto out; } + /* + * Setup nodes/clustername only if bitmap version is + * cluster-compatible + */ + if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) { + nodes = le32_to_cpu(sb->nodes); + strlcpy(bitmap->mddev->bitmap_info.cluster_name, + sb->cluster_name, 64); + } + /* keep the array size field of the bitmap superblock up to date */ sb->sync_size = cpu_to_le64(bitmap->mddev->resync_max_sectors); @@ -700,9 +702,9 @@ static int md_bitmap_read_sb(struct bitmap *bitmap) out: kunmap_atomic(sb); - /* Assigning chunksize is required for "re_read" */ - bitmap->mddev->bitmap_info.chunksize = chunksize; if (err == 0 && nodes && (bitmap->cluster_slot < 0)) { + /* Assigning chunksize is required for "re_read" */ + bitmap->mddev->bitmap_info.chunksize = chunksize; err = md_setup_cluster(bitmap->mddev, nodes); if (err) { pr_warn("%s: Could not setup cluster service (%d)\n", @@ -713,18 +715,18 @@ static int md_bitmap_read_sb(struct bitmap *bitmap) goto re_read; } - out_no_sb: - if (test_bit(BITMAP_STALE, &bitmap->flags)) - bitmap->events_cleared = bitmap->mddev->events; - bitmap->mddev->bitmap_info.chunksize = chunksize; - bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep; - bitmap->mddev->bitmap_info.max_write_behind = write_behind; - bitmap->mddev->bitmap_info.nodes = nodes; - if (bitmap->mddev->bitmap_info.space == 0 || - bitmap->mddev->bitmap_info.space > sectors_reserved) - bitmap->mddev->bitmap_info.space = sectors_reserved; - if (err) { + if (err == 0) { + if (test_bit(BITMAP_STALE, &bitmap->flags)) + bitmap->events_cleared = bitmap->mddev->events; + bitmap->mddev->bitmap_info.chunksize = chunksize; + bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep; + bitmap->mddev->bitmap_info.max_write_behind = write_behind; + bitmap->mddev->bitmap_info.nodes = nodes; + if (bitmap->mddev->bitmap_info.space == 0 || + bitmap->mddev->bitmap_info.space > sectors_reserved) + bitmap->mddev->bitmap_info.space = sectors_reserved; + } else { md_bitmap_print_sb(bitmap); if (bitmap->cluster_slot < 0) md_cluster_stop(bitmap->mddev); From 90281cadf5077f2d2bec8b08c2ead1f8cd12660e Mon Sep 17 00:00:00 2001 From: Aidan MacDonald Date: Mon, 11 Apr 2022 16:37:53 +0100 Subject: [PATCH 0360/1842] mmc: jz4740: Apply DMA engine limits to maximum segment size [ Upstream commit afadb04f1d6e74b18a253403f5274cde5e3fd7bd ] Do what is done in 
other DMA-enabled MMC host drivers (cf. host/mmci.c) and limit the maximum segment size based on the DMA engine's capabilities. This is needed to avoid warnings like the following with CONFIG_DMA_API_DEBUG=y. ------------[ cut here ]------------ WARNING: CPU: 0 PID: 21 at kernel/dma/debug.c:1162 debug_dma_map_sg+0x2f4/0x39c DMA-API: jz4780-dma 13420000.dma-controller: mapping sg segment longer than device claims to support [len=98304] [max=65536] CPU: 0 PID: 21 Comm: kworker/0:1H Not tainted 5.18.0-rc1 #19 Workqueue: kblockd blk_mq_run_work_fn Stack : 81575aec 00000004 80620000 80620000 80620000 805e7358 00000009 801537ac 814c832c 806276e3 806e34b4 80620000 81575aec 00000001 81575ab8 09291444 00000000 00000000 805e7358 81575958 ffffffea 8157596c 00000000 636f6c62 6220646b 80387a70 0000000f 6d5f6b6c 80620000 00000000 81575ba4 00000009 805e170c 80896640 00000001 00010000 00000000 00000000 00006098 806e0000 ... Call Trace: [<80107670>] show_stack+0x84/0x120 [<80528cd8>] __warn+0xb8/0xec [<80528d78>] warn_slowpath_fmt+0x6c/0xb8 [<8016f1d4>] debug_dma_map_sg+0x2f4/0x39c [<80169d4c>] __dma_map_sg_attrs+0xf0/0x118 [<8016a27c>] dma_map_sg_attrs+0x14/0x28 [<804f66b4>] jz4740_mmc_prepare_dma_data+0x74/0xa4 [<804f6714>] jz4740_mmc_pre_request+0x30/0x54 [<804f4ff4>] mmc_blk_mq_issue_rq+0x6e0/0x7bc [<804f5590>] mmc_mq_queue_rq+0x220/0x2d4 [<8038b2c0>] blk_mq_dispatch_rq_list+0x480/0x664 [<80391040>] blk_mq_do_dispatch_sched+0x2dc/0x370 [<80391468>] __blk_mq_sched_dispatch_requests+0xec/0x164 [<80391540>] blk_mq_sched_dispatch_requests+0x44/0x94 [<80387900>] __blk_mq_run_hw_queue+0xb0/0xcc [<80134c14>] process_one_work+0x1b8/0x264 [<80134ff8>] worker_thread+0x2ec/0x3b8 [<8013b13c>] kthread+0x104/0x10c [<80101dcc>] ret_from_kernel_thread+0x14/0x1c ---[ end trace 0000000000000000 ]--- Signed-off-by: Aidan MacDonald Link: https://lore.kernel.org/r/20220411153753.50443-1-aidanmacdonald.0x0@gmail.com Signed-off-by: Ulf Hansson Signed-off-by: Sasha Levin --- drivers/mmc/host/jz4740_mmc.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/drivers/mmc/host/jz4740_mmc.c b/drivers/mmc/host/jz4740_mmc.c index a1f92fed2a55..aa3dfb9c1071 100644 --- a/drivers/mmc/host/jz4740_mmc.c +++ b/drivers/mmc/host/jz4740_mmc.c @@ -236,6 +236,26 @@ static int jz4740_mmc_acquire_dma_channels(struct jz4740_mmc_host *host) return PTR_ERR(host->dma_rx); } + /* + * Limit the maximum segment size in any SG entry according to + * the parameters of the DMA engine device. + */ + if (host->dma_tx) { + struct device *dev = host->dma_tx->device->dev; + unsigned int max_seg_size = dma_get_max_seg_size(dev); + + if (max_seg_size < host->mmc->max_seg_size) + host->mmc->max_seg_size = max_seg_size; + } + + if (host->dma_rx) { + struct device *dev = host->dma_rx->device->dev; + unsigned int max_seg_size = dma_get_max_seg_size(dev); + + if (max_seg_size < host->mmc->max_seg_size) + host->mmc->max_seg_size = max_seg_size; + } + return 0; } From 95050b984715e3467dd43dd0178cf99465c09afb Mon Sep 17 00:00:00 2001 From: Vignesh Raghavendra Date: Mon, 25 Apr 2022 12:01:20 +0530 Subject: [PATCH 0361/1842] drivers: mmc: sdhci_am654: Add the quirk to set TESTCD bit [ Upstream commit c7666240ec76422cb7546bd07cc8ae80dc0ccdd2 ] The ARASAN MMC controller on Keystone 3 class of devices need the SDCD line to be connected for proper functioning. Similar to the issue pointed out in sdhci-of-arasan.c driver, commit 3794c542641f ("mmc: sdhci-of-arasan: Set controller to test mode when no CD bit"). 
In cases where this can't be connected, add a quirk to force the controller into test mode and set the TESTCD bit. Use the flag "ti,fails-without-test-cd", to implement this above quirk when required. Signed-off-by: Vignesh Raghavendra Signed-off-by: Aswath Govindraju Link: https://lore.kernel.org/r/20220425063120.10135-3-a-govindraju@ti.com Signed-off-by: Ulf Hansson Signed-off-by: Sasha Levin --- drivers/mmc/host/sdhci_am654.c | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c index a64ea143d185..7cab9d831afb 100644 --- a/drivers/mmc/host/sdhci_am654.c +++ b/drivers/mmc/host/sdhci_am654.c @@ -147,6 +147,9 @@ struct sdhci_am654_data { int drv_strength; int strb_sel; u32 flags; + u32 quirks; + +#define SDHCI_AM654_QUIRK_FORCE_CDTEST BIT(0) }; struct sdhci_am654_driver_data { @@ -369,6 +372,21 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg) } } +static void sdhci_am654_reset(struct sdhci_host *host, u8 mask) +{ + u8 ctrl; + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct sdhci_am654_data *sdhci_am654 = sdhci_pltfm_priv(pltfm_host); + + sdhci_reset(host, mask); + + if (sdhci_am654->quirks & SDHCI_AM654_QUIRK_FORCE_CDTEST) { + ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL); + ctrl |= SDHCI_CTRL_CDTEST_INS | SDHCI_CTRL_CDTEST_EN; + sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL); + } +} + static int sdhci_am654_execute_tuning(struct mmc_host *mmc, u32 opcode) { struct sdhci_host *host = mmc_priv(mmc); @@ -500,7 +518,7 @@ static struct sdhci_ops sdhci_j721e_4bit_ops = { .set_clock = sdhci_j721e_4bit_set_clock, .write_b = sdhci_am654_write_b, .irq = sdhci_am654_cqhci_irq, - .reset = sdhci_reset, + .reset = sdhci_am654_reset, }; static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = { @@ -719,6 +737,9 @@ static int sdhci_am654_get_of_property(struct platform_device *pdev, device_property_read_u32(dev, "ti,clkbuf-sel", &sdhci_am654->clkbuf_sel); + if (device_property_read_bool(dev, "ti,fails-without-test-cd")) + sdhci_am654->quirks |= SDHCI_AM654_QUIRK_FORCE_CDTEST; + sdhci_get_of_property(pdev); return 0; From 508add11af0954a10abc76974d47240b223fd3b6 Mon Sep 17 00:00:00 2001 From: Lv Ruyi Date: Mon, 18 Apr 2022 10:57:55 +0000 Subject: [PATCH 0362/1842] scsi: megaraid: Fix error check return value of register_chrdev() [ Upstream commit c5acd61dbb32b6bda0f3a354108f2b8dcb788985 ] If major equals 0, register_chrdev() returns an error code when it fails. This function dynamically allocates a major and returns its number on success, so we should use "< 0" to check it instead of "!". Link: https://lore.kernel.org/r/20220418105755.2558828-1-lv.ruyi@zte.com.cn Reported-by: Zeal Robot Signed-off-by: Lv Ruyi Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin --- drivers/scsi/megaraid.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c index 80f546976c7e..daffa36988ae 100644 --- a/drivers/scsi/megaraid.c +++ b/drivers/scsi/megaraid.c @@ -4634,7 +4634,7 @@ static int __init megaraid_init(void) * major number allocation. 
*/ major = register_chrdev(0, "megadev_legacy", &megadev_fops); - if (!major) { + if (major < 0) { printk(KERN_WARNING "megaraid: failed to register char device\n"); } From 1869f9bfafe1166340205f94c56467bbc7ae613f Mon Sep 17 00:00:00 2001 From: Minghao Chi Date: Wed, 20 Apr 2022 09:03:52 +0000 Subject: [PATCH 0363/1842] scsi: ufs: Use pm_runtime_resume_and_get() instead of pm_runtime_get_sync() [ Upstream commit 75b8715e20a20bc7b4844835e4035543a2674200 ] Using pm_runtime_resume_and_get() to replace pm_runtime_get_sync() and pm_runtime_put_noidle(). This change is just to simplify the code, no actual functional changes. Link: https://lore.kernel.org/r/20220420090353.2588804-1-chi.minghao@zte.com.cn Reported-by: Zeal Robot Signed-off-by: Minghao Chi Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin --- drivers/scsi/ufs/ti-j721e-ufs.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/drivers/scsi/ufs/ti-j721e-ufs.c b/drivers/scsi/ufs/ti-j721e-ufs.c index eafe0db98d54..122d650d0810 100644 --- a/drivers/scsi/ufs/ti-j721e-ufs.c +++ b/drivers/scsi/ufs/ti-j721e-ufs.c @@ -29,11 +29,9 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev) return PTR_ERR(regbase); pm_runtime_enable(dev); - ret = pm_runtime_get_sync(dev); - if (ret < 0) { - pm_runtime_put_noidle(dev); + ret = pm_runtime_resume_and_get(dev); + if (ret < 0) goto disable_pm; - } /* Select MPHY refclk frequency */ clk = devm_clk_get(dev, NULL); From fa1b509d41c5433672f72c0615cf4aefa0611c99 Mon Sep 17 00:00:00 2001 From: James Smart Date: Tue, 26 Apr 2022 11:14:19 -0700 Subject: [PATCH 0364/1842] scsi: lpfc: Fix resource leak in lpfc_sli4_send_seq_to_ulp() [ Upstream commit 646db1a560f44236b7278b822ca99a1d3b6ea72c ] If no handler is found in lpfc_complete_unsol_iocb() to match the rctl of a received frame, the frame is dropped and resources are leaked. Fix by returning resources when discarding an unhandled frame type. Update lpfc_fc_frame_check() handling of NOP basic link service. Link: https://lore.kernel.org/r/20220426181419.9154-1-jsmart2021@gmail.com Co-developed-by: Justin Tee Signed-off-by: Justin Tee Signed-off-by: James Smart Signed-off-by: Martin K. 
Petersen Signed-off-by: Sasha Levin --- drivers/scsi/lpfc/lpfc_sli.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c index a50f870c5f72..755d68b98160 100644 --- a/drivers/scsi/lpfc/lpfc_sli.c +++ b/drivers/scsi/lpfc/lpfc_sli.c @@ -17445,7 +17445,6 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr) case FC_RCTL_ELS_REP: /* extended link services reply */ case FC_RCTL_ELS4_REQ: /* FC-4 ELS request */ case FC_RCTL_ELS4_REP: /* FC-4 ELS reply */ - case FC_RCTL_BA_NOP: /* basic link service NOP */ case FC_RCTL_BA_ABTS: /* basic link service abort */ case FC_RCTL_BA_RMC: /* remove connection */ case FC_RCTL_BA_ACC: /* basic accept */ @@ -17466,6 +17465,7 @@ lpfc_fc_frame_check(struct lpfc_hba *phba, struct fc_frame_header *fc_hdr) fc_vft_hdr = (struct fc_vft_header *)fc_hdr; fc_hdr = &((struct fc_frame_header *)fc_vft_hdr)[1]; return lpfc_fc_frame_check(phba, fc_hdr); + case FC_RCTL_BA_NOP: /* basic link service NOP */ default: goto drop; } @@ -18284,12 +18284,14 @@ lpfc_sli4_send_seq_to_ulp(struct lpfc_vport *vport, if (!lpfc_complete_unsol_iocb(phba, phba->sli4_hba.els_wq->pring, iocbq, fc_hdr->fh_r_ctl, - fc_hdr->fh_type)) + fc_hdr->fh_type)) { lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, "2540 Ring %d handler: unexpected Rctl " "x%x Type x%x received\n", LPFC_ELS_RING, fc_hdr->fh_r_ctl, fc_hdr->fh_type); + lpfc_in_buf_free(phba, &seq_dmabuf->dbuf); + } /* Free iocb created in lpfc_prep_seq */ list_for_each_entry_safe(curr_iocb, next_iocb, From 60afa4f4e1350c876d8a061182a70c224de275dd Mon Sep 17 00:00:00 2001 From: Hari Chandrakanthan Date: Sat, 23 Apr 2022 12:36:47 +0300 Subject: [PATCH 0365/1842] ath11k: disable spectral scan during spectral deinit [ Upstream commit 161c64de239c7018e0295e7e0520a19f00aa32dc ] When ath11k modules are removed using rmmod with spectral scan enabled, crash is observed. Different crash trace is observed for each crash. Send spectral scan disable WMI command to firmware before cleaning the spectral dbring in the spectral_deinit API to avoid this crash. 
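Condensed, the reordered per-radio teardown looks as follows (function names as in the diff further below; the enabled-flag update done under the spectral lock is omitted for brevity):

/* Sketch of the intended ordering: quiesce firmware, then free the ring. */
static void ath11k_spectral_teardown_sketch(struct ath11k *ar)
{
        /* 1. Stop the firmware from producing further spectral reports. */
        mutex_lock(&ar->conf_mutex);
        ath11k_spectral_scan_config(ar, ATH11K_SPECTRAL_DISABLED);
        mutex_unlock(&ar->conf_mutex);

        /* 2. Only now is it safe to tear down the host-side state the
         *    firmware was still writing into.
         */
        ath11k_spectral_debug_unregister(ar);
        ath11k_spectral_ring_free(ar);
}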
call trace from one of the crash observed: [ 1252.880802] Unable to handle kernel NULL pointer dereference at virtual address 00000008 [ 1252.882722] pgd = 0f42e886 [ 1252.890955] [00000008] *pgd=00000000 [ 1252.893478] Internal error: Oops: 5 [#1] PREEMPT SMP ARM [ 1253.093035] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.4.89 #0 [ 1253.115261] Hardware name: Generic DT based system [ 1253.121149] PC is at ath11k_spectral_process_data+0x434/0x574 [ath11k] [ 1253.125940] LR is at 0x88e31017 [ 1253.132448] pc : [<7f9387b8>] lr : [<88e31017>] psr: a0000193 [ 1253.135488] sp : 80d01bc8 ip : 00000001 fp : 970e0000 [ 1253.141737] r10: 88e31000 r9 : 970ec000 r8 : 00000080 [ 1253.146946] r7 : 94734040 r6 : a0000113 r5 : 00000057 r4 : 00000000 [ 1253.152159] r3 : e18cb694 r2 : 00000217 r1 : 1df1f000 r0 : 00000001 [ 1253.158755] Flags: NzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user [ 1253.165266] Control: 10c0383d Table: 5e71006a DAC: 00000055 [ 1253.172472] Process swapper/0 (pid: 0, stack limit = 0x60870141) [ 1253.458055] [<7f9387b8>] (ath11k_spectral_process_data [ath11k]) from [<7f917fdc>] (ath11k_dbring_buffer_release_event+0x214/0x2e4 [ath11k]) [ 1253.466139] [<7f917fdc>] (ath11k_dbring_buffer_release_event [ath11k]) from [<7f8ea3c4>] (ath11k_wmi_tlv_op_rx+0x1840/0x29cc [ath11k]) [ 1253.478807] [<7f8ea3c4>] (ath11k_wmi_tlv_op_rx [ath11k]) from [<7f8fe868>] (ath11k_htc_rx_completion_handler+0x180/0x4e0 [ath11k]) [ 1253.490699] [<7f8fe868>] (ath11k_htc_rx_completion_handler [ath11k]) from [<7f91308c>] (ath11k_ce_per_engine_service+0x2c4/0x3b4 [ath11k]) [ 1253.502386] [<7f91308c>] (ath11k_ce_per_engine_service [ath11k]) from [<7f9a4198>] (ath11k_pci_ce_tasklet+0x28/0x80 [ath11k_pci]) [ 1253.514811] [<7f9a4198>] (ath11k_pci_ce_tasklet [ath11k_pci]) from [<8032227c>] (tasklet_action_common.constprop.2+0x64/0xe8) [ 1253.526476] [<8032227c>] (tasklet_action_common.constprop.2) from [<803021e8>] (__do_softirq+0x130/0x2d0) [ 1253.537756] [<803021e8>] (__do_softirq) from [<80322610>] (irq_exit+0xcc/0xe8) [ 1253.547304] [<80322610>] (irq_exit) from [<8036a4a4>] (__handle_domain_irq+0x60/0xb4) [ 1253.554428] [<8036a4a4>] (__handle_domain_irq) from [<805eb348>] (gic_handle_irq+0x4c/0x90) [ 1253.562321] [<805eb348>] (gic_handle_irq) from [<80301a78>] (__irq_svc+0x58/0x8c) Tested-on: QCN6122 hw1.0 AHB WLAN.HK.2.6.0.1-00851-QCAHKSWPL_SILICONZ-1 Signed-off-by: Hari Chandrakanthan Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/1649396345-349-1-git-send-email-quic_haric@quicinc.com Signed-off-by: Sasha Levin --- drivers/net/wireless/ath/ath11k/spectral.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/drivers/net/wireless/ath/ath11k/spectral.c b/drivers/net/wireless/ath/ath11k/spectral.c index ac2a8cfdc1c0..f5ab455ea1a2 100644 --- a/drivers/net/wireless/ath/ath11k/spectral.c +++ b/drivers/net/wireless/ath/ath11k/spectral.c @@ -214,7 +214,10 @@ static int ath11k_spectral_scan_config(struct ath11k *ar, return -ENODEV; arvif->spectral_enabled = (mode != ATH11K_SPECTRAL_DISABLED); + + spin_lock_bh(&ar->spectral.lock); ar->spectral.mode = mode; + spin_unlock_bh(&ar->spectral.lock); ret = ath11k_wmi_vdev_spectral_enable(ar, arvif->vdev_id, ATH11K_WMI_SPECTRAL_TRIGGER_CMD_CLEAR, @@ -829,9 +832,6 @@ static inline void ath11k_spectral_ring_free(struct ath11k *ar) { struct ath11k_spectral *sp = &ar->spectral; - if (!sp->enabled) - return; - ath11k_dbring_srng_cleanup(ar, &sp->rx_ring); ath11k_dbring_buf_cleanup(ar, &sp->rx_ring); } @@ -883,15 +883,16 @@ void 
ath11k_spectral_deinit(struct ath11k_base *ab) if (!sp->enabled) continue; - ath11k_spectral_debug_unregister(ar); - ath11k_spectral_ring_free(ar); + mutex_lock(&ar->conf_mutex); + ath11k_spectral_scan_config(ar, ATH11K_SPECTRAL_DISABLED); + mutex_unlock(&ar->conf_mutex); spin_lock_bh(&sp->lock); - - sp->mode = ATH11K_SPECTRAL_DISABLED; sp->enabled = false; - spin_unlock_bh(&sp->lock); + + ath11k_spectral_debug_unregister(ar); + ath11k_spectral_ring_free(ar); } } From 8c3fe9ff807e0bbddf620d7136931125a4addcb9 Mon Sep 17 00:00:00 2001 From: Hans de Goede Date: Wed, 27 Apr 2022 15:49:18 +0200 Subject: [PATCH 0366/1842] ASoC: Intel: bytcr_rt5640: Add quirk for the HP Pro Tablet 408 [ Upstream commit ce216cfa84a4e1c23b105e652c550bdeaac9e922 ] Add a quirk for the HP Pro Tablet 408, this BYTCR tablet has no CHAN package in its ACPI tables and uses SSP0-AIF1 rather then SSP0-AIF2 which is the default for BYTCR devices. It also uses DMIC1 for the internal mic rather then the default IN3 and it uses JD2 rather then the default JD1 for jack-detect. BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=211485 Signed-off-by: Hans de Goede Acked-by: Pierre-Louis Bossart Link: https://lore.kernel.org/r/20220427134918.527381-1-hdegoede@redhat.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/intel/boards/bytcr_rt5640.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c index 43ee3d095a1b..3020a993f6ef 100644 --- a/sound/soc/intel/boards/bytcr_rt5640.c +++ b/sound/soc/intel/boards/bytcr_rt5640.c @@ -615,6 +615,18 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = { BYT_RT5640_OVCD_SF_0P75 | BYT_RT5640_MCLK_EN), }, + { /* HP Pro Tablet 408 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pro Tablet 408"), + }, + .driver_data = (void *)(BYT_RT5640_DMIC1_MAP | + BYT_RT5640_JD_SRC_JD2_IN4N | + BYT_RT5640_OVCD_TH_1500UA | + BYT_RT5640_OVCD_SF_0P75 | + BYT_RT5640_SSP0_AIF1 | + BYT_RT5640_MCLK_EN), + }, { /* HP Stream 7 */ .matches = { DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), From b5cd108143513e4498027b96ec4710702d186f11 Mon Sep 17 00:00:00 2001 From: Steven Price Date: Fri, 3 Dec 2021 10:28:15 +0000 Subject: [PATCH 0367/1842] drm/plane: Move range check for format_count earlier [ Upstream commit 4b674dd69701c2e22e8e7770c1706a69f3b17269 ] While the check for format_count > 64 in __drm_universal_plane_init() shouldn't be hit (it's a WARN_ON), in its current position it will then leak the plane->format_types array and fail to call drm_mode_object_unregister() leaking the modeset identifier. Move it to the start of the function to avoid allocating those resources in the first place. Signed-off-by: Steven Price Signed-off-by: Liviu Dudau Link: https://lore.kernel.org/dri-devel/20211203102815.38624-1-steven.price@arm.com/ Signed-off-by: Sasha Levin --- drivers/gpu/drm/drm_plane.c | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c index affe1cfed009..24f643982903 100644 --- a/drivers/gpu/drm/drm_plane.c +++ b/drivers/gpu/drm/drm_plane.c @@ -186,6 +186,13 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane, if (WARN_ON(config->num_total_plane >= 32)) return -EINVAL; + /* + * First driver to need more than 64 formats needs to fix this. Each + * format is encoded as a bit and the current code only supports a u64. 
+ */ + if (WARN_ON(format_count > 64)) + return -EINVAL; + WARN_ON(drm_drv_uses_atomic_modeset(dev) && (!funcs->atomic_destroy_state || !funcs->atomic_duplicate_state)); @@ -207,13 +214,6 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane, return -ENOMEM; } - /* - * First driver to need more than 64 formats needs to fix this. Each - * format is encoded as a bit and the current code only supports a u64. - */ - if (WARN_ON(format_count > 64)) - return -EINVAL; - if (format_modifiers) { const uint64_t *temp_modifiers = format_modifiers; From 20ad91d08a802ca219d0471e66e045ab8240939f Mon Sep 17 00:00:00 2001 From: Evan Quan Date: Mon, 25 Apr 2022 10:16:46 +0800 Subject: [PATCH 0368/1842] drm/amd/pm: fix the compile warning [ Upstream commit 555238d92ac32dbad2d77ad2bafc48d17391990c ] Fix the compile warning below: drivers/gpu/drm/amd/amdgpu/../pm/legacy-dpm/kv_dpm.c:1641 kv_get_acp_boot_level() warn: always true condition '(table->entries[i]->clk >= 0) => (0-u32max >= 0)' Reported-by: kernel test robot CC: Alex Deucher Signed-off-by: Evan Quan Reviewed-by: Alex Deucher Signed-off-by: Alex Deucher Signed-off-by: Sasha Levin --- drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c | 14 +------------- 1 file changed, 1 insertion(+), 13 deletions(-) diff --git a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c index 4b3faaccecb9..c8a5a5698edd 100644 --- a/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c +++ b/drivers/gpu/drm/amd/pm/powerplay/kv_dpm.c @@ -1609,19 +1609,7 @@ static int kv_update_samu_dpm(struct amdgpu_device *adev, bool gate) static u8 kv_get_acp_boot_level(struct amdgpu_device *adev) { - u8 i; - struct amdgpu_clock_voltage_dependency_table *table = - &adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table; - - for (i = 0; i < table->count; i++) { - if (table->entries[i].clk >= 0) /* XXX */ - break; - } - - if (i >= table->count) - i = table->count - 1; - - return i; + return 0; } static void kv_update_acp_boot_level(struct amdgpu_device *adev) From 8aa3750986ffcf73e0692db3b40dd3a8e8c0c575 Mon Sep 17 00:00:00 2001 From: Abhishek Kumar Date: Wed, 27 Apr 2022 10:37:33 +0300 Subject: [PATCH 0369/1842] ath10k: skip ath10k_halt during suspend for driver state RESTARTING [ Upstream commit b72a4aff947ba807177bdabb43debaf2c66bee05 ] Double free crash is observed when FW recovery(caused by wmi timeout/crash) is followed by immediate suspend event. The FW recovery is triggered by ath10k_core_restart() which calls driver clean up via ath10k_halt(). When the suspend event occurs between the FW recovery, the restart worker thread is put into frozen state until suspend completes. The suspend event triggers ath10k_stop() which again triggers ath10k_halt() The double invocation of ath10k_halt() causes ath10k_htt_rx_free() to be called twice(Note: ath10k_htt_rx_alloc was not called by restart worker thread because of its frozen state), causing the crash. To fix this, during the suspend flow, skip call to ath10k_halt() in ath10k_stop() when the current driver state is ATH10K_STATE_RESTARTING. Also, for driver state ATH10K_STATE_RESTARTING, call ath10k_wait_for_suspend() in ath10k_stop(). This is because call to ath10k_wait_for_suspend() is skipped later in [ath10k_halt() > ath10k_core_stop()] for the driver state ATH10K_STATE_RESTARTING. The frozen restart worker thread will be cancelled during resume when the device comes out of suspend. 
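Condensed, the new stop-path decision is the following (a sketch only; the conf_mutex and rfkill handling follow the diff below):

/* Sketch of the reworked ath10k_stop() state handling. */
static void ath10k_stop_sketch(struct ath10k *ar)
{
        if (ar->state == ATH10K_STATE_OFF)
                return;

        if (ar->state != ATH10K_STATE_RESTARTING) {
                /* No firmware recovery in flight: normal teardown. */
                ath10k_halt(ar);
        } else {
                /* Recovery already ran ath10k_halt(); only quiesce the
                 * firmware here, since ath10k_core_stop() skips
                 * ath10k_wait_for_suspend() in the RESTARTING state.
                 */
                ath10k_wait_for_suspend(ar, WMI_PDEV_SUSPEND_AND_DISABLE_INTR);
        }

        ar->state = ATH10K_STATE_OFF;
}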
Below is the crash stack for reference: [ 428.469167] ------------[ cut here ]------------ [ 428.469180] kernel BUG at mm/slub.c:4150! [ 428.469193] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI [ 428.469219] Workqueue: events_unbound async_run_entry_fn [ 428.469230] RIP: 0010:kfree+0x319/0x31b [ 428.469241] RSP: 0018:ffffa1fac015fc30 EFLAGS: 00010246 [ 428.469247] RAX: ffffedb10419d108 RBX: ffff8c05262b0000 [ 428.469252] RDX: ffff8c04a8c07000 RSI: 0000000000000000 [ 428.469256] RBP: ffffa1fac015fc78 R08: 0000000000000000 [ 428.469276] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 428.469285] Call Trace: [ 428.469295] ? dma_free_attrs+0x5f/0x7d [ 428.469320] ath10k_core_stop+0x5b/0x6f [ 428.469336] ath10k_halt+0x126/0x177 [ 428.469352] ath10k_stop+0x41/0x7e [ 428.469387] drv_stop+0x88/0x10e [ 428.469410] __ieee80211_suspend+0x297/0x411 [ 428.469441] rdev_suspend+0x6e/0xd0 [ 428.469462] wiphy_suspend+0xb1/0x105 [ 428.469483] ? name_show+0x2d/0x2d [ 428.469490] dpm_run_callback+0x8c/0x126 [ 428.469511] ? name_show+0x2d/0x2d [ 428.469517] __device_suspend+0x2e7/0x41b [ 428.469523] async_suspend+0x1f/0x93 [ 428.469529] async_run_entry_fn+0x3d/0xd1 [ 428.469535] process_one_work+0x1b1/0x329 [ 428.469541] worker_thread+0x213/0x372 [ 428.469547] kthread+0x150/0x15f [ 428.469552] ? pr_cont_work+0x58/0x58 [ 428.469558] ? kthread_blkcg+0x31/0x31 Tested-on: QCA6174 hw3.2 PCI WLAN.RM.4.4.1-00288-QCARMSWPZ-1 Co-developed-by: Wen Gong Signed-off-by: Wen Gong Signed-off-by: Abhishek Kumar Reviewed-by: Brian Norris Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220426221859.v2.1.I650b809482e1af8d0156ed88b5dc2677a0711d46@changeid Signed-off-by: Sasha Levin --- drivers/net/wireless/ath/ath10k/mac.c | 20 ++++++++++++++++++-- 1 file changed, 18 insertions(+), 2 deletions(-) diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c index b59d482d9c23..b61cd275fbda 100644 --- a/drivers/net/wireless/ath/ath10k/mac.c +++ b/drivers/net/wireless/ath/ath10k/mac.c @@ -5170,13 +5170,29 @@ static int ath10k_start(struct ieee80211_hw *hw) static void ath10k_stop(struct ieee80211_hw *hw) { struct ath10k *ar = hw->priv; + u32 opt; ath10k_drain_tx(ar); mutex_lock(&ar->conf_mutex); if (ar->state != ATH10K_STATE_OFF) { - if (!ar->hw_rfkill_on) - ath10k_halt(ar); + if (!ar->hw_rfkill_on) { + /* If the current driver state is RESTARTING but not yet + * fully RESTARTED because of incoming suspend event, + * then ath10k_halt() is already called via + * ath10k_core_restart() and should not be called here. + */ + if (ar->state != ATH10K_STATE_RESTARTING) { + ath10k_halt(ar); + } else { + /* Suspending here, because when in RESTARTING + * state, ath10k_core_stop() skips + * ath10k_wait_for_suspend(). + */ + opt = WMI_PDEV_SUSPEND_AND_DISABLE_INTR; + ath10k_wait_for_suspend(ar, opt); + } + } ar->state = ATH10K_STATE_OFF; } mutex_unlock(&ar->conf_mutex); From ad97425d23af3c3b8d4f6a2bb666cb485087c007 Mon Sep 17 00:00:00 2001 From: Alexandru Elisei Date: Mon, 25 Apr 2022 12:44:41 +0100 Subject: [PATCH 0370/1842] arm64: compat: Do not treat syscall number as ESR_ELx for a bad syscall [ Upstream commit 3fed9e551417b84038b15117732ea4505eee386b ] If a compat process tries to execute an unknown system call above the __ARM_NR_COMPAT_END number, the kernel sends a SIGILL signal to the offending process. Information about the error is printed to dmesg in compat_arm_syscall() -> arm64_notify_die() -> arm64_force_sig_fault() -> arm64_show_signal(). 
arm64_show_signal() interprets a non-zero value for current->thread.fault_code as an exception syndrome and displays the message associated with the ESR_ELx.EC field (bits 31:26). current->thread.fault_code is set in compat_arm_syscall() -> arm64_notify_die() with the bad syscall number instead of a valid ESR_ELx value. This means that the ESR_ELx.EC field has the value that the user set for the syscall number and the kernel can end up printing bogus exception messages*. For example, for the syscall number 0x68000000, which evaluates to ESR_ELx.EC value of 0x1A (ESR_ELx_EC_FPAC) the kernel prints this error: [ 18.349161] syscall[300]: unhandled exception: ERET/ERETAA/ERETAB, ESR 0x68000000, Oops - bad compat syscall(2) in syscall[10000+50000] [ 18.350639] CPU: 2 PID: 300 Comm: syscall Not tainted 5.18.0-rc1 #79 [ 18.351249] Hardware name: Pine64 RockPro64 v2.0 (DT) [..] which is misleading, as the bad compat syscall has nothing to do with pointer authentication. Stop arm64_show_signal() from printing exception syndrome information by having compat_arm_syscall() set the ESR_ELx value to 0, as it has no meaning for an invalid system call number. The example above now becomes: [ 19.935275] syscall[301]: unhandled exception: Oops - bad compat syscall(2) in syscall[10000+50000] [ 19.936124] CPU: 1 PID: 301 Comm: syscall Not tainted 5.18.0-rc1-00005-g7e08006d4102 #80 [ 19.936894] Hardware name: Pine64 RockPro64 v2.0 (DT) [..] which although shows less information because the syscall number, wrongfully advertised as the ESR value, is missing, it is better than showing plainly wrong information. The syscall number can be easily obtained with strace. *A 32-bit value above or equal to 0x8000_0000 is interpreted as a negative integer in compat_arm_syscal() and the condition scno < __ARM_NR_COMPAT_END evaluates to true; the syscall will exit to userspace in this case with the ENOSYS error code instead of arm64_notify_die() being called. Signed-off-by: Alexandru Elisei Reviewed-by: Marc Zyngier Link: https://lore.kernel.org/r/20220425114444.368693-3-alexandru.elisei@arm.com Signed-off-by: Catalin Marinas Signed-off-by: Sasha Levin --- arch/arm64/kernel/sys_compat.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c index 3c18c2454089..51274bab2565 100644 --- a/arch/arm64/kernel/sys_compat.c +++ b/arch/arm64/kernel/sys_compat.c @@ -115,6 +115,6 @@ long compat_arm_syscall(struct pt_regs *regs, int scno) (compat_thumb_mode(regs) ? 2 : 4); arm64_notify_die("Oops - bad compat syscall(2)", regs, - SIGILL, ILL_ILLTRP, addr, scno); + SIGILL, ILL_ILLTRP, addr, 0); return 0; } From 0325c08ae202da9a33d0d2a948ed3215e9052f00 Mon Sep 17 00:00:00 2001 From: Lv Ruyi Date: Sun, 24 Apr 2022 03:19:59 +0000 Subject: [PATCH 0371/1842] drm: msm: fix error check return value of irq_of_parse_and_map() [ Upstream commit b9e4f1d2b505df8e2439b63e67afaa287c1c43e2 ] The irq_of_parse_and_map() function returns 0 on failure, and does not return an negative value. 
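The two common IRQ-lookup helpers use different failure conventions, which is what the bogus "irq < 0" check missed; a minimal sketch of the correct pattern (the surrounding probe function is hypothetical):

#include <linux/errno.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>

/*
 * irq_of_parse_and_map() returns an unsigned virq number and 0 on failure,
 * unlike platform_get_irq(), which returns a negative errno. A "< 0" check
 * on the former can never trigger.
 */
static int example_get_irq(struct platform_device *pdev)
{
        unsigned int irq = irq_of_parse_and_map(pdev->dev.of_node, 0);

        if (!irq) {
                dev_err(&pdev->dev, "failed to map IRQ\n");
                return -EINVAL;
        }

        return irq;
}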
Reported-by: Zeal Robot Signed-off-by: Lv Ruyi Reviewed-by: Dmitry Baryshkov Patchwork: https://patchwork.freedesktop.org/patch/483175/ Link: https://lore.kernel.org/r/20220424031959.3172406-1-lv.ruyi@zte.com.cn Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c index e193865ce9a2..9baaaef706ab 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -598,9 +598,9 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev) pdev = mdp5_kms->pdev; irq = irq_of_parse_and_map(pdev->dev.of_node, 0); - if (irq < 0) { - ret = irq; - DRM_DEV_ERROR(&pdev->dev, "failed to get irq: %d\n", ret); + if (!irq) { + ret = -EINVAL; + DRM_DEV_ERROR(&pdev->dev, "failed to get irq\n"); goto fail; } From 4d85201adb65013b1455050a8a814f792d4ee3a6 Mon Sep 17 00:00:00 2001 From: jianghaoran Date: Fri, 29 Apr 2022 13:38:02 +0800 Subject: [PATCH 0372/1842] ipv6: Don't send rs packets to the interface of ARPHRD_TUNNEL [ Upstream commit b52e1cce31ca721e937d517411179f9196ee6135 ] ARPHRD_TUNNEL interface can't process rs packets and will generate TX errors ex: ip tunnel add ethn mode ipip local 192.168.1.1 remote 192.168.1.2 ifconfig ethn x.x.x.x ethn: flags=209 mtu 1480 inet x.x.x.x netmask 255.255.255.255 destination x.x.x.x inet6 fe80::5efe:ac1e:3cdb prefixlen 64 scopeid 0x20 tunnel txqueuelen 1000 (IPIP Tunnel) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 3 dropped 0 overruns 0 carrier 0 collisions 0 Signed-off-by: jianghaoran Link: https://lore.kernel.org/r/20220429053802.246681-1-jianghaoran@kylinos.cn Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/ipv6/addrconf.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 4584bb50960b..0562fb321959 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -4200,7 +4200,8 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id, send_rs = send_mld && ipv6_accept_ra(ifp->idev) && ifp->idev->cnf.rtr_solicits != 0 && - (dev->flags&IFF_LOOPBACK) == 0; + (dev->flags & IFF_LOOPBACK) == 0 && + (dev->type != ARPHRD_TUNNEL); read_unlock_bh(&ifp->idev->lock); /* While dad is in progress mld report's source address is in6_addrany. From 9b42659cb3c44caa605ee1a9f91147d7bff16bf1 Mon Sep 17 00:00:00 2001 From: Mark Bloch Date: Tue, 15 Mar 2022 11:23:40 +0000 Subject: [PATCH 0373/1842] net/mlx5: fs, delete the FTE when there are no rules attached to it [ Upstream commit 7b0c6338597613f465d131bd939a51844a00455a ] When an FTE has no children is means all the rules where removed and the FTE can be deleted regardless of the dests_size value. While dests_size should be 0 when there are no children be extra careful not to leak memory or get firmware syndrome if the proper bookkeeping of dests_size wasn't done. 
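Condensed, the decision order after the change is roughly the following (a sketch of mlx5_del_flow_rules(); reference-node locking and the rule-removal loop are elided):

static void del_fte_sketch(struct fs_fte *fte)
{
        if (list_empty(&fte->node.children)) {
                /* Last rule is gone: remove the FTE from hardware even if
                 * the dests_size bookkeeping went wrong, then drop it.
                 */
                del_hw_fte(&fte->node);
                fte->node.del_hw_func = NULL;   /* avoid double del_hw_fte() */
                tree_put_node(&fte->node, false);
        } else if (fte->dests_size && fte->modify_mask) {
                modify_fte(fte);        /* rules remain: just update HW */
        }
}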
Signed-off-by: Mark Bloch Reviewed-by: Maor Gottlieb Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/fs_core.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 55772f0cbbf8..15472fb15d7d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -2024,16 +2024,16 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle) down_write_ref_node(&fte->node, false); for (i = handle->num_rules - 1; i >= 0; i--) tree_remove_node(&handle->rule[i]->node, true); - if (fte->dests_size) { - if (fte->modify_mask) - modify_fte(fte); - up_write_ref_node(&fte->node, false); - } else if (list_empty(&fte->node.children)) { + if (list_empty(&fte->node.children)) { del_hw_fte(&fte->node); /* Avoid double call to del_hw_fte */ fte->node.del_hw_func = NULL; up_write_ref_node(&fte->node, false); tree_put_node(&fte->node, false); + } else if (fte->dests_size) { + if (fte->modify_mask) + modify_fte(fte); + up_write_ref_node(&fte->node, false); } else { up_write_ref_node(&fte->node, false); } From d68a5eb7b3e03e5836c0f2fc42d6fa9150bf5db6 Mon Sep 17 00:00:00 2001 From: Mark Brown Date: Thu, 28 Apr 2022 17:18:32 +0100 Subject: [PATCH 0374/1842] ASoC: dapm: Don't fold register value changes into notifications [ Upstream commit ad685980469b9f9b99d4d6ea05f4cb8f57cb2234 ] DAPM tracks and reports the value presented to the user from DAPM controls separately to the register value, these may diverge during initialisation or when an autodisable control is in use. When writing DAPM controls we currently report that a change has occurred if either the DAPM value or the value stored in the register has changed, meaning that if the two are out of sync we may appear to report a spurious event to userspace. Since we use this folded in value for nothing other than the value reported to userspace simply drop the folding in of the register change. Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20220428161833.3690050-1-broonie@kernel.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/soc-dapm.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c index 417732bdf286..f2f7f2dde93c 100644 --- a/sound/soc/soc-dapm.c +++ b/sound/soc/soc-dapm.c @@ -3427,7 +3427,6 @@ int snd_soc_dapm_put_volsw(struct snd_kcontrol *kcontrol, update.val = val; card->update = &update; } - change |= reg_change; ret = soc_dapm_mixer_update_power(card, kcontrol, connect, rconnect); @@ -3529,7 +3528,6 @@ int snd_soc_dapm_put_enum_double(struct snd_kcontrol *kcontrol, update.val = val; card->update = &update; } - change |= reg_change; ret = soc_dapm_mux_update_power(card, kcontrol, item[0], e); From b30e727f09163dcdef514ededf6781c0a5e94fd3 Mon Sep 17 00:00:00 2001 From: Petr Machata Date: Wed, 4 May 2022 09:29:05 +0300 Subject: [PATCH 0375/1842] mlxsw: spectrum_dcb: Do not warn about priority changes [ Upstream commit b6b584562cbe7dc357083459d6dd5b171e12cadb ] The idea behind the warnings is that the user would get warned in case when more than one priority is configured for a given DSCP value on a netdevice. The warning is currently wrong, because dcb_ieee_getapp_mask() returns the first matching entry, not all of them, and the warning will then claim that some priority is "current", when in fact it is not. 
But more importantly, the warning is misleading in general. Consider the following commands: # dcb app flush dev swp19 dscp-prio # dcb app add dev swp19 dscp-prio 24:3 # dcb app replace dev swp19 dscp-prio 24:2 The last command will issue the following warning: mlxsw_spectrum3 0000:07:00.0 swp19: Ignoring new priority 2 for DSCP 24 in favor of current value of 3 The reason is that the "replace" command works by first adding the new value, and then removing all old values. This is the only way to make the replacement without causing the traffic to be prioritized to whatever the chip defaults to. The warning is issued in response to adding the new priority, and then no warning is shown when the old priority is removed. The upshot is that the canonical way to change traffic prioritization always produces a warning about ignoring the new priority, but what gets configured is in fact what the user intended. An option to just emit warning every time that the prioritization changes just to make it clear that it happened is obviously unsatisfactory. Therefore, in this patch, remove the warnings. Reported-by: Maksym Yaremchuk Signed-off-by: Petr Machata Signed-off-by: Ido Schimmel Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c | 13 ------------- 1 file changed, 13 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c index 5f92b1691360..aff6d4f35cd2 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c @@ -168,8 +168,6 @@ static int mlxsw_sp_dcbnl_ieee_setets(struct net_device *dev, static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev, struct dcb_app *app) { - int prio; - if (app->priority >= IEEE_8021QAZ_MAX_TCS) { netdev_err(dev, "APP entry with priority value %u is invalid\n", app->priority); @@ -183,17 +181,6 @@ static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev, app->protocol); return -EINVAL; } - - /* Warn about any DSCP APP entries with the same PID. */ - prio = fls(dcb_ieee_getapp_mask(dev, app)); - if (prio--) { - if (prio < app->priority) - netdev_warn(dev, "Choosing priority %d for DSCP %d in favor of previously-active value of %d\n", - app->priority, app->protocol, prio); - else if (prio > app->priority) - netdev_warn(dev, "Ignoring new priority %d for DSCP %d in favor of current value of %d\n", - app->priority, app->protocol, prio); - } break; case IEEE_8021QAZ_APP_SEL_ETHERTYPE: From 6f19abe031e33764001cd43b5d6f76724e219743 Mon Sep 17 00:00:00 2001 From: Petr Machata Date: Wed, 4 May 2022 09:29:06 +0300 Subject: [PATCH 0376/1842] mlxsw: Treat LLDP packets as control [ Upstream commit 0106668cd2f91bf913fb78972840dedfba80a3c3 ] When trapping packets for on-CPU processing, Spectrum machines differentiate between control and non-control traps. Traffic trapped through non-control traps is treated as data and kept in shared buffer in pools 0-4. Traffic trapped through control traps is kept in the dedicated control buffer 9. The advantage of marking traps as control is that pressure in the data plane does not prevent the control traffic to be processed. When the LLDP trap was introduced, it was marked as a control trap. But then in commit aed4b5721143 ("mlxsw: spectrum: PTP: Hook into packet receive path"), PTP traps were introduced. 
Because Ethernet-encapsulated PTP packets look to the Spectrum-1 ASIC as LLDP traffic and are trapped under the LLDP trap, this trap was reconfigured as non-control, in sync with the PTP traps. There is however no requirement that PTP traffic be handled as data. Besides, the usual encapsulation for PTP traffic is UDP, not bare Ethernet, and that is in deployments that even need PTP, which is far less common than LLDP. This is reflected by the default policer, which was not bumped up to the 19Kpps / 24Kpps that is the expected load of a PTP-enabled Spectrum-1 switch. Marking of LLDP trap as non-control was therefore probably misguided. In this patch, change it back to control. Reported-by: Maksym Yaremchuk Signed-off-by: Petr Machata Signed-off-by: Ido Schimmel Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c index 433f14ade464..02ba6aa01105 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_trap.c @@ -709,7 +709,7 @@ static const struct mlxsw_sp_trap_item mlxsw_sp_trap_items_arr[] = { .trap = MLXSW_SP_TRAP_CONTROL(LLDP, LLDP, TRAP), .listeners_arr = { MLXSW_RXL(mlxsw_sp_rx_ptp_listener, LLDP, TRAP_TO_CPU, - false, SP_LLDP, DISCARD), + true, SP_LLDP, DISCARD), }, }, { From 3ee67465f7115cea05e8ef59840a3529b5911fa4 Mon Sep 17 00:00:00 2001 From: Alice Wong Date: Mon, 2 May 2022 11:40:18 -0400 Subject: [PATCH 0377/1842] drm/amdgpu/ucode: Remove firmware load type check in amdgpu_ucode_free_bo [ Upstream commit ab0cd4a9ae5b4679b714d8dbfedc0901fecdce9f ] When psp_hw_init failed, it will set the load_type to AMDGPU_FW_LOAD_DIRECT. During amdgpu_device_ip_fini, amdgpu_ucode_free_bo checks that load_type is AMDGPU_FW_LOAD_DIRECT and skips deallocating fw_buf causing memory leak. Remove load_type check in amdgpu_ucode_free_bo. Signed-off-by: Alice Wong Reviewed-by: Alex Deucher Signed-off-by: Alex Deucher Signed-off-by: Sasha Levin --- drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c index b313ce4c3e97..30005ed8156f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c @@ -625,8 +625,7 @@ int amdgpu_ucode_create_bo(struct amdgpu_device *adev) void amdgpu_ucode_free_bo(struct amdgpu_device *adev) { - if (adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT) - amdgpu_bo_free_kernel(&adev->firmware.fw_buf, + amdgpu_bo_free_kernel(&adev->firmware.fw_buf, &adev->firmware.fw_buf_mc, &adev->firmware.fw_buf_ptr); } From 296f8ca0f73f5268cd9b85cf72ff783596b2264e Mon Sep 17 00:00:00 2001 From: Dongliang Mu Date: Fri, 6 May 2022 15:24:25 +0800 Subject: [PATCH 0378/1842] HID: bigben: fix slab-out-of-bounds Write in bigben_probe [ Upstream commit fc4ef9d5724973193bfa5ebed181dba6de3a56db ] There is a slab-out-of-bounds Write bug in hid-bigbenff driver. The problem is the driver assumes the device must have an input but some malicious devices violate this assumption. Fix this by checking hid_device's input is non-empty before its usage. 
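The underlying hazard is generic: list_first_entry() on an empty list yields the list head itself re-interpreted as an entry, so any write through the result corrupts neighbouring memory. A minimal sketch of the defensive pattern (the helper name is illustrative; the fields are the ones used in the diff below):

#include <linux/hid.h>
#include <linux/list.h>

/* Never assume a HID device enumerated at least one input. */
static struct hid_input *first_hid_input_or_null(struct hid_device *hid)
{
        if (list_empty(&hid->inputs))
                return NULL;    /* malicious or odd device: no inputs */

        return list_first_entry(&hid->inputs, struct hid_input, list);
}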
Reported-by: syzkaller Signed-off-by: Dongliang Mu Signed-off-by: Jiri Kosina Signed-off-by: Sasha Levin --- drivers/hid/hid-bigbenff.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c index 74ad8bf98bfd..e8c5e3ac9fff 100644 --- a/drivers/hid/hid-bigbenff.c +++ b/drivers/hid/hid-bigbenff.c @@ -347,6 +347,12 @@ static int bigben_probe(struct hid_device *hid, bigben->report = list_entry(report_list->next, struct hid_report, list); + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + error = -ENODEV; + goto error_hw_stop; + } + hidinput = list_first_entry(&hid->inputs, struct hid_input, list); set_bit(FF_RUMBLE, hidinput->input->ffbit); From 4f99bde59eef838685dfe5fcb4fbbfb02fb2265a Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Wed, 4 May 2022 18:08:52 +0100 Subject: [PATCH 0379/1842] ASoC: tscs454: Add endianness flag in snd_soc_component_driver [ Upstream commit ff69ec96b87dccb3a29edef8cec5d4fefbbc2055 ] The endianness flag is used on the CODEC side to specify an ambivalence to endian, typically because it is lost over the hardware link. This device receives audio over an I2S DAI and as such should have endianness applied. A fixup is also required to use the width directly rather than relying on the format in hw_params, now both little and big endian would be supported. It is worth noting this changes the behaviour of S24_LE to use a word length of 24 rather than 32. This would appear to be a correction since the fact S24_LE is stored as 32 bits should not be presented over the bus. Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220504170905.332415-26-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/tscs454.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/sound/soc/codecs/tscs454.c b/sound/soc/codecs/tscs454.c index d0af16b4db2f..a6f339bb4771 100644 --- a/sound/soc/codecs/tscs454.c +++ b/sound/soc/codecs/tscs454.c @@ -3115,18 +3115,17 @@ static int set_aif_sample_format(struct snd_soc_component *component, unsigned int width; int ret; - switch (format) { - case SNDRV_PCM_FORMAT_S16_LE: + switch (snd_pcm_format_width(format)) { + case 16: width = FV_WL_16; break; - case SNDRV_PCM_FORMAT_S20_3LE: + case 20: width = FV_WL_20; break; - case SNDRV_PCM_FORMAT_S24_3LE: + case 24: width = FV_WL_24; break; - case SNDRV_PCM_FORMAT_S24_LE: - case SNDRV_PCM_FORMAT_S32_LE: + case 32: width = FV_WL_32; break; default: @@ -3321,6 +3320,7 @@ static const struct snd_soc_component_driver soc_component_dev_tscs454 = { .num_dapm_routes = ARRAY_SIZE(tscs454_intercon), .controls = tscs454_snd_controls, .num_controls = ARRAY_SIZE(tscs454_snd_controls), + .endianness = 1, }; #define TSCS454_RATES SNDRV_PCM_RATE_8000_96000 From 312c43e98ed190bd8fd7a71a0addf9539d5b8ab1 Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Mon, 9 May 2022 20:57:40 -0700 Subject: [PATCH 0380/1842] net: remove two BUG() from skb_checksum_help() [ Upstream commit d7ea0d9df2a6265b2b180d17ebc64b38105968fc ] I have a syzbot report that managed to get a crash in skb_checksum_help() If syzbot can trigger these BUG(), it makes sense to replace them with more friendly WARN_ON_ONCE() since skb_checksum_help() can instead return an error code. Note that syzbot will still crash there, until real bug is fixed. Signed-off-by: Eric Dumazet Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/core/dev.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/net/core/dev.c b/net/core/dev.c index 0bab2aca07fd..af52050b0f38 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -3241,11 +3241,15 @@ int skb_checksum_help(struct sk_buff *skb) } offset = skb_checksum_start_offset(skb); - BUG_ON(offset >= skb_headlen(skb)); + ret = -EINVAL; + if (WARN_ON_ONCE(offset >= skb_headlen(skb))) + goto out; + csum = skb_checksum(skb, offset, skb->len - offset, 0); offset += skb->csum_offset; - BUG_ON(offset + sizeof(__sum16) > skb_headlen(skb)); + if (WARN_ON_ONCE(offset + sizeof(__sum16) > skb_headlen(skb))) + goto out; ret = skb_ensure_writable(skb, offset + sizeof(__sum16)); if (ret) From 46054583988335df39a54862554a8952a69c5649 Mon Sep 17 00:00:00 2001 From: Heiko Carstens Date: Fri, 6 May 2022 11:33:19 +0200 Subject: [PATCH 0381/1842] s390/preempt: disable __preempt_count_add() optimization for PROFILE_ALL_BRANCHES MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 63678eecec57fc51b778be3da35a397931287170 ] gcc 12 does not (always) optimize away code that should only be generated if parameters are constant and within in a certain range. This depends on various obscure kernel config options, however in particular PROFILE_ALL_BRANCHES can trigger this compile error: In function ‘__atomic_add_const’, inlined from ‘__preempt_count_add.part.0’ at ./arch/s390/include/asm/preempt.h:50:3: ./arch/s390/include/asm/atomic_ops.h:80:9: error: impossible constraint in ‘asm’ 80 | asm volatile( \ | ^~~ Workaround this by simply disabling the optimization for PROFILE_ALL_BRANCHES, since the kernel will be so slow, that this optimization won't matter at all. Reported-by: Thomas Richter Reviewed-by: Sven Schnelle Signed-off-by: Heiko Carstens Signed-off-by: Sasha Levin --- arch/s390/include/asm/preempt.h | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h index b5f545db461a..60e101b8460c 100644 --- a/arch/s390/include/asm/preempt.h +++ b/arch/s390/include/asm/preempt.h @@ -46,10 +46,17 @@ static inline bool test_preempt_need_resched(void) static inline void __preempt_count_add(int val) { - if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) - __atomic_add_const(val, &S390_lowcore.preempt_count); - else - __atomic_add(val, &S390_lowcore.preempt_count); + /* + * With some obscure config options and CONFIG_PROFILE_ALL_BRANCHES + * enabled, gcc 12 fails to handle __builtin_constant_p(). + */ + if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES)) { + if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) { + __atomic_add_const(val, &S390_lowcore.preempt_count); + return; + } + } + __atomic_add(val, &S390_lowcore.preempt_count); } static inline void __preempt_count_sub(int val) From 0c05c03c51e5c85f7ec4e20840da3f51d38ef315 Mon Sep 17 00:00:00 2001 From: Ravi Bangoria Date: Mon, 9 May 2022 10:19:07 +0530 Subject: [PATCH 0382/1842] perf/amd/ibs: Cascade pmu init functions' return value [ Upstream commit 39b2ca75eec8a33e2ffdb8aa0c4840ec3e3b472c ] IBS pmu initialization code ignores return value provided by callee functions. Fix it. 
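The shape of the fix is the standard register-and-unwind cascade; a generic sketch of the pattern, with purely hypothetical step names standing in for the IBS-specific calls in the diff below:

/* All functions here are illustrative stubs for the pattern only. */
static int register_a(void) { return 0; }
static int register_b(void) { return 0; }
static int register_c(void) { return 0; }
static void unregister_a(void) { }
static void unregister_b(void) { }

static int init_cascade(void)
{
        int ret;

        ret = register_a();
        if (ret)
                return ret;

        ret = register_b();
        if (ret)
                goto err_b;

        ret = register_c();
        if (ret)
                goto err_c;

        return 0;

err_c:
        unregister_b();
err_b:
        unregister_a();
        return ret;
}

Each label undoes exactly the steps that succeeded before the failing one, in reverse order, and the error code is propagated to the caller instead of being dropped.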
Signed-off-by: Ravi Bangoria Signed-off-by: Peter Zijlstra (Intel) Link: https://lore.kernel.org/r/20220509044914.1473-2-ravi.bangoria@amd.com Signed-off-by: Sasha Levin --- arch/x86/events/amd/ibs.c | 37 +++++++++++++++++++++++++++++-------- 1 file changed, 29 insertions(+), 8 deletions(-) diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index ccc9ee1971e8..780d89d2ae32 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -764,9 +764,10 @@ static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name) return ret; } -static __init void perf_event_ibs_init(void) +static __init int perf_event_ibs_init(void) { struct attribute **attr = ibs_op_format_attrs; + int ret; /* * Some chips fail to reset the fetch count when it is written; instead @@ -778,7 +779,9 @@ static __init void perf_event_ibs_init(void) if (boot_cpu_data.x86 == 0x19 && boot_cpu_data.x86_model < 0x10) perf_ibs_fetch.fetch_ignore_if_zero_rip = 1; - perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch"); + ret = perf_ibs_pmu_init(&perf_ibs_fetch, "ibs_fetch"); + if (ret) + return ret; if (ibs_caps & IBS_CAPS_OPCNT) { perf_ibs_op.config_mask |= IBS_OP_CNT_CTL; @@ -791,15 +794,35 @@ static __init void perf_event_ibs_init(void) perf_ibs_op.cnt_mask |= IBS_OP_MAX_CNT_EXT_MASK; } - perf_ibs_pmu_init(&perf_ibs_op, "ibs_op"); + ret = perf_ibs_pmu_init(&perf_ibs_op, "ibs_op"); + if (ret) + goto err_op; + + ret = register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs"); + if (ret) + goto err_nmi; - register_nmi_handler(NMI_LOCAL, perf_ibs_nmi_handler, 0, "perf_ibs"); pr_info("perf: AMD IBS detected (0x%08x)\n", ibs_caps); + return 0; + +err_nmi: + perf_pmu_unregister(&perf_ibs_op.pmu); + free_percpu(perf_ibs_op.pcpu); + perf_ibs_op.pcpu = NULL; +err_op: + perf_pmu_unregister(&perf_ibs_fetch.pmu); + free_percpu(perf_ibs_fetch.pcpu); + perf_ibs_fetch.pcpu = NULL; + + return ret; } #else /* defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD) */ -static __init void perf_event_ibs_init(void) { } +static __init int perf_event_ibs_init(void) +{ + return 0; +} #endif @@ -1069,9 +1092,7 @@ static __init int amd_ibs_init(void) x86_pmu_amd_ibs_starting_cpu, x86_pmu_amd_ibs_dying_cpu); - perf_event_ibs_init(); - - return 0; + return perf_event_ibs_init(); } /* Since we need the pci subsystem to init ibs we can't do this earlier: */ From a61583744ef63157d52117bb1cf08ad4de455ebf Mon Sep 17 00:00:00 2001 From: Patrice Chotard Date: Wed, 11 May 2022 09:46:42 +0200 Subject: [PATCH 0383/1842] spi: stm32-qspi: Fix wait_cmd timeout in APM mode [ Upstream commit d83d89ea68b4726700fa87b22db075e4217e691c ] In APM mode, TCF and TEF flags are not set. To avoid timeout in stm32_qspi_wait_cmd(), don't check if TCF/TEF are set. 
Signed-off-by: Patrice Chotard Reported-by: eberhard.stoll@kontron.de Link: https://lore.kernel.org/r/20220511074644.558874-2-patrice.chotard@foss.st.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/spi/spi-stm32-qspi.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c index 4f24f6392212..9c58dcd7b324 100644 --- a/drivers/spi/spi-stm32-qspi.c +++ b/drivers/spi/spi-stm32-qspi.c @@ -295,7 +295,8 @@ static int stm32_qspi_wait_cmd(struct stm32_qspi *qspi, if (!op->data.nbytes) goto wait_nobusy; - if (readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) + if ((readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) || + qspi->fmode == CCR_FMODE_APM) goto out; reinit_completion(&qspi->data_completion); From 1ecd01d77c9bf5517617862d87b817804b67c771 Mon Sep 17 00:00:00 2001 From: Mikulas Patocka Date: Tue, 10 May 2022 13:17:32 -0400 Subject: [PATCH 0384/1842] dma-debug: change allocation mode from GFP_NOWAIT to GFP_ATIOMIC [ Upstream commit 84bc4f1dbbbb5f8aa68706a96711dccb28b518e5 ] We observed the error "cacheline tracking ENOMEM, dma-debug disabled" during a light system load (copying some files). The reason for this error is that the dma_active_cacheline radix tree uses GFP_NOWAIT allocation - so it can't access the emergency memory reserves and it fails as soon as anybody reaches the watermark. This patch changes GFP_NOWAIT to GFP_ATOMIC, so that it can access the emergency memory reserves. Signed-off-by: Mikulas Patocka Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- kernel/dma/debug.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c index f8ae54679865..ee7da1f2462f 100644 --- a/kernel/dma/debug.c +++ b/kernel/dma/debug.c @@ -448,7 +448,7 @@ void debug_dma_dump_mappings(struct device *dev) * other hand, consumes a single dma_debug_entry, but inserts 'nents' * entries into the tree. */ -static RADIX_TREE(dma_active_cacheline, GFP_NOWAIT); +static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC); static DEFINE_SPINLOCK(radix_lock); #define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1) #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT) From 0b7c1dc7ee67f3dd56d7bc64dbf3c0dc86aed4a1 Mon Sep 17 00:00:00 2001 From: Mario Limonciello Date: Tue, 10 May 2022 08:11:36 -0500 Subject: [PATCH 0385/1842] ACPI: PM: Block ASUS B1400CEAE from suspend to idle by default [ Upstream commit d52848620de00cde4a3a5df908e231b8c8868250 ] ASUS B1400CEAE fails to resume from suspend to idle by default. This was bisected back to commit df4f9bc4fb9c ("nvme-pci: add support for ACPI StorageD3Enable property") but this is a red herring to the problem. Before this commit the system wasn't getting into deepest sleep state. Presumably this commit is allowing entry into deepest sleep state as advertised by firmware, but there are some other problems related to the wakeup. As it is confirmed the system works properly with S3, set the default for this system to S3. Reported-by: Jian-Hong Pan Link: https://bugzilla.kernel.org/show_bug.cgi?id=215742 Signed-off-by: Mario Limonciello Tested-by: Jian-Hong Pan Signed-off-by: Rafael J. 
Wysocki Signed-off-by: Sasha Levin --- drivers/acpi/sleep.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c index 503935b1deeb..cfda5720de02 100644 --- a/drivers/acpi/sleep.c +++ b/drivers/acpi/sleep.c @@ -377,6 +377,18 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = { DMI_MATCH(DMI_PRODUCT_NAME, "20GGA00L00"), }, }, + /* + * ASUS B1400CEAE hangs on resume from suspend (see + * https://bugzilla.kernel.org/show_bug.cgi?id=215742). + */ + { + .callback = init_default_s3, + .ident = "ASUS B1400CEAE", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), + DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK B1400CEAE"), + }, + }, {}, }; From fa390c8b6256b1095341a9ab9f8fc0053f53bf9f Mon Sep 17 00:00:00 2001 From: Corey Minyard Date: Fri, 1 Apr 2022 07:44:53 -0500 Subject: [PATCH 0386/1842] ipmi:ssif: Check for NULL msg when handling events and messages [ Upstream commit 7602b957e2404e5f98d9a40b68f1fd27f0028712 ] Even though it's not possible to get into the SSIF_GETTING_MESSAGES and SSIF_GETTING_EVENTS states without a valid message in the msg field, it's probably best to be defensive here and check and print a log, since that means something else went wrong. Also add a default clause to that switch statement to release the lock and print a log, in case the state variable gets messed up somehow. Reported-by: Haowen Bai Signed-off-by: Corey Minyard Signed-off-by: Sasha Levin --- drivers/char/ipmi/ipmi_ssif.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c index 3de679723648..477139749513 100644 --- a/drivers/char/ipmi/ipmi_ssif.c +++ b/drivers/char/ipmi/ipmi_ssif.c @@ -840,6 +840,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, break; case SSIF_GETTING_EVENTS: + if (!msg) { + /* Should never happen, but just in case. */ + dev_warn(&ssif_info->client->dev, + "No message set while getting events\n"); + ipmi_ssif_unlock_cond(ssif_info, flags); + break; + } + if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) { /* Error getting event, probably done. */ msg->done(msg); @@ -864,6 +872,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, break; case SSIF_GETTING_MESSAGES: + if (!msg) { + /* Should never happen, but just in case. */ + dev_warn(&ssif_info->client->dev, + "No message set while getting messages\n"); + ipmi_ssif_unlock_cond(ssif_info, flags); + break; + } + if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) { /* Error getting event, probably done. */ msg->done(msg); @@ -887,6 +903,13 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, deliver_recv_msg(ssif_info, msg); } break; + + default: + /* Should never happen, but just in case. */ + dev_warn(&ssif_info->client->dev, + "Invalid state in message done handling: %d\n", + ssif_info->ssif_state); + ipmi_ssif_unlock_cond(ssif_info, flags); } flags = ipmi_ssif_lock_cond(ssif_info, &oflags); From eb7a71b7b2b8ed33d05bc68f6c6f408cc3dc96f3 Mon Sep 17 00:00:00 2001 From: Corey Minyard Date: Fri, 15 Apr 2022 07:23:32 -0500 Subject: [PATCH 0387/1842] ipmi: Fix pr_fmt to avoid compilation issues [ Upstream commit 2ebaf18a0b7fb764bba6c806af99fe868cee93de ] The was it was wouldn't work in some situations, simplify it. What was there was unnecessary complexity. 
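For context, pr_fmt() is a plain preprocessor hook: printk.h expands every pr_*() call through it, so a definition placed before the #include lines prefixes every message emitted from that file. A minimal sketch with a hypothetical module name:

/* Must be defined before any #include that pulls in printk.h. */
#define pr_fmt(fmt) "mydrv: " fmt

#include <linux/kernel.h>
#include <linux/module.h>

static int __init mydrv_init(void)
{
        pr_info("loaded\n");            /* prints "mydrv: loaded" */
        return 0;
}
module_init(mydrv_init);

static void __exit mydrv_exit(void)
{
        pr_info("unloaded\n");          /* prints "mydrv: unloaded" */
}
module_exit(mydrv_exit);

MODULE_LICENSE("GPL");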
Reported-by: kernel test robot Signed-off-by: Corey Minyard Signed-off-by: Sasha Levin --- drivers/char/ipmi/ipmi_msghandler.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c index 8f147274f826..05e7339752ac 100644 --- a/drivers/char/ipmi/ipmi_msghandler.c +++ b/drivers/char/ipmi/ipmi_msghandler.c @@ -11,8 +11,8 @@ * Copyright 2002 MontaVista Software Inc. */ -#define pr_fmt(fmt) "%s" fmt, "IPMI message handler: " -#define dev_fmt pr_fmt +#define pr_fmt(fmt) "IPMI message handler: " fmt +#define dev_fmt(fmt) pr_fmt(fmt) #include #include From f9413b90230d3339fcbe1e3ffc7d4338417b8091 Mon Sep 17 00:00:00 2001 From: Dongliang Mu Date: Wed, 11 May 2022 09:44:52 +0800 Subject: [PATCH 0388/1842] rtlwifi: Use pr_warn instead of WARN_ONCE [ Upstream commit ad732da434a2936128769216eddaece3b1af4588 ] This memory allocation failure can be triggered by fault injection or high pressure testing, resulting a WARN. Fix this by replacing WARN with pr_warn. Reported-by: syzkaller Signed-off-by: Dongliang Mu Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220511014453.1621366-1-dzm91@hust.edu.cn Signed-off-by: Sasha Levin --- drivers/net/wireless/realtek/rtlwifi/usb.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c index 06e073defad6..c6e4fda7e431 100644 --- a/drivers/net/wireless/realtek/rtlwifi/usb.c +++ b/drivers/net/wireless/realtek/rtlwifi/usb.c @@ -1015,7 +1015,7 @@ int rtl_usb_probe(struct usb_interface *intf, hw = ieee80211_alloc_hw(sizeof(struct rtl_priv) + sizeof(struct rtl_usb_priv), &rtl_ops); if (!hw) { - WARN_ONCE(true, "rtl_usb: ieee80211 alloc failed\n"); + pr_warn("rtl_usb: ieee80211 alloc failed\n"); return -ENOMEM; } rtlpriv = hw->priv; From 8ddc89437ccefa18279918c19a61fd81527f40b9 Mon Sep 17 00:00:00 2001 From: Hangyu Hua Date: Thu, 24 Mar 2022 09:37:24 +0100 Subject: [PATCH 0389/1842] media: rga: fix possible memory leak in rga_probe [ Upstream commit a71eb6025305192e646040cd76ccacb5bd48a1b5 ] rga->m2m_dev needs to be freed when rga_probe fails. 
Signed-off-by: Hangyu Hua Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/rockchip/rga/rga.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c index d99ea8973b67..e3246344fb72 100644 --- a/drivers/media/platform/rockchip/rga/rga.c +++ b/drivers/media/platform/rockchip/rga/rga.c @@ -868,7 +868,7 @@ static int rga_probe(struct platform_device *pdev) ret = pm_runtime_resume_and_get(rga->dev); if (ret < 0) - goto rel_vdev; + goto rel_m2m; rga->version.major = (rga_read(rga, RGA_VERSION_INFO) >> 24) & 0xFF; rga->version.minor = (rga_read(rga, RGA_VERSION_INFO) >> 20) & 0x0F; @@ -884,7 +884,7 @@ static int rga_probe(struct platform_device *pdev) DMA_ATTR_WRITE_COMBINE); if (!rga->cmdbuf_virt) { ret = -ENOMEM; - goto rel_vdev; + goto rel_m2m; } rga->src_mmu_pages = @@ -921,6 +921,8 @@ static int rga_probe(struct platform_device *pdev) free_dma: dma_free_attrs(rga->dev, RGA_CMDBUF_SIZE, rga->cmdbuf_virt, rga->cmdbuf_phy, DMA_ATTR_WRITE_COMBINE); +rel_m2m: + v4l2_m2m_release(rga->m2m_dev); rel_vdev: video_device_release(vfd); unreg_v4l2_dev: From f3915b46651a771f1f421cc1004198caa45f63e7 Mon Sep 17 00:00:00 2001 From: Philipp Zabel Date: Tue, 26 Apr 2022 11:15:55 +0200 Subject: [PATCH 0390/1842] media: coda: limit frame interval enumeration to supported encoder frame sizes [ Upstream commit 67e33dd957880879e785cfea83a3aa24bd5c5577 ] Let VIDIOC_ENUM_FRAMEINTERVALS return -EINVAL if userspace queries frame intervals for frame sizes unsupported by the encoder. Fixes the following v4l2-compliance failure: fail: v4l2-test-formats.cpp(123): found frame intervals for invalid size 47x16 fail: v4l2-test-formats.cpp(282): node->codec_mask & STATEFUL_ENCODER test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: FAIL [hverkuil: drop incorrect 'For decoder devices, return -ENOTTY.' 
in the commit log] Signed-off-by: Philipp Zabel Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/coda/coda-common.c | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-) diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c index 2333079a83c7..99f6d22e0c3c 100644 --- a/drivers/media/platform/coda/coda-common.c +++ b/drivers/media/platform/coda/coda-common.c @@ -1318,7 +1318,8 @@ static int coda_enum_frameintervals(struct file *file, void *fh, struct v4l2_frmivalenum *f) { struct coda_ctx *ctx = fh_to_ctx(fh); - int i; + struct coda_q_data *q_data; + const struct coda_codec *codec; if (f->index) return -EINVAL; @@ -1327,12 +1328,19 @@ static int coda_enum_frameintervals(struct file *file, void *fh, if (!ctx->vdoa && f->pixel_format == V4L2_PIX_FMT_YUYV) return -EINVAL; - for (i = 0; i < CODA_MAX_FORMATS; i++) { - if (f->pixel_format == ctx->cvd->src_formats[i] || - f->pixel_format == ctx->cvd->dst_formats[i]) - break; + if (coda_format_normalize_yuv(f->pixel_format) == V4L2_PIX_FMT_YUV420) { + q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE); + codec = coda_find_codec(ctx->dev, f->pixel_format, + q_data->fourcc); + } else { + codec = coda_find_codec(ctx->dev, V4L2_PIX_FMT_YUV420, + f->pixel_format); } - if (i == CODA_MAX_FORMATS) + if (!codec) + return -EINVAL; + + if (f->width < MIN_W || f->width > codec->max_w || + f->height < MIN_H || f->height > codec->max_h) return -EINVAL; f->type = V4L2_FRMIVAL_TYPE_CONTINUOUS; From 4cf6ba93678a503558d2f524d55e4f8ad0169166 Mon Sep 17 00:00:00 2001 From: Tetsuo Handa Date: Mon, 2 May 2022 05:49:04 +0200 Subject: [PATCH 0391/1842] media: imon: reorganize serialization [ Upstream commit db264d4c66c0fe007b5d19fd007707cd0697603d ] Since usb_register_dev() from imon_init_display() from imon_probe() holds minor_rwsem while display_open() which holds driver_lock and ictx->lock is called with minor_rwsem held from usb_open(), holding driver_lock or ictx->lock when calling usb_register_dev() causes circular locking dependency problem. Since usb_deregister_dev() from imon_disconnect() holds minor_rwsem while display_open() which holds driver_lock is called with minor_rwsem held, holding driver_lock when calling usb_deregister_dev() also causes circular locking dependency problem. Sean Young explained that the problem is there are imon devices which have two usb interfaces, even though it is one device. The probe and disconnect function of both usb interfaces can run concurrently. Alan Stern responded that the driver and USB cores guarantee that when an interface is probed, both the interface and its USB device are locked. Ditto for when the disconnect callback gets run. So concurrent probing/ disconnection of multiple interfaces on the same device is not possible. Therefore, we don't need locks for handling race between imon_probe() and imon_disconnect(). But we still need to handle race between display_open() /vfd_write()/lcd_write()/display_close() and imon_disconnect(), for disconnect event can happen while file descriptors are in use. Since "struct file"->private_data is set by display_open(), vfd_write()/ lcd_write()/display_close() can assume that "struct file"->private_data is not NULL even after usb_set_intfdata(interface, NULL) was called. Replace insufficiently held driver_lock with refcount_t based management. Add a boolean flag for recording whether imon_disconnect() was already called. 
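The open-side idiom adopted here is the usual "check under RCU, then refcount_inc_not_zero()" pattern; a condensed sketch (the helper names are illustrative, the fields and free path match the diff below):

/* Open path: take a reference only if disconnect has not run yet. */
static struct imon_context *imon_get_context(struct usb_interface *intf)
{
        struct imon_context *ictx;

        rcu_read_lock();
        ictx = usb_get_intfdata(intf);
        if (!ictx || ictx->disconnected ||
            !refcount_inc_not_zero(&ictx->users))
                ictx = NULL;    /* device gone or disconnect already ran */
        rcu_read_unlock();

        return ictx;
}

/* Close/error path: the last user frees the context (via kfree_rcu()). */
static void imon_put_context(struct imon_context *ictx)
{
        if (refcount_dec_and_test(&ictx->users))
                free_imon_context(ictx);
}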
Use RCU for accessing this boolean flag and refcount_t. Since the boolean flag for imon_disconnect() is shared, disconnect event on either intf0 or intf1 affects both interfaces. But I assume that this change does not matter, for usually disconnect event would not happen while interfaces are in use. Link: https://syzkaller.appspot.com/bug?extid=c558267ad910fc494497 Reported-by: syzbot Signed-off-by: Tetsuo Handa Tested-by: syzbot Cc: Alan Stern Signed-off-by: Sean Young Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/rc/imon.c | 99 +++++++++++++++++++---------------------- 1 file changed, 47 insertions(+), 52 deletions(-) diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c index a7962ca2ac8e..bc9ac6002e25 100644 --- a/drivers/media/rc/imon.c +++ b/drivers/media/rc/imon.c @@ -153,6 +153,24 @@ struct imon_context { const struct imon_usb_dev_descr *dev_descr; /* device description with key */ /* table for front panels */ + /* + * Fields for deferring free_imon_context(). + * + * Since reference to "struct imon_context" is stored into + * "struct file"->private_data, we need to remember + * how many file descriptors might access this "struct imon_context". + */ + refcount_t users; + /* + * Use a flag for telling display_open()/vfd_write()/lcd_write() that + * imon_disconnect() was already called. + */ + bool disconnected; + /* + * We need to wait for RCU grace period in order to allow + * display_open() to safely check ->disconnected and increment ->users. + */ + struct rcu_head rcu; }; #define TOUCH_TIMEOUT (HZ/30) @@ -160,18 +178,18 @@ struct imon_context { /* vfd character device file operations */ static const struct file_operations vfd_fops = { .owner = THIS_MODULE, - .open = &display_open, - .write = &vfd_write, - .release = &display_close, + .open = display_open, + .write = vfd_write, + .release = display_close, .llseek = noop_llseek, }; /* lcd character device file operations */ static const struct file_operations lcd_fops = { .owner = THIS_MODULE, - .open = &display_open, - .write = &lcd_write, - .release = &display_close, + .open = display_open, + .write = lcd_write, + .release = display_close, .llseek = noop_llseek, }; @@ -439,9 +457,6 @@ static struct usb_driver imon_driver = { .id_table = imon_usb_id_table, }; -/* to prevent races between open() and disconnect(), probing, etc */ -static DEFINE_MUTEX(driver_lock); - /* Module bookkeeping bits */ MODULE_AUTHOR(MOD_AUTHOR); MODULE_DESCRIPTION(MOD_DESC); @@ -481,9 +496,11 @@ static void free_imon_context(struct imon_context *ictx) struct device *dev = ictx->dev; usb_free_urb(ictx->tx_urb); + WARN_ON(ictx->dev_present_intf0); usb_free_urb(ictx->rx_urb_intf0); + WARN_ON(ictx->dev_present_intf1); usb_free_urb(ictx->rx_urb_intf1); - kfree(ictx); + kfree_rcu(ictx, rcu); dev_dbg(dev, "%s: iMON context freed\n", __func__); } @@ -499,9 +516,6 @@ static int display_open(struct inode *inode, struct file *file) int subminor; int retval = 0; - /* prevent races with disconnect */ - mutex_lock(&driver_lock); - subminor = iminor(inode); interface = usb_find_interface(&imon_driver, subminor); if (!interface) { @@ -509,13 +523,16 @@ static int display_open(struct inode *inode, struct file *file) retval = -ENODEV; goto exit; } - ictx = usb_get_intfdata(interface); - if (!ictx) { + rcu_read_lock(); + ictx = usb_get_intfdata(interface); + if (!ictx || ictx->disconnected || !refcount_inc_not_zero(&ictx->users)) { + rcu_read_unlock(); pr_err("no context found for minor %d\n", subminor); retval = -ENODEV; goto 
exit; } + rcu_read_unlock(); mutex_lock(&ictx->lock); @@ -533,8 +550,10 @@ static int display_open(struct inode *inode, struct file *file) mutex_unlock(&ictx->lock); + if (retval && refcount_dec_and_test(&ictx->users)) + free_imon_context(ictx); + exit: - mutex_unlock(&driver_lock); return retval; } @@ -544,16 +563,9 @@ static int display_open(struct inode *inode, struct file *file) */ static int display_close(struct inode *inode, struct file *file) { - struct imon_context *ictx = NULL; + struct imon_context *ictx = file->private_data; int retval = 0; - ictx = file->private_data; - - if (!ictx) { - pr_err("no context for device\n"); - return -ENODEV; - } - mutex_lock(&ictx->lock); if (!ictx->display_supported) { @@ -568,6 +580,8 @@ static int display_close(struct inode *inode, struct file *file) } mutex_unlock(&ictx->lock); + if (refcount_dec_and_test(&ictx->users)) + free_imon_context(ictx); return retval; } @@ -937,15 +951,12 @@ static ssize_t vfd_write(struct file *file, const char __user *buf, int offset; int seq; int retval = 0; - struct imon_context *ictx; + struct imon_context *ictx = file->private_data; static const unsigned char vfd_packet6[] = { 0x01, 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF }; - ictx = file->private_data; - if (!ictx) { - pr_err_ratelimited("no context for device\n"); + if (ictx->disconnected) return -ENODEV; - } mutex_lock(&ictx->lock); @@ -1021,13 +1032,10 @@ static ssize_t lcd_write(struct file *file, const char __user *buf, size_t n_bytes, loff_t *pos) { int retval = 0; - struct imon_context *ictx; + struct imon_context *ictx = file->private_data; - ictx = file->private_data; - if (!ictx) { - pr_err_ratelimited("no context for device\n"); + if (ictx->disconnected) return -ENODEV; - } mutex_lock(&ictx->lock); @@ -2405,7 +2413,6 @@ static int imon_probe(struct usb_interface *interface, int ifnum, sysfs_err; int ret = 0; struct imon_context *ictx = NULL; - struct imon_context *first_if_ctx = NULL; u16 vendor, product; usbdev = usb_get_dev(interface_to_usbdev(interface)); @@ -2417,17 +2424,12 @@ static int imon_probe(struct usb_interface *interface, dev_dbg(dev, "%s: found iMON device (%04x:%04x, intf%d)\n", __func__, vendor, product, ifnum); - /* prevent races probing devices w/multiple interfaces */ - mutex_lock(&driver_lock); - first_if = usb_ifnum_to_if(usbdev, 0); if (!first_if) { ret = -ENODEV; goto fail; } - first_if_ctx = usb_get_intfdata(first_if); - if (ifnum == 0) { ictx = imon_init_intf0(interface, id); if (!ictx) { @@ -2435,9 +2437,11 @@ static int imon_probe(struct usb_interface *interface, ret = -ENODEV; goto fail; } + refcount_set(&ictx->users, 1); } else { /* this is the secondary interface on the device */ + struct imon_context *first_if_ctx = usb_get_intfdata(first_if); /* fail early if first intf failed to register */ if (!first_if_ctx) { @@ -2451,14 +2455,13 @@ static int imon_probe(struct usb_interface *interface, ret = -ENODEV; goto fail; } + refcount_inc(&ictx->users); } usb_set_intfdata(interface, ictx); if (ifnum == 0) { - mutex_lock(&ictx->lock); - if (product == 0xffdc && ictx->rf_device) { sysfs_err = sysfs_create_group(&interface->dev.kobj, &imon_rf_attr_group); @@ -2469,21 +2472,17 @@ static int imon_probe(struct usb_interface *interface, if (ictx->display_supported) imon_init_display(ictx, interface); - - mutex_unlock(&ictx->lock); } dev_info(dev, "iMON device (%04x:%04x, intf%d) on usb<%d:%d> initialized\n", vendor, product, ifnum, usbdev->bus->busnum, usbdev->devnum); - mutex_unlock(&driver_lock); usb_put_dev(usbdev); return 0; fail: 
- mutex_unlock(&driver_lock); usb_put_dev(usbdev); dev_err(dev, "unable to register, err %d\n", ret); @@ -2499,10 +2498,8 @@ static void imon_disconnect(struct usb_interface *interface) struct device *dev; int ifnum; - /* prevent races with multi-interface device probing and display_open */ - mutex_lock(&driver_lock); - ictx = usb_get_intfdata(interface); + ictx->disconnected = true; dev = ictx->dev; ifnum = interface->cur_altsetting->desc.bInterfaceNumber; @@ -2543,11 +2540,9 @@ static void imon_disconnect(struct usb_interface *interface) } } - if (!ictx->dev_present_intf0 && !ictx->dev_present_intf1) + if (refcount_dec_and_test(&ictx->users)) free_imon_context(ictx); - mutex_unlock(&driver_lock); - dev_dbg(dev, "%s: iMON device (intf%d) disconnected\n", __func__, ifnum); } From 22cdbb1354985acf9a650e01bca987d0615330ac Mon Sep 17 00:00:00 2001 From: Hans Verkuil Date: Fri, 6 May 2022 09:43:25 +0200 Subject: [PATCH 0392/1842] media: cec-adap.c: fix is_configuring state [ Upstream commit 59267fc34f4900dcd2ec3295f6be04b79aee2186 ] If an adapter is trying to claim a free logical address then it is in the 'is_configuring' state. If during that process the cable is disconnected (HPD goes low, which in turn invalidates the physical address), then cec_adap_unconfigure() is called, and that set the is_configuring boolean to false, even though the thread that's trying to claim an LA is still running. Don't touch the is_configuring bool in cec_adap_unconfigure(), it will eventually be cleared by the thread. By making that change the cec_config_log_addr() function also had to change: it was aborting if is_configuring became false (since that is what cec_adap_unconfigure() did), but that no longer works. Instead check if the physical address is invalid. That is a much more appropriate check anyway. This fixes a bug where the the adapter could be disabled even though the device was still configuring. This could cause POLL transmits to time out. Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/cec/core/cec-adap.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/media/cec/core/cec-adap.c b/drivers/media/cec/core/cec-adap.c index 2e5698fbc3a8..e23aa608f66f 100644 --- a/drivers/media/cec/core/cec-adap.c +++ b/drivers/media/cec/core/cec-adap.c @@ -1271,7 +1271,7 @@ static int cec_config_log_addr(struct cec_adapter *adap, * While trying to poll the physical address was reset * and the adapter was unconfigured, so bail out. */ - if (!adap->is_configuring) + if (adap->phys_addr == CEC_PHYS_ADDR_INVALID) return -EINTR; if (err) @@ -1328,7 +1328,6 @@ static void cec_adap_unconfigure(struct cec_adapter *adap) adap->phys_addr != CEC_PHYS_ADDR_INVALID) WARN_ON(adap->ops->adap_log_addr(adap, CEC_LOG_ADDR_INVALID)); adap->log_addrs.log_addr_mask = 0; - adap->is_configuring = false; adap->is_configured = false; cec_flush(adap); wake_up_interruptible(&adap->kthread_waitq); @@ -1520,9 +1519,10 @@ static int cec_config_thread_func(void *arg) for (i = 0; i < las->num_log_addrs; i++) las->log_addr[i] = CEC_LOG_ADDR_INVALID; cec_adap_unconfigure(adap); + adap->is_configuring = false; adap->kthread_config = NULL; - mutex_unlock(&adap->lock); complete(&adap->config_completion); + mutex_unlock(&adap->lock); return 0; } From 8671aeeef29d440b4fcb446426781b04970ccd70 Mon Sep 17 00:00:00 2001 From: "Jason A. 
Donenfeld" Date: Sat, 23 Apr 2022 21:11:41 +0200 Subject: [PATCH 0393/1842] openrisc: start CPU timer early in boot [ Upstream commit 516dd4aacd67a0f27da94f3fe63fe0f4dbab6e2b ] In order to measure the boot process, the timer should be switched on as early in boot as possible. As well, the commit defines the get_cycles macro, like the previous patches in this series, so that generic code is aware that it's implemented by the platform, as is done on other archs. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Jonas Bonn Cc: Stefan Kristiansson Acked-by: Stafford Horne Reported-by: Guenter Roeck Signed-off-by: Jason A. Donenfeld Signed-off-by: Sasha Levin --- arch/openrisc/include/asm/timex.h | 1 + arch/openrisc/kernel/head.S | 9 +++++++++ 2 files changed, 10 insertions(+) diff --git a/arch/openrisc/include/asm/timex.h b/arch/openrisc/include/asm/timex.h index d52b4e536e3f..5487fa93dd9b 100644 --- a/arch/openrisc/include/asm/timex.h +++ b/arch/openrisc/include/asm/timex.h @@ -23,6 +23,7 @@ static inline cycles_t get_cycles(void) { return mfspr(SPR_TTCR); } +#define get_cycles get_cycles /* This isn't really used any more */ #define CLOCK_TICK_RATE 1000 diff --git a/arch/openrisc/kernel/head.S b/arch/openrisc/kernel/head.S index af355e3f4619..459b0a1e4eb2 100644 --- a/arch/openrisc/kernel/head.S +++ b/arch/openrisc/kernel/head.S @@ -521,6 +521,15 @@ _start: l.ori r3,r0,0x1 l.mtspr r0,r3,SPR_SR + /* + * Start the TTCR as early as possible, so that the RNG can make use of + * measurements of boot time from the earliest opportunity. Especially + * important is that the TTCR does not return zero by the time we reach + * rand_initialize(). + */ + l.movhi r3,hi(SPR_TTMR_CR) + l.mtspr r0,r3,SPR_TTMR + CLEAR_GPR(r1) CLEAR_GPR(r2) CLEAR_GPR(r3) From af98940dd33c9f9e1beb4f71c0a39260100e2a65 Mon Sep 17 00:00:00 2001 From: "Smith, Kyle Miller (Nimble Kernel)" Date: Fri, 22 Apr 2022 14:40:32 +0000 Subject: [PATCH 0394/1842] nvme-pci: fix a NULL pointer dereference in nvme_alloc_admin_tags [ Upstream commit da42761181627e9bdc37d18368b827948a583929 ] In nvme_alloc_admin_tags, the admin_q can be set to an error (typically -ENOMEM) if the blk_mq_init_queue call fails to set up the queue, which is checked immediately after the call. However, when we return the error message up the stack, to nvme_reset_work the error takes us to nvme_remove_dead_ctrl() nvme_dev_disable() nvme_suspend_queue(&dev->queues[0]). Here, we only check that the admin_q is non-NULL, rather than not an error or NULL, and begin quiescing a queue that never existed, leading to bad / NULL pointer dereference. 
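The underlying pitfall is generic: an ERR_PTR-encoded pointer is non-NULL, so a bare NULL test on a later teardown path happily dereferences it. A minimal, self-contained sketch of that shape, using stand-in helpers and names rather than the nvme code itself:

    #include <stdio.h>
    #include <errno.h>

    /* Tiny stand-ins for the kernel's ERR_PTR()/IS_ERR() helpers. */
    #define MAX_ERRNO 4095
    static inline void *ERR_PTR(long error) { return (void *)error; }
    static inline int IS_ERR(const void *ptr)
    {
            return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
    }

    static void *admin_q;                   /* plays the role of dev->ctrl.admin_q */

    static int setup(void)
    {
            admin_q = ERR_PTR(-ENOMEM);     /* queue allocation failed */
            if (IS_ERR(admin_q))
                    return -ENOMEM;         /* bug: admin_q still holds the ERR_PTR */
            return 0;
    }

    static void teardown(void)
    {
            if (admin_q)                    /* true for ERR_PTR(-ENOMEM) as well */
                    printf("quiescing bogus queue %p\n", admin_q);
    }

    int main(void)
    {
            if (setup())
                    teardown();             /* error path reaches the stale pointer */
            return 0;
    }

Resetting the pointer to NULL in the failure branch, as the one-line fix below does, is what makes the later NULL test on the teardown path meaningful.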
Signed-off-by: Kyle Smith Reviewed-by: Chaitanya Kulkarni Reviewed-by: Hannes Reinecke Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/pci.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index a36db0701d17..7de24a10dd92 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -1666,6 +1666,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev) dev->ctrl.admin_q = blk_mq_init_queue(&dev->admin_tagset); if (IS_ERR(dev->ctrl.admin_q)) { blk_mq_free_tag_set(&dev->admin_tagset); + dev->ctrl.admin_q = NULL; return -ENOMEM; } if (!blk_get_queue(dev->ctrl.admin_q)) { From 1a5a3dfd9f172dcb115072f0aea5e27d3083c20e Mon Sep 17 00:00:00 2001 From: Lin Ma Date: Mon, 16 May 2022 17:20:35 +0800 Subject: [PATCH 0395/1842] ASoC: rt5645: Fix errorenous cleanup order [ Upstream commit 2def44d3aec59e38d2701c568d65540783f90f2f ] There is a logic error when removing rt5645 device as the function rt5645_i2c_remove() first cancel the &rt5645->jack_detect_work and delete the &rt5645->btn_check_timer latter. However, since the timer handler rt5645_btn_check_callback() will re-queue the jack_detect_work, this cleanup order is buggy. That is, once the del_timer_sync in rt5645_i2c_remove is concurrently run with the rt5645_btn_check_callback, the canceled jack_detect_work will be rescheduled again, leading to possible use-after-free. This patch fix the issue by placing the del_timer_sync function before the cancel_delayed_work_sync. Signed-off-by: Lin Ma Link: https://lore.kernel.org/r/20220516092035.28283-1-linma@zju.edu.cn Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/rt5645.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c index 420003d062c7..d1533e95a74f 100644 --- a/sound/soc/codecs/rt5645.c +++ b/sound/soc/codecs/rt5645.c @@ -4095,9 +4095,14 @@ static int rt5645_i2c_remove(struct i2c_client *i2c) if (i2c->irq) free_irq(i2c->irq, rt5645); + /* + * Since the rt5645_btn_check_callback() can queue jack_detect_work, + * the timer need to be delted first + */ + del_timer_sync(&rt5645->btn_check_timer); + cancel_delayed_work_sync(&rt5645->jack_detect_work); cancel_delayed_work_sync(&rt5645->rcclock_work); - del_timer_sync(&rt5645->btn_check_timer); regulator_bulk_disable(ARRAY_SIZE(rt5645->supplies), rt5645->supplies); From 8d33585ffa2ecbfebacc840a3d5b523bd0966e69 Mon Sep 17 00:00:00 2001 From: Xie Yongji Date: Tue, 22 Mar 2022 16:06:39 +0800 Subject: [PATCH 0396/1842] nbd: Fix hung on disconnect request if socket is closed before [ Upstream commit 491bf8f236fdeec698fa6744993f1ecf3fafd1a5 ] When userspace closes the socket before sending a disconnect request, the following I/O requests will be blocked in wait_for_reconnect() until dead timeout. This will cause the following disconnect request also hung on blk_mq_quiesce_queue(). That means we have no way to disconnect a nbd device if there are some I/O requests waiting for reconnecting until dead timeout. It's not expected. So let's wake up the thread waiting for reconnecting directly when a disconnect request is sent. 
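The shape of the fix is the classic "wake every waiter when the state they wait on changes" rule. A rough userspace analogue of the same wait/wake structure (hypothetical names, pthreads standing in for wait_event_timeout(), so only the structure carries over; build with -pthread):

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t conn_wait = PTHREAD_COND_INITIALIZER;
    static bool disconnected;
    static int live_connections;

    /* Returns true if I/O may proceed, false if the request should fail. */
    static bool wait_for_reconnect(int timeout_sec)
    {
            struct timespec deadline;
            bool ok;

            clock_gettime(CLOCK_REALTIME, &deadline);
            deadline.tv_sec += timeout_sec;

            pthread_mutex_lock(&lock);
            while (!disconnected && live_connections == 0) {
                    if (pthread_cond_timedwait(&conn_wait, &lock, &deadline))
                            break;                  /* dead-connection timeout */
            }
            ok = !disconnected && live_connections > 0;
            pthread_mutex_unlock(&lock);
            return ok;
    }

    static void disconnect(void)
    {
            pthread_mutex_lock(&lock);
            disconnected = true;
            pthread_mutex_unlock(&lock);
            pthread_cond_broadcast(&conn_wait);     /* wake waiters immediately */
    }

Without the broadcast in disconnect(), a waiter only notices the state change once its timeout expires, which is exactly the hang described above.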
Reported-by: Xu Jianhai Signed-off-by: Xie Yongji Reviewed-by: Josef Bacik Link: https://lore.kernel.org/r/20220322080639.142-1-xieyongji@bytedance.com Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- drivers/block/nbd.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 59c452fff835..ecde800ba210 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -880,11 +880,15 @@ static int wait_for_reconnect(struct nbd_device *nbd) struct nbd_config *config = nbd->config; if (!config->dead_conn_timeout) return 0; - if (test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags)) + + if (!wait_event_timeout(config->conn_wait, + test_bit(NBD_RT_DISCONNECTED, + &config->runtime_flags) || + atomic_read(&config->live_connections) > 0, + config->dead_conn_timeout)) return 0; - return wait_event_timeout(config->conn_wait, - atomic_read(&config->live_connections) > 0, - config->dead_conn_timeout) > 0; + + return !test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags); } static int nbd_handle_cmd(struct nbd_cmd *cmd, int index) @@ -2029,6 +2033,7 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd) mutex_lock(&nbd->config_lock); nbd_disconnect(nbd); sock_shutdown(nbd); + wake_up(&nbd->config->conn_wait); /* * Make sure recv thread has finished, so it does not drop the last * config ref and try to destroy the workqueue from inside the work From abb5594ae2ba7b82cce85917cc6337ec5d774837 Mon Sep 17 00:00:00 2001 From: Fabio Estevam Date: Fri, 13 May 2022 08:46:12 -0300 Subject: [PATCH 0397/1842] net: phy: micrel: Allow probing without .driver_data [ Upstream commit f2ef6f7539c68c6bd6c32323d8845ee102b7c450 ] Currently, if the .probe element is present in the phy_driver structure and the .driver_data is not, a NULL pointer dereference happens. Allow passing .probe without .driver_data by inserting NULL checks for priv->type. 
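For readers skimming the diff below: the contract being introduced is simply that priv->type is now allowed to be NULL, so every dereference needs a guard. A condensed stand-alone illustration of that contract (made-up structure names, not the PHY driver itself):

    #include <stdio.h>

    struct chip_type {                      /* optional per-model descriptor */
            int led_mode_reg;
            int has_broadcast_disable;
    };

    struct chip_priv {
            const struct chip_type *type;   /* may legitimately be NULL */
            int led_mode;
    };

    static void config_init(const struct chip_priv *priv)
    {
            const struct chip_type *type = priv->type;

            /* Every use is guarded; a plain type->... would crash for NULL. */
            if (type && type->has_broadcast_disable)
                    printf("disable broadcast\n");
            if (type && priv->led_mode >= 0)
                    printf("LED setup via register %#x\n", type->led_mode_reg);
    }

    int main(void)
    {
            struct chip_priv bare = { .type = NULL, .led_mode = 0 };

            config_init(&bare);             /* safe without driver_data */
            return 0;
    }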
Signed-off-by: Fabio Estevam Reviewed-by: Andrew Lunn Link: https://lore.kernel.org/r/20220513114613.762810-1-festevam@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/phy/micrel.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c index 92e94ac94a34..bbbe198f83e8 100644 --- a/drivers/net/phy/micrel.c +++ b/drivers/net/phy/micrel.c @@ -283,7 +283,7 @@ static int kszphy_config_reset(struct phy_device *phydev) } } - if (priv->led_mode >= 0) + if (priv->type && priv->led_mode >= 0) kszphy_setup_led(phydev, priv->type->led_mode_reg, priv->led_mode); return 0; @@ -299,10 +299,10 @@ static int kszphy_config_init(struct phy_device *phydev) type = priv->type; - if (type->has_broadcast_disable) + if (type && type->has_broadcast_disable) kszphy_broadcast_disable(phydev); - if (type->has_nand_tree_disable) + if (type && type->has_nand_tree_disable) kszphy_nand_tree_disable(phydev); return kszphy_config_reset(phydev); @@ -1112,7 +1112,7 @@ static int kszphy_probe(struct phy_device *phydev) priv->type = type; - if (type->led_mode_reg) { + if (type && type->led_mode_reg) { ret = of_property_read_u32(np, "micrel,led-mode", &priv->led_mode); if (ret) @@ -1133,7 +1133,8 @@ static int kszphy_probe(struct phy_device *phydev) unsigned long rate = clk_get_rate(clk); bool rmii_ref_clk_sel_25_mhz; - priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel; + if (type) + priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel; rmii_ref_clk_sel_25_mhz = of_property_read_bool(np, "micrel,rmii-reference-clock-select-25-mhz"); From 86c02171bded33534e418c66320dfd930fda62b3 Mon Sep 17 00:00:00 2001 From: Kwanghoon Son Date: Wed, 27 Apr 2022 03:16:45 +0200 Subject: [PATCH 0398/1842] media: exynos4-is: Fix compile warning [ Upstream commit e080f5c1f2b6d02c02ee5d674e0e392ccf63bbaf ] Declare static on function 'fimc_isp_video_device_unregister'. When VIDEO_EXYNOS4_ISP_DMA_CAPTURE=n, compiler warns about warning: no previous prototype for function [-Wmissing-prototypes] Reported-by: kernel test robot Signed-off-by: Kwanghoon Son Signed-off-by: Sakari Ailus Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/exynos4-is/fimc-isp-video.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.h b/drivers/media/platform/exynos4-is/fimc-isp-video.h index edcb3a5e3cb9..2dd4ddbc748a 100644 --- a/drivers/media/platform/exynos4-is/fimc-isp-video.h +++ b/drivers/media/platform/exynos4-is/fimc-isp-video.h @@ -32,7 +32,7 @@ static inline int fimc_isp_video_device_register(struct fimc_isp *isp, return 0; } -void fimc_isp_video_device_unregister(struct fimc_isp *isp, +static inline void fimc_isp_video_device_unregister(struct fimc_isp *isp, enum v4l2_buf_type type) { } From c73aee194680523db642a180303978a04f41c21d Mon Sep 17 00:00:00 2001 From: Pierre-Louis Bossart Date: Tue, 17 May 2022 12:26:46 -0500 Subject: [PATCH 0399/1842] ASoC: max98357a: remove dependency on GPIOLIB MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 21ca3274333f5c1cbbf9d91e5b33f4f2463859b2 ] commit dcc2c012c7691 ("ASoC: Fix gpiolib dependencies") removed a series of unnecessary dependencies on GPIOLIB when the gpio was optional. A similar simplification seems valid for max98357a, so remove the dependency as well. 
This will avoid the following warning WARNING: unmet direct dependencies detected for SND_SOC_MAX98357A Depends on [n]: SOUND [=y] && !UML && SND [=y] && SND_SOC [=y] && GPIOLIB [=n] Selected by [y]: - SND_SOC_INTEL_SOF_CS42L42_MACH [=y] && SOUND [=y] && !UML && SND [=y] && SND_SOC [=y] && SND_SOC_INTEL_MACH [=y] && (SND_SOC_SOF_HDA_LINK [=y] || SND_SOC_SOF_BAYTRAIL [=n]) && I2C [=y] && ACPI [=y] && SND_HDA_CODEC_HDMI [=y] && SND_SOC_SOF_HDA_AUDIO_CODEC [=y] && (MFD_INTEL_LPSS [=y] || COMPILE_TEST [=n]) Reported-by: kernel test robot Signed-off-by: Pierre-Louis Bossart Reviewed-by: Péter Ujfalusi Link: https://lore.kernel.org/r/20220517172647.468244-2-pierre-louis.bossart@linux.intel.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/Kconfig | 1 - 1 file changed, 1 deletion(-) diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig index 52c89a6f54e9..612fd7516666 100644 --- a/sound/soc/codecs/Kconfig +++ b/sound/soc/codecs/Kconfig @@ -857,7 +857,6 @@ config SND_SOC_MAX98095 config SND_SOC_MAX98357A tristate "Maxim MAX98357A CODEC" - depends on GPIOLIB config SND_SOC_MAX98371 tristate From ff383c18799d12f9583f7d824c70c2e78d7b832f Mon Sep 17 00:00:00 2001 From: Pierre-Louis Bossart Date: Tue, 17 May 2022 12:26:47 -0500 Subject: [PATCH 0400/1842] ASoC: rt1015p: remove dependency on GPIOLIB MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit b390c25c6757b9d56cecdfbf6d55f15fc89a6386 ] commit dcc2c012c7691 ("ASoC: Fix gpiolib dependencies") removed a series of unnecessary dependencies on GPIOLIB when the gpio was optional. A similar simplification seems valid for rt1015p, so remove the dependency as well. This will avoid the following warning WARNING: unmet direct dependencies detected for SND_SOC_RT1015P Depends on [n]: SOUND [=y] && !UML && SND [=y] && SND_SOC [=y] && GPIOLIB [=n] Selected by [y]: - SND_SOC_INTEL_SOF_RT5682_MACH [=y] && SOUND [=y] && !UML && SND [=y] && SND_SOC [=y] && SND_SOC_INTEL_MACH [=y] && (SND_SOC_SOF_HDA_LINK [=y] || SND_SOC_SOF_BAYTRAIL [=n]) && I2C [=y] && ACPI [=y] && (SND_HDA_CODEC_HDMI [=y] && SND_SOC_SOF_HDA_AUDIO_CODEC [=y] && (MFD_INTEL_LPSS [=y] || COMPILE_TEST [=y]) || SND_SOC_SOF_BAYTRAIL [=n] && (X86_INTEL_LPSS [=n] || COMPILE_TEST [=y])) Reported-by: kernel test robot Signed-off-by: Pierre-Louis Bossart Reviewed-by: Péter Ujfalusi Link: https://lore.kernel.org/r/20220517172647.468244-3-pierre-louis.bossart@linux.intel.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/Kconfig | 1 - 1 file changed, 1 deletion(-) diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig index 612fd7516666..25f331551f68 100644 --- a/sound/soc/codecs/Kconfig +++ b/sound/soc/codecs/Kconfig @@ -1098,7 +1098,6 @@ config SND_SOC_RT1015 config SND_SOC_RT1015P tristate - depends on GPIOLIB config SND_SOC_RT1305 tristate From f29fb4623296f74a2757db147abab2631ba8f6ce Mon Sep 17 00:00:00 2001 From: Vincent Mailhol Date: Wed, 18 May 2022 20:43:57 +0900 Subject: [PATCH 0401/1842] can: mcp251xfd: silence clang's -Wunaligned-access warning [ Upstream commit 1a6dd9996699889313327be03981716a8337656b ] clang emits a -Wunaligned-access warning on union mcp251xfd_tx_ojb_load_buf. 
The reason is that field hw_tx_obj (not declared as packed) is being packed right after a 16 bits field inside a packed struct: | union mcp251xfd_tx_obj_load_buf { | struct __packed { | struct mcp251xfd_buf_cmd cmd; | /* ^ 16 bits fields */ | struct mcp251xfd_hw_tx_obj_raw hw_tx_obj; | /* ^ not declared as packed */ | } nocrc; | struct __packed { | struct mcp251xfd_buf_cmd_crc cmd; | struct mcp251xfd_hw_tx_obj_raw hw_tx_obj; | __be16 crc; | } crc; | } ____cacheline_aligned; Starting from LLVM 14, having an unpacked struct nested in a packed struct triggers a warning. c.f. [1]. This is a false positive because the field is always being accessed with the relevant put_unaligned_*() function. Adding __packed to the structure declaration silences the warning. [1] https://github.com/llvm/llvm-project/issues/55520 Link: https://lore.kernel.org/all/20220518114357.55452-1-mailhol.vincent@wanadoo.fr Signed-off-by: Vincent Mailhol Reported-by: kernel test robot Tested-by: Nathan Chancellor # build Signed-off-by: Marc Kleine-Budde Signed-off-by: Sasha Levin --- drivers/net/can/spi/mcp251xfd/mcp251xfd.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h index fa1246e39980..766dbd19bba6 100644 --- a/drivers/net/can/spi/mcp251xfd/mcp251xfd.h +++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd.h @@ -426,7 +426,7 @@ struct mcp251xfd_hw_tef_obj { /* The tx_obj_raw version is used in spi async, i.e. without * regmap. We have to take care of endianness ourselves. */ -struct mcp251xfd_hw_tx_obj_raw { +struct __packed mcp251xfd_hw_tx_obj_raw { __le32 id; __le32 flags; u8 data[sizeof_field(struct canfd_frame, data)]; From 76744a016e781e3ca423ed621babe4eb9ba38909 Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Thu, 19 May 2022 16:59:13 +0200 Subject: [PATCH 0402/1842] x86/microcode: Add explicit CPU vendor dependency [ Upstream commit 9c55d99e099bd7aa6b91fce8718505c35d5dfc65 ] Add an explicit dependency to the respective CPU vendor so that the respective microcode support for it gets built only when that support is enabled. Reported-by: Randy Dunlap Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/8ead0da9-9545-b10d-e3db-7df1a1f219e4@infradead.org Signed-off-by: Sasha Levin --- arch/x86/Kconfig | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index db95ac482e0e..ed713840d469 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1321,7 +1321,7 @@ config MICROCODE config MICROCODE_INTEL bool "Intel microcode loading support" - depends on MICROCODE + depends on CPU_SUP_INTEL && MICROCODE default MICROCODE help This options enables microcode patch loading support for Intel @@ -1333,7 +1333,7 @@ config MICROCODE_INTEL config MICROCODE_AMD bool "AMD microcode loading support" - depends on MICROCODE + depends on CPU_SUP_AMD && MICROCODE help If you select this option, microcode patch loading support for AMD processors will be enabled. From 6c18a0fcd6605c208888d539208fc54c2b4bada1 Mon Sep 17 00:00:00 2001 From: Geert Uytterhoeven Date: Fri, 20 May 2022 16:32:30 +0200 Subject: [PATCH 0403/1842] m68k: atari: Make Atari ROM port I/O write macros return void [ Upstream commit 30b5e6ef4a32ea4985b99200e06d6660a69f9246 ] The macros implementing Atari ROM port I/O writes do not cast away their output, unlike similar implementations for other I/O buses. 
When they are combined using conditional expressions in the definitions of outb() and friends, this triggers sparse warnings like: drivers/net/appletalk/cops.c:382:17: error: incompatible types in conditional expression (different base types): drivers/net/appletalk/cops.c:382:17: unsigned char drivers/net/appletalk/cops.c:382:17: void Fix this by adding casts to "void". Reported-by: kernel test robot Reported-by: Guenter Roeck Signed-off-by: Geert Uytterhoeven Reviewed-by: Guenter Roeck Reviewed-by: Michael Schmitz Link: https://lore.kernel.org/r/c15bedc83d90a14fffcd5b1b6bfb32b8a80282c5.1653057096.git.geert@linux-m68k.org Signed-off-by: Sasha Levin --- arch/m68k/include/asm/raw_io.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/m68k/include/asm/raw_io.h b/arch/m68k/include/asm/raw_io.h index 80eb2396d01e..3ba40bc1dfaa 100644 --- a/arch/m68k/include/asm/raw_io.h +++ b/arch/m68k/include/asm/raw_io.h @@ -80,14 +80,14 @@ ({ u16 __v = le16_to_cpu(*(__force volatile u16 *) (addr)); __v; }) #define rom_out_8(addr, b) \ - ({u8 __maybe_unused __w, __v = (b); u32 _addr = ((u32) (addr)); \ + (void)({u8 __maybe_unused __w, __v = (b); u32 _addr = ((u32) (addr)); \ __w = ((*(__force volatile u8 *) ((_addr | 0x10000) + (__v<<1)))); }) #define rom_out_be16(addr, w) \ - ({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \ + (void)({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \ __w = ((*(__force volatile u16 *) ((_addr & 0xFFFF0000UL) + ((__v & 0xFF)<<1)))); \ __w = ((*(__force volatile u16 *) ((_addr | 0x10000) + ((__v >> 8)<<1)))); }) #define rom_out_le16(addr, w) \ - ({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \ + (void)({u16 __maybe_unused __w, __v = (w); u32 _addr = ((u32) (addr)); \ __w = ((*(__force volatile u16 *) ((_addr & 0xFFFF0000UL) + ((__v >> 8)<<1)))); \ __w = ((*(__force volatile u16 *) ((_addr | 0x10000) + ((__v & 0xFF)<<1)))); }) From 4a4e2e90ecec7ce8cfa2a816f1bf1b597d68a017 Mon Sep 17 00:00:00 2001 From: David Howells Date: Sat, 21 May 2022 08:45:41 +0100 Subject: [PATCH 0404/1842] rxrpc: Return an error to sendmsg if call failed [ Upstream commit 4ba68c5192554876bd8c3afd904e3064d2915341 ] If at the end of rxrpc sendmsg() or rxrpc_kernel_send_data() the call that was being given data was aborted remotely or otherwise failed, return an error rather than returning the amount of data buffered for transmission. The call (presumably) did not complete, so there's not much point continuing with it. AF_RXRPC considers it "complete" and so will be unwilling to do anything else with it - and won't send a notification for it, deeming the return from sendmsg sufficient. Not returning an error causes afs to incorrectly handle a StoreData operation that gets interrupted by a change of address due to NAT reconfiguration. This doesn't normally affect most operations since their request parameters tend to fit into a single UDP packet and afs_make_call() returns before the server responds; StoreData is different as it involves transmission of a lot of data. 
This can be triggered on a client by doing something like: dd if=/dev/zero of=/afs/example.com/foo bs=1M count=512 at one prompt, and then changing the network address at another prompt, e.g.: ifconfig enp6s0 inet 192.168.6.2 && route add 192.168.6.1 dev enp6s0 Tracing packets on an Auristor fileserver looks something like: 192.168.6.1 -> 192.168.6.3 RX 107 ACK Idle Seq: 0 Call: 4 Source Port: 7000 Destination Port: 7001 192.168.6.3 -> 192.168.6.1 AFS (RX) 1482 FS Request: Unknown(64538) (64538) 192.168.6.3 -> 192.168.6.1 AFS (RX) 1482 FS Request: Unknown(64538) (64538) 192.168.6.1 -> 192.168.6.3 RX 107 ACK Idle Seq: 0 Call: 4 Source Port: 7000 Destination Port: 7001 192.168.6.2 -> 192.168.6.1 AFS (RX) 1482 FS Request: Unknown(0) (0) 192.168.6.2 -> 192.168.6.1 AFS (RX) 1482 FS Request: Unknown(0) (0) 192.168.6.1 -> 192.168.6.2 RX 107 ACK Exceeds Window Seq: 0 Call: 4 Source Port: 7000 Destination Port: 7001 192.168.6.1 -> 192.168.6.2 RX 74 ABORT Seq: 0 Call: 4 Source Port: 7000 Destination Port: 7001 192.168.6.1 -> 192.168.6.2 RX 74 ABORT Seq: 29321 Call: 4 Source Port: 7000 Destination Port: 7001 The Auristor fileserver logs code -453 (RXGEN_SS_UNMARSHAL), but the abort code received by kafs is -5 (RX_PROTOCOL_ERROR) as the rx layer sees the condition and generates an abort first and the unmarshal error is a consequence of that at the application layer. Reported-by: Marc Dionne Signed-off-by: David Howells cc: linux-afs@lists.infradead.org Link: http://lists.infradead.org/pipermail/linux-afs/2021-December/004810.html # v1 Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/rxrpc/sendmsg.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c index d27140c836cc..aa23ba4e2566 100644 --- a/net/rxrpc/sendmsg.c +++ b/net/rxrpc/sendmsg.c @@ -461,6 +461,12 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, success: ret = copied; + if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE) { + read_lock_bh(&call->state_lock); + if (call->error < 0) + ret = call->error; + read_unlock_bh(&call->state_lock); + } out: call->tx_pending = skb; _leave(" = %d", ret); From cb2ca93f8fe350e89e2623b9966f6940ee587c1e Mon Sep 17 00:00:00 2001 From: David Howells Date: Sat, 21 May 2022 08:45:48 +0100 Subject: [PATCH 0405/1842] rxrpc, afs: Fix selection of abort codes [ Upstream commit de696c4784f0706884458893c5a6c39b3a3ff65c ] The RX_USER_ABORT code should really only be used to indicate that the user of the rxrpc service (ie. userspace) implicitly caused a call to be aborted - for instance if the AF_RXRPC socket is closed whilst the call was in progress. (The user may also explicitly abort a call and specify the abort code to use). Change some of the points of generation to use other abort codes instead: (1) Abort the call with RXGEN_SS_UNMARSHAL or RXGEN_CC_UNMARSHAL if we see ENOMEM and EFAULT during received data delivery and abort with RX_CALL_DEAD in the default case. (2) Abort with RXGEN_SS_MARSHAL if we get ENOMEM whilst trying to send a reply. (3) Abort with RX_CALL_DEAD if we stop hearing from the peer if we had heard from the peer and abort with RX_CALL_TIMEOUT if we hadn't. (4) Abort with RX_CALL_DEAD if we try to disconnect a call that's not completed successfully or been aborted. Reported-by: Jeffrey Altman Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- fs/afs/rxrpc.c | 8 +++++--- net/rxrpc/call_event.c | 4 ++-- net/rxrpc/conn_object.c | 2 +- 3 files changed, 8 insertions(+), 6 deletions(-) diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c index 8be709cb8542..efe0fb3ad8bd 100644 --- a/fs/afs/rxrpc.c +++ b/fs/afs/rxrpc.c @@ -572,6 +572,8 @@ static void afs_deliver_to_call(struct afs_call *call) case -ENODATA: case -EBADMSG: case -EMSGSIZE: + case -ENOMEM: + case -EFAULT: abort_code = RXGEN_CC_UNMARSHAL; if (state != AFS_CALL_CL_AWAIT_REPLY) abort_code = RXGEN_SS_UNMARSHAL; @@ -579,7 +581,7 @@ static void afs_deliver_to_call(struct afs_call *call) abort_code, ret, "KUM"); goto local_abort; default: - abort_code = RX_USER_ABORT; + abort_code = RX_CALL_DEAD; rxrpc_kernel_abort_call(call->net->socket, call->rxcall, abort_code, ret, "KER"); goto local_abort; @@ -871,7 +873,7 @@ void afs_send_empty_reply(struct afs_call *call) case -ENOMEM: _debug("oom"); rxrpc_kernel_abort_call(net->socket, call->rxcall, - RX_USER_ABORT, -ENOMEM, "KOO"); + RXGEN_SS_MARSHAL, -ENOMEM, "KOO"); fallthrough; default: _leave(" [error]"); @@ -913,7 +915,7 @@ void afs_send_simple_reply(struct afs_call *call, const void *buf, size_t len) if (n == -ENOMEM) { _debug("oom"); rxrpc_kernel_abort_call(net->socket, call->rxcall, - RX_USER_ABORT, -ENOMEM, "KOO"); + RXGEN_SS_MARSHAL, -ENOMEM, "KOO"); } _leave(" [error]"); } diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index 22e05de5d1ca..e426f6831aab 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -377,9 +377,9 @@ void rxrpc_process_call(struct work_struct *work) if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) && (int)call->conn->hi_serial - (int)call->rx_serial > 0) { trace_rxrpc_call_reset(call); - rxrpc_abort_call("EXP", call, 0, RX_USER_ABORT, -ECONNRESET); + rxrpc_abort_call("EXP", call, 0, RX_CALL_DEAD, -ECONNRESET); } else { - rxrpc_abort_call("EXP", call, 0, RX_USER_ABORT, -ETIME); + rxrpc_abort_call("EXP", call, 0, RX_CALL_TIMEOUT, -ETIME); } set_bit(RXRPC_CALL_EV_ABORT, &call->events); goto recheck_state; diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c index 3bcbe0665f91..3ef05a0e90ad 100644 --- a/net/rxrpc/conn_object.c +++ b/net/rxrpc/conn_object.c @@ -184,7 +184,7 @@ void __rxrpc_disconnect_call(struct rxrpc_connection *conn, chan->last_type = RXRPC_PACKET_TYPE_ABORT; break; default: - chan->last_abort = RX_USER_ABORT; + chan->last_abort = RX_CALL_DEAD; chan->last_type = RXRPC_PACKET_TYPE_ABORT; break; } From 340cf91293a3e6477daef1d883ace10c327d7cc9 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Fri, 20 May 2022 12:56:05 -0700 Subject: [PATCH 0406/1842] eth: tg3: silence the GCC 12 array-bounds warning MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 9dec850fd7c210a04b4707df8e6c95bfafdd6a4b ] GCC 12 currently generates a rather inconsistent warning: drivers/net/ethernet/broadcom/tg3.c:17795:51: warning: array subscript 5 is above array bounds of ‘struct tg3_napi[5]’ [-Warray-bounds] 17795 | struct tg3_napi *tnapi = &tp->napi[i]; | ~~~~~~~~^~~ i is guaranteed < tp->irq_max which in turn is either 1 or 5. There are more loops like this one in the driver, but strangely GCC 12 dislikes only this single one. Silence this silliness for now. Signed-off-by: Jakub Kicinski Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/broadcom/Makefile | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ethernet/broadcom/Makefile b/drivers/net/ethernet/broadcom/Makefile index 7046ad6d3d0e..ac50da49ca77 100644 --- a/drivers/net/ethernet/broadcom/Makefile +++ b/drivers/net/ethernet/broadcom/Makefile @@ -16,3 +16,8 @@ obj-$(CONFIG_BGMAC_BCMA) += bgmac-bcma.o bgmac-bcma-mdio.o obj-$(CONFIG_BGMAC_PLATFORM) += bgmac-platform.o obj-$(CONFIG_SYSTEMPORT) += bcmsysport.o obj-$(CONFIG_BNXT) += bnxt/ + +# FIXME: temporarily silence -Warray-bounds on non W=1+ builds +ifndef KBUILD_EXTRA_WARN +CFLAGS_tg3.o += -Wno-array-bounds +endif From 92ef7a87192cf69d856debdd501d6e6d32bcf320 Mon Sep 17 00:00:00 2001 From: Yonghong Song Date: Mon, 23 May 2022 08:20:44 -0700 Subject: [PATCH 0407/1842] selftests/bpf: fix btf_dump/btf_dump due to recent clang change [ Upstream commit 4050764cbaa25760aab40857f723393c07898474 ] Latest llvm-project upstream had a change of behavior related to qualifiers on function return type ([1]). This caused selftests btf_dump/btf_dump failure. The following example shows what changed. $ cat t.c typedef const char * const (* const (* const fn_ptr_arr2_t[5])())(char * (*)(int)); struct t { int a; fn_ptr_arr2_t l; }; int foo(struct t *arg) { return arg->a; } Compiled with latest upstream llvm15, $ clang -O2 -g -target bpf -S -emit-llvm t.c The related generated debuginfo IR looks like: !16 = !DIDerivedType(tag: DW_TAG_typedef, name: "fn_ptr_arr2_t", file: !1, line: 1, baseType: !17) !17 = !DICompositeType(tag: DW_TAG_array_type, baseType: !18, size: 320, elements: !32) !18 = !DIDerivedType(tag: DW_TAG_const_type, baseType: !19) !19 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !20, size: 64) !20 = !DISubroutineType(types: !21) !21 = !{!22, null} !22 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !23, size: 64) !23 = !DISubroutineType(types: !24) !24 = !{!25, !28} !25 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !26, size: 64) !26 = !DIDerivedType(tag: DW_TAG_const_type, baseType: !27) !27 = !DIBasicType(name: "char", size: 8, encoding: DW_ATE_signed_char) You can see two intermediate const qualifier to pointer are dropped in debuginfo IR. With llvm14, we have following debuginfo IR: !16 = !DIDerivedType(tag: DW_TAG_typedef, name: "fn_ptr_arr2_t", file: !1, line: 1, baseType: !17) !17 = !DICompositeType(tag: DW_TAG_array_type, baseType: !18, size: 320, elements: !34) !18 = !DIDerivedType(tag: DW_TAG_const_type, baseType: !19) !19 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !20, size: 64) !20 = !DISubroutineType(types: !21) !21 = !{!22, null} !22 = !DIDerivedType(tag: DW_TAG_const_type, baseType: !23) !23 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !24, size: 64) !24 = !DISubroutineType(types: !25) !25 = !{!26, !30} !26 = !DIDerivedType(tag: DW_TAG_const_type, baseType: !27) !27 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !28, size: 64) !28 = !DIDerivedType(tag: DW_TAG_const_type, baseType: !29) !29 = !DIBasicType(name: "char", size: 8, encoding: DW_ATE_signed_char) All const qualifiers are preserved. To adapt the selftest to both old and new llvm, this patch removed the intermediate const qualifier in const-to-ptr types, to make the test succeed again. 
[1] https://reviews.llvm.org/D125919 Reported-by: Mykola Lysenko Signed-off-by: Yonghong Song Link: https://lore.kernel.org/r/20220523152044.3905809-1-yhs@fb.com Signed-off-by: Alexei Starovoitov Signed-off-by: Sasha Levin --- tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c index 31975c96e2c9..fe43556e1a61 100644 --- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c +++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c @@ -94,7 +94,7 @@ typedef void (* (*signal_t)(int, void (*)(int)))(int); typedef char * (*fn_ptr_arr1_t[10])(int **); -typedef char * (* const (* const fn_ptr_arr2_t[5])())(char * (*)(int)); +typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int)); struct struct_w_typedefs { int_t a; From 6a2e275834c4d08f31e4e5e981e68b516990aeab Mon Sep 17 00:00:00 2001 From: Bob Peterson Date: Fri, 11 Feb 2022 10:50:08 -0500 Subject: [PATCH 0408/1842] gfs2: use i_lock spin_lock for inode qadata [ Upstream commit 5fcff61eea9efd1f4b60e89d2d686b5feaea100f ] Before this patch, functions gfs2_qa_get and _put used the i_rw_mutex to prevent simultaneous access to its i_qadata. But i_rw_mutex is now used for many other things, including iomap_begin and end, which causes a conflict according to lockdep. We cannot just remove the lock since simultaneous opens (gfs2_open -> gfs2_open_common -> gfs2_qa_get) can then stomp on each others values for i_qadata. This patch solves the conflict by using the i_lock spin_lock in the inode to prevent simultaneous access. Signed-off-by: Bob Peterson Signed-off-by: Andreas Gruenbacher Signed-off-by: Sasha Levin --- fs/gfs2/quota.c | 32 ++++++++++++++++++++------------ 1 file changed, 20 insertions(+), 12 deletions(-) diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c index 6e173ae378c4..ad953ecb5853 100644 --- a/fs/gfs2/quota.c +++ b/fs/gfs2/quota.c @@ -531,34 +531,42 @@ static void qdsb_put(struct gfs2_quota_data *qd) */ int gfs2_qa_get(struct gfs2_inode *ip) { - int error = 0; struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); + struct inode *inode = &ip->i_inode; if (sdp->sd_args.ar_quota == GFS2_QUOTA_OFF) return 0; - down_write(&ip->i_rw_mutex); + spin_lock(&inode->i_lock); if (ip->i_qadata == NULL) { - ip->i_qadata = kmem_cache_zalloc(gfs2_qadata_cachep, GFP_NOFS); - if (!ip->i_qadata) { - error = -ENOMEM; - goto out; - } + struct gfs2_qadata *tmp; + + spin_unlock(&inode->i_lock); + tmp = kmem_cache_zalloc(gfs2_qadata_cachep, GFP_NOFS); + if (!tmp) + return -ENOMEM; + + spin_lock(&inode->i_lock); + if (ip->i_qadata == NULL) + ip->i_qadata = tmp; + else + kmem_cache_free(gfs2_qadata_cachep, tmp); } ip->i_qadata->qa_ref++; -out: - up_write(&ip->i_rw_mutex); - return error; + spin_unlock(&inode->i_lock); + return 0; } void gfs2_qa_put(struct gfs2_inode *ip) { - down_write(&ip->i_rw_mutex); + struct inode *inode = &ip->i_inode; + + spin_lock(&inode->i_lock); if (ip->i_qadata && --ip->i_qadata->qa_ref == 0) { kmem_cache_free(gfs2_qadata_cachep, ip->i_qadata); ip->i_qadata = NULL; } - up_write(&ip->i_rw_mutex); + spin_unlock(&inode->i_lock); } int gfs2_quota_hold(struct gfs2_inode *ip, kuid_t uid, kgid_t gid) From 968a6683761d3c1199c788ec0c20a3437a28ac78 Mon Sep 17 00:00:00 2001 From: Niels Dossche Date: Mon, 28 Feb 2022 20:51:44 +0100 Subject: [PATCH 0409/1842] IB/rdmavt: add missing locks in rvt_ruc_loopback [ Upstream commit 
22cbc6c2681a0a4fe76150270426e763d52353a4 ] The documentation of the function rvt_error_qp says both r_lock and s_lock need to be held when calling that function. It also asserts using lockdep that both of those locks are held. rvt_error_qp is called form rvt_send_cq, which is called from rvt_qp_complete_swqe, which is called from rvt_send_complete, which is called from rvt_ruc_loopback in two places. Both of these places do not hold r_lock. Fix this by acquiring a spin_lock of r_lock in both of these places. The r_lock acquiring cannot be added in rvt_qp_complete_swqe because some of its other callers already have r_lock acquired. Link: https://lore.kernel.org/r/20220228195144.71946-1-dossche.niels@gmail.com Signed-off-by: Niels Dossche Signed-off-by: Jason Gunthorpe Signed-off-by: Sasha Levin --- drivers/infiniband/sw/rdmavt/qp.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c index d8d52a00a1be..585a9c76e518 100644 --- a/drivers/infiniband/sw/rdmavt/qp.c +++ b/drivers/infiniband/sw/rdmavt/qp.c @@ -2826,7 +2826,7 @@ void rvt_qp_iter(struct rvt_dev_info *rdi, EXPORT_SYMBOL(rvt_qp_iter); /* - * This should be called with s_lock held. + * This should be called with s_lock and r_lock held. */ void rvt_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe, enum ib_wc_status status) @@ -3185,7 +3185,9 @@ void rvt_ruc_loopback(struct rvt_qp *sqp) rvp->n_loop_pkts++; flush_send: sqp->s_rnr_retry = sqp->s_rnr_retry_cnt; + spin_lock(&sqp->r_lock); rvt_send_complete(sqp, wqe, send_status); + spin_unlock(&sqp->r_lock); if (local_ops) { atomic_dec(&sqp->local_ops_pending); local_ops = 0; @@ -3239,7 +3241,9 @@ void rvt_ruc_loopback(struct rvt_qp *sqp) spin_unlock_irqrestore(&qp->r_lock, flags); serr_no_r_lock: spin_lock_irqsave(&sqp->s_lock, flags); + spin_lock(&sqp->r_lock); rvt_send_complete(sqp, wqe, send_status); + spin_unlock(&sqp->r_lock); if (sqp->ibqp.qp_type == IB_QPT_RC) { int lastwqe; From 0521c5297885518eb5b380dd5f20b4298e6399d7 Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Thu, 7 Apr 2022 21:29:59 +0200 Subject: [PATCH 0410/1842] ARM: dts: ox820: align interrupt controller node name with dtschema [ Upstream commit fbcd5ad7a419ad40644a0bb8b4152bc660172d8a ] Fixes dtbs_check warnings like: gic@1000: $nodename:0: 'gic@1000' does not match '^interrupt-controller(@[0-9a-f,]+)*$' Signed-off-by: Krzysztof Kozlowski Acked-by: Neil Armstrong Link: https://lore.kernel.org/r/20220317115705.450427-1-krzysztof.kozlowski@canonical.com Signed-off-by: Sasha Levin --- arch/arm/boot/dts/ox820.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/ox820.dtsi b/arch/arm/boot/dts/ox820.dtsi index 90846a7655b4..dde4364892bf 100644 --- a/arch/arm/boot/dts/ox820.dtsi +++ b/arch/arm/boot/dts/ox820.dtsi @@ -287,7 +287,7 @@ local-timer@600 { clocks = <&armclk>; }; - gic: gic@1000 { + gic: interrupt-controller@1000 { compatible = "arm,arm11mp-gic"; interrupt-controller; #interrupt-cells = <3>; From c400439adc36e2ae4cd293657e27a3304ca22c25 Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Sun, 27 Mar 2022 11:08:54 -0700 Subject: [PATCH 0411/1842] ARM: dts: s5pv210: align DMA channels with dtschema [ Upstream commit 9e916fb9bc3d16066286f19fc9c51d26a6aec6bd ] dtschema expects DMA channels in specific order (tx, rx and tx-sec). 
The order actually should not matter because dma-names is used however let's make it aligned with dtschema to suppress warnings like: i2s@eee30000: dma-names: ['rx', 'tx', 'tx-sec'] is not valid under any of the given schemas Signed-off-by: Krzysztof Kozlowski Co-developed-by: Jonathan Bakker Signed-off-by: Jonathan Bakker Link: https://lore.kernel.org/r/CY4PR04MB056779A9C50DC95987C5272ACB1C9@CY4PR04MB0567.namprd04.prod.outlook.com Signed-off-by: Krzysztof Kozlowski Signed-off-by: Sasha Levin --- arch/arm/boot/dts/s5pv210-aries.dtsi | 2 +- arch/arm/boot/dts/s5pv210.dtsi | 12 ++++++------ 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi index 986fa0b1a877..d9b4c51a00d9 100644 --- a/arch/arm/boot/dts/s5pv210-aries.dtsi +++ b/arch/arm/boot/dts/s5pv210-aries.dtsi @@ -637,7 +637,7 @@ touchscreen@4a { }; &i2s0 { - dmas = <&pdma0 9>, <&pdma0 10>, <&pdma0 11>; + dmas = <&pdma0 10>, <&pdma0 9>, <&pdma0 11>; status = "okay"; }; diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi index 2871351ab907..eb7e3660ada7 100644 --- a/arch/arm/boot/dts/s5pv210.dtsi +++ b/arch/arm/boot/dts/s5pv210.dtsi @@ -240,8 +240,8 @@ i2s0: i2s@eee30000 { reg = <0xeee30000 0x1000>; interrupt-parent = <&vic2>; interrupts = <16>; - dma-names = "rx", "tx", "tx-sec"; - dmas = <&pdma1 9>, <&pdma1 10>, <&pdma1 11>; + dma-names = "tx", "rx", "tx-sec"; + dmas = <&pdma1 10>, <&pdma1 9>, <&pdma1 11>; clock-names = "iis", "i2s_opclk0", "i2s_opclk1"; @@ -260,8 +260,8 @@ i2s1: i2s@e2100000 { reg = <0xe2100000 0x1000>; interrupt-parent = <&vic2>; interrupts = <17>; - dma-names = "rx", "tx"; - dmas = <&pdma1 12>, <&pdma1 13>; + dma-names = "tx", "rx"; + dmas = <&pdma1 13>, <&pdma1 12>; clock-names = "iis", "i2s_opclk0"; clocks = <&clocks CLK_I2S1>, <&clocks SCLK_AUDIO1>; pinctrl-names = "default"; @@ -275,8 +275,8 @@ i2s2: i2s@e2a00000 { reg = <0xe2a00000 0x1000>; interrupt-parent = <&vic2>; interrupts = <18>; - dma-names = "rx", "tx"; - dmas = <&pdma1 14>, <&pdma1 15>; + dma-names = "tx", "rx"; + dmas = <&pdma1 15>, <&pdma1 14>; clock-names = "iis", "i2s_opclk0"; clocks = <&clocks CLK_I2S2>, <&clocks SCLK_AUDIO2>; pinctrl-names = "default"; From 7e391ec939666dd9894f066c8d62cb5c6fb77e23 Mon Sep 17 00:00:00 2001 From: Konrad Dybcio Date: Sat, 19 Mar 2022 18:46:43 +0100 Subject: [PATCH 0412/1842] arm64: dts: qcom: msm8994: Fix BLSP[12]_DMA channels count [ Upstream commit 1ae438d26b620979ed004d559c304d31c42173ae ] MSM8994 actually features 24 DMA channels for each BLSP, fix it! 
Signed-off-by: Konrad Dybcio Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220319174645.340379-14-konrad.dybcio@somainline.org Signed-off-by: Sasha Levin --- arch/arm64/boot/dts/qcom/msm8994.dtsi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi index 45f9a44326a6..297408b947ff 100644 --- a/arch/arm64/boot/dts/qcom/msm8994.dtsi +++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi @@ -316,7 +316,7 @@ blsp1_dma: dma@f9904000 { #dma-cells = <1>; qcom,ee = <0>; qcom,controlled-remotely; - num-channels = <18>; + num-channels = <24>; qcom,num-ees = <4>; }; @@ -412,7 +412,7 @@ blsp2_dma: dma@f9944000 { #dma-cells = <1>; qcom,ee = <0>; qcom,controlled-remotely; - num-channels = <18>; + num-channels = <24>; qcom,num-ees = <4>; }; From 86b091b6894c449d2734de7aa7d79ccb33ffd97d Mon Sep 17 00:00:00 2001 From: Brian Norris Date: Tue, 8 Mar 2022 11:08:59 -0800 Subject: [PATCH 0413/1842] PM / devfreq: rk3399_dmc: Disable edev on remove() [ Upstream commit 2fccf9e6050e0e3b8b4cd275d41daf7f7fa22804 ] Otherwise we hit an unablanced enable-count when unbinding the DFI device: [ 1279.659119] ------------[ cut here ]------------ [ 1279.659179] WARNING: CPU: 2 PID: 5638 at drivers/devfreq/devfreq-event.c:360 devfreq_event_remove_edev+0x84/0x8c ... [ 1279.659352] Hardware name: Google Kevin (DT) [ 1279.659363] pstate: 80400005 (Nzcv daif +PAN -UAO -TCO BTYPE=--) [ 1279.659371] pc : devfreq_event_remove_edev+0x84/0x8c [ 1279.659380] lr : devm_devfreq_event_release+0x1c/0x28 ... [ 1279.659571] Call trace: [ 1279.659582] devfreq_event_remove_edev+0x84/0x8c [ 1279.659590] devm_devfreq_event_release+0x1c/0x28 [ 1279.659602] release_nodes+0x1cc/0x244 [ 1279.659611] devres_release_all+0x44/0x60 [ 1279.659621] device_release_driver_internal+0x11c/0x1ac [ 1279.659629] device_driver_detach+0x20/0x2c [ 1279.659641] unbind_store+0x7c/0xb0 [ 1279.659650] drv_attr_store+0x2c/0x40 [ 1279.659663] sysfs_kf_write+0x44/0x58 [ 1279.659672] kernfs_fop_write_iter+0xf4/0x190 [ 1279.659684] vfs_write+0x2b0/0x2e4 [ 1279.659693] ksys_write+0x80/0xec [ 1279.659701] __arm64_sys_write+0x24/0x30 [ 1279.659714] el0_svc_common+0xf0/0x1d8 [ 1279.659724] do_el0_svc_compat+0x28/0x3c [ 1279.659738] el0_svc_compat+0x10/0x1c [ 1279.659746] el0_sync_compat_handler+0xa8/0xcc [ 1279.659758] el0_sync_compat+0x188/0x1c0 [ 1279.659768] ---[ end trace cec200e5094155b4 ]--- Signed-off-by: Brian Norris Signed-off-by: Chanwoo Choi Signed-off-by: Sasha Levin --- drivers/devfreq/rk3399_dmc.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c index 2e912166a993..7e52375d9818 100644 --- a/drivers/devfreq/rk3399_dmc.c +++ b/drivers/devfreq/rk3399_dmc.c @@ -485,6 +485,8 @@ static int rk3399_dmcfreq_remove(struct platform_device *pdev) { struct rk3399_dmcfreq *dmcfreq = dev_get_drvdata(&pdev->dev); + devfreq_event_disable_edev(dmcfreq->edev); + /* * Before remove the opp table we need to unregister the opp notifier. */ From 0f9091f202b38a5678a098e0e19de92370636378 Mon Sep 17 00:00:00 2001 From: Gilad Ben-Yossef Date: Wed, 6 Apr 2022 11:11:39 +0300 Subject: [PATCH 0414/1842] crypto: ccree - use fine grained DMA mapping dir [ Upstream commit a260436c98171cd825955a84a7f6e62bc8f4f00d ] Use a fine grained specification of DMA mapping directions in certain cases, allowing both a more optimized operation as well as shushing out a harmless, though persky dma-debug warning. 
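The general rule the patch applies is worth spelling out: map each buffer with the narrowest direction that covers its use, and unmap with the same direction it was mapped with. A minimal pattern sketch built on the ordinary dma_map_sg()/dma_unmap_sg() API (function and variable names here are invented; this is not the ccree code):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int map_cipher_buffers(struct device *dev,
                                  struct scatterlist *src, int src_nents,
                                  struct scatterlist *dst, int dst_nents)
    {
            /* In-place operation: the one buffer is both read and written. */
            if (src == dst)
                    return dma_map_sg(dev, src, src_nents,
                                      DMA_BIDIRECTIONAL) ? 0 : -ENOMEM;

            /* Distinct buffers: the device only reads src and only writes dst. */
            if (!dma_map_sg(dev, src, src_nents, DMA_TO_DEVICE))
                    return -ENOMEM;
            if (!dma_map_sg(dev, dst, dst_nents, DMA_FROM_DEVICE)) {
                    dma_unmap_sg(dev, src, src_nents, DMA_TO_DEVICE);
                    return -ENOMEM;
            }
            return 0;
    }

Besides quieting dma-debug, the narrower directions let the DMA layer skip cache maintenance on non-coherent systems that DMA_BIDIRECTIONAL would force on both map and unmap.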
Signed-off-by: Gilad Ben-Yossef Reported-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin --- drivers/crypto/ccree/cc_buffer_mgr.c | 27 +++++++++++++++------------ 1 file changed, 15 insertions(+), 12 deletions(-) diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c index 11e0278c8631..6140e4927322 100644 --- a/drivers/crypto/ccree/cc_buffer_mgr.c +++ b/drivers/crypto/ccree/cc_buffer_mgr.c @@ -356,12 +356,14 @@ void cc_unmap_cipher_request(struct device *dev, void *ctx, req_ctx->mlli_params.mlli_dma_addr); } - dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL); - dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src)); - if (src != dst) { - dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_BIDIRECTIONAL); + dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_TO_DEVICE); + dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_FROM_DEVICE); dev_dbg(dev, "Unmapped req->dst=%pK\n", sg_virt(dst)); + dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src)); + } else { + dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL); + dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src)); } } @@ -377,6 +379,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx, u32 dummy = 0; int rc = 0; u32 mapped_nents = 0; + int src_direction = (src != dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL); req_ctx->dma_buf_type = CC_DMA_BUF_DLLI; mlli_params->curr_pool = NULL; @@ -399,7 +402,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx, } /* Map the src SGL */ - rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents, + rc = cc_map_sg(dev, src, nbytes, src_direction, &req_ctx->in_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents); if (rc) goto cipher_exit; @@ -416,7 +419,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx, } } else { /* Map the dst sg */ - rc = cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL, + rc = cc_map_sg(dev, dst, nbytes, DMA_FROM_DEVICE, &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents); if (rc) @@ -456,6 +459,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req) struct aead_req_ctx *areq_ctx = aead_request_ctx(req); unsigned int hw_iv_size = areq_ctx->hw_iv_size; struct cc_drvdata *drvdata = dev_get_drvdata(dev); + int src_direction = (req->src != req->dst ? 
DMA_TO_DEVICE : DMA_BIDIRECTIONAL); if (areq_ctx->mac_buf_dma_addr) { dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr, @@ -514,13 +518,11 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req) sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents, areq_ctx->assoclen, req->cryptlen); - dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, - DMA_BIDIRECTIONAL); + dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, src_direction); if (req->src != req->dst) { dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n", sg_virt(req->dst)); - dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, - DMA_BIDIRECTIONAL); + dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, DMA_FROM_DEVICE); } if (drvdata->coherent && areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT && @@ -843,7 +845,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata, else size_for_map -= authsize; - rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL, + rc = cc_map_sg(dev, req->dst, size_for_map, DMA_FROM_DEVICE, &areq_ctx->dst.mapped_nents, LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes, &dst_mapped_nents); @@ -1056,7 +1058,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req) size_to_map += authsize; } - rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL, + rc = cc_map_sg(dev, req->src, size_to_map, + (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL), &areq_ctx->src.mapped_nents, (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES + LLI_MAX_NUM_OF_DATA_ENTRIES), From 05efc4591f80582b6fe53366b70b6a35a42fd255 Mon Sep 17 00:00:00 2001 From: QintaoShen Date: Thu, 24 Mar 2022 15:44:03 +0800 Subject: [PATCH 0415/1842] soc: ti: ti_sci_pm_domains: Check for null return of devm_kcalloc [ Upstream commit ba56291e297d28aa6eb82c5c1964fae2d7594746 ] The allocation funciton devm_kcalloc may fail and return a null pointer, which would cause a null-pointer dereference later. It might be better to check it and directly return -ENOMEM just like the usage of devm_kcalloc in previous code. Signed-off-by: QintaoShen Signed-off-by: Nishanth Menon Link: https://lore.kernel.org/r/1648107843-29077-1-git-send-email-unSimple1993@163.com Signed-off-by: Sasha Levin --- drivers/soc/ti/ti_sci_pm_domains.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/soc/ti/ti_sci_pm_domains.c b/drivers/soc/ti/ti_sci_pm_domains.c index 8afb3f45d263..a33ec7eaf23d 100644 --- a/drivers/soc/ti/ti_sci_pm_domains.c +++ b/drivers/soc/ti/ti_sci_pm_domains.c @@ -183,6 +183,8 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev) devm_kcalloc(dev, max_id + 1, sizeof(*pd_provider->data.domains), GFP_KERNEL); + if (!pd_provider->data.domains) + return -ENOMEM; pd_provider->data.num_domains = max_id + 1; pd_provider->data.xlate = ti_sci_pd_xlate; From 9dfa8d087bb854f613fcdbf1af4fb02c0b2d1e4f Mon Sep 17 00:00:00 2001 From: Zixuan Fu Date: Mon, 11 Apr 2022 18:45:34 +0800 Subject: [PATCH 0416/1842] fs: jfs: fix possible NULL pointer dereference in dbFree() [ Upstream commit 0d4837fdb796f99369cf7691d33de1b856bcaf1f ] In our fault-injection testing, the variable "nblocks" in dbFree() can be zero when kmalloc_array() fails in dtSearch(). In this case, the variable "mp" in dbFree() would be NULL and then it is dereferenced in "write_metapage(mp)". The failure log is listed as follows: [ 13.824137] BUG: kernel NULL pointer dereference, address: 0000000000000020 ... 
[ 13.827416] RIP: 0010:dbFree+0x5f7/0x910 [jfs] [ 13.834341] Call Trace: [ 13.834540] [ 13.834713] txFreeMap+0x7b4/0xb10 [jfs] [ 13.835038] txUpdateMap+0x311/0x650 [jfs] [ 13.835375] jfs_lazycommit+0x5f2/0xc70 [jfs] [ 13.835726] ? sched_dynamic_update+0x1b0/0x1b0 [ 13.836092] kthread+0x3c2/0x4a0 [ 13.836355] ? txLockFree+0x160/0x160 [jfs] [ 13.836763] ? kthread_unuse_mm+0x160/0x160 [ 13.837106] ret_from_fork+0x1f/0x30 [ 13.837402] ... This patch adds a NULL check of "mp" before "write_metapage(mp)" is called. Reported-by: TOTE Robot Signed-off-by: Zixuan Fu Signed-off-by: Dave Kleikamp Signed-off-by: Sasha Levin --- fs/jfs/jfs_dmap.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c index e58ae29a223d..0ce17ea8fa8a 100644 --- a/fs/jfs/jfs_dmap.c +++ b/fs/jfs/jfs_dmap.c @@ -385,7 +385,8 @@ int dbFree(struct inode *ip, s64 blkno, s64 nblocks) } /* write the last buffer. */ - write_metapage(mp); + if (mp) + write_metapage(mp); IREAD_UNLOCK(ipbmap); From 039966775ca0a1b5bf767b26994030c2a3139ec3 Mon Sep 17 00:00:00 2001 From: Janusz Krzysztofik Date: Sun, 10 Apr 2022 15:07:54 +0200 Subject: [PATCH 0417/1842] ARM: OMAP1: clock: Fix UART rate reporting algorithm [ Upstream commit 338d5d476cde853dfd97378d20496baabc2ce3c0 ] Since its introduction to the mainline kernel, omap1_uart_recalc() helper makes incorrect use of clk->enable_bit as a ready to use bitmap mask while it only provides the bit number. Fix it. Signed-off-by: Janusz Krzysztofik Acked-by: Tony Lindgren Signed-off-by: Arnd Bergmann Signed-off-by: Sasha Levin --- arch/arm/mach-omap1/clock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/mach-omap1/clock.c b/arch/arm/mach-omap1/clock.c index bd5be82101f3..d89bda12bf3c 100644 --- a/arch/arm/mach-omap1/clock.c +++ b/arch/arm/mach-omap1/clock.c @@ -41,7 +41,7 @@ static DEFINE_SPINLOCK(clockfw_lock); unsigned long omap1_uart_recalc(struct clk *clk) { unsigned int val = __raw_readl(clk->enable_reg); - return val & clk->enable_bit ? 48000000 : 12000000; + return val & 1 << clk->enable_bit ? 48000000 : 12000000; } unsigned long omap1_sossi_recalc(struct clk *clk) From f20c7cd2b24ce0889b951849a9073a270c908348 Mon Sep 17 00:00:00 2001 From: Hari Bathini Date: Wed, 21 Apr 2021 23:20:52 +0530 Subject: [PATCH 0418/1842] powerpc/fadump: Fix fadump to work with a different endian capture kernel [ Upstream commit b74196af372f7cb4902179009265fe63ac81824f ] Dump capture would fail if capture kernel is not of the endianess as the production kernel, because the in-memory data structure (struct opal_fadump_mem_struct) shared across production kernel and capture kernel assumes the same endianess for both the kernels, which doesn't have to be true always. Fix it by having a well-defined endianess for struct opal_fadump_mem_struct. 
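The mechanics of a "well-defined endianness" are worth spelling out: the shared structure's fields become fixed-endian types, and every reader and writer converts at the point of access. A small sketch of that idiom with invented field names (the real structure is opal_fadump_mem_struct, changed in the diff below):

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Layout shared between production and capture kernels: always
     * stored big-endian, regardless of the running kernel's endianness. */
    struct shared_dump_desc {
            __be16 region_cnt;
            __be64 dest_addr;
    } __attribute__((packed));

    static void shared_dump_add_region(struct shared_dump_desc *d, u64 dest)
    {
            u16 cnt = be16_to_cpu(d->region_cnt);

            d->dest_addr  = cpu_to_be64(dest);
            d->region_cnt = cpu_to_be16(cnt + 1);
    }

sparse can then flag any direct read or write of the __be16/__be64 fields that skips the conversion helpers.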
Signed-off-by: Hari Bathini Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/161902744901.86147.14719228311655123526.stgit@hbathini Signed-off-by: Sasha Levin --- arch/powerpc/platforms/powernv/opal-fadump.c | 94 +++++++++++--------- arch/powerpc/platforms/powernv/opal-fadump.h | 10 +-- 2 files changed, 57 insertions(+), 47 deletions(-) diff --git a/arch/powerpc/platforms/powernv/opal-fadump.c b/arch/powerpc/platforms/powernv/opal-fadump.c index 9a360ced663b..e23a51a05f99 100644 --- a/arch/powerpc/platforms/powernv/opal-fadump.c +++ b/arch/powerpc/platforms/powernv/opal-fadump.c @@ -60,7 +60,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) addr = be64_to_cpu(addr); pr_debug("Kernel metadata addr: %llx\n", addr); opal_fdm_active = (void *)addr; - if (opal_fdm_active->registered_regions == 0) + if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) return; ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_BOOT_MEM, &addr); @@ -95,17 +95,17 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf); static void opal_fadump_update_config(struct fw_dump *fadump_conf, const struct opal_fadump_mem_struct *fdm) { - pr_debug("Boot memory regions count: %d\n", fdm->region_cnt); + pr_debug("Boot memory regions count: %d\n", be16_to_cpu(fdm->region_cnt)); /* * The destination address of the first boot memory region is the * destination address of boot memory regions. */ - fadump_conf->boot_mem_dest_addr = fdm->rgn[0].dest; + fadump_conf->boot_mem_dest_addr = be64_to_cpu(fdm->rgn[0].dest); pr_debug("Destination address of boot memory regions: %#016llx\n", fadump_conf->boot_mem_dest_addr); - fadump_conf->fadumphdr_addr = fdm->fadumphdr_addr; + fadump_conf->fadumphdr_addr = be64_to_cpu(fdm->fadumphdr_addr); } /* @@ -126,9 +126,9 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, fadump_conf->boot_memory_size = 0; pr_debug("Boot memory regions:\n"); - for (i = 0; i < fdm->region_cnt; i++) { - base = fdm->rgn[i].src; - size = fdm->rgn[i].size; + for (i = 0; i < be16_to_cpu(fdm->region_cnt); i++) { + base = be64_to_cpu(fdm->rgn[i].src); + size = be64_to_cpu(fdm->rgn[i].size); pr_debug("\t[%03d] base: 0x%lx, size: 0x%lx\n", i, base, size); fadump_conf->boot_mem_addr[i] = base; @@ -143,7 +143,7 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, * Start address of reserve dump area (permanent reservation) for * re-registering FADump after dump capture. */ - fadump_conf->reserve_dump_area_start = fdm->rgn[0].dest; + fadump_conf->reserve_dump_area_start = be64_to_cpu(fdm->rgn[0].dest); /* * Rarely, but it can so happen that system crashes before all @@ -155,13 +155,14 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, * Hope the memory that could not be preserved only has pages * that are usually filtered out while saving the vmcore. 
*/ - if (fdm->region_cnt > fdm->registered_regions) { + if (be16_to_cpu(fdm->region_cnt) > be16_to_cpu(fdm->registered_regions)) { pr_warn("Not all memory regions were saved!!!\n"); pr_warn(" Unsaved memory regions:\n"); - i = fdm->registered_regions; - while (i < fdm->region_cnt) { + i = be16_to_cpu(fdm->registered_regions); + while (i < be16_to_cpu(fdm->region_cnt)) { pr_warn("\t[%03d] base: 0x%llx, size: 0x%llx\n", - i, fdm->rgn[i].src, fdm->rgn[i].size); + i, be64_to_cpu(fdm->rgn[i].src), + be64_to_cpu(fdm->rgn[i].size)); i++; } @@ -170,7 +171,7 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, } fadump_conf->boot_mem_top = (fadump_conf->boot_memory_size + hole_size); - fadump_conf->boot_mem_regs_cnt = fdm->region_cnt; + fadump_conf->boot_mem_regs_cnt = be16_to_cpu(fdm->region_cnt); opal_fadump_update_config(fadump_conf, fdm); } @@ -178,35 +179,38 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, static void opal_fadump_init_metadata(struct opal_fadump_mem_struct *fdm) { fdm->version = OPAL_FADUMP_VERSION; - fdm->region_cnt = 0; - fdm->registered_regions = 0; - fdm->fadumphdr_addr = 0; + fdm->region_cnt = cpu_to_be16(0); + fdm->registered_regions = cpu_to_be16(0); + fdm->fadumphdr_addr = cpu_to_be64(0); } static u64 opal_fadump_init_mem_struct(struct fw_dump *fadump_conf) { u64 addr = fadump_conf->reserve_dump_area_start; + u16 reg_cnt; int i; opal_fdm = __va(fadump_conf->kernel_metadata); opal_fadump_init_metadata(opal_fdm); /* Boot memory regions */ + reg_cnt = be16_to_cpu(opal_fdm->region_cnt); for (i = 0; i < fadump_conf->boot_mem_regs_cnt; i++) { - opal_fdm->rgn[i].src = fadump_conf->boot_mem_addr[i]; - opal_fdm->rgn[i].dest = addr; - opal_fdm->rgn[i].size = fadump_conf->boot_mem_sz[i]; + opal_fdm->rgn[i].src = cpu_to_be64(fadump_conf->boot_mem_addr[i]); + opal_fdm->rgn[i].dest = cpu_to_be64(addr); + opal_fdm->rgn[i].size = cpu_to_be64(fadump_conf->boot_mem_sz[i]); - opal_fdm->region_cnt++; + reg_cnt++; addr += fadump_conf->boot_mem_sz[i]; } + opal_fdm->region_cnt = cpu_to_be16(reg_cnt); /* * Kernel metadata is passed to f/w and retrieved in capture kerenl. * So, use it to save fadump header address instead of calculating it. */ - opal_fdm->fadumphdr_addr = (opal_fdm->rgn[0].dest + - fadump_conf->boot_memory_size); + opal_fdm->fadumphdr_addr = cpu_to_be64(be64_to_cpu(opal_fdm->rgn[0].dest) + + fadump_conf->boot_memory_size); opal_fadump_update_config(fadump_conf, opal_fdm); @@ -269,18 +273,21 @@ static u64 opal_fadump_get_bootmem_min(void) static int opal_fadump_register(struct fw_dump *fadump_conf) { s64 rc = OPAL_PARAMETER; + u16 registered_regs; int i, err = -EIO; - for (i = 0; i < opal_fdm->region_cnt; i++) { + registered_regs = be16_to_cpu(opal_fdm->registered_regions); + for (i = 0; i < be16_to_cpu(opal_fdm->region_cnt); i++) { rc = opal_mpipl_update(OPAL_MPIPL_ADD_RANGE, - opal_fdm->rgn[i].src, - opal_fdm->rgn[i].dest, - opal_fdm->rgn[i].size); + be64_to_cpu(opal_fdm->rgn[i].src), + be64_to_cpu(opal_fdm->rgn[i].dest), + be64_to_cpu(opal_fdm->rgn[i].size)); if (rc != OPAL_SUCCESS) break; - opal_fdm->registered_regions++; + registered_regs++; } + opal_fdm->registered_regions = cpu_to_be16(registered_regs); switch (rc) { case OPAL_SUCCESS: @@ -291,7 +298,8 @@ static int opal_fadump_register(struct fw_dump *fadump_conf) case OPAL_RESOURCE: /* If MAX regions limit in f/w is hit, warn and proceed. 
*/ pr_warn("%d regions could not be registered for MPIPL as MAX limit is reached!\n", - (opal_fdm->region_cnt - opal_fdm->registered_regions)); + (be16_to_cpu(opal_fdm->region_cnt) - + be16_to_cpu(opal_fdm->registered_regions))); fadump_conf->dump_registered = 1; err = 0; break; @@ -312,7 +320,7 @@ static int opal_fadump_register(struct fw_dump *fadump_conf) * If some regions were registered before OPAL_MPIPL_ADD_RANGE * OPAL call failed, unregister all regions. */ - if ((err < 0) && (opal_fdm->registered_regions > 0)) + if ((err < 0) && (be16_to_cpu(opal_fdm->registered_regions) > 0)) opal_fadump_unregister(fadump_conf); return err; @@ -328,7 +336,7 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf) return -EIO; } - opal_fdm->registered_regions = 0; + opal_fdm->registered_regions = cpu_to_be16(0); fadump_conf->dump_registered = 0; return 0; } @@ -563,19 +571,20 @@ static void opal_fadump_region_show(struct fw_dump *fadump_conf, else fdm_ptr = opal_fdm; - for (i = 0; i < fdm_ptr->region_cnt; i++) { + for (i = 0; i < be16_to_cpu(fdm_ptr->region_cnt); i++) { /* * Only regions that are registered for MPIPL * would have dump data. */ if ((fadump_conf->dump_active) && - (i < fdm_ptr->registered_regions)) - dumped_bytes = fdm_ptr->rgn[i].size; + (i < be16_to_cpu(fdm_ptr->registered_regions))) + dumped_bytes = be64_to_cpu(fdm_ptr->rgn[i].size); seq_printf(m, "DUMP: Src: %#016llx, Dest: %#016llx, ", - fdm_ptr->rgn[i].src, fdm_ptr->rgn[i].dest); + be64_to_cpu(fdm_ptr->rgn[i].src), + be64_to_cpu(fdm_ptr->rgn[i].dest)); seq_printf(m, "Size: %#llx, Dumped: %#llx bytes\n", - fdm_ptr->rgn[i].size, dumped_bytes); + be64_to_cpu(fdm_ptr->rgn[i].size), dumped_bytes); } /* Dump is active. Show reserved area start address. */ @@ -624,6 +633,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) { const __be32 *prop; unsigned long dn; + __be64 be_addr; u64 addr = 0; int i, len; s64 ret; @@ -680,13 +690,13 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) if (!prop) return; - ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &addr); - if ((ret != OPAL_SUCCESS) || !addr) { + ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &be_addr); + if ((ret != OPAL_SUCCESS) || !be_addr) { pr_err("Failed to get Kernel metadata (%lld)\n", ret); return; } - addr = be64_to_cpu(addr); + addr = be64_to_cpu(be_addr); pr_debug("Kernel metadata addr: %llx\n", addr); opal_fdm_active = __va(addr); @@ -697,14 +707,14 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) } /* Kernel regions not registered with f/w for MPIPL */ - if (opal_fdm_active->registered_regions == 0) { + if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) { opal_fdm_active = NULL; return; } - ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &addr); - if (addr) { - addr = be64_to_cpu(addr); + ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &be_addr); + if (be_addr) { + addr = be64_to_cpu(be_addr); pr_debug("CPU metadata addr: %llx\n", addr); opal_cpu_metadata = __va(addr); } diff --git a/arch/powerpc/platforms/powernv/opal-fadump.h b/arch/powerpc/platforms/powernv/opal-fadump.h index f1e9ecf548c5..3f715efb0aa6 100644 --- a/arch/powerpc/platforms/powernv/opal-fadump.h +++ b/arch/powerpc/platforms/powernv/opal-fadump.h @@ -31,14 +31,14 @@ * OPAL FADump kernel metadata * * The address of this structure will be registered with f/w for retrieving - * and processing during crash dump. + * in the capture kernel to process the crash dump. 
*/ struct opal_fadump_mem_struct { u8 version; u8 reserved[3]; - u16 region_cnt; /* number of regions */ - u16 registered_regions; /* Regions registered for MPIPL */ - u64 fadumphdr_addr; + __be16 region_cnt; /* number of regions */ + __be16 registered_regions; /* Regions registered for MPIPL */ + __be64 fadumphdr_addr; struct opal_mpipl_region rgn[FADUMP_MAX_MEM_REGS]; } __packed; @@ -135,7 +135,7 @@ static inline void opal_fadump_read_regs(char *bufp, unsigned int regs_cnt, for (i = 0; i < regs_cnt; i++, bufp += reg_entry_size) { reg_entry = (struct hdat_fadump_reg_entry *)bufp; val = (cpu_endian ? be64_to_cpu(reg_entry->reg_val) : - reg_entry->reg_val); + (u64)(reg_entry->reg_val)); opal_fadump_set_regval_regnum(regs, be32_to_cpu(reg_entry->reg_type), be32_to_cpu(reg_entry->reg_num), From c16f1b3d72e484c3a9580ac07a852ad77d5ca8f4 Mon Sep 17 00:00:00 2001 From: OGAWA Hirofumi Date: Fri, 29 Apr 2022 14:38:02 -0700 Subject: [PATCH 0419/1842] fat: add ratelimit to fat*_ent_bread() [ Upstream commit 183c3237c928109d2008c0456dff508baf692b20 ] fat*_ent_bread() can be the cause of too many report on I/O error path. So use fat_msg_ratelimit() instead. Link: https://lkml.kernel.org/r/87bkxogfeq.fsf@mail.parknet.co.jp Signed-off-by: OGAWA Hirofumi Reported-by: qianfan Tested-by: qianfan Signed-off-by: Andrew Morton Signed-off-by: Sasha Levin --- fs/fat/fatent.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c index f7e3304b7802..353735032947 100644 --- a/fs/fat/fatent.c +++ b/fs/fat/fatent.c @@ -93,7 +93,8 @@ static int fat12_ent_bread(struct super_block *sb, struct fat_entry *fatent, err_brelse: brelse(bhs[0]); err: - fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)", (llu)blocknr); + fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)", + (llu)blocknr); return -EIO; } @@ -106,8 +107,8 @@ static int fat_ent_bread(struct super_block *sb, struct fat_entry *fatent, fatent->fat_inode = MSDOS_SB(sb)->fat_inode; fatent->bhs[0] = sb_bread(sb, blocknr); if (!fatent->bhs[0]) { - fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)", - (llu)blocknr); + fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)", + (llu)blocknr); return -EIO; } fatent->nr_bhs = 1; From b646e0cfeb38bf5f1944fd548f1dfa9b129fa00c Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Fri, 29 Apr 2022 16:26:37 +0800 Subject: [PATCH 0420/1842] pinctrl: renesas: rzn1: Fix possible null-ptr-deref in sh_pfc_map_resources() [ Upstream commit 2f661477c2bb8068194dbba9738d05219f111c6e ] It will cause null-ptr-deref when using 'res', if platform_get_resource() returns NULL, so move using 'res' after devm_ioremap_resource() that will check it to avoid null-ptr-deref. And use devm_platform_get_and_ioremap_resource() to simplify code. 
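For reference, the combined helper both looks up and maps the resource and returns an ERR_PTR on any failure, so the struct resource pointer is only touched once it is known to be valid. A stripped-down sketch of the resulting probe pattern (names invented, offset mirrors the rzn1 driver):

    #include <linux/platform_device.h>
    #include <linux/err.h>
    #include <linux/io.h>

    static int example_map_regs(struct platform_device *pdev,
                                void __iomem **base, u32 *protect_phys)
    {
            struct resource *res;
            void __iomem *regs;

            regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
            if (IS_ERR(regs))
                    return PTR_ERR(regs);

            *base = regs;
            /* Safe: res was filled in by the successful lookup above. */
            *protect_phys = (u32)res->start + 0x400;
            return 0;
    }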
Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220429082637.1308182-2-yangyingliang@huawei.com Signed-off-by: Geert Uytterhoeven Signed-off-by: Sasha Levin --- drivers/pinctrl/renesas/pinctrl-rzn1.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/drivers/pinctrl/renesas/pinctrl-rzn1.c b/drivers/pinctrl/renesas/pinctrl-rzn1.c index ef5fb25b6016..849d091205d4 100644 --- a/drivers/pinctrl/renesas/pinctrl-rzn1.c +++ b/drivers/pinctrl/renesas/pinctrl-rzn1.c @@ -865,17 +865,15 @@ static int rzn1_pinctrl_probe(struct platform_device *pdev) ipctl->mdio_func[0] = -1; ipctl->mdio_func[1] = -1; - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - ipctl->lev1_protect_phys = (u32)res->start + 0x400; - ipctl->lev1 = devm_ioremap_resource(&pdev->dev, res); + ipctl->lev1 = devm_platform_get_and_ioremap_resource(pdev, 0, &res); if (IS_ERR(ipctl->lev1)) return PTR_ERR(ipctl->lev1); + ipctl->lev1_protect_phys = (u32)res->start + 0x400; - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); - ipctl->lev2_protect_phys = (u32)res->start + 0x400; - ipctl->lev2 = devm_ioremap_resource(&pdev->dev, res); + ipctl->lev2 = devm_platform_get_and_ioremap_resource(pdev, 1, &res); if (IS_ERR(ipctl->lev2)) return PTR_ERR(ipctl->lev2); + ipctl->lev2_protect_phys = (u32)res->start + 0x400; ipctl->clk = devm_clk_get(&pdev->dev, NULL); if (IS_ERR(ipctl->clk)) From d146e2a9864ade19914494de3fb520390b415d58 Mon Sep 17 00:00:00 2001 From: Peng Wu Date: Fri, 29 Apr 2022 01:03:56 +0200 Subject: [PATCH 0421/1842] ARM: versatile: Add missing of_node_put in dcscb_init [ Upstream commit 23b44f9c649bbef10b45fa33080cd8b4166800ae ] The device_node pointer is returned by of_find_compatible_node with refcount incremented. We should use of_node_put() to avoid the refcount leak. Signed-off-by: Peng Wu Signed-off-by: Linus Walleij Link: https://lore.kernel.org/r/20220428230356.69418-1-linus.walleij@linaro.org' Signed-off-by: Arnd Bergmann Signed-off-by: Sasha Levin --- arch/arm/mach-vexpress/dcscb.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c index a0554d7d04f7..e1adc098f89a 100644 --- a/arch/arm/mach-vexpress/dcscb.c +++ b/arch/arm/mach-vexpress/dcscb.c @@ -144,6 +144,7 @@ static int __init dcscb_init(void) if (!node) return -ENODEV; dcscb_base = of_iomap(node, 0); + of_node_put(node); if (!dcscb_base) return -EADDRNOTAVAIL; cfg = readl_relaxed(dcscb_base + DCS_CFG_R); From d2b3b380c16402758625fccdaca0c4510ee27179 Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Tue, 26 Apr 2022 20:34:43 +0200 Subject: [PATCH 0422/1842] ARM: dts: exynos: add atmel,24c128 fallback to Samsung EEPROM [ Upstream commit f038e8186fbc5723d7d38c6fa1d342945107347e ] The Samsung s524ad0xd1 EEPROM should use atmel,24c128 fallback, according to the AT24 EEPROM bindings. 
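The reason the fallback string matters: the OF core binds a node to any driver whose match table contains at least one of the strings in the node's compatible list, so keeping the vendor-specific string first and the generic one second gives the generic EEPROM driver something to match on. Illustrative match table only (not the at24 driver's actual table):

    #include <linux/mod_devicetable.h>

    static const struct of_device_id example_eeprom_of_match[] = {
            { .compatible = "atmel,24c128" },   /* generic fallback */
            { /* sentinel */ }
    };

A node declaring compatible = "samsung,s524ad0xd1", "atmel,24c128" therefore still binds to a driver that only lists the generic string.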
Reported-by: Rob Herring Signed-off-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220426183443.243113-1-krzysztof.kozlowski@linaro.org Signed-off-by: Sasha Levin --- arch/arm/boot/dts/exynos5250-smdk5250.dts | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts index 572198b6834e..06c4e0996503 100644 --- a/arch/arm/boot/dts/exynos5250-smdk5250.dts +++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts @@ -129,7 +129,7 @@ &i2c_0 { samsung,i2c-max-bus-freq = <20000>; eeprom@50 { - compatible = "samsung,s524ad0xd1"; + compatible = "samsung,s524ad0xd1", "atmel,24c128"; reg = <0x50>; }; @@ -289,7 +289,7 @@ &i2c_1 { samsung,i2c-max-bus-freq = <20000>; eeprom@51 { - compatible = "samsung,s524ad0xd1"; + compatible = "samsung,s524ad0xd1", "atmel,24c128"; reg = <0x51>; }; From 21a3effe446dd6dc5eed7fe897c2f9b88c9a5d6d Mon Sep 17 00:00:00 2001 From: Peng Wu Date: Thu, 28 Apr 2022 10:43:06 +0000 Subject: [PATCH 0423/1842] ARM: hisi: Add missing of_node_put after of_find_compatible_node [ Upstream commit 9bc72e47d4630d58a840a66a869c56b29554cfe4 ] of_find_compatible_node will increment the refcount of the returned device_node. Calling of_node_put() to avoid the refcount leak Signed-off-by: Peng Wu Signed-off-by: Wei Xu Signed-off-by: Sasha Levin --- arch/arm/mach-hisi/platsmp.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/arm/mach-hisi/platsmp.c b/arch/arm/mach-hisi/platsmp.c index da7a09c1dae5..1cd1d9b0aabf 100644 --- a/arch/arm/mach-hisi/platsmp.c +++ b/arch/arm/mach-hisi/platsmp.c @@ -67,14 +67,17 @@ static void __init hi3xxx_smp_prepare_cpus(unsigned int max_cpus) } ctrl_base = of_iomap(np, 0); if (!ctrl_base) { + of_node_put(np); pr_err("failed to map address\n"); return; } if (of_property_read_u32(np, "smp-offset", &offset) < 0) { + of_node_put(np); pr_err("failed to find smp-offset property\n"); return; } ctrl_base += offset; + of_node_put(np); } } @@ -160,6 +163,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle) if (WARN_ON(!node)) return -1; ctrl_base = of_iomap(node, 0); + of_node_put(node); /* set the secondary core boot from DDR */ remap_reg_value = readl_relaxed(ctrl_base + REG_SC_CTRL); From eff3587b9c01439b738298475e555c028ac9f55e Mon Sep 17 00:00:00 2001 From: Yicong Yang Date: Mon, 4 Apr 2022 14:25:39 +0800 Subject: [PATCH 0424/1842] PCI: Avoid pci_dev_lock() AB/BA deadlock with sriov_numvfs_store() [ Upstream commit a91ee0e9fca9d7501286cfbced9b30a33e52740a ] The sysfs sriov_numvfs_store() path acquires the device lock before the config space access lock: sriov_numvfs_store device_lock # A (1) acquire device lock sriov_configure vfio_pci_sriov_configure # (for example) vfio_pci_core_sriov_configure pci_disable_sriov sriov_disable pci_cfg_access_lock pci_wait_cfg # B (4) wait for dev->block_cfg_access == 0 Previously, pci_dev_lock() acquired the config space access lock before the device lock: pci_dev_lock pci_cfg_access_lock dev->block_cfg_access = 1 # B (2) set dev->block_cfg_access = 1 device_lock # A (3) wait for device lock Any path that uses pci_dev_lock(), e.g., pci_reset_function(), may deadlock with sriov_numvfs_store() if the operations occur in the sequence (1) (2) (3) (4). Avoid the deadlock by reversing the order in pci_dev_lock() so it acquires the device lock before the config space access lock, the same as the sriov_numvfs_store() path. 
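The underlying rule is the usual one for AB/BA deadlocks: once any path takes lock A before lock B, every other path must use the same order. A self-contained sketch with invented locks (the real ones are the device lock and the config-space access lock):

    #include <linux/mutex.h>

    static DEFINE_MUTEX(lock_a);    /* stands in for the device lock */
    static DEFINE_MUTEX(lock_b);    /* stands in for the cfg access lock */

    static void path_one(void)
    {
            mutex_lock(&lock_a);
            mutex_lock(&lock_b);
            /* ... work ... */
            mutex_unlock(&lock_b);
            mutex_unlock(&lock_a);
    }

    static void path_two(void)
    {
            /* Same order as path_one(); reversing it reintroduces the
             * window where each thread holds one lock and waits forever
             * for the other. */
            mutex_lock(&lock_a);
            mutex_lock(&lock_b);
            /* ... work ... */
            mutex_unlock(&lock_b);
            mutex_unlock(&lock_a);
    }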
[bhelgaas: combined and adapted commit log from Jay Zhou's independent subsequent posting: https://lore.kernel.org/r/20220404062539.1710-1-jianjay.zhou@huawei.com] Link: https://lore.kernel.org/linux-pci/1583489997-17156-1-git-send-email-yangyicong@hisilicon.com/ Also-posted-by: Jay Zhou Signed-off-by: Yicong Yang Signed-off-by: Bjorn Helgaas Signed-off-by: Sasha Levin --- drivers/pci/pci.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index cda17c615148..6ebbe06f0b08 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -4975,18 +4975,18 @@ static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe) static void pci_dev_lock(struct pci_dev *dev) { - pci_cfg_access_lock(dev); /* block PM suspend, driver probe, etc. */ device_lock(&dev->dev); + pci_cfg_access_lock(dev); } /* Return 1 on successful lock, 0 on contention */ static int pci_dev_trylock(struct pci_dev *dev) { - if (pci_cfg_access_trylock(dev)) { - if (device_trylock(&dev->dev)) + if (device_trylock(&dev->dev)) { + if (pci_cfg_access_trylock(dev)) return 1; - pci_cfg_access_unlock(dev); + device_unlock(&dev->dev); } return 0; @@ -4994,8 +4994,8 @@ static int pci_dev_trylock(struct pci_dev *dev) static void pci_dev_unlock(struct pci_dev *dev) { - device_unlock(&dev->dev); pci_cfg_access_unlock(dev); + device_unlock(&dev->dev); } static void pci_dev_save_and_disable(struct pci_dev *dev) From 5a3767ac79bccbedb11a15ca8e990666e4b03576 Mon Sep 17 00:00:00 2001 From: Vasily Averin Date: Wed, 11 May 2022 12:46:53 +0300 Subject: [PATCH 0425/1842] tracing: incorrect isolate_mote_t cast in mm_vmscan_lru_isolate [ Upstream commit 2b132903de7124dd9a758be0c27562e91a510848 ] Fixes following sparse warnings: CHECK mm/vmscan.c mm/vmscan.c: note: in included file (through include/trace/trace_events.h, include/trace/define_trace.h, include/trace/events/vmscan.h): ./include/trace/events/vmscan.h:281:1: sparse: warning: cast to restricted isolate_mode_t ./include/trace/events/vmscan.h:281:1: sparse: warning: restricted isolate_mode_t degrades to integer Link: https://lkml.kernel.org/r/e85d7ff2-fd10-53f8-c24e-ba0458439c1b@openvz.org Signed-off-by: Vasily Averin Acked-by: Steven Rostedt (Google) Signed-off-by: Andrew Morton Signed-off-by: Sasha Levin --- include/trace/events/vmscan.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h index 2070df64958e..b4feeb4b216a 100644 --- a/include/trace/events/vmscan.h +++ b/include/trace/events/vmscan.h @@ -283,7 +283,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate, __field(unsigned long, nr_scanned) __field(unsigned long, nr_skipped) __field(unsigned long, nr_taken) - __field(isolate_mode_t, isolate_mode) + __field(unsigned int, isolate_mode) __field(int, lru) ), @@ -294,7 +294,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate, __entry->nr_scanned = nr_scanned; __entry->nr_skipped = nr_skipped; __entry->nr_taken = nr_taken; - __entry->isolate_mode = isolate_mode; + __entry->isolate_mode = (__force unsigned int)isolate_mode; __entry->lru = lru; ), From 8a665c2791bcc0c14449c18d7d99f31e81d79c28 Mon Sep 17 00:00:00 2001 From: Haren Myneni Date: Sat, 9 Apr 2022 01:44:16 -0700 Subject: [PATCH 0426/1842] powerpc/powernv/vas: Assign real address to rx_fifo in vas_rx_win_attr [ Upstream commit c127d130f6d59fa81701f6b04023cf7cd1972fb3 ] In init_winctx_regs(), __pa() is called on winctx->rx_fifo and this function is called to initialize registers for receive and fault windows. 
But the real address is passed in winctx->rx_fifo for receive windows and the virtual address for fault windows which causes errors with DEBUG_VIRTUAL enabled. Fixes this issue by assigning only real address to rx_fifo in vas_rx_win_attr struct for both receive and fault windows. Reported-by: Michael Ellerman Signed-off-by: Haren Myneni Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/338e958c7ab8f3b266fa794a1f80f99b9671829e.camel@linux.ibm.com Signed-off-by: Sasha Levin --- arch/powerpc/include/asm/vas.h | 2 +- arch/powerpc/platforms/powernv/vas-fault.c | 2 +- arch/powerpc/platforms/powernv/vas-window.c | 4 ++-- arch/powerpc/platforms/powernv/vas.h | 2 +- drivers/crypto/nx/nx-common-powernv.c | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/powerpc/include/asm/vas.h b/arch/powerpc/include/asm/vas.h index e33f80b0ea81..47062b457049 100644 --- a/arch/powerpc/include/asm/vas.h +++ b/arch/powerpc/include/asm/vas.h @@ -52,7 +52,7 @@ enum vas_cop_type { * Receive window attributes specified by the (in-kernel) owner of window. */ struct vas_rx_win_attr { - void *rx_fifo; + u64 rx_fifo; int rx_fifo_size; int wcreds_max; diff --git a/arch/powerpc/platforms/powernv/vas-fault.c b/arch/powerpc/platforms/powernv/vas-fault.c index 3d21fce254b7..dd9c23c09781 100644 --- a/arch/powerpc/platforms/powernv/vas-fault.c +++ b/arch/powerpc/platforms/powernv/vas-fault.c @@ -352,7 +352,7 @@ int vas_setup_fault_window(struct vas_instance *vinst) vas_init_rx_win_attr(&attr, VAS_COP_TYPE_FAULT); attr.rx_fifo_size = vinst->fault_fifo_size; - attr.rx_fifo = vinst->fault_fifo; + attr.rx_fifo = __pa(vinst->fault_fifo); /* * Max creds is based on number of CRBs can fit in the FIFO. diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c index 7ba0840fc3b5..3a86cdd5ae6c 100644 --- a/arch/powerpc/platforms/powernv/vas-window.c +++ b/arch/powerpc/platforms/powernv/vas-window.c @@ -403,7 +403,7 @@ static void init_winctx_regs(struct vas_window *window, * * See also: Design note in function header. */ - val = __pa(winctx->rx_fifo); + val = winctx->rx_fifo; val = SET_FIELD(VAS_PAGE_MIGRATION_SELECT, val, 0); write_hvwc_reg(window, VREG(LFIFO_BAR), val); @@ -737,7 +737,7 @@ static void init_winctx_for_rxwin(struct vas_window *rxwin, */ winctx->fifo_disable = true; winctx->intr_disable = true; - winctx->rx_fifo = NULL; + winctx->rx_fifo = 0; } winctx->lnotify_lpid = rxattr->lnotify_lpid; diff --git a/arch/powerpc/platforms/powernv/vas.h b/arch/powerpc/platforms/powernv/vas.h index 70f793e8f6cc..1f6e73809205 100644 --- a/arch/powerpc/platforms/powernv/vas.h +++ b/arch/powerpc/platforms/powernv/vas.h @@ -383,7 +383,7 @@ struct vas_window { * is a container for the register fields in the window context. 
*/ struct vas_winctx { - void *rx_fifo; + u64 rx_fifo; int rx_fifo_size; int wcreds_max; int rsvd_txbuf_count; diff --git a/drivers/crypto/nx/nx-common-powernv.c b/drivers/crypto/nx/nx-common-powernv.c index 13c65deda8e9..8a4f10bb3fcd 100644 --- a/drivers/crypto/nx/nx-common-powernv.c +++ b/drivers/crypto/nx/nx-common-powernv.c @@ -827,7 +827,7 @@ static int __init vas_cfg_coproc_info(struct device_node *dn, int chip_id, goto err_out; vas_init_rx_win_attr(&rxattr, coproc->ct); - rxattr.rx_fifo = (void *)rx_fifo; + rxattr.rx_fifo = rx_fifo; rxattr.rx_fifo_size = fifo_size; rxattr.lnotify_lpid = lpid; rxattr.lnotify_pid = pid; From 6a61a97106279c2aa16fbbb2a171fd5dde127d23 Mon Sep 17 00:00:00 2001 From: Lv Ruyi Date: Sat, 2 Apr 2022 01:34:19 +0000 Subject: [PATCH 0427/1842] powerpc/xics: fix refcount leak in icp_opal_init() [ Upstream commit 5dd9e27ea4a39f7edd4bf81e9e70208e7ac0b7c9 ] The of_find_compatible_node() function returns a node pointer with refcount incremented, use of_node_put() on it when done. Reported-by: Zeal Robot Signed-off-by: Lv Ruyi Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220402013419.2410298-1-lv.ruyi@zte.com.cn Signed-off-by: Sasha Levin --- arch/powerpc/sysdev/xics/icp-opal.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/powerpc/sysdev/xics/icp-opal.c b/arch/powerpc/sysdev/xics/icp-opal.c index 68fd2540b093..7fa520efcefa 100644 --- a/arch/powerpc/sysdev/xics/icp-opal.c +++ b/arch/powerpc/sysdev/xics/icp-opal.c @@ -195,6 +195,7 @@ int icp_opal_init(void) printk("XICS: Using OPAL ICP fallbacks\n"); + of_node_put(np); return 0; } From 0230055fa6312745f0e22313ddfd1a6e993fae78 Mon Sep 17 00:00:00 2001 From: Lv Ruyi Date: Thu, 7 Apr 2022 09:00:43 +0000 Subject: [PATCH 0428/1842] powerpc/powernv: fix missing of_node_put in uv_init() [ Upstream commit 3ffa9fd471f57f365bc54fc87824c530422f64a5 ] of_find_compatible_node() returns node pointer with refcount incremented, use of_node_put() on it when done. 
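The rule these of_node_put() fixes all follow: of_find_compatible_node() hands back its node with an elevated refcount that the caller owns, so every path after a successful lookup must drop it exactly once. A minimal sketch (invented compatible string, not the powernv code):

    #include <linux/of.h>
    #include <linux/of_address.h>

    static void __iomem *example_map_ctrl(void)
    {
            struct device_node *np;
            void __iomem *base;

            np = of_find_compatible_node(NULL, NULL, "vendor,example-ctrl");
            if (!np)
                    return NULL;

            base = of_iomap(np, 0);
            of_node_put(np);        /* the mapping does not pin the node */
            return base;
    }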
Reported-by: Zeal Robot Signed-off-by: Lv Ruyi Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220407090043.2491854-1-lv.ruyi@zte.com.cn Signed-off-by: Sasha Levin --- arch/powerpc/platforms/powernv/ultravisor.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/powerpc/platforms/powernv/ultravisor.c b/arch/powerpc/platforms/powernv/ultravisor.c index e4a00ad06f9d..67c8c4b2d8b1 100644 --- a/arch/powerpc/platforms/powernv/ultravisor.c +++ b/arch/powerpc/platforms/powernv/ultravisor.c @@ -55,6 +55,7 @@ static int __init uv_init(void) return -ENODEV; uv_memcons = memcons_init(node, "memcons"); + of_node_put(node); if (!uv_memcons) return -ENOENT; From b4e14e9beb5c052fcc9c172b6f906a034bd2d46a Mon Sep 17 00:00:00 2001 From: Finn Thain Date: Thu, 7 Apr 2022 20:11:32 +1000 Subject: [PATCH 0429/1842] macintosh/via-pmu: Fix build failure when CONFIG_INPUT is disabled [ Upstream commit 86ce436e30d86327c9f5260f718104ae7b21f506 ] drivers/macintosh/via-pmu-event.o: In function `via_pmu_event': via-pmu-event.c:(.text+0x44): undefined reference to `input_event' via-pmu-event.c:(.text+0x68): undefined reference to `input_event' via-pmu-event.c:(.text+0x94): undefined reference to `input_event' via-pmu-event.c:(.text+0xb8): undefined reference to `input_event' drivers/macintosh/via-pmu-event.o: In function `via_pmu_event_init': via-pmu-event.c:(.init.text+0x20): undefined reference to `input_allocate_device' via-pmu-event.c:(.init.text+0xc4): undefined reference to `input_register_device' via-pmu-event.c:(.init.text+0xd4): undefined reference to `input_free_device' make[1]: *** [Makefile:1155: vmlinux] Error 1 make: *** [Makefile:350: __build_one_by_one] Error 2 Don't call into the input subsystem unless CONFIG_INPUT is built-in. Reported-by: kernel test robot Signed-off-by: Finn Thain Tested-by: Randy Dunlap Reviewed-by: Christophe Leroy Acked-by: Randy Dunlap Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/5edbe76ce68227f71e09af4614cc4c1bd61c7ec8.1649326292.git.fthain@linux-m68k.org Signed-off-by: Sasha Levin --- drivers/macintosh/Kconfig | 4 ++++ drivers/macintosh/Makefile | 3 ++- drivers/macintosh/via-pmu.c | 2 +- 3 files changed, 7 insertions(+), 2 deletions(-) diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig index 5cdc361da37c..3942db15a2b8 100644 --- a/drivers/macintosh/Kconfig +++ b/drivers/macintosh/Kconfig @@ -67,6 +67,10 @@ config ADB_PMU this device; you should do so if your machine is one of those mentioned above. 
+config ADB_PMU_EVENT + def_bool y + depends on ADB_PMU && INPUT=y + config ADB_PMU_LED bool "Support for the Power/iBook front LED" depends on PPC_PMAC && ADB_PMU diff --git a/drivers/macintosh/Makefile b/drivers/macintosh/Makefile index 49819b1b6f20..712edcb3e0b0 100644 --- a/drivers/macintosh/Makefile +++ b/drivers/macintosh/Makefile @@ -12,7 +12,8 @@ obj-$(CONFIG_MAC_EMUMOUSEBTN) += mac_hid.o obj-$(CONFIG_INPUT_ADBHID) += adbhid.o obj-$(CONFIG_ANSLCD) += ans-lcd.o -obj-$(CONFIG_ADB_PMU) += via-pmu.o via-pmu-event.o +obj-$(CONFIG_ADB_PMU) += via-pmu.o +obj-$(CONFIG_ADB_PMU_EVENT) += via-pmu-event.o obj-$(CONFIG_ADB_PMU_LED) += via-pmu-led.o obj-$(CONFIG_PMAC_BACKLIGHT) += via-pmu-backlight.o obj-$(CONFIG_ADB_CUDA) += via-cuda.o diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c index 73e6ae88fafd..aae6328b2429 100644 --- a/drivers/macintosh/via-pmu.c +++ b/drivers/macintosh/via-pmu.c @@ -1460,7 +1460,7 @@ pmu_handle_data(unsigned char *data, int len) pmu_pass_intr(data, len); /* len == 6 is probably a bad check. But how do I * know what PMU versions send what events here? */ - if (len == 6) { + if (IS_ENABLED(CONFIG_ADB_PMU_EVENT) && len == 6) { via_pmu_event(PMU_EVT_POWER, !!(data[1]&8)); via_pmu_event(PMU_EVT_LID, data[1]&1); } From dfc308d6f29aa28463deb9a12278a85a382385ca Mon Sep 17 00:00:00 2001 From: Peng Wu Date: Mon, 25 Apr 2022 08:12:45 +0000 Subject: [PATCH 0430/1842] powerpc/iommu: Add missing of_node_put in iommu_init_early_dart [ Upstream commit 57b742a5b8945118022973e6416b71351df512fb ] The device_node pointer is returned by of_find_compatible_node with refcount incremented. We should use of_node_put() to avoid the refcount leak. Signed-off-by: Peng Wu Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220425081245.21705-1-wupeng58@huawei.com Signed-off-by: Sasha Levin --- arch/powerpc/sysdev/dart_iommu.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/sysdev/dart_iommu.c b/arch/powerpc/sysdev/dart_iommu.c index 6b4a34b36d98..8ff9bcfe4b8d 100644 --- a/arch/powerpc/sysdev/dart_iommu.c +++ b/arch/powerpc/sysdev/dart_iommu.c @@ -403,9 +403,10 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops) } /* Initialize the DART HW */ - if (dart_init(dn) != 0) + if (dart_init(dn) != 0) { + of_node_put(dn); return; - + } /* * U4 supports a DART bypass, we use it for 64-bit capable devices to * improve performance. However, that only works for devices connected @@ -418,6 +419,7 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops) /* Setup pci_dma ops */ set_pci_dma_ops(&dma_iommu_ops); + of_node_put(dn); } #ifdef CONFIG_PM From cc80d3c37cec9d6ddb140483647901bc7cc6c31d Mon Sep 17 00:00:00 2001 From: Douglas Miller Date: Fri, 20 May 2022 14:37:06 -0400 Subject: [PATCH 0431/1842] RDMA/hfi1: Prevent panic when SDMA is disabled [ Upstream commit 629e052d0c98e46dde9f0824f0aa437f678d9b8f ] If the hfi1 module is loaded with HFI1_CAP_SDMA off, a call to hfi1_write_iter() will dereference a NULL pointer and panic. A typical stack frame is: sdma_select_user_engine [hfi1] hfi1_user_sdma_process_request [hfi1] hfi1_write_iter [hfi1] do_iter_readv_writev do_iter_write vfs_writev do_writev do_syscall_64 The fix is to test for SDMA in hfi1_write_iter() and fail the I/O with EINVAL. 
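The shape of the fix is a plain capability gate at the entry point: if the feature was configured out at load time, fail fast before touching state that was never set up. Sketch only, with an invented flag standing in for HFI1_CAP_IS_KSET(SDMA):

    #include <linux/fs.h>
    #include <linux/uio.h>
    #include <linux/errno.h>

    static bool example_sdma_enabled;   /* cleared when SDMA is disabled */

    static ssize_t example_write_iter(struct kiocb *iocb, struct iov_iter *from)
    {
            if (!example_sdma_enabled)
                    return -EINVAL;     /* no SDMA engines to select from */

            /* ... normal SDMA submission path ... */
            return iov_iter_count(from);
    }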
Link: https://lore.kernel.org/r/20220520183706.48973.79803.stgit@awfm-01.cornelisnetworks.com Signed-off-by: Douglas Miller Signed-off-by: Dennis Dalessandro Signed-off-by: Jason Gunthorpe Signed-off-by: Sasha Levin --- drivers/infiniband/hw/hfi1/file_ops.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c index 329ee4f48d95..cfc2110fc38a 100644 --- a/drivers/infiniband/hw/hfi1/file_ops.c +++ b/drivers/infiniband/hw/hfi1/file_ops.c @@ -306,6 +306,8 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from) unsigned long dim = from->nr_segs; int idx; + if (!HFI1_CAP_IS_KSET(SDMA)) + return -EINVAL; idx = srcu_read_lock(&fd->pq_srcu); pq = srcu_dereference(fd->pq, &fd->pq_srcu); if (!cq || !pq) { From 61bbbde9b6d21f857cc0fce8538f244aa4f449d3 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Sat, 28 May 2022 11:08:48 -0700 Subject: [PATCH 0432/1842] drm: fix EDID struct for old ARM OABI format [ Upstream commit 47f15561b69e226bfc034e94ff6dbec51a4662af ] When building the kernel for arm with the "-mabi=apcs-gnu" option, gcc will force alignment of all structures and unions to a word boundary (see also STRUCTURE_SIZE_BOUNDARY and the "-mstructure-size-boundary=XX" option if you're a gcc person), even when the members of said structures do not want or need said alignment. This completely messes up the structure alignment of 'struct edid' on those targets, because even though all the embedded structures are marked with "__attribute__((packed))", the unions that contain them are not. This was exposed by commit f1e4c916f97f ("drm/edid: add EDID block count and size helpers"), but the bug is pre-existing. That commit just made the structure layout problem cause a build failure due to the addition of the BUILD_BUG_ON(sizeof(*edid) != EDID_LENGTH); sanity check in drivers/gpu/drm/drm_edid.c:edid_block_data(). This legacy union alignment should probably not be used in the first place, but we can fix the layout by adding the packed attribute to the union entries even when each member is already packed and it shouldn't matter in a sane build environment. You can see this issue with a trivial test program: union { struct { char c[5]; }; struct { char d; unsigned e; } __attribute__((packed)); } a = { "1234" }; where building this with a normal "gcc -S" will result in the expected 5-byte size of said union: .type a, @object .size a, 5 but with an ARM compiler and the old ABI: arm-linux-gnu-gcc -mabi=apcs-gnu -mfloat-abi=soft -S t.c you get .type a, %object .size a, 8 instead, because even though each member of the union is packed, the union itself still gets aligned. This was reported by Sudip for the spear3xx_defconfig target. 
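For completeness, the corresponding fix can be demonstrated with the same kind of stand-alone test: packing the union itself stops the old-ABI compiler from rounding its size back up to a word multiple. Illustration only, assuming the same gcc behaviour described above:

    union example {
            struct {
                    char c[5];
            } __attribute__((packed));
            struct {
                    char d;
                    unsigned int e;
            } __attribute__((packed));
    } __attribute__((packed));

    /* Expected to hold with -mabi=apcs-gnu once the union is packed,
     * and trivially on sane ABIs. */
    _Static_assert(sizeof(union example) == 5, "union must stay 5 bytes");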
Link: https://lore.kernel.org/lkml/YpCUzStDnSgQLNFN@debian/ Reported-by: Sudip Mukherjee Acked-by: Arnd Bergmann Cc: Maarten Lankhorst Cc: Maxime Ripard Cc: Thomas Zimmermann Cc: David Airlie Cc: Daniel Vetter Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- include/drm/drm_edid.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h index e97daf6ffbb1..4526b6a1e583 100644 --- a/include/drm/drm_edid.h +++ b/include/drm/drm_edid.h @@ -121,7 +121,7 @@ struct detailed_data_monitor_range { u8 supported_scalings; u8 preferred_refresh; } __attribute__((packed)) cvt; - } formula; + } __attribute__((packed)) formula; } __attribute__((packed)); struct detailed_data_wpindex { @@ -154,7 +154,7 @@ struct detailed_non_pixel { struct detailed_data_wpindex color; struct std_timing timings[6]; struct cvt_timing cvt[4]; - } data; + } __attribute__((packed)) data; } __attribute__((packed)); #define EDID_DETAIL_EST_TIMINGS 0xf7 @@ -172,7 +172,7 @@ struct detailed_timing { union { struct detailed_pixel_timing pixel_data; struct detailed_non_pixel other_data; - } data; + } __attribute__((packed)) data; } __attribute__((packed)); #define DRM_EDID_INPUT_SERRATION_VSYNC (1 << 0) From 3ed327b77d65e3779eeec1a2c71b7eca9261520e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Noralf=20Tr=C3=B8nnes?= Date: Wed, 24 Nov 2021 16:07:52 +0100 Subject: [PATCH 0433/1842] dt-bindings: display: sitronix, st7735r: Fix backlight in example MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 471e201f543559e2cb19b182b680ebf04d80ee31 ] The backlight property was lost during conversion to yaml in commit abdd9e3705c8 ("dt-bindings: display: sitronix,st7735r: Convert to DT schema"). Put it back. Fixes: abdd9e3705c8 ("dt-bindings: display: sitronix,st7735r: Convert to DT schema") Signed-off-by: Noralf Trønnes Acked-by: Rob Herring Reviewed-by: Geert Uytterhoeven Acked-by: David Lechner Link: https://patchwork.freedesktop.org/patch/msgid/20211124150757.17929-2-noralf@tronnes.org Signed-off-by: Sasha Levin --- Documentation/devicetree/bindings/display/sitronix,st7735r.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml b/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml index 0cebaaefda03..419c3b2ac5a6 100644 --- a/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml +++ b/Documentation/devicetree/bindings/display/sitronix,st7735r.yaml @@ -72,6 +72,7 @@ examples: dc-gpios = <&gpio 43 GPIO_ACTIVE_HIGH>; reset-gpios = <&gpio 80 GPIO_ACTIVE_HIGH>; rotation = <270>; + backlight = <&backlight>; }; }; From 822dac24b4f996794c7b8ee9d3cdeb867b6fc0c6 Mon Sep 17 00:00:00 2001 From: Niels Dossche Date: Mon, 21 Mar 2022 12:58:23 +0200 Subject: [PATCH 0434/1842] ath11k: acquire ab->base_lock in unassign when finding the peer by addr [ Upstream commit 2db80f93869d491be57cbc2b36f30d0d3a0e5bde ] ath11k_peer_find_by_addr states via lockdep that ab->base_lock must be held when calling that function in order to protect the list. All callers except ath11k_mac_op_unassign_vif_chanctx have that lock acquired when calling ath11k_peer_find_by_addr. That lock is also not transitively held by a path towards ath11k_mac_op_unassign_vif_chanctx. The solution is to acquire the lock when calling ath11k_peer_find_by_addr inside ath11k_mac_op_unassign_vif_chanctx. 
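The contract being honoured is simply that the peer list is protected by ab->base_lock, so the lookup has to run with the spinlock held and only the result survives past the unlock. A generic sketch of that pattern with invented structures:

    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/etherdevice.h>

    struct example_peer {
            struct list_head list;
            u8 addr[ETH_ALEN];
    };

    static bool example_peer_exists(spinlock_t *lock, struct list_head *peers,
                                    const u8 *addr)
    {
            struct example_peer *peer;
            bool found = false;

            spin_lock_bh(lock);
            list_for_each_entry(peer, peers, list) {
                    if (ether_addr_equal(peer->addr, addr)) {
                            found = true;
                            break;
                    }
            }
            spin_unlock_bh(lock);

            return found;
    }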
I am currently working on a static analyser to detect missing locks and this was a reported case. I manually verified the report by looking at the code, but I do not have real hardware so this is compile tested only. Fixes: 701e48a43e15 ("ath11k: add packet log support for QCA6390") Signed-off-by: Niels Dossche Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220314215253.92658-1-dossche.niels@gmail.com Signed-off-by: Sasha Levin --- drivers/net/wireless/ath/ath11k/mac.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c index cc9122f42024..b2928a5cf72e 100644 --- a/drivers/net/wireless/ath/ath11k/mac.c +++ b/drivers/net/wireless/ath/ath11k/mac.c @@ -5325,6 +5325,7 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw, struct ath11k *ar = hw->priv; struct ath11k_base *ab = ar->ab; struct ath11k_vif *arvif = (void *)vif->drv_priv; + struct ath11k_peer *peer; int ret; mutex_lock(&ar->conf_mutex); @@ -5336,9 +5337,13 @@ ath11k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw, WARN_ON(!arvif->is_started); if (ab->hw_params.vdev_start_delay && - arvif->vdev_type == WMI_VDEV_TYPE_MONITOR && - ath11k_peer_find_by_addr(ab, ar->mac_addr)) - ath11k_peer_delete(ar, arvif->vdev_id, ar->mac_addr); + arvif->vdev_type == WMI_VDEV_TYPE_MONITOR) { + spin_lock_bh(&ab->base_lock); + peer = ath11k_peer_find_by_addr(ab, ar->mac_addr); + spin_unlock_bh(&ab->base_lock); + if (peer) + ath11k_peer_delete(ar, arvif->vdev_id, ar->mac_addr); + } ret = ath11k_mac_vdev_stop(arvif); if (ret) From 0d6dc3efb1fb3379517c20d0ca983a5361bbc747 Mon Sep 17 00:00:00 2001 From: Wenli Looi Date: Sun, 20 Mar 2022 17:30:08 -0600 Subject: [PATCH 0435/1842] ath9k: fix ar9003_get_eepmisc MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 9aaff3864b603408c02c629957ae8d8ff5d5a4f2 ] The current implementation is reading the wrong eeprom type. Fixes: d8ec2e2a63e8 ("ath9k: Add an eeprom_ops callback for retrieving the eepmisc value") Signed-off-by: Wenli Looi Acked-by: Toke Høiland-Jørgensen Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220320233010.123106-5-wlooi@ucalgary.ca Signed-off-by: Sasha Levin --- drivers/net/wireless/ath/ath9k/ar9003_eeprom.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c index b0a4ca3559fd..abed1effd95c 100644 --- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c +++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c @@ -5615,7 +5615,7 @@ unsigned int ar9003_get_paprd_scale_factor(struct ath_hw *ah, static u8 ar9003_get_eepmisc(struct ath_hw *ah) { - return ah->eeprom.map4k.baseEepHeader.eepMisc; + return ah->eeprom.ar9300_eep.baseEepHeader.opCapFlags.eepMisc; } const struct eeprom_ops eep_ar9300_ops = { From 9072d627857d4bb4008d85858fdc5c8f0598cadd Mon Sep 17 00:00:00 2001 From: Jani Nikula Date: Wed, 30 Mar 2022 20:04:26 +0300 Subject: [PATCH 0436/1842] drm/edid: fix invalid EDID extension block filtering MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 3aefc722ff52076407203b6af9713de567993adf ] The invalid EDID block filtering uses the number of valid EDID extensions instead of all EDID extensions for looping the extensions in the copy. This is fine, by coincidence, if all the invalid blocks are at the end of the EDID. 
However, it's completely broken if there are invalid extensions in the middle; the invalid blocks are included and valid blocks are excluded. Fix it by modifying the base block after, not before, the copy. Fixes: 14544d0937bf ("drm/edid: Only print the bad edid when aborting") Reported-by: Ville Syrjälä Signed-off-by: Jani Nikula Reviewed-by: Ville Syrjälä Link: https://patchwork.freedesktop.org/patch/msgid/20220330170426.349248-1-jani.nikula@intel.com Signed-off-by: Sasha Levin --- drivers/gpu/drm/drm_edid.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c index 862e173d3431..4334e466b4e0 100644 --- a/drivers/gpu/drm/drm_edid.c +++ b/drivers/gpu/drm/drm_edid.c @@ -1995,9 +1995,6 @@ struct edid *drm_do_get_edid(struct drm_connector *connector, connector_bad_edid(connector, edid, edid[0x7e] + 1); - edid[EDID_LENGTH-1] += edid[0x7e] - valid_extensions; - edid[0x7e] = valid_extensions; - new = kmalloc_array(valid_extensions + 1, EDID_LENGTH, GFP_KERNEL); if (!new) @@ -2014,6 +2011,9 @@ struct edid *drm_do_get_edid(struct drm_connector *connector, base += EDID_LENGTH; } + new[EDID_LENGTH - 1] += new[0x7e] - valid_extensions; + new[0x7e] = valid_extensions; + kfree(edid); edid = new; } From 187ecfc3b70e2edbe39ef4cc24c5c592cb976dd2 Mon Sep 17 00:00:00 2001 From: Lucas Stach Date: Mon, 21 Mar 2022 11:47:05 +0100 Subject: [PATCH 0437/1842] drm/bridge: adv7511: clean up CEC adapter when probe fails [ Upstream commit 7ed2b0dabf7a22874cb30f8878df239ef638eb53 ] When the probe routine fails we also need to clean up the CEC adapter registered in adv7511_cec_init(). Fixes: 3b1b975003e4 ("drm: adv7511/33: add HDMI CEC support") Signed-off-by: Lucas Stach Reviewed-by: Robert Foss Signed-off-by: Robert Foss Link: https://patchwork.freedesktop.org/patch/msgid/20220321104705.2804423-1-l.stach@pengutronix.de Signed-off-by: Sasha Levin --- drivers/gpu/drm/bridge/adv7511/adv7511_drv.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c index c6f059be4b89..aca2f14f04c2 100644 --- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c +++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c @@ -1306,6 +1306,7 @@ static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id) return 0; err_unregister_cec: + cec_unregister_adapter(adv7511->cec_adap); i2c_unregister_device(adv7511->i2c_cec); if (adv7511->cec_clk) clk_disable_unprepare(adv7511->cec_clk); From 35db6e2e9988a752ba181016551d748db2dce9c2 Mon Sep 17 00:00:00 2001 From: Kuldeep Singh Date: Tue, 29 Mar 2022 00:50:06 +0530 Subject: [PATCH 0438/1842] spi: qcom-qspi: Add minItems to interconnect-names [ Upstream commit e23d86c49a9c78e8dbe3abff20b30812b26ab427 ] Add minItems constraint to interconnect-names as well. The schema currently tries to match 2 names and fail for DTs with single entry. 
With the change applied, below interconnect-names values are possible: ['qspi-config'], ['qspi-config', 'qspi-memory'] Fixes: 8f9c291558ea ("dt-bindings: spi: Add interconnect binding for QSPI") Signed-off-by: Kuldeep Singh Reviewed-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220328192006.18523-1-singh.kuldeep87k@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml | 1 + 1 file changed, 1 insertion(+) diff --git a/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml b/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml index ef5698f426b2..392204a08e96 100644 --- a/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml +++ b/Documentation/devicetree/bindings/spi/qcom,spi-qcom-qspi.yaml @@ -45,6 +45,7 @@ properties: maxItems: 2 interconnect-names: + minItems: 1 items: - const: qspi-config - const: qspi-memory From fb66e0512e5ccc093070e21cf88cce8d98c181b5 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 4 Apr 2022 09:29:01 +0000 Subject: [PATCH 0439/1842] ASoC: mediatek: Fix error handling in mt8173_max98090_dev_probe [ Upstream commit 4f4e0454e226de3bf4efd7e7924d1edc571c52d5 ] Call of_node_put(platform_node) to avoid refcount leak in the error path. Fixes: 94319ba10eca ("ASoC: mediatek: Use platform_of_node for machine drivers") Fixes: 493433785df0 ("ASoC: mediatek: mt8173: fix device_node leak") Signed-off-by: Miaoqian Lin Reviewed-by: AngeloGioacchino Del Regno Link: https://lore.kernel.org/r/20220404092903.26725-1-linmq006@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/mediatek/mt8173/mt8173-max98090.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sound/soc/mediatek/mt8173/mt8173-max98090.c b/sound/soc/mediatek/mt8173/mt8173-max98090.c index 3bdd4931316c..5f39e810e27a 100644 --- a/sound/soc/mediatek/mt8173/mt8173-max98090.c +++ b/sound/soc/mediatek/mt8173/mt8173-max98090.c @@ -167,7 +167,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev) if (!codec_node) { dev_err(&pdev->dev, "Property 'audio-codec' missing or invalid\n"); - return -EINVAL; + ret = -EINVAL; + goto put_platform_node; } for_each_card_prelinks(card, i, dai_link) { if (dai_link->codecs->name) @@ -182,6 +183,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev) __func__, ret); of_node_put(codec_node); + +put_platform_node: of_node_put(platform_node); return ret; } From f279c49f17ce10866087ea6c0c57382158974b63 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 4 Apr 2022 09:35:25 +0000 Subject: [PATCH 0440/1842] ASoC: mediatek: Fix missing of_node_put in mt2701_wm8960_machine_probe [ Upstream commit 05654431a18fe24e5e46a375d98904134628a102 ] This node pointer is returned by of_parse_phandle() with refcount incremented in this function. Calling of_node_put() to avoid the refcount leak. 
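Both of these machine-driver fixes converge on the same unwind idiom: every node reference taken before a failure point is dropped by falling through goto labels in reverse order, and the success path falls through the same labels. A condensed sketch with invented property names:

    #include <linux/of.h>
    #include <linux/platform_device.h>
    #include <linux/errno.h>

    static int example_probe(struct platform_device *pdev)
    {
            struct device_node *platform_node, *codec_node;
            int ret;

            platform_node = of_parse_phandle(pdev->dev.of_node,
                                             "example,platform", 0);
            if (!platform_node)
                    return -EINVAL;

            codec_node = of_parse_phandle(pdev->dev.of_node,
                                          "example,audio-codec", 0);
            if (!codec_node) {
                    ret = -EINVAL;
                    goto put_platform_node;
            }

            ret = 0;    /* ... register the card here ... */

            of_node_put(codec_node);
    put_platform_node:
            of_node_put(platform_node);
            return ret;
    }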
Fixes: 8625c1dbd876 ("ASoC: mediatek: Add mt2701-wm8960 machine driver") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220404093526.30004-1-linmq006@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/mediatek/mt2701/mt2701-wm8960.c | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/sound/soc/mediatek/mt2701/mt2701-wm8960.c b/sound/soc/mediatek/mt2701/mt2701-wm8960.c index 414e422c0eba..70e494fb3da8 100644 --- a/sound/soc/mediatek/mt2701/mt2701-wm8960.c +++ b/sound/soc/mediatek/mt2701/mt2701-wm8960.c @@ -129,7 +129,8 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev) if (!codec_node) { dev_err(&pdev->dev, "Property 'audio-codec' missing or invalid\n"); - return -EINVAL; + ret = -EINVAL; + goto put_platform_node; } for_each_card_prelinks(card, i, dai_link) { if (dai_link->codecs->name) @@ -140,7 +141,7 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev) ret = snd_soc_of_parse_audio_routing(card, "audio-routing"); if (ret) { dev_err(&pdev->dev, "failed to parse audio-routing: %d\n", ret); - return ret; + goto put_codec_node; } ret = devm_snd_soc_register_card(&pdev->dev, card); @@ -148,6 +149,10 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev) dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n", __func__, ret); +put_codec_node: + of_node_put(codec_node); +put_platform_node: + of_node_put(platform_node); return ret; } From 58c7c015771479440ea90dd7e8abd18c600d40d7 Mon Sep 17 00:00:00 2001 From: Ammar Faizi Date: Tue, 29 Mar 2022 17:47:04 +0700 Subject: [PATCH 0441/1842] x86/delay: Fix the wrong asm constraint in delay_loop() [ Upstream commit b86eb74098a92afd789da02699b4b0dd3f73b889 ] The asm constraint does not reflect the fact that the asm statement can modify the value of the local variable loops. Which it does. Specifying the wrong constraint may lead to undefined behavior, it may clobber random stuff (e.g. local variable, important temporary value in regs, etc.). This is especially dangerous when the compiler decides to inline the function and since it doesn't know that the value gets modified, it might decide to use it from a register directly without reloading it. Change the constraint to "+a" to denote that the first argument is an input and an output argument. [ bp: Fix typo, massage commit message. ] Fixes: e01b70ef3eb3 ("x86: fix bug in arch/i386/lib/delay.c file, delay_loop function") Signed-off-by: Ammar Faizi Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20220329104705.65256-2-ammarfaizi2@gnuweeb.org Signed-off-by: Sasha Levin --- arch/x86/lib/delay.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c index 65d15df6212d..0e65d00e2339 100644 --- a/arch/x86/lib/delay.c +++ b/arch/x86/lib/delay.c @@ -54,8 +54,8 @@ static void delay_loop(u64 __loops) " jnz 2b \n" "3: dec %0 \n" - : /* we don't need output */ - :"a" (loops) + : "+a" (loops) + : ); } From 032f8c67fe95608cf51cbf72163f7fa02f8913b3 Mon Sep 17 00:00:00 2001 From: Paul Cercueil Date: Sat, 26 Sep 2020 19:04:55 +0200 Subject: [PATCH 0442/1842] drm/ingenic: Reset pixclock rate when parent clock rate changes [ Upstream commit 33700f6f7d9f6b4e1e6df933ef7fd388889c662c ] Old Ingenic SoCs can overclock very well, up to +50% of their nominal clock rate, whithout requiring overvolting or anything like that, just by changing the rate of the main PLL. 
Unfortunately, all clocks on the system are derived from that PLL, and when the PLL rate is updated, so is our pixel clock. To counter that issue, we make sure that the panel is in VBLANK before the rate change happens, and we will then re-set the pixel clock rate afterwards, once the PLL has been changed, to be as close as possible to the pixel rate requested by the encoder. v2: Add comment about mutex usage Signed-off-by: Paul Cercueil Reviewed-by: Sam Ravnborg Link: https://patchwork.freedesktop.org/patch/msgid/20200926170501.1109197-2-paul@crapouillou.net Signed-off-by: Sasha Levin --- drivers/gpu/drm/ingenic/ingenic-drm-drv.c | 61 ++++++++++++++++++++++- 1 file changed, 60 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c index b6bb5fc7d183..e34718cf5c2e 100644 --- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c +++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -68,6 +69,21 @@ struct ingenic_drm { bool panel_is_sharp; bool no_vblank; + + /* + * clk_mutex is used to synchronize the pixel clock rate update with + * the VBLANK. When the pixel clock's parent clock needs to be updated, + * clock_nb's notifier function will lock the mutex, then wait until the + * next VBLANK. At that point, the parent clock's rate can be updated, + * and the mutex is then unlocked. If an atomic commit happens in the + * meantime, it will lock on the mutex, effectively waiting until the + * clock update process finishes. Finally, the pixel clock's rate will + * be recomputed when the mutex has been released, in the pending atomic + * commit, or a future one. + */ + struct mutex clk_mutex; + bool update_clk_rate; + struct notifier_block clock_nb; }; static const u32 ingenic_drm_primary_formats[] = { @@ -111,6 +127,29 @@ static inline struct ingenic_drm *drm_crtc_get_priv(struct drm_crtc *crtc) return container_of(crtc, struct ingenic_drm, crtc); } +static inline struct ingenic_drm *drm_nb_get_priv(struct notifier_block *nb) +{ + return container_of(nb, struct ingenic_drm, clock_nb); +} + +static int ingenic_drm_update_pixclk(struct notifier_block *nb, + unsigned long action, + void *data) +{ + struct ingenic_drm *priv = drm_nb_get_priv(nb); + + switch (action) { + case PRE_RATE_CHANGE: + mutex_lock(&priv->clk_mutex); + priv->update_clk_rate = true; + drm_crtc_wait_one_vblank(&priv->crtc); + return NOTIFY_OK; + default: + mutex_unlock(&priv->clk_mutex); + return NOTIFY_OK; + } +} + static void ingenic_drm_crtc_atomic_enable(struct drm_crtc *crtc, struct drm_crtc_state *state) { @@ -276,8 +315,14 @@ static void ingenic_drm_crtc_atomic_flush(struct drm_crtc *crtc, if (drm_atomic_crtc_needs_modeset(state)) { ingenic_drm_crtc_update_timings(priv, &state->mode); + priv->update_clk_rate = true; + } + if (priv->update_clk_rate) { + mutex_lock(&priv->clk_mutex); clk_set_rate(priv->pix_clk, state->adjusted_mode.clock * 1000); + priv->update_clk_rate = false; + mutex_unlock(&priv->clk_mutex); } if (event) { @@ -936,16 +981,28 @@ static int ingenic_drm_bind(struct device *dev, bool has_components) if (soc_info->has_osd) regmap_write(priv->map, JZ_REG_LCD_OSDC, JZ_LCD_OSDC_OSDEN); + mutex_init(&priv->clk_mutex); + priv->clock_nb.notifier_call = ingenic_drm_update_pixclk; + + parent_clk = clk_get_parent(priv->pix_clk); + ret = clk_notifier_register(parent_clk, &priv->clock_nb); + if (ret) { + dev_err(dev, "Unable to register clock notifier\n"); + goto 
err_devclk_disable; + } + ret = drm_dev_register(drm, 0); if (ret) { dev_err(dev, "Failed to register DRM driver\n"); - goto err_devclk_disable; + goto err_clk_notifier_unregister; } drm_fbdev_generic_setup(drm, 32); return 0; +err_clk_notifier_unregister: + clk_notifier_unregister(parent_clk, &priv->clock_nb); err_devclk_disable: if (priv->lcd_clk) clk_disable_unprepare(priv->lcd_clk); @@ -967,7 +1024,9 @@ static int compare_of(struct device *dev, void *data) static void ingenic_drm_unbind(struct device *dev) { struct ingenic_drm *priv = dev_get_drvdata(dev); + struct clk *parent_clk = clk_get_parent(priv->pix_clk); + clk_notifier_unregister(parent_clk, &priv->clock_nb); if (priv->lcd_clk) clk_disable_unprepare(priv->lcd_clk); clk_disable_unprepare(priv->pix_clk); From 2268f190af204a33254e3a2d5c7cc8294d53da4e Mon Sep 17 00:00:00 2001 From: Miles Chen Date: Wed, 16 Mar 2022 07:23:00 +0800 Subject: [PATCH 0443/1842] drm/mediatek: Fix mtk_cec_mask() [ Upstream commit 2c5d69b0a141e1e98febe3111e6f4fd8420493a5 ] In current implementation, mtk_cec_mask() writes val into target register and ignores the mask. After talking to our hdmi experts, mtk_cec_mask() should read a register, clean only mask bits, and update (val | mask) bits to the register. Link: https://patchwork.kernel.org/project/linux-mediatek/patch/20220315232301.2434-1-miles.chen@mediatek.com/ Fixes: 8f83f26891e1 ("drm/mediatek: Add HDMI support") Signed-off-by: Miles Chen Reviewed-by: AngeloGioacchino Del Regno Reviewed-by: Matthias Brugger Cc: Zhiqiang Lin Cc: CK Hu Cc: Matthias Brugger Cc: AngeloGioacchino Del Regno Signed-off-by: Chun-Kuang Hu Signed-off-by: Sasha Levin --- drivers/gpu/drm/mediatek/mtk_cec.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/mediatek/mtk_cec.c b/drivers/gpu/drm/mediatek/mtk_cec.c index cb29b649fcdb..12bf93769497 100644 --- a/drivers/gpu/drm/mediatek/mtk_cec.c +++ b/drivers/gpu/drm/mediatek/mtk_cec.c @@ -84,7 +84,7 @@ static void mtk_cec_mask(struct mtk_cec *cec, unsigned int offset, u32 tmp = readl(cec->regs + offset) & ~mask; tmp |= val & mask; - writel(val, cec->regs + offset); + writel(tmp, cec->regs + offset); } void mtk_cec_set_hpd_event(struct device *dev, From 15cec7dfd3df29f93b0f9df4510a36c2c8a7b672 Mon Sep 17 00:00:00 2001 From: Maxime Ripard Date: Mon, 28 Mar 2022 17:36:54 +0200 Subject: [PATCH 0444/1842] drm/vc4: hvs: Reset muxes at probe time [ Upstream commit 8514e6b1f40319e31ac4aa3fbf606796786366c9 ] By default, the HVS driver will force the HVS output 3 to be muxed to the HVS channel 2. However, the Transposer can only be assigned to the HVS channel 2, so whenever we try to use the writeback connector, we'll mux its associated output (Output 2) to the channel 2. This leads to both the output 2 and 3 feeding from the same channel, which is explicitly discouraged in the documentation. In order to avoid this, let's reset all the output muxes to their reset value. 
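Both this change and the mtk_cec fix above come down to the same read-modify-write discipline for register fields; in generic form (base, REG_MUX, MUX_MASK, MUX_SHIFT and new_sel are placeholders):

        u32 val;

        val = readl(base + REG_MUX);                    /* read the current value     */
        val &= ~MUX_MASK;                               /* clear only the mux field   */
        val |= (new_sel << MUX_SHIFT) & MUX_MASK;       /* merge in the new selection */
        writel(val, base + REG_MUX);                    /* write the result back      */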
Fixes: 87ebcd42fb7b ("drm/vc4: crtc: Assign output to channel automatically") Signed-off-by: Maxime Ripard Acked-by: Thomas Zimmermann Link: https://lore.kernel.org/r/20220328153659.2382206-2-maxime@cerno.tech Signed-off-by: Sasha Levin --- drivers/gpu/drm/vc4/vc4_hvs.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c index ad691571d759..95fa6fc052a7 100644 --- a/drivers/gpu/drm/vc4/vc4_hvs.c +++ b/drivers/gpu/drm/vc4/vc4_hvs.c @@ -564,6 +564,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) struct vc4_hvs *hvs = NULL; int ret; u32 dispctrl; + u32 reg; hvs = devm_kzalloc(&pdev->dev, sizeof(*hvs), GFP_KERNEL); if (!hvs) @@ -635,6 +636,26 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) vc4->hvs = hvs; + reg = HVS_READ(SCALER_DISPECTRL); + reg &= ~SCALER_DISPECTRL_DSP2_MUX_MASK; + HVS_WRITE(SCALER_DISPECTRL, + reg | VC4_SET_FIELD(0, SCALER_DISPECTRL_DSP2_MUX)); + + reg = HVS_READ(SCALER_DISPCTRL); + reg &= ~SCALER_DISPCTRL_DSP3_MUX_MASK; + HVS_WRITE(SCALER_DISPCTRL, + reg | VC4_SET_FIELD(3, SCALER_DISPCTRL_DSP3_MUX)); + + reg = HVS_READ(SCALER_DISPEOLN); + reg &= ~SCALER_DISPEOLN_DSP4_MUX_MASK; + HVS_WRITE(SCALER_DISPEOLN, + reg | VC4_SET_FIELD(3, SCALER_DISPEOLN_DSP4_MUX)); + + reg = HVS_READ(SCALER_DISPDITHER); + reg &= ~SCALER_DISPDITHER_DSP5_MUX_MASK; + HVS_WRITE(SCALER_DISPDITHER, + reg | VC4_SET_FIELD(3, SCALER_DISPDITHER_DSP5_MUX)); + dispctrl = HVS_READ(SCALER_DISPCTRL); dispctrl |= SCALER_DISPCTRL_ENABLE; @@ -642,10 +663,6 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) SCALER_DISPCTRL_DISPEIRQ(1) | SCALER_DISPCTRL_DISPEIRQ(2); - /* Set DSP3 (PV1) to use HVS channel 2, which would otherwise - * be unused. - */ - dispctrl &= ~SCALER_DISPCTRL_DSP3_MUX_MASK; dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ | SCALER_DISPCTRL_SLVWREIRQ | SCALER_DISPCTRL_SLVRDEIRQ | @@ -659,7 +676,6 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data) SCALER_DISPCTRL_DSPEISLUR(1) | SCALER_DISPCTRL_DSPEISLUR(2) | SCALER_DISPCTRL_SCLEIRQ); - dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_DSP3_MUX); HVS_WRITE(SCALER_DISPCTRL, dispctrl); From ac904216b8b8afbd9958642ae31479c535cb4645 Mon Sep 17 00:00:00 2001 From: Maxime Ripard Date: Mon, 28 Mar 2022 17:36:55 +0200 Subject: [PATCH 0445/1842] drm/vc4: txp: Don't set TXP_VSTART_AT_EOF [ Upstream commit 234998df929f14d00cbf2f1e81a7facb69fd9266 ] The TXP_VSTART_AT_EOF will generate a second VSTART signal to the HVS. However, the HVS waits for VSTART to enable the FIFO and will thus start filling the FIFO before the start of the frame. This leads to corruption at the beginning of the first frame, and content from the previous frame at the beginning of the next frames. Since one VSTART is enough, let's get rid of it. 
Fixes: 008095e065a8 ("drm/vc4: Add support for the transposer block") Signed-off-by: Maxime Ripard Acked-by: Thomas Zimmermann Link: https://lore.kernel.org/r/20220328153659.2382206-3-maxime@cerno.tech Signed-off-by: Sasha Levin --- drivers/gpu/drm/vc4/vc4_txp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c index d13502ae973d..d4e750cf3c02 100644 --- a/drivers/gpu/drm/vc4/vc4_txp.c +++ b/drivers/gpu/drm/vc4/vc4_txp.c @@ -295,7 +295,7 @@ static void vc4_txp_connector_atomic_commit(struct drm_connector *conn, if (WARN_ON(i == ARRAY_SIZE(drm_fmts))) return; - ctrl = TXP_GO | TXP_VSTART_AT_EOF | TXP_EI | + ctrl = TXP_GO | TXP_EI | VC4_SET_FIELD(0xf, TXP_BYTE_ENABLE) | VC4_SET_FIELD(txp_fmts[i], TXP_FORMAT); From 84b0e23e107e8a8dad03f04701613704b5802846 Mon Sep 17 00:00:00 2001 From: Maxime Ripard Date: Mon, 28 Mar 2022 17:36:56 +0200 Subject: [PATCH 0446/1842] drm/vc4: txp: Force alpha to be 0xff if it's disabled [ Upstream commit 5453343a88ede8b12812fced81ecd24cb888ccc3 ] If we use a format that has padding instead of the alpha component (such as XRGB8888), it appears that the Transposer will fill the padding to 0, disregarding what was stored in the input buffer padding. This leads to issues with IGT, since it will set the padding to 0xff, but will then compare the CRC of the two frames which will thus fail. Another nice side effect is that it is now possible to just use the buffer as ARGB. Fixes: 008095e065a8 ("drm/vc4: Add support for the transposer block") Signed-off-by: Maxime Ripard Acked-by: Thomas Zimmermann Link: https://lore.kernel.org/r/20220328153659.2382206-4-maxime@cerno.tech Signed-off-by: Sasha Levin --- drivers/gpu/drm/vc4/vc4_txp.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c index d4e750cf3c02..f8fa09dfea5d 100644 --- a/drivers/gpu/drm/vc4/vc4_txp.c +++ b/drivers/gpu/drm/vc4/vc4_txp.c @@ -301,6 +301,12 @@ static void vc4_txp_connector_atomic_commit(struct drm_connector *conn, if (fb->format->has_alpha) ctrl |= TXP_ALPHA_ENABLE; + else + /* + * If TXP_ALPHA_ENABLE isn't set and TXP_ALPHA_INVERT is, the + * hardware will force the output padding to be 0xff. + */ + ctrl |= TXP_ALPHA_INVERT; gem = drm_fb_cma_get_gem_obj(fb, 0); TXP_WRITE(TXP_DST_PTR, gem->paddr + fb->offsets[0]); From 7ff76dc2d8bda825c556b348b7b20b96f857316e Mon Sep 17 00:00:00 2001 From: Andrii Nakryiko Date: Fri, 8 Apr 2022 11:14:23 -0700 Subject: [PATCH 0447/1842] libbpf: Don't error out on CO-RE relos for overriden weak subprogs [ Upstream commit e89d57d938c8fa80c457982154ed6110804814fe ] During BPF static linking, all the ELF relocations and .BTF.ext information (including CO-RE relocations) are preserved for __weak subprograms that were logically overriden by either previous weak subprogram instance or by corresponding "strong" (non-weak) subprogram. This is just how native user-space linkers work, nothing new. But libbpf is over-zealous when processing CO-RE relocation to error out when CO-RE relocation belonging to such eliminated weak subprogram is encountered. Instead of erroring out on this expected situation, log debug-level message and skip the relocation. 
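The __weak override being tolerated here is the ordinary linker rule; as a minimal illustration (file and function names invented for the example), a strong definition silently replaces a weak one when two BPF objects are linked together:

/* a.bpf.c */
__weak int pick_value(void)
{
        return 0;       /* default implementation, may be overridden */
}

/* b.bpf.c */
int pick_value(void)
{
        return 42;      /* strong definition: this copy survives linking */
}

Only b.bpf.c's body remains in the linked object, but the .BTF.ext records (including CO-RE relocations) that described a.bpf.c's copy are still appended, and libbpf now skips them instead of failing.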
Fixes: db2b8b06423c ("libbpf: Support CO-RE relocations for multi-prog sections") Signed-off-by: Andrii Nakryiko Signed-off-by: Daniel Borkmann Link: https://lore.kernel.org/bpf/20220408181425.2287230-2-andrii@kernel.org Signed-off-by: Sasha Levin --- tools/lib/bpf/libbpf.c | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index 61df26f048d9..dda8f9cdc652 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -5945,10 +5945,17 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path) insn_idx = rec->insn_off / BPF_INSN_SZ; prog = find_prog_by_sec_insn(obj, sec_idx, insn_idx); if (!prog) { - pr_warn("sec '%s': failed to find program at insn #%d for CO-RE offset relocation #%d\n", - sec_name, insn_idx, i); - err = -EINVAL; - goto out; + /* When __weak subprog is "overridden" by another instance + * of the subprog from a different object file, linker still + * appends all the .BTF.ext info that used to belong to that + * eliminated subprogram. + * This is similar to what x86-64 linker does for relocations. + * So just ignore such relocations just like we ignore + * subprog instructions when discovering subprograms. + */ + pr_debug("sec '%s': skipping CO-RE relocation #%d for insn #%d belonging to eliminated weak subprogram\n", + sec_name, i, insn_idx); + continue; } /* no need to apply CO-RE relocation if the program is * not going to be loaded From 6c0a8c771a71b13b998327e1cff52526634ce47a Mon Sep 17 00:00:00 2001 From: Yuntao Wang Date: Thu, 7 Apr 2022 21:04:23 +0800 Subject: [PATCH 0448/1842] bpf: Fix excessive memory allocation in stack_map_alloc() [ Upstream commit b45043192b3e481304062938a6561da2ceea46a6 ] The 'n_buckets * (value_size + sizeof(struct stack_map_bucket))' part of the allocated memory for 'smap' is never used after the memlock accounting was removed, thus get rid of it. [ Note, Daniel: Commit b936ca643ade ("bpf: rework memlock-based memory accounting for maps") moved `cost += n_buckets * (value_size + sizeof(struct stack_map_bucket))` up and therefore before the bpf_map_area_alloc() allocation, sigh. In a later step commit c85d69135a91 ("bpf: move memory size checks to bpf_map_charge_init()"), and the overflow checks of `cost >= U32_MAX - PAGE_SIZE` moved into bpf_map_charge_init(). And then 370868107bf6 ("bpf: Eliminate rlimit-based memory accounting for stackmap maps") finally removed the bpf_map_charge_init(). Anyway, the original code did the allocation same way as /after/ this fix. 
] Fixes: b936ca643ade ("bpf: rework memlock-based memory accounting for maps") Signed-off-by: Yuntao Wang Signed-off-by: Daniel Borkmann Link: https://lore.kernel.org/bpf/20220407130423.798386-1-ytcoode@gmail.com Signed-off-by: Sasha Levin --- kernel/bpf/stackmap.c | 1 - 1 file changed, 1 deletion(-) diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c index 4575d2d60cb1..c19e669afba0 100644 --- a/kernel/bpf/stackmap.c +++ b/kernel/bpf/stackmap.c @@ -121,7 +121,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr) return ERR_PTR(-E2BIG); cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap); - cost += n_buckets * (value_size + sizeof(struct stack_map_bucket)); err = bpf_map_charge_init(&mem, cost); if (err) return ERR_PTR(err); From 3cea0259edd1ae44bd62c4403e5c2bd8263e96e0 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Fri, 18 Mar 2022 13:46:57 +0100 Subject: [PATCH 0449/1842] nl80211: show SSID for P2P_GO interfaces [ Upstream commit a75971bc2b8453630e9f85e0beaa4da8db8277a3 ] There's no real reason not to send the SSID to userspace when it requests information about P2P_GO, it is, in that respect, exactly the same as AP interfaces. Fix that. Fixes: 44905265bc15 ("nl80211: don't expose wdev->ssid for most interfaces") Signed-off-by: Johannes Berg Link: https://lore.kernel.org/r/20220318134656.14354ae223f0.Ia25e85a512281b92e1645d4160766a4b1a471597@changeid Signed-off-by: Johannes Berg Signed-off-by: Sasha Levin --- net/wireless/nl80211.c | 1 + 1 file changed, 1 insertion(+) diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index f8d5f35cfc66..8a7f0c8fba5e 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -3485,6 +3485,7 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag wdev_lock(wdev); switch (wdev->iftype) { case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_P2P_GO: if (wdev->ssid_len && nla_put(msg, NL80211_ATTR_SSID, wdev->ssid_len, wdev->ssid)) goto nla_put_failure_locked; From 78a3e9fcdb7b1712de30d9d248bf19cee6a890b0 Mon Sep 17 00:00:00 2001 From: Zhou Qingyang Date: Wed, 1 Dec 2021 11:37:03 +0800 Subject: [PATCH 0450/1842] drm/komeda: Fix an undefined behavior bug in komeda_plane_add() [ Upstream commit f5e284bb74ab296f98122673c7ecd22028b2c200 ] In komeda_plane_add(), komeda_get_layer_fourcc_list() is assigned to formats and used in drm_universal_plane_init(). drm_universal_plane_init() passes formats to __drm_universal_plane_init(). __drm_universal_plane_init() further passes formats to memcpy() as src parameter, which could lead to an undefined behavior bug on failure of komeda_get_layer_fourcc_list(). Fix this bug by adding a check of formats. This bug was found by a static analyzer. The analysis employs differential checking to identify inconsistent security operations (e.g., checks or kfrees) between two code paths and confirms that the inconsistent operations are not recovered in the current function or the callers, so they constitute bugs. Note that, as a bug found by static analysis, it can be a false positive or hard to trigger. Multiple researchers have cross-reviewed the bug. Builds with CONFIG_DRM_KOMEDA=m show no new warnings, and our static analyzer no longer warns about this code. 
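Restated in isolation, the added check is the usual guard for a fallible helper whose result is later copied by the DRM core:

        formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl,
                                               layer->layer_type, &n_formats);
        if (!formats) {
                kfree(kplane);          /* release the plane allocated just before */
                return -ENOMEM;
        }
        /* only now is it safe to hand 'formats' to drm_universal_plane_init() */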
Fixes: 61f1c4a8ab75 ("drm/komeda: Attach komeda_dev to DRM-KMS") Signed-off-by: Zhou Qingyang Signed-off-by: Liviu Dudau Link: https://lore.kernel.org/dri-devel/20211201033704.32054-1-zhou1615@umn.edu Signed-off-by: Sasha Levin --- drivers/gpu/drm/arm/display/komeda/komeda_plane.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c index a5f57b38d193..bc3f42e915e9 100644 --- a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c +++ b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c @@ -264,6 +264,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms, formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl, layer->layer_type, &n_formats); + if (!formats) { + kfree(kplane); + return -ENOMEM; + } err = drm_universal_plane_init(&kms->base, plane, get_possible_crtcs(kms, c->pipeline), From b4c7dd0037e6aeecad9b947b30f0d9eaeda11762 Mon Sep 17 00:00:00 2001 From: Jiasheng Jiang Date: Tue, 14 Dec 2021 18:08:37 +0800 Subject: [PATCH 0451/1842] drm: mali-dp: potential dereference of null pointer [ Upstream commit 73c3ed7495c67b8fbdc31cf58e6ca8757df31a33 ] The return value of kzalloc() needs to be checked. To avoid use of null pointer '&state->base' in case of the failure of alloc. Fixes: 99665d072183 ("drm: mali-dp: add malidp_crtc_state struct") Signed-off-by: Jiasheng Jiang Reviewed-by: Brian Starkey Signed-off-by: Liviu Dudau Link: https://patchwork.freedesktop.org/patch/msgid/20211214100837.46912-1-jiasheng@iscas.ac.cn Signed-off-by: Sasha Levin --- drivers/gpu/drm/arm/malidp_crtc.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/arm/malidp_crtc.c b/drivers/gpu/drm/arm/malidp_crtc.c index 587d94798f5c..af729094260c 100644 --- a/drivers/gpu/drm/arm/malidp_crtc.c +++ b/drivers/gpu/drm/arm/malidp_crtc.c @@ -483,7 +483,10 @@ static void malidp_crtc_reset(struct drm_crtc *crtc) if (crtc->state) malidp_crtc_destroy_state(crtc, crtc->state); - __drm_atomic_helper_crtc_reset(crtc, &state->base); + if (state) + __drm_atomic_helper_crtc_reset(crtc, &state->base); + else + __drm_atomic_helper_crtc_reset(crtc, NULL); } static int malidp_crtc_enable_vblank(struct drm_crtc *crtc) From 58eff5b73f6cb89c0b8ad0bc9c4f4e8589dbbcd4 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 11 Apr 2022 11:10:33 +0000 Subject: [PATCH 0452/1842] spi: spi-ti-qspi: Fix return value handling of wait_for_completion_timeout [ Upstream commit 8b1ea69a63eb62f97cef63e6d816b64ed84e8760 ] wait_for_completion_timeout() returns unsigned long not int. It returns 0 if timed out, and positive if completed. The check for <= 0 is ambiguous and should be == 0 here indicating timeout which is the only error case. 
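For reference, the calling convention the fix adopts looks like this (sketch only; the completion and the timeout value are placeholders):

        unsigned long time_left;

        time_left = wait_for_completion_timeout(&xfer_done,
                                                msecs_to_jiffies(timeout_ms));
        if (time_left == 0)
                return -ETIMEDOUT;      /* zero is the only failure: the wait timed out */

        /* non-zero means the completion fired, with 'time_left' jiffies to spare */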
Fixes: 5720ec0a6d26 ("spi: spi-ti-qspi: Add DMA support for QSPI mmap read") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220411111034.24447-1-linmq006@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/spi/spi-ti-qspi.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c index e06aafe169e0..081da1fd3fd7 100644 --- a/drivers/spi/spi-ti-qspi.c +++ b/drivers/spi/spi-ti-qspi.c @@ -448,6 +448,7 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst, enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT; struct dma_async_tx_descriptor *tx; int ret; + unsigned long time_left; tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags); if (!tx) { @@ -467,9 +468,9 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst, } dma_async_issue_pending(chan); - ret = wait_for_completion_timeout(&qspi->transfer_complete, + time_left = wait_for_completion_timeout(&qspi->transfer_complete, msecs_to_jiffies(len)); - if (ret <= 0) { + if (time_left == 0) { dmaengine_terminate_sync(chan); dev_err(qspi->dev, "DMA wait_for_completion_timeout\n"); return -ETIMEDOUT; From 4565d5be8be272a64142204ac53499b6a06bf00a Mon Sep 17 00:00:00 2001 From: "Paul E. McKenney" Date: Mon, 28 Feb 2022 17:40:49 -0800 Subject: [PATCH 0453/1842] scftorture: Fix distribution of short handler delays [ Upstream commit 8106bddbab5f0ba180e6d693c7c1fc6926d57caa ] The scftorture test module's scf_handler() function is supposed to provide three different distributions of short delays (including "no delay") and one distribution of long delays, if specified by the scftorture.longwait module parameter. However, the second of the two non-zero-wait short delays is disabled due to the first such delay's "goto out" not being enclosed in the "then" clause with the "udelay()". This commit therefore adjusts the code to provide the intended set of delays. Fixes: e9d338a0b179 ("scftorture: Add smp_call_function() torture test") Signed-off-by: Paul E. McKenney Signed-off-by: Sasha Levin --- kernel/scftorture.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/kernel/scftorture.c b/kernel/scftorture.c index 554a521ee235..060ee0b1569a 100644 --- a/kernel/scftorture.c +++ b/kernel/scftorture.c @@ -253,9 +253,10 @@ static void scf_handler(void *scfc_in) } this_cpu_inc(scf_invoked_count); if (longwait <= 0) { - if (!(r & 0xffc0)) + if (!(r & 0xffc0)) { udelay(r & 0x3f); - goto out; + goto out; + } } if (r & 0xfff) goto out; From 8e357f086d40cf817e978f45428aafa7c478c8d2 Mon Sep 17 00:00:00 2001 From: "Russell King (Oracle)" Date: Mon, 11 Apr 2022 10:45:56 +0100 Subject: [PATCH 0454/1842] net: dsa: mt7530: 1G can also support 1000BASE-X link mode MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 66f862563ed68717dfd84e808ca12705ed275ced ] When using an external PHY connected using RGMII to mt7531 port 5, the PHY can be used to used support 1000BASE-X connections. Moreover, if 1000BASE-T is supported, then we should allow 1000BASE-X as well, since which are supported is a property of the PHY. Therefore, it makes no sense to exclude this from the linkmodes when 1000BASE-T is supported. 
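In simplified form, the validate logic then treats 1000BASE-X as implied by 1000BASE-T support (a sketch of the idea, not the full callback):

        if (state->interface != PHY_INTERFACE_MODE_MII) {
                /* a port/PHY combination that can do 1000BASE-T can
                 * equally well run 1000BASE-X, so advertise both
                 */
                phylink_set(mask, 1000baseT_Full);
                phylink_set(mask, 1000baseX_Full);
        }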
Fixes: c288575f7810 ("net: dsa: mt7530: Add the support of MT7531 switch") Tested-by: Marek Behún Signed-off-by: Russell King (Oracle) Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/dsa/mt7530.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index c355824ddb81..265620a81f9f 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -1952,13 +1952,7 @@ static void mt7531_sgmii_validate(struct mt7530_priv *priv, int port, /* Port5 supports ethier RGMII or SGMII. * Port6 supports SGMII only. */ - switch (port) { - case 5: - if (mt7531_is_rgmii_port(priv, port)) - break; - fallthrough; - case 6: - phylink_set(supported, 1000baseX_Full); + if (port == 6) { phylink_set(supported, 2500baseX_Full); phylink_set(supported, 2500baseT_Full); } @@ -2315,8 +2309,6 @@ static void mt7530_mac_port_validate(struct dsa_switch *ds, int port, unsigned long *supported) { - if (port == 5) - phylink_set(supported, 1000baseX_Full); } static void mt7531_mac_port_validate(struct dsa_switch *ds, int port, @@ -2353,8 +2345,10 @@ mt753x_phylink_validate(struct dsa_switch *ds, int port, } /* This switch only supports 1G full-duplex. */ - if (state->interface != PHY_INTERFACE_MODE_MII) + if (state->interface != PHY_INTERFACE_MODE_MII) { phylink_set(mask, 1000baseT_Full); + phylink_set(mask, 1000baseX_Full); + } priv->info->mac_port_validate(ds, port, mask); From 2a1b5110c95e4d49c8c3906270dfcde680a5a7be Mon Sep 17 00:00:00 2001 From: Lin Ma Date: Tue, 12 Apr 2022 13:32:08 +0800 Subject: [PATCH 0455/1842] NFC: NULL out the dev->rfkill to prevent UAF [ Upstream commit 1b0e81416a24d6e9b8c2341e22e8bf48f8b8bfc9 ] Commit 3e3b5dfcd16a ("NFC: reorder the logic in nfc_{un,}register_device") assumes the device_is_registered() in function nfc_dev_up() will help to check when the rfkill is unregistered. However, this check only take effect when device_del(&dev->dev) is done in nfc_unregister_device(). Hence, the rfkill object is still possible be dereferenced. The crash trace in latest kernel (5.18-rc2): [ 68.760105] ================================================================== [ 68.760330] BUG: KASAN: use-after-free in __lock_acquire+0x3ec1/0x6750 [ 68.760756] Read of size 8 at addr ffff888009c93018 by task fuzz/313 [ 68.760756] [ 68.760756] CPU: 0 PID: 313 Comm: fuzz Not tainted 5.18.0-rc2 #4 [ 68.760756] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 [ 68.760756] Call Trace: [ 68.760756] [ 68.760756] dump_stack_lvl+0x57/0x7d [ 68.760756] print_report.cold+0x5e/0x5db [ 68.760756] ? __lock_acquire+0x3ec1/0x6750 [ 68.760756] kasan_report+0xbe/0x1c0 [ 68.760756] ? __lock_acquire+0x3ec1/0x6750 [ 68.760756] __lock_acquire+0x3ec1/0x6750 [ 68.760756] ? lockdep_hardirqs_on_prepare+0x410/0x410 [ 68.760756] ? register_lock_class+0x18d0/0x18d0 [ 68.760756] lock_acquire+0x1ac/0x4f0 [ 68.760756] ? rfkill_blocked+0xe/0x60 [ 68.760756] ? lockdep_hardirqs_on_prepare+0x410/0x410 [ 68.760756] ? mutex_lock_io_nested+0x12c0/0x12c0 [ 68.760756] ? nla_get_range_signed+0x540/0x540 [ 68.760756] ? _raw_spin_lock_irqsave+0x4e/0x50 [ 68.760756] _raw_spin_lock_irqsave+0x39/0x50 [ 68.760756] ? rfkill_blocked+0xe/0x60 [ 68.760756] rfkill_blocked+0xe/0x60 [ 68.760756] nfc_dev_up+0x84/0x260 [ 68.760756] nfc_genl_dev_up+0x90/0xe0 [ 68.760756] genl_family_rcv_msg_doit+0x1f4/0x2f0 [ 68.760756] ? genl_family_rcv_msg_attrs_parse.constprop.0+0x230/0x230 [ 68.760756] ? 
security_capable+0x51/0x90 [ 68.760756] genl_rcv_msg+0x280/0x500 [ 68.760756] ? genl_get_cmd+0x3c0/0x3c0 [ 68.760756] ? lock_acquire+0x1ac/0x4f0 [ 68.760756] ? nfc_genl_dev_down+0xe0/0xe0 [ 68.760756] ? lockdep_hardirqs_on_prepare+0x410/0x410 [ 68.760756] netlink_rcv_skb+0x11b/0x340 [ 68.760756] ? genl_get_cmd+0x3c0/0x3c0 [ 68.760756] ? netlink_ack+0x9c0/0x9c0 [ 68.760756] ? netlink_deliver_tap+0x136/0xb00 [ 68.760756] genl_rcv+0x1f/0x30 [ 68.760756] netlink_unicast+0x430/0x710 [ 68.760756] ? memset+0x20/0x40 [ 68.760756] ? netlink_attachskb+0x740/0x740 [ 68.760756] ? __build_skb_around+0x1f4/0x2a0 [ 68.760756] netlink_sendmsg+0x75d/0xc00 [ 68.760756] ? netlink_unicast+0x710/0x710 [ 68.760756] ? netlink_unicast+0x710/0x710 [ 68.760756] sock_sendmsg+0xdf/0x110 [ 68.760756] __sys_sendto+0x19e/0x270 [ 68.760756] ? __ia32_sys_getpeername+0xa0/0xa0 [ 68.760756] ? fd_install+0x178/0x4c0 [ 68.760756] ? fd_install+0x195/0x4c0 [ 68.760756] ? kernel_fpu_begin_mask+0x1c0/0x1c0 [ 68.760756] __x64_sys_sendto+0xd8/0x1b0 [ 68.760756] ? lockdep_hardirqs_on+0xbf/0x130 [ 68.760756] ? syscall_enter_from_user_mode+0x1d/0x50 [ 68.760756] do_syscall_64+0x3b/0x90 [ 68.760756] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 68.760756] RIP: 0033:0x7f67fb50e6b3 ... [ 68.760756] RSP: 002b:00007f67fa91fe90 EFLAGS: 00000293 ORIG_RAX: 000000000000002c [ 68.760756] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f67fb50e6b3 [ 68.760756] RDX: 000000000000001c RSI: 0000559354603090 RDI: 0000000000000003 [ 68.760756] RBP: 00007f67fa91ff00 R08: 00007f67fa91fedc R09: 000000000000000c [ 68.760756] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffe824d496e [ 68.760756] R13: 00007ffe824d496f R14: 00007f67fa120000 R15: 0000000000000003 [ 68.760756] [ 68.760756] [ 68.760756] Allocated by task 279: [ 68.760756] kasan_save_stack+0x1e/0x40 [ 68.760756] __kasan_kmalloc+0x81/0xa0 [ 68.760756] rfkill_alloc+0x7f/0x280 [ 68.760756] nfc_register_device+0xa3/0x1a0 [ 68.760756] nci_register_device+0x77a/0xad0 [ 68.760756] nfcmrvl_nci_register_dev+0x20b/0x2c0 [ 68.760756] nfcmrvl_nci_uart_open+0xf2/0x1dd [ 68.760756] nci_uart_tty_ioctl+0x2c3/0x4a0 [ 68.760756] tty_ioctl+0x764/0x1310 [ 68.760756] __x64_sys_ioctl+0x122/0x190 [ 68.760756] do_syscall_64+0x3b/0x90 [ 68.760756] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 68.760756] [ 68.760756] Freed by task 314: [ 68.760756] kasan_save_stack+0x1e/0x40 [ 68.760756] kasan_set_track+0x21/0x30 [ 68.760756] kasan_set_free_info+0x20/0x30 [ 68.760756] __kasan_slab_free+0x108/0x170 [ 68.760756] kfree+0xb0/0x330 [ 68.760756] device_release+0x96/0x200 [ 68.760756] kobject_put+0xf9/0x1d0 [ 68.760756] nfc_unregister_device+0x77/0x190 [ 68.760756] nfcmrvl_nci_unregister_dev+0x88/0xd0 [ 68.760756] nci_uart_tty_close+0xdf/0x180 [ 68.760756] tty_ldisc_kill+0x73/0x110 [ 68.760756] tty_ldisc_hangup+0x281/0x5b0 [ 68.760756] __tty_hangup.part.0+0x431/0x890 [ 68.760756] tty_release+0x3a8/0xc80 [ 68.760756] __fput+0x1f0/0x8c0 [ 68.760756] task_work_run+0xc9/0x170 [ 68.760756] exit_to_user_mode_prepare+0x194/0x1a0 [ 68.760756] syscall_exit_to_user_mode+0x19/0x50 [ 68.760756] do_syscall_64+0x48/0x90 [ 68.760756] entry_SYSCALL_64_after_hwframe+0x44/0xae This patch just add the null out of dev->rfkill to make sure such dereference cannot happen. This is safe since the device_lock() already protect the check/write from data race. Fixes: 3e3b5dfcd16a ("NFC: reorder the logic in nfc_{un,}register_device") Signed-off-by: Lin Ma Reviewed-by: Krzysztof Kozlowski Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/nfc/core.c | 1 + 1 file changed, 1 insertion(+) diff --git a/net/nfc/core.c b/net/nfc/core.c index 3b2983813ff1..2ef56366bd5f 100644 --- a/net/nfc/core.c +++ b/net/nfc/core.c @@ -1158,6 +1158,7 @@ void nfc_unregister_device(struct nfc_dev *dev) if (dev->rfkill) { rfkill_unregister(dev->rfkill); rfkill_destroy(dev->rfkill); + dev->rfkill = NULL; } dev->shutting_down = true; device_unlock(&dev->dev); From e7681199bbe421ddf6d20bd7765d4f214f366abf Mon Sep 17 00:00:00 2001 From: Jan Kiszka Date: Fri, 4 Mar 2022 07:36:37 +0100 Subject: [PATCH 0456/1842] efi: Add missing prototype for efi_capsule_setup_info [ Upstream commit aa480379d8bdb33920d68acfd90f823c8af32578 ] Fixes "no previous declaration for 'efi_capsule_setup_info'" warnings under W=1. Fixes: 2959c95d510c ("efi/capsule: Add support for Quark security header") Signed-off-by: Jan Kiszka Link: https://lore.kernel.org/r/c28d3f86-dd72-27d1-e2c2-40971b8da6bd@siemens.com Signed-off-by: Ard Biesheuvel Signed-off-by: Sasha Levin --- include/linux/efi.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/include/linux/efi.h b/include/linux/efi.h index e17cd4c44f93..3bac68fb7ff1 100644 --- a/include/linux/efi.h +++ b/include/linux/efi.h @@ -167,6 +167,8 @@ struct capsule_info { size_t page_bytes_remain; }; +int efi_capsule_setup_info(struct capsule_info *cap_info, void *kbuff, + size_t hdr_bytes); int __efi_capsule_setup_info(struct capsule_info *cap_info); typedef int (*efi_freemem_callback_t) (u64 start, u64 end, void *arg); From 3eba802d47fbdf35b4d48596ab04bedc0ca64c2d Mon Sep 17 00:00:00 2001 From: Christoph Hellwig Date: Fri, 15 Apr 2022 06:52:32 +0200 Subject: [PATCH 0457/1842] target: remove an incorrect unmap zeroes data deduction [ Upstream commit 179d8609d8424529e95021df939ed7b0b82b37f1 ] For block devices, the SCSI target drivers implements UNMAP as calls to blkdev_issue_discard, which does not guarantee zeroing just because Write Zeroes is supported. Note that this does not affect the file backed path which uses fallocate to punch holes. Fixes: 2237498f0b5c ("target/iblock: Convert WRITE_SAME to blkdev_issue_zeroout") Signed-off-by: Christoph Hellwig Reviewed-by: Martin K. 
Petersen Reviewed-by: Chaitanya Kulkarni Link: https://lore.kernel.org/r/20220415045258.199825-2-hch@lst.de Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- drivers/target/target_core_device.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c index 109f019d2148..1eded5c4ebda 100644 --- a/drivers/target/target_core_device.c +++ b/drivers/target/target_core_device.c @@ -831,7 +831,6 @@ bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib, attrib->unmap_granularity = q->limits.discard_granularity / block_size; attrib->unmap_granularity_alignment = q->limits.discard_alignment / block_size; - attrib->unmap_zeroes_data = !!(q->limits.max_write_zeroes_sectors); return true; } EXPORT_SYMBOL(target_configure_unmap_from_queue); From bea698509934fb94112ba7de7a0a68d782bba55d Mon Sep 17 00:00:00 2001 From: Arnd Bergmann Date: Wed, 6 Apr 2022 21:07:09 +0200 Subject: [PATCH 0458/1842] drbd: fix duplicate array initializer MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 33cb0917bbe241dd17a2b87ead63514c1b7e5615 ] There are two initializers for P_RETRY_WRITE: drivers/block/drbd/drbd_main.c:3676:22: warning: initialized field overwritten [-Woverride-init] Remove the first one since it was already ignored by the compiler and reorder the list to match the enum definition. As P_ZEROES had no entry, add that one instead. Fixes: 036b17eaab93 ("drbd: Receiving part for the PROTOCOL_UPDATE packet") Fixes: f31e583aa2c2 ("drbd: introduce P_ZEROES (REQ_OP_WRITE_ZEROES on the "wire")") Signed-off-by: Arnd Bergmann Reviewed-by: Christoph Böhmwalder Link: https://lore.kernel.org/r/20220406190715.1938174-2-christoph.boehmwalder@linbit.com Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- drivers/block/drbd/drbd_main.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c index 3cdbd81f983f..407527ff6b1f 100644 --- a/drivers/block/drbd/drbd_main.c +++ b/drivers/block/drbd/drbd_main.c @@ -3631,9 +3631,8 @@ const char *cmdname(enum drbd_packet cmd) * when we want to support more than * one PRO_VERSION */ static const char *cmdnames[] = { + [P_DATA] = "Data", - [P_WSAME] = "WriteSame", - [P_TRIM] = "Trim", [P_DATA_REPLY] = "DataReply", [P_RS_DATA_REPLY] = "RSDataReply", [P_BARRIER] = "Barrier", @@ -3644,7 +3643,6 @@ const char *cmdname(enum drbd_packet cmd) [P_DATA_REQUEST] = "DataRequest", [P_RS_DATA_REQUEST] = "RSDataRequest", [P_SYNC_PARAM] = "SyncParam", - [P_SYNC_PARAM89] = "SyncParam89", [P_PROTOCOL] = "ReportProtocol", [P_UUIDS] = "ReportUUIDs", [P_SIZES] = "ReportSizes", @@ -3652,6 +3650,7 @@ const char *cmdname(enum drbd_packet cmd) [P_SYNC_UUID] = "ReportSyncUUID", [P_AUTH_CHALLENGE] = "AuthChallenge", [P_AUTH_RESPONSE] = "AuthResponse", + [P_STATE_CHG_REQ] = "StateChgRequest", [P_PING] = "Ping", [P_PING_ACK] = "PingAck", [P_RECV_ACK] = "RecvAck", @@ -3662,24 +3661,26 @@ const char *cmdname(enum drbd_packet cmd) [P_NEG_DREPLY] = "NegDReply", [P_NEG_RS_DREPLY] = "NegRSDReply", [P_BARRIER_ACK] = "BarrierAck", - [P_STATE_CHG_REQ] = "StateChgRequest", [P_STATE_CHG_REPLY] = "StateChgReply", [P_OV_REQUEST] = "OVRequest", [P_OV_REPLY] = "OVReply", [P_OV_RESULT] = "OVResult", [P_CSUM_RS_REQUEST] = "CsumRSRequest", [P_RS_IS_IN_SYNC] = "CsumRSIsInSync", + [P_SYNC_PARAM89] = "SyncParam89", [P_COMPRESSED_BITMAP] = "CBitmap", [P_DELAY_PROBE] = "DelayProbe", [P_OUT_OF_SYNC] = 
"OutOfSync", - [P_RETRY_WRITE] = "RetryWrite", [P_RS_CANCEL] = "RSCancel", [P_CONN_ST_CHG_REQ] = "conn_st_chg_req", [P_CONN_ST_CHG_REPLY] = "conn_st_chg_reply", [P_RETRY_WRITE] = "retry_write", [P_PROTOCOL_UPDATE] = "protocol_update", + [P_TRIM] = "Trim", [P_RS_THIN_REQ] = "rs_thin_req", [P_RS_DEALLOCATED] = "rs_deallocated", + [P_WSAME] = "WriteSame", + [P_ZEROES] = "Zeroes", /* enum drbd_packet, but not commands - obsoleted flags: * P_MAY_IGNORE From dd2b1d70ef2081227ebdacb5db8ac18f9378131b Mon Sep 17 00:00:00 2001 From: Tyler Hicks Date: Tue, 11 Jan 2022 10:38:00 -0600 Subject: [PATCH 0459/1842] EDAC/dmc520: Don't print an error for each unconfigured interrupt line [ Upstream commit ad2df24732e8956a45a00894d2163c4ee8fb0e1f ] The dmc520 driver requires that at least one interrupt line, out of the ten possible, is configured. The driver prints an error and returns -EINVAL from its .probe function if there are no interrupt lines configured. Don't print a KERN_ERR level message for each interrupt line that's unconfigured as that can confuse users into thinking that there is an error condition. Before this change, the following KERN_ERR level messages would be reported if only dram_ecc_errc and dram_ecc_errd were configured in the device tree: dmc520 68000000.dmc: IRQ ram_ecc_errc not found dmc520 68000000.dmc: IRQ ram_ecc_errd not found dmc520 68000000.dmc: IRQ failed_access not found dmc520 68000000.dmc: IRQ failed_prog not found dmc520 68000000.dmc: IRQ link_err not dmc520 68000000.dmc: IRQ temperature_event not found dmc520 68000000.dmc: IRQ arch_fsm not found dmc520 68000000.dmc: IRQ phy_request not found Fixes: 1088750d7839 ("EDAC: Add EDAC driver for DMC520") Reported-by: Sinan Kaya Signed-off-by: Tyler Hicks Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20220111163800.22362-1-tyhicks@linux.microsoft.com Signed-off-by: Sasha Levin --- drivers/edac/dmc520_edac.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/edac/dmc520_edac.c b/drivers/edac/dmc520_edac.c index b8a7d9594afd..1fa5ca57e9ec 100644 --- a/drivers/edac/dmc520_edac.c +++ b/drivers/edac/dmc520_edac.c @@ -489,7 +489,7 @@ static int dmc520_edac_probe(struct platform_device *pdev) dev = &pdev->dev; for (idx = 0; idx < NUMBER_OF_IRQS; idx++) { - irq = platform_get_irq_byname(pdev, dmc520_irq_configs[idx].name); + irq = platform_get_irq_byname_optional(pdev, dmc520_irq_configs[idx].name); irqs[idx] = irq; masks[idx] = dmc520_irq_configs[idx].mask; if (irq >= 0) { From 3c68daf4a368cd9e63ae5a2145c9e4a6f838c166 Mon Sep 17 00:00:00 2001 From: Zheyu Ma Date: Mon, 11 Apr 2022 20:58:08 +0800 Subject: [PATCH 0460/1842] mtd: rawnand: denali: Use managed device resources [ Upstream commit 3a745b51cddafade99aaea1b93aad31e9614e230 ] All of the resources used by this driver has managed interfaces, so use them. Otherwise we will get the following splat: [ 4.472703] denali-nand-pci 0000:00:05.0: timeout while waiting for irq 0x1000 [ 4.474071] denali-nand-pci: probe of 0000:00:05.0 failed with error -5 [ 4.473538] nand: No NAND device found [ 4.474068] BUG: unable to handle page fault for address: ffffc90005000410 [ 4.475169] #PF: supervisor write access in kernel mode [ 4.475579] #PF: error_code(0x0002) - not-present page [ 4.478362] RIP: 0010:iowrite32+0x9/0x50 [ 4.486068] Call Trace: [ 4.486269] [ 4.486443] denali_isr+0x15b/0x300 [denali] [ 4.486788] ? 
denali_direct_write+0x50/0x50 [denali] [ 4.487189] __handle_irq_event_percpu+0x161/0x3b0 [ 4.487571] handle_irq_event+0x7d/0x1b0 [ 4.487884] handle_fasteoi_irq+0x2b0/0x770 [ 4.488219] __common_interrupt+0xc8/0x1b0 [ 4.488549] common_interrupt+0x9a/0xc0 Fixes: 93db446a424c ("mtd: nand: move raw NAND related code to the raw/ subdir") Signed-off-by: Zheyu Ma Signed-off-by: Miquel Raynal Link: https://lore.kernel.org/linux-mtd/20220411125808.958276-1-zheyuma97@gmail.com Signed-off-by: Sasha Levin --- drivers/mtd/nand/raw/denali_pci.c | 15 ++++----------- 1 file changed, 4 insertions(+), 11 deletions(-) diff --git a/drivers/mtd/nand/raw/denali_pci.c b/drivers/mtd/nand/raw/denali_pci.c index 20c085a30adc..de7e722d3826 100644 --- a/drivers/mtd/nand/raw/denali_pci.c +++ b/drivers/mtd/nand/raw/denali_pci.c @@ -74,22 +74,21 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) return ret; } - denali->reg = ioremap(csr_base, csr_len); + denali->reg = devm_ioremap(denali->dev, csr_base, csr_len); if (!denali->reg) { dev_err(&dev->dev, "Spectra: Unable to remap memory region\n"); return -ENOMEM; } - denali->host = ioremap(mem_base, mem_len); + denali->host = devm_ioremap(denali->dev, mem_base, mem_len); if (!denali->host) { dev_err(&dev->dev, "Spectra: ioremap failed!"); - ret = -ENOMEM; - goto out_unmap_reg; + return -ENOMEM; } ret = denali_init(denali); if (ret) - goto out_unmap_host; + return ret; nsels = denali->nbanks; @@ -117,10 +116,6 @@ static int denali_pci_probe(struct pci_dev *dev, const struct pci_device_id *id) out_remove_denali: denali_remove(denali); -out_unmap_host: - iounmap(denali->host); -out_unmap_reg: - iounmap(denali->reg); return ret; } @@ -129,8 +124,6 @@ static void denali_pci_remove(struct pci_dev *dev) struct denali_controller *denali = pci_get_drvdata(dev); denali_remove(denali); - iounmap(denali->reg); - iounmap(denali->host); } static struct pci_driver denali_pci_driver = { From 39d4bd3f5991f056c1e85b94a0596545a4dc1e97 Mon Sep 17 00:00:00 2001 From: Jonathan Teh Date: Sun, 13 Mar 2022 19:48:18 +0000 Subject: [PATCH 0461/1842] HID: hid-led: fix maximum brightness for Dream Cheeky [ Upstream commit 116c3f4a78ebe478d5ad5a038baf931e93e7d748 ] Increase maximum brightness for Dream Cheeky to 63. Emperically determined based on testing in kernel 4.4 on this device: Bus 003 Device 002: ID 1d34:0004 Dream Cheeky Webmail Notifier Fixes: 6c7ad07e9e05 ("HID: migrate USB LED driver from usb misc to hid") Signed-off-by: Jonathan Teh Signed-off-by: Jiri Kosina Signed-off-by: Sasha Levin --- drivers/hid/hid-led.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/hid/hid-led.c b/drivers/hid/hid-led.c index c2c66ceca132..7d82f8d426bb 100644 --- a/drivers/hid/hid-led.c +++ b/drivers/hid/hid-led.c @@ -366,7 +366,7 @@ static const struct hidled_config hidled_configs[] = { .type = DREAM_CHEEKY, .name = "Dream Cheeky Webmail Notifier", .short_name = "dream_cheeky", - .max_brightness = 31, + .max_brightness = 63, .num_leds = 1, .report_size = 9, .report_type = RAW_REQUEST, From 6d0726725c7c560495f5ff364862a2cefea542e3 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Sat, 16 Apr 2022 07:37:21 +0000 Subject: [PATCH 0462/1842] HID: elan: Fix potential double free in elan_input_configured [ Upstream commit 1af20714fedad238362571620be0bd690ded05b6 ] 'input' is a managed resource allocated with devm_input_allocate_device(), so there is no need to call input_free_device() explicitly or there will be a double free. 
According to the doc of devm_input_allocate_device(): * Managed input devices do not need to be explicitly unregistered or * freed as it will be done automatically when owner device unbinds from * its driver (or binding fails). Fixes: b7429ea53d6c ("HID: elan: Fix memleak in elan_input_configured") Fixes: 9a6a4193d65b ("HID: Add driver for USB ELAN Touchpad") Signed-off-by: Miaoqian Lin Acked-by: Benjamin Tissoires Signed-off-by: Jiri Kosina Signed-off-by: Sasha Levin --- drivers/hid/hid-elan.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/drivers/hid/hid-elan.c b/drivers/hid/hid-elan.c index 0e8f424025fe..838673303f77 100644 --- a/drivers/hid/hid-elan.c +++ b/drivers/hid/hid-elan.c @@ -188,7 +188,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi) ret = input_mt_init_slots(input, ELAN_MAX_FINGERS, INPUT_MT_POINTER); if (ret) { hid_err(hdev, "Failed to init elan MT slots: %d\n", ret); - input_free_device(input); return ret; } @@ -200,7 +199,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi) hid_err(hdev, "Failed to register elan input device: %d\n", ret); input_mt_destroy_slots(input); - input_free_device(input); return ret; } From f35c3f2374082b56353d2cfafccf933ce241349a Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 20 Apr 2022 01:16:40 +0000 Subject: [PATCH 0463/1842] drm/bridge: Fix error handling in analogix_dp_probe [ Upstream commit 9f15930bb2ef9f031d62ffc49629cbae89137733 ] In the error handling path, the clk_prepare_enable() function call should be balanced by a corresponding 'clk_disable_unprepare()' call, as already done in the remove function. Fixes: 3424e3a4f844 ("drm: bridge: analogix/dp: split exynos dp driver to bridge directory") Signed-off-by: Miaoqian Lin Reviewed-by: Robert Foss Signed-off-by: Robert Foss Link: https://patchwork.freedesktop.org/patch/msgid/20220420011644.25730-1-linmq006@gmail.com Signed-off-by: Sasha Levin --- .../gpu/drm/bridge/analogix/analogix_dp_core.c | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c index aa1bb86293fd..31b4ff60a010 100644 --- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c +++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c @@ -1705,8 +1705,10 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data) res = platform_get_resource(pdev, IORESOURCE_MEM, 0); dp->reg_base = devm_ioremap_resource(&pdev->dev, res); - if (IS_ERR(dp->reg_base)) - return ERR_CAST(dp->reg_base); + if (IS_ERR(dp->reg_base)) { + ret = PTR_ERR(dp->reg_base); + goto err_disable_clk; + } dp->force_hpd = of_property_read_bool(dev->of_node, "force-hpd"); @@ -1718,7 +1720,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data) if (IS_ERR(dp->hpd_gpiod)) { dev_err(dev, "error getting HDP GPIO: %ld\n", PTR_ERR(dp->hpd_gpiod)); - return ERR_CAST(dp->hpd_gpiod); + ret = PTR_ERR(dp->hpd_gpiod); + goto err_disable_clk; } if (dp->hpd_gpiod) { @@ -1738,7 +1741,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data) if (dp->irq == -ENXIO) { dev_err(&pdev->dev, "failed to get irq\n"); - return ERR_PTR(-ENODEV); + ret = -ENODEV; + goto err_disable_clk; } ret = devm_request_threaded_irq(&pdev->dev, dp->irq, @@ -1747,11 +1751,15 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data) irq_flags, "analogix-dp", dp); if (ret) { dev_err(&pdev->dev, 
"failed to request irq\n"); - return ERR_PTR(ret); + goto err_disable_clk; } disable_irq(dp->irq); return dp; + +err_disable_clk: + clk_disable_unprepare(dp->clock); + return ERR_PTR(ret); } EXPORT_SYMBOL_GPL(analogix_dp_probe); From 147a376c1afea117eccda36451121ea781aa5028 Mon Sep 17 00:00:00 2001 From: Chengming Zhou Date: Fri, 8 Apr 2022 19:53:08 +0800 Subject: [PATCH 0464/1842] sched/fair: Fix cfs_rq_clock_pelt() for throttled cfs_rq [ Upstream commit 64eaf50731ac0a8c76ce2fedd50ef6652aabc5ff ] Since commit 23127296889f ("sched/fair: Update scale invariance of PELT") change to use rq_clock_pelt() instead of rq_clock_task(), we should also use rq_clock_pelt() for throttled_clock_task_time and throttled_clock_task accounting to get correct cfs_rq_clock_pelt() of throttled cfs_rq. And rename throttled_clock_task(_time) to be clock_pelt rather than clock_task. Fixes: 23127296889f ("sched/fair: Update scale invariance of PELT") Signed-off-by: Chengming Zhou Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Ben Segall Reviewed-by: Vincent Guittot Link: https://lore.kernel.org/r/20220408115309.81603-1-zhouchengming@bytedance.com Signed-off-by: Sasha Levin --- kernel/sched/fair.c | 8 ++++---- kernel/sched/pelt.h | 4 ++-- kernel/sched/sched.h | 4 ++-- 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 1a306ef51bbe..bca0efc03a51 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -4758,8 +4758,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data) cfs_rq->throttle_count--; if (!cfs_rq->throttle_count) { - cfs_rq->throttled_clock_task_time += rq_clock_task(rq) - - cfs_rq->throttled_clock_task; + cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) - + cfs_rq->throttled_clock_pelt; /* Add cfs_rq with already running entity in the list */ if (cfs_rq->nr_running >= 1) @@ -4776,7 +4776,7 @@ static int tg_throttle_down(struct task_group *tg, void *data) /* group is entering throttled state, stop time */ if (!cfs_rq->throttle_count) { - cfs_rq->throttled_clock_task = rq_clock_task(rq); + cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq); list_del_leaf_cfs_rq(cfs_rq); } cfs_rq->throttle_count++; @@ -5194,7 +5194,7 @@ static void sync_throttle(struct task_group *tg, int cpu) pcfs_rq = tg->parent->cfs_rq[cpu]; cfs_rq->throttle_count = pcfs_rq->throttle_count; - cfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu)); + cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu)); } /* conditionally throttle active cfs_rq's from put_prev_entity() */ diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h index 45bf08e22207..89150ced09cf 100644 --- a/kernel/sched/pelt.h +++ b/kernel/sched/pelt.h @@ -145,9 +145,9 @@ static inline u64 rq_clock_pelt(struct rq *rq) static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) { if (unlikely(cfs_rq->throttle_count)) - return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time; + return cfs_rq->throttled_clock_pelt - cfs_rq->throttled_clock_pelt_time; - return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time; + return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time; } #else static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 08db8e095e48..8d39f5d99172 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -599,8 +599,8 @@ struct cfs_rq { s64 runtime_remaining; u64 throttled_clock; - u64 throttled_clock_task; - u64 throttled_clock_task_time; + u64 
throttled_clock_pelt; + u64 throttled_clock_pelt_time; int throttled; int throttle_count; struct list_head throttled_list; From 172789fd9532ce34e12ae1f03608bb0475272adb Mon Sep 17 00:00:00 2001 From: Zheng Yongjun Date: Fri, 22 Apr 2022 06:26:41 +0000 Subject: [PATCH 0465/1842] spi: img-spfi: Fix pm_runtime_get_sync() error checking [ Upstream commit cc470d55343056d6b2a5c32e10e0aad06f324078 ] If the device is already in a runtime PM enabled state pm_runtime_get_sync() will return 1, so a test for negative value should be used to check for errors. Fixes: deba25800a12b ("spi: Add driver for IMG SPFI controller") Signed-off-by: Zheng Yongjun Link: https://lore.kernel.org/r/20220422062641.10486-1-zhengyongjun3@huawei.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/spi/spi-img-spfi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/spi/spi-img-spfi.c b/drivers/spi/spi-img-spfi.c index 5f05d519fbbd..71376b6df89d 100644 --- a/drivers/spi/spi-img-spfi.c +++ b/drivers/spi/spi-img-spfi.c @@ -731,7 +731,7 @@ static int img_spfi_resume(struct device *dev) int ret; ret = pm_runtime_get_sync(dev); - if (ret) { + if (ret < 0) { pm_runtime_put_noidle(dev); return ret; } From 96c848afbddcf097d761700efcf2a0b1892899a5 Mon Sep 17 00:00:00 2001 From: Schspa Shi Date: Thu, 21 Apr 2022 03:15:41 +0800 Subject: [PATCH 0466/1842] cpufreq: Fix possible race in cpufreq online error path [ Upstream commit f346e96267cd76175d6c201b40f770c0116a8a04 ] When cpufreq online fails, the policy->cpus mask is not cleared and policy->rwsem is released too early, so the driver can be invoked via the cpuinfo_cur_freq sysfs attribute while its ->offline() or ->exit() callbacks are being run. Take policy->clk as an example: static int cpufreq_online(unsigned int cpu) { ... // policy->cpus != 0 at this time down_write(&policy->rwsem); ret = cpufreq_add_dev_interface(policy); up_write(&policy->rwsem); return 0; out_destroy_policy: for_each_cpu(j, policy->real_cpus) remove_cpu_dev_symlink(policy, get_cpu_device(j)); up_write(&policy->rwsem); ... out_exit_policy: if (cpufreq_driver->exit) cpufreq_driver->exit(policy); clk_put(policy->clk); // policy->clk is a wild pointer ... ^ | Another process access __cpufreq_get cpufreq_verify_current_freq cpufreq_generic_get // acces wild pointer of policy->clk; | | out_offline_policy: | cpufreq_policy_free(policy); | // deleted here, and will wait for no body reference cpufreq_policy_put_kobj(policy); } Address this by modifying cpufreq_online() to release policy->rwsem in the error path after the driver callbacks have run and to clear policy->cpus before releasing the semaphore. Fixes: 7106e02baed4 ("cpufreq: release policy->rwsem on error") Signed-off-by: Schspa Shi [ rjw: Subject and changelog edits ] Signed-off-by: Rafael J. 
Wysocki Signed-off-by: Sasha Levin --- drivers/cpufreq/cpufreq.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 30dafe8fc505..3540ea93b6f1 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1515,8 +1515,6 @@ static int cpufreq_online(unsigned int cpu) for_each_cpu(j, policy->real_cpus) remove_cpu_dev_symlink(policy, get_cpu_device(j)); - up_write(&policy->rwsem); - out_offline_policy: if (cpufreq_driver->offline) cpufreq_driver->offline(policy); @@ -1525,6 +1523,9 @@ static int cpufreq_online(unsigned int cpu) if (cpufreq_driver->exit) cpufreq_driver->exit(policy); + cpumask_clear(policy->cpus); + up_write(&policy->rwsem); + out_free_policy: cpufreq_policy_free(policy); return ret; From 461e4c1f199076275f16bf6f3d3e42c6b6c79f33 Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Sat, 9 Apr 2022 09:12:25 +0300 Subject: [PATCH 0467/1842] ath9k_htc: fix potential out of bounds access with invalid rxstatus->rs_keyix MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 2dc509305cf956381532792cb8dceef2b1504765 ] The "rxstatus->rs_keyix" eventually gets passed to test_bit() so we need to ensure that it is within the bitmap. drivers/net/wireless/ath/ath9k/common.c:46 ath9k_cmn_rx_accept() error: passing untrusted data 'rx_stats->rs_keyix' to 'test_bit()' Fixes: 4ed1a8d4a257 ("ath9k_htc: use ath9k_cmn_rx_accept") Signed-off-by: Dan Carpenter Acked-by: Toke Høiland-Jørgensen Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220409061225.GA5447@kili Signed-off-by: Sasha Levin --- drivers/net/wireless/ath/ath9k/htc_drv_txrx.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c index 0bdc4dcb7b8f..30ddf333e04d 100644 --- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c +++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c @@ -1006,6 +1006,14 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv, goto rx_next; } + if (rxstatus->rs_keyix >= ATH_KEYMAX && + rxstatus->rs_keyix != ATH9K_RXKEYIX_INVALID) { + ath_dbg(common, ANY, + "Invalid keyix, dropping (keyix: %d)\n", + rxstatus->rs_keyix); + goto rx_next; + } + /* Get the RX status information */ memset(rx_status, 0, sizeof(struct ieee80211_rx_status)); From b6b70cd3ddfab2005771f81eb6a6f638b44d0e10 Mon Sep 17 00:00:00 2001 From: Chen-Yu Tsai Date: Thu, 31 Mar 2022 09:49:06 +0100 Subject: [PATCH 0468/1842] media: hantro: Empty encoder capture buffers by default [ Upstream commit 309373a3571ef7175bd9da0c9b13476a718e8478 ] The payload size for encoder capture buffers is set by the driver upon finishing encoding each frame, based on the encoded length returned from hardware, and whatever header and padding length used. Setting a non-zero default serves no real purpose, and also causes issues if the capture buffer is returned to userspace unused, confusing the application. Instead, always set the payload size to 0 for encoder capture buffers when preparing them. 
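The helper involved takes the buffer, the plane index and the number of bytes of valid data, so the change reduces to the following (hdr_len and encoded_len stand in for whatever the driver computes after encoding):

        /* .buf_prepare: an encoder capture buffer carries no data yet */
        vb2_set_plane_payload(vb, 0, 0);

        /* when the hardware finishes, the real payload is reported instead */
        vb2_set_plane_payload(vb, 0, hdr_len + encoded_len);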
Fixes: 775fec69008d ("media: add Rockchip VPU JPEG encoder driver") Fixes: 082aaecff35f ("media: hantro: Fix .buf_prepare") Signed-off-by: Chen-Yu Tsai Reviewed-by: Ezequiel Garcia Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/staging/media/hantro/hantro_v4l2.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c index 5c2ca61add8e..b65f2f3ef357 100644 --- a/drivers/staging/media/hantro/hantro_v4l2.c +++ b/drivers/staging/media/hantro/hantro_v4l2.c @@ -644,8 +644,12 @@ static int hantro_buf_prepare(struct vb2_buffer *vb) * (for OUTPUT buffers, if userspace passes 0 bytesused, v4l2-core sets * it to buffer length). */ - if (V4L2_TYPE_IS_CAPTURE(vq->type)) - vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage); + if (V4L2_TYPE_IS_CAPTURE(vq->type)) { + if (ctx->is_encoder) + vb2_set_plane_payload(vb, 0, 0); + else + vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage); + } return 0; } From d764a7d647f7ae74dd66693bda5357ac75e42eb9 Mon Sep 17 00:00:00 2001 From: Marek Vasut Date: Wed, 6 Apr 2022 11:36:27 +0200 Subject: [PATCH 0469/1842] drm/panel: simple: Add missing bus flags for Innolux G070Y2-L01 [ Upstream commit 0f73a559f916b618c0c05186bd644c90cc9e9695 ] The DE signal is active high on this display, fill in the missing bus_flags. This aligns panel_desc with its display_timing . Fixes: a5d2ade627dca ("drm/panel: simple: Add support for Innolux G070Y2-L01") Signed-off-by: Marek Vasut Cc: Christoph Fritz Cc: Laurent Pinchart Cc: Maxime Ripard Cc: Sam Ravnborg Cc: Thomas Zimmermann Acked-by: Sam Ravnborg Link: https://patchwork.freedesktop.org/patch/msgid/20220406093627.18011-1-marex@denx.de Signed-off-by: Sasha Levin --- drivers/gpu/drm/panel/panel-simple.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c index 959dcbd8a29c..18850439a2ab 100644 --- a/drivers/gpu/drm/panel/panel-simple.c +++ b/drivers/gpu/drm/panel/panel-simple.c @@ -2144,6 +2144,7 @@ static const struct panel_desc innolux_g070y2_l01 = { .unprepare = 800, }, .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, + .bus_flags = DRM_BUS_FLAG_DE_HIGH, .connector_type = DRM_MODE_CONNECTOR_LVDS, }; From f2c68c52898f623fe84518da4606538d193b0cca Mon Sep 17 00:00:00 2001 From: Colin Ian King Date: Sun, 24 Apr 2022 21:59:45 +0100 Subject: [PATCH 0470/1842] ALSA: pcm: Check for null pointer of pointer substream before dereferencing it [ Upstream commit 011b559be832194f992f73d6c0d5485f5925a10b ] Pointer substream is being dereferenced on the assignment of pointer card before substream is being null checked with the macro PCM_RUNTIME_CHECK. Although PCM_RUNTIME_CHECK calls BUG_ON, it still is useful to perform the the pointer check before card is assigned. 
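For context, a minimal sketch of the ordering the change establishes; the function and helper names are illustrative, not the exact sound-core code:

	static int example_free_pages(struct snd_pcm_substream *substream)
	{
		struct snd_pcm_runtime *runtime;

		if (PCM_RUNTIME_CHECK(substream))	/* reject a bad pointer before any use */
			return -EINVAL;

		runtime = substream->runtime;		/* safe: substream validated above */
		if (runtime->dma_area == NULL)
			return 0;

		/* look up the card only on the path that actually needs it */
		return example_release(substream->pcm->card, runtime);
	}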
Fixes: d4cfb30fce03 ("ALSA: pcm: Set per-card upper limit of PCM buffer allocations") Signed-off-by: Colin Ian King Link: https://lore.kernel.org/r/20220424205945.1372247-1-colin.i.king@gmail.com Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin --- sound/core/pcm_memory.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sound/core/pcm_memory.c b/sound/core/pcm_memory.c index a9a0d74f3165..191883842a35 100644 --- a/sound/core/pcm_memory.c +++ b/sound/core/pcm_memory.c @@ -434,7 +434,6 @@ EXPORT_SYMBOL(snd_pcm_lib_malloc_pages); */ int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream) { - struct snd_card *card = substream->pcm->card; struct snd_pcm_runtime *runtime; if (PCM_RUNTIME_CHECK(substream)) @@ -443,6 +442,8 @@ int snd_pcm_lib_free_pages(struct snd_pcm_substream *substream) if (runtime->dma_area == NULL) return 0; if (runtime->dma_buffer_p != &substream->dma_buffer) { + struct snd_card *card = substream->pcm->card; + /* it's a newly allocated buffer. release it now. */ do_free_pages(card, runtime->dma_buffer_p); kfree(runtime->dma_buffer_p); From 94845fc422f95b2c283f7f2995a847f034b2e13d Mon Sep 17 00:00:00 2001 From: Amir Goldstein Date: Fri, 22 Apr 2022 15:03:12 +0300 Subject: [PATCH 0471/1842] inotify: show inotify mask flags in proc fdinfo [ Upstream commit a32e697cda27679a0327ae2cafdad8c7170f548f ] The inotify mask flags IN_ONESHOT and IN_EXCL_UNLINK are not "internal to kernel" and should be exposed in procfs fdinfo so CRIU can restore them. Fixes: 6933599697c9 ("inotify: hide internal kernel bits from fdinfo") Link: https://lore.kernel.org/r/20220422120327.3459282-2-amir73il@gmail.com Signed-off-by: Amir Goldstein Signed-off-by: Jan Kara Signed-off-by: Sasha Levin --- fs/notify/fdinfo.c | 11 ++--------- fs/notify/inotify/inotify.h | 12 ++++++++++++ fs/notify/inotify/inotify_user.c | 2 +- 3 files changed, 15 insertions(+), 10 deletions(-) diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c index f0d6b54be412..765b50aeadd2 100644 --- a/fs/notify/fdinfo.c +++ b/fs/notify/fdinfo.c @@ -83,16 +83,9 @@ static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark) inode_mark = container_of(mark, struct inotify_inode_mark, fsn_mark); inode = igrab(fsnotify_conn_inode(mark->connector)); if (inode) { - /* - * IN_ALL_EVENTS represents all of the mask bits - * that we expose to userspace. There is at - * least one bit (FS_EVENT_ON_CHILD) which is - * used only internally to the kernel. - */ - u32 mask = mark->mask & IN_ALL_EVENTS; - seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:%x ", + seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:0 ", inode_mark->wd, inode->i_ino, inode->i_sb->s_dev, - mask, mark->ignored_mask); + inotify_mark_user_mask(mark)); show_mark_fhandle(m, inode); seq_putc(m, '\n'); iput(inode); diff --git a/fs/notify/inotify/inotify.h b/fs/notify/inotify/inotify.h index 2007e3711916..8f00151eb731 100644 --- a/fs/notify/inotify/inotify.h +++ b/fs/notify/inotify/inotify.h @@ -22,6 +22,18 @@ static inline struct inotify_event_info *INOTIFY_E(struct fsnotify_event *fse) return container_of(fse, struct inotify_event_info, fse); } +/* + * INOTIFY_USER_FLAGS represents all of the mask bits that we expose to + * userspace. There is at least one bit (FS_EVENT_ON_CHILD) which is + * used only internally to the kernel. 
+ */ +#define INOTIFY_USER_MASK (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK) + +static inline __u32 inotify_mark_user_mask(struct fsnotify_mark *fsn_mark) +{ + return fsn_mark->mask & INOTIFY_USER_MASK; +} + extern void inotify_ignored_and_remove_idr(struct fsnotify_mark *fsn_mark, struct fsnotify_group *group); extern int inotify_handle_inode_event(struct fsnotify_mark *inode_mark, diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c index 5f6c6bf65909..32b6b97021be 100644 --- a/fs/notify/inotify/inotify_user.c +++ b/fs/notify/inotify/inotify_user.c @@ -88,7 +88,7 @@ static inline __u32 inotify_arg_to_mask(struct inode *inode, u32 arg) mask |= FS_EVENT_ON_CHILD; /* mask off the flags used to open the fd */ - mask |= (arg & (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK)); + mask |= (arg & INOTIFY_USER_MASK); return mask; } From f929416d5c9c9908659cde74a3e29e84c8ffe418 Mon Sep 17 00:00:00 2001 From: Amir Goldstein Date: Fri, 22 Apr 2022 15:03:14 +0300 Subject: [PATCH 0472/1842] fsnotify: fix wrong lockdep annotations [ Upstream commit 623af4f538b5df9b416e1b82f720af7371b4c771 ] Commit 6960b0d909cd ("fsnotify: change locking order") changed some of the mark_mutex locks in direct reclaim path to use: mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING); This change is explained: "...It uses nested locking to avoid deadlock in case we do the final iput() on an inode which still holds marks and thus would take the mutex again when calling fsnotify_inode_delete() in destroy_inode()." The problem is that the mutex_lock_nested() is not a nested lock at all. In fact, it has the opposite effect of preventing lockdep from warning about a very possible deadlock. Due to these wrong annotations, a deadlock that was introduced with nfsd filecache in kernel v5.4 went unnoticed in v5.4.y for over two years until it was reported recently by Khazhismel Kumykov, only to find out that the deadlock was already fixed in kernel v5.5. Fix the wrong lockdep annotations. Cc: Khazhismel Kumykov Fixes: 6960b0d909cd ("fsnotify: change locking order") Link: https://lore.kernel.org/r/20220321112310.vpr7oxro2xkz5llh@quack3.lan/ Link: https://lore.kernel.org/r/20220422120327.3459282-4-amir73il@gmail.com Signed-off-by: Amir Goldstein Signed-off-by: Jan Kara Signed-off-by: Sasha Levin --- fs/notify/mark.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/fs/notify/mark.c b/fs/notify/mark.c index 8387937b9d01..5b44be5f93dd 100644 --- a/fs/notify/mark.c +++ b/fs/notify/mark.c @@ -430,7 +430,7 @@ void fsnotify_free_mark(struct fsnotify_mark *mark) void fsnotify_destroy_mark(struct fsnotify_mark *mark, struct fsnotify_group *group) { - mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING); + mutex_lock(&group->mark_mutex); fsnotify_detach_mark(mark); mutex_unlock(&group->mark_mutex); fsnotify_free_mark(mark); @@ -742,7 +742,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group, * move marks to free to to_free list in one go and then free marks in * to_free list one by one. 
*/ - mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING); + mutex_lock(&group->mark_mutex); list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list) { if ((1U << mark->connector->type) & type_mask) list_move(&mark->g_list, &to_free); @@ -751,7 +751,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group, clear: while (1) { - mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING); + mutex_lock(&group->mark_mutex); if (list_empty(head)) { mutex_unlock(&group->mark_mutex); break; From cc68e53f9a7f1044bc2824987a6e45bcb2f97513 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Nuno=20S=C3=A1?= Date: Wed, 20 Apr 2022 15:02:05 +0200 Subject: [PATCH 0473/1842] of: overlay: do not break notify on NOTIFY_{OK|STOP} MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 5f756a2eaa4436d7d3dc1e040147f5e992ae34b5 ] We should not break overlay notifications on NOTIFY_{OK|STOP} otherwise we might break on the first fragment. We should only stop notifications if a *real* errno is returned by one of the listeners. Fixes: a1d19bd4cf1fe ("of: overlay: pr_err from return NOTIFY_OK to overlay apply/remove") Signed-off-by: Nuno Sá Signed-off-by: Rob Herring Link: https://lore.kernel.org/r/20220420130205.89435-1-nuno.sa@analog.com Signed-off-by: Sasha Levin --- drivers/of/overlay.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c index 43a77d720008..c8a0c0e9dec1 100644 --- a/drivers/of/overlay.c +++ b/drivers/of/overlay.c @@ -170,9 +170,7 @@ static int overlay_notify(struct overlay_changeset *ovcs, ret = blocking_notifier_call_chain(&overlay_notify_chain, action, &nd); - if (ret == NOTIFY_OK || ret == NOTIFY_STOP) - return 0; - if (ret) { + if (notifier_to_errno(ret)) { ret = notifier_to_errno(ret); pr_err("overlay changeset %s notifier error %d, target: %pOF\n", of_overlay_action_name[action], ret, nd.target); From 328cfeac735c02e7b0a346eb885e3bfad2b973dd Mon Sep 17 00:00:00 2001 From: Kuogee Hsieh Date: Fri, 25 Feb 2022 13:23:09 -0800 Subject: [PATCH 0474/1842] drm/msm/dpu: adjust display_v_end for eDP and DP MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit e18aeea7f5efb9508722c8c7fd4d32e6f8cdfe50 ] The “DP timing” requires the active region to be defined in the bottom-right corner of the frame dimensions which is different with DSI. Therefore both display_h_end and display_v_end need to be adjusted accordingly. However current implementation has only display_h_end adjusted. 
Signed-off-by: Kuogee Hsieh Fixes: fc3a69ec68d3 ("drm/msm/dpu: intf timing path for displayport") Reviewed-by: Dmitry Baryshkov Reviewed-by: Stephen Boyd Patchwork: https://patchwork.freedesktop.org/patch/476277/ Link: https://lore.kernel.org/r/1645824192-29670-2-git-send-email-quic_khsieh@quicinc.com Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c index 6f0f54588124..108882bbd2b8 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c @@ -146,6 +146,7 @@ static void dpu_hw_intf_setup_timing_engine(struct dpu_hw_intf *ctx, active_v_end = active_v_start + (p->yres * hsync_period) - 1; display_v_start += p->hsync_pulse_width + p->h_back_porch; + display_v_end -= p->h_front_porch; active_hctl = (active_h_end << 16) | active_h_start; display_hctl = active_hctl; From 02192ee93684e2989ec359821bc91850a3c46d5e Mon Sep 17 00:00:00 2001 From: Bart Van Assche Date: Tue, 19 Apr 2022 15:58:05 -0700 Subject: [PATCH 0475/1842] scsi: ufs: qcom: Fix ufs_qcom_resume() [ Upstream commit bee40dc167da159ea5b939c074e1da258610a3d6 ] Clearing hba->is_sys_suspended if ufs_qcom_resume() succeeds is wrong. That variable must only be cleared if all actions involved in a resume succeed. Hence remove the statement that clears hba->is_sys_suspended from ufs_qcom_resume(). Link: https://lore.kernel.org/r/20220419225811.4127248-23-bvanassche@acm.org Fixes: 81c0fc51b7a7 ("ufs-qcom: add support for Qualcomm Technologies Inc platforms") Tested-by: Bean Huo Reviewed-by: Bjorn Andersson Reviewed-by: Bean Huo Signed-off-by: Bart Van Assche Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin --- drivers/scsi/ufs/ufs-qcom.c | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-) diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c index 20182e39cb28..117740b302fa 100644 --- a/drivers/scsi/ufs/ufs-qcom.c +++ b/drivers/scsi/ufs/ufs-qcom.c @@ -623,12 +623,7 @@ static int ufs_qcom_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op) return err; } - err = ufs_qcom_ice_resume(host); - if (err) - return err; - - hba->is_sys_suspended = false; - return 0; + return ufs_qcom_ice_resume(host); } static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable) From c54d66c5147565f9958e0907c4fab7a3c51e5420 Mon Sep 17 00:00:00 2001 From: Kiwoong Kim Date: Thu, 31 Mar 2022 10:24:05 +0900 Subject: [PATCH 0476/1842] scsi: ufs: core: Exclude UECxx from SFR dump list [ Upstream commit ef60031022eb6d972aac86ca26c98c33e1289436 ] Some devices may return invalid or zeroed data during an UIC error condition. In addition, reading these SFRs will clear them. This means the subsequent error handling will not be able to see them and therefore no error handling will be scheduled. Skip reading these SFRs in ufshcd_dump_regs(). Link: https://lore.kernel.org/r/1648689845-33521-1-git-send-email-kwmad.kim@samsung.com Fixes: d67247566450 ("scsi: ufs: Use explicit access size in ufshcd_dump_regs") Signed-off-by: Kiwoong Kim Signed-off-by: Martin K. 
Petersen Signed-off-by: Sasha Levin --- drivers/scsi/ufs/ufshcd.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index bf302776340c..ea6ceab1a1b2 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -107,8 +107,13 @@ int ufshcd_dump_regs(struct ufs_hba *hba, size_t offset, size_t len, if (!regs) return -ENOMEM; - for (pos = 0; pos < len; pos += 4) + for (pos = 0; pos < len; pos += 4) { + if (offset == 0 && + pos >= REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER && + pos <= REG_UIC_ERROR_CODE_DME) + continue; regs[pos / 4] = ufshcd_readl(hba, offset + pos); + } ufshcd_hex_dump(prefix, regs, len); kfree(regs); From 97b56f17b355ddeb6558b2ba2ad8a40704b229e9 Mon Sep 17 00:00:00 2001 From: Colin Ian King Date: Tue, 26 Apr 2022 13:25:31 +0100 Subject: [PATCH 0477/1842] selftests/resctrl: Fix null pointer dereference on open failed [ Upstream commit c7b607fa9325ccc94982774c505176677117689c ] Currently if opening /dev/null fails to open then file pointer fp is null and further access to fp via fprintf will cause a null pointer dereference. Fix this by returning a negative error value when a null fp is detected. Detected using cppcheck static analysis: tools/testing/selftests/resctrl/fill_buf.c:124:6: note: Assuming that condition '!fp' is not redundant if (!fp) ^ tools/testing/selftests/resctrl/fill_buf.c:126:10: note: Null pointer dereference fprintf(fp, "Sum: %d ", ret); Fixes: a2561b12fe39 ("selftests/resctrl: Add built in benchmark") Signed-off-by: Colin Ian King Signed-off-by: Shuah Khan Signed-off-by: Sasha Levin --- tools/testing/selftests/resctrl/fill_buf.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c index 51e5cf22632f..56ccbeae0638 100644 --- a/tools/testing/selftests/resctrl/fill_buf.c +++ b/tools/testing/selftests/resctrl/fill_buf.c @@ -121,8 +121,10 @@ static int fill_cache_read(unsigned char *start_ptr, unsigned char *end_ptr, /* Consume read result so that reading memory is not optimized out. */ fp = fopen("/dev/null", "w"); - if (!fp) + if (!fp) { perror("Unable to write to /dev/null"); + return -1; + } fprintf(fp, "Sum: %d ", ret); fclose(fp); From ab88c8d906c6b7cfa2b2c6de80423cffc288f566 Mon Sep 17 00:00:00 2001 From: Andrii Nakryiko Date: Mon, 25 Apr 2022 17:45:04 -0700 Subject: [PATCH 0478/1842] libbpf: Fix logic for finding matching program for CO-RE relocation [ Upstream commit 966a7509325395c51c5f6d89e7352b0585e4804b ] Fix the bug in bpf_object__relocate_core() which can lead to finding invalid matching BPF program when processing CO-RE relocation. IF matching program is not found, last encountered program will be assumed to be correct program and thus error detection won't detect the problem. 
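For context, a minimal sketch of the lookup pattern the fix adopts, echoing the hunk in the diff below: the candidate pointer is assigned only inside a confirmed match, so a missing program is reported instead of silently falling back to the last entry (the error handling shown is illustrative):

	struct bpf_program *prog = NULL;
	size_t i;

	for (i = 0; i < obj->nr_programs; i++) {
		if (strcmp(obj->programs[i].sec_name, sec_name) == 0) {
			prog = &obj->programs[i];	/* assign only on a real match */
			break;
		}
	}
	if (!prog) {
		pr_warn("sec '%s': failed to find a BPF program\n", sec_name);
		return -ENOENT;
	}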
Fixes: 9c82a63cf370 ("libbpf: Fix CO-RE relocs against .text section") Signed-off-by: Andrii Nakryiko Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20220426004511.2691730-4-andrii@kernel.org Signed-off-by: Sasha Levin --- tools/lib/bpf/libbpf.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index dda8f9cdc652..8fada26529b7 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -5928,9 +5928,10 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path) */ prog = NULL; for (i = 0; i < obj->nr_programs; i++) { - prog = &obj->programs[i]; - if (strcmp(prog->sec_name, sec_name) == 0) + if (strcmp(obj->programs[i].sec_name, sec_name) == 0) { + prog = &obj->programs[i]; break; + } } if (!prog) { pr_warn("sec '%s': failed to find a BPF program\n", sec_name); From 0e1cd4edefc8baeaea51aa39f85645516263eca9 Mon Sep 17 00:00:00 2001 From: Chen-Tsung Hsieh Date: Wed, 26 Jan 2022 15:32:26 +0800 Subject: [PATCH 0479/1842] mtd: spi-nor: core: Check written SR value in spi_nor_write_16bit_sr_and_check() [ Upstream commit 70dd83d737d8900b2d98db6dc6b928c596334d37 ] Read back Status Register 1 to ensure that the written byte match the received value and return -EIO if read back test failed. Without this patch, spi_nor_write_16bit_sr_and_check() only check the second half of the 16bit. It causes errors like spi_nor_sr_unlock() return success incorrectly when spi_nor_write_16bit_sr_and_check() doesn't write SR successfully. Fixes: 39d1e3340c73 ("mtd: spi-nor: Fix clearing of QE bit on lock()/unlock()") Signed-off-by: Chen-Tsung Hsieh Signed-off-by: Pratyush Yadav Reviewed-by: Michael Walle Reviewed-by: Tudor Ambarus Acked-by: Pratyush Yadav Link: https://lore.kernel.org/r/20220126073227.3401275-1-chentsung@chromium.org Signed-off-by: Sasha Levin --- drivers/mtd/spi-nor/core.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c index 2b26a875a855..e8146a47da12 100644 --- a/drivers/mtd/spi-nor/core.c +++ b/drivers/mtd/spi-nor/core.c @@ -827,6 +827,15 @@ static int spi_nor_write_16bit_sr_and_check(struct spi_nor *nor, u8 sr1) if (ret) return ret; + ret = spi_nor_read_sr(nor, sr_cr); + if (ret) + return ret; + + if (sr1 != sr_cr[0]) { + dev_dbg(nor->dev, "SR: Read back test failed\n"); + return -EIO; + } + if (nor->flags & SNOR_F_NO_READ_CR) return 0; From b6ecf2b7e691eed248290e3dfec22d3093c44e0c Mon Sep 17 00:00:00 2001 From: Matthieu Baerts Date: Sat, 23 Apr 2022 20:24:10 +0200 Subject: [PATCH 0480/1842] x86/pm: Fix false positive kmemleak report in msr_build_context() [ Upstream commit b0b592cf08367719e1d1ef07c9f136e8c17f7ec3 ] Since e2a1256b17b1 ("x86/speculation: Restore speculation related MSRs during S3 resume") kmemleak reports this issue: unreferenced object 0xffff888009cedc00 (size 256): comm "swapper/0", pid 1, jiffies 4294693823 (age 73.764s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 48 00 00 00 00 00 00 00 ........H....... 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 
backtrace: msr_build_context (include/linux/slab.h:621) pm_check_save_msr (arch/x86/power/cpu.c:520) do_one_initcall (init/main.c:1298) kernel_init_freeable (init/main.c:1370) kernel_init (init/main.c:1504) ret_from_fork (arch/x86/entry/entry_64.S:304) Reproducer: - boot the VM with a debug kernel config (see https://github.com/multipath-tcp/mptcp_net-next/issues/268) - wait ~1 minute - start a kmemleak scan The root cause here is alignment within the packed struct saved_context (from suspend_64.h). Kmemleak only searches for pointers that are aligned (see how pointers are scanned in kmemleak.c), but pahole shows that the saved_msrs struct member and all members after it in the structure are unaligned: struct saved_context { struct pt_regs regs; /* 0 168 */ /* --- cacheline 2 boundary (128 bytes) was 40 bytes ago --- */ u16 ds; /* 168 2 */ ... u64 misc_enable; /* 232 8 */ bool misc_enable_saved; /* 240 1 */ /* Note below odd offset values for the remainder of this struct */ struct saved_msrs saved_msrs; /* 241 16 */ /* --- cacheline 4 boundary (256 bytes) was 1 bytes ago --- */ long unsigned int efer; /* 257 8 */ u16 gdt_pad; /* 265 2 */ struct desc_ptr gdt_desc; /* 267 10 */ u16 idt_pad; /* 277 2 */ struct desc_ptr idt; /* 279 10 */ u16 ldt; /* 289 2 */ u16 tss; /* 291 2 */ long unsigned int tr; /* 293 8 */ long unsigned int safety; /* 301 8 */ long unsigned int return_address; /* 309 8 */ /* size: 317, cachelines: 5, members: 25 */ /* last cacheline: 61 bytes */ } __attribute__((__packed__)); Move misc_enable_saved to the end of the struct declaration so that saved_msrs fits in before the cacheline 4 boundary. The comment above the saved_context declaration says to fix wakeup_64.S file and __save/__restore_processor_state() if the struct is modified: it looks like all the accesses in wakeup_64.S are done through offsets which are computed at build-time. Update that comment accordingly. At the end, the false positive kmemleak report is due to a limitation from kmemleak but it is always good to avoid unaligned members for optimisation purposes. Please note that it looks like this issue is not new, e.g. https://lore.kernel.org/all/9f1bb619-c4ee-21c4-a251-870bd4db04fa@lwfinger.net/ https://lore.kernel.org/all/94e48fcd-1dbd-ebd2-4c91-f39941735909@molgen.mpg.de/ [ bp: Massage + cleanup commit message. ] Fixes: 7a9c2dd08ead ("x86/pm: Introduce quirk framework to save/restore extra MSR registers around suspend/resume") Suggested-by: Mat Martineau Signed-off-by: Matthieu Baerts Signed-off-by: Borislav Petkov Reviewed-by: Rafael J. 
Wysocki Link: https://lore.kernel.org/r/20220426202138.498310-1-matthieu.baerts@tessares.net Signed-off-by: Sasha Levin --- arch/x86/include/asm/suspend_32.h | 2 +- arch/x86/include/asm/suspend_64.h | 12 ++++++++---- 2 files changed, 9 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/suspend_32.h b/arch/x86/include/asm/suspend_32.h index fdbd9d7b7bca..3b97aa921543 100644 --- a/arch/x86/include/asm/suspend_32.h +++ b/arch/x86/include/asm/suspend_32.h @@ -21,7 +21,6 @@ struct saved_context { #endif unsigned long cr0, cr2, cr3, cr4; u64 misc_enable; - bool misc_enable_saved; struct saved_msrs saved_msrs; struct desc_ptr gdt_desc; struct desc_ptr idt; @@ -30,6 +29,7 @@ struct saved_context { unsigned long tr; unsigned long safety; unsigned long return_address; + bool misc_enable_saved; } __attribute__((packed)); /* routines for saving/restoring kernel state */ diff --git a/arch/x86/include/asm/suspend_64.h b/arch/x86/include/asm/suspend_64.h index 35bb35d28733..54df06687d83 100644 --- a/arch/x86/include/asm/suspend_64.h +++ b/arch/x86/include/asm/suspend_64.h @@ -14,9 +14,13 @@ * Image of the saved processor state, used by the low level ACPI suspend to * RAM code and by the low level hibernation code. * - * If you modify it, fix arch/x86/kernel/acpi/wakeup_64.S and make sure that - * __save/__restore_processor_state(), defined in arch/x86/kernel/suspend_64.c, - * still work as required. + * If you modify it, check how it is used in arch/x86/kernel/acpi/wakeup_64.S + * and make sure that __save/__restore_processor_state(), defined in + * arch/x86/power/cpu.c, still work as required. + * + * Because the structure is packed, make sure to avoid unaligned members. For + * optimisation purposes but also because tools like kmemleak only search for + * pointers that are aligned. */ struct saved_context { struct pt_regs regs; @@ -36,7 +40,6 @@ struct saved_context { unsigned long cr0, cr2, cr3, cr4; u64 misc_enable; - bool misc_enable_saved; struct saved_msrs saved_msrs; unsigned long efer; u16 gdt_pad; /* Unused */ @@ -48,6 +51,7 @@ struct saved_context { unsigned long tr; unsigned long safety; unsigned long return_address; + bool misc_enable_saved; } __attribute__((packed)); #define loaddebug(thread,register) \ From 81f1ddffdc22ca5789e33b9d4712914e302090c1 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Tue, 26 Apr 2022 16:49:11 +0800 Subject: [PATCH 0481/1842] mtd: rawnand: cadence: fix possible null-ptr-deref in cadence_nand_dt_probe() [ Upstream commit a28ed09dafee20da51eb26452950839633afd824 ] It will cause null-ptr-deref when using 'res', if platform_get_resource() returns NULL, so move using 'res' after devm_ioremap_resource() that will check it to avoid null-ptr-deref. And use devm_platform_get_and_ioremap_resource() to simplify code. 
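For context, a minimal sketch of how the combined helper is typically used in a probe path (the driver names are hypothetical): it fetches the resource and maps it in one call, returning an ERR_PTR on failure, so the resource pointer is only dereferenced after that check:

	static int example_probe(struct platform_device *pdev)
	{
		struct resource *res;
		void __iomem *base;

		base = devm_platform_get_and_ioremap_resource(pdev, 1, &res);
		if (IS_ERR(base))
			return PTR_ERR(base);	/* covers both "missing" and "unmappable" */

		/* res is known to be valid here, so res->start is safe to read */
		dev_info(&pdev->dev, "mapped region at %pa\n", &res->start);
		return 0;
	}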
Fixes: ec4ba01e894d ("mtd: rawnand: Add new Cadence NAND driver to MTD subsystem") Signed-off-by: Yang Yingliang Signed-off-by: Miquel Raynal Link: https://lore.kernel.org/linux-mtd/20220426084913.4021868-1-yangyingliang@huawei.com Signed-off-by: Sasha Levin --- drivers/mtd/nand/raw/cadence-nand-controller.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c index b46786cd53e0..4fdb39214a12 100644 --- a/drivers/mtd/nand/raw/cadence-nand-controller.c +++ b/drivers/mtd/nand/raw/cadence-nand-controller.c @@ -2983,11 +2983,10 @@ static int cadence_nand_dt_probe(struct platform_device *ofdev) if (IS_ERR(cdns_ctrl->reg)) return PTR_ERR(cdns_ctrl->reg); - res = platform_get_resource(ofdev, IORESOURCE_MEM, 1); - cdns_ctrl->io.dma = res->start; - cdns_ctrl->io.virt = devm_ioremap_resource(&ofdev->dev, res); + cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res); if (IS_ERR(cdns_ctrl->io.virt)) return PTR_ERR(cdns_ctrl->io.virt); + cdns_ctrl->io.dma = res->start; dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk"); if (IS_ERR(dt->clk)) From e2fef34d78065cdf0f8f6a0c306a767e062968eb Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Mon, 25 Apr 2022 16:40:02 -0700 Subject: [PATCH 0482/1842] x86/speculation: Add missing prototype for unpriv_ebpf_notify() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 2147c438fde135d6c145a96e373d9348e7076f7f ] Fix the following warnings seen with "make W=1": kernel/sysctl.c:183:13: warning: no previous prototype for ‘unpriv_ebpf_notify’ [-Wmissing-prototypes] 183 | void __weak unpriv_ebpf_notify(int new_state) | ^~~~~~~~~~~~~~~~~~ arch/x86/kernel/cpu/bugs.c:659:6: warning: no previous prototype for ‘unpriv_ebpf_notify’ [-Wmissing-prototypes] 659 | void unpriv_ebpf_notify(int new_state) | ^~~~~~~~~~~~~~~~~~ Fixes: 44a3918c8245 ("x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting") Reported-by: kernel test robot Signed-off-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/5689d065f739602ececaee1e05e68b8644009608.1650930000.git.jpoimboe@redhat.com Signed-off-by: Sasha Levin --- include/linux/bpf.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index ea3ff499e94a..f21bc441e3fa 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1730,6 +1730,8 @@ void bpf_offload_dev_netdev_unregister(struct bpf_offload_dev *offdev, struct net_device *netdev); bool bpf_offload_dev_match(struct bpf_prog *prog, struct net_device *netdev); +void unpriv_ebpf_notify(int new_state); + #if defined(CONFIG_NET) && defined(CONFIG_BPF_SYSCALL) int bpf_prog_offload_init(struct bpf_prog *prog, union bpf_attr *attr); From e251a33fe879835c372a0e4c3e4064d36df331b2 Mon Sep 17 00:00:00 2001 From: Nicolas Frattaroli Date: Wed, 27 Apr 2022 19:23:11 +0200 Subject: [PATCH 0483/1842] ASoC: rk3328: fix disabling mclk on pclk probe failure [ Upstream commit dd508e324cdde1c06ace08a8143fa50333a90703 ] If preparing/enabling the pclk fails, the probe function should unprepare and disable the previously prepared and enabled mclk, which it doesn't do. This commit rectifies this. 
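For context, a minimal, abridged sketch of the unwind pattern involved (the surrounding probe code is omitted): everything enabled before the failing step is released again, in reverse order, on the error path:

	ret = clk_prepare_enable(rk3328->mclk);
	if (ret < 0)
		return ret;

	ret = clk_prepare_enable(rk3328->pclk);
	if (ret < 0) {
		dev_err(&pdev->dev, "failed to enable acodec pclk\n");
		goto err_unprepare_mclk;	/* undo the mclk enable done above */
	}

	/* remaining probe steps ... */
	return 0;

err_unprepare_mclk:
	clk_disable_unprepare(rk3328->mclk);
	return ret;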
Fixes: c32759035ad2 ("ASoC: rockchip: support ACODEC for rk3328") Signed-off-by: Nicolas Frattaroli Reviewed-by: Katsuhiro Suzuki Link: https://lore.kernel.org/r/20220427172310.138638-1-frattaroli.nicolas@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/rk3328_codec.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sound/soc/codecs/rk3328_codec.c b/sound/soc/codecs/rk3328_codec.c index aed18cbb9f68..e40b97929f6c 100644 --- a/sound/soc/codecs/rk3328_codec.c +++ b/sound/soc/codecs/rk3328_codec.c @@ -481,7 +481,7 @@ static int rk3328_platform_probe(struct platform_device *pdev) ret = clk_prepare_enable(rk3328->pclk); if (ret < 0) { dev_err(&pdev->dev, "failed to enable acodec pclk\n"); - return ret; + goto err_unprepare_mclk; } base = devm_platform_ioremap_resource(pdev, 0); From d5773db56ce963fd1aae62ea74a4e0a49effb595 Mon Sep 17 00:00:00 2001 From: Yang Jihong Date: Fri, 29 Apr 2022 17:05:39 +0800 Subject: [PATCH 0484/1842] perf tools: Add missing headers needed by util/data.h MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 4d27cf1d9de5becfa4d1efb2ea54dba1b9fc962a ] 'struct perf_data' in util/data.h uses the "u64" data type, which is defined in "linux/types.h". If we only include util/data.h, the following compilation error occurs: util/data.h:38:3: error: unknown type name ‘u64’ u64 version; ^~~ Solution: include "linux/types.h." to add the needed type definitions. Fixes: 258031c017c353e8 ("perf header: Add DIR_FORMAT feature to describe directory data") Signed-off-by: Yang Jihong Cc: Adrian Hunter Cc: Alexander Shishkin Cc: Andi Kleen Cc: Ingo Molnar Cc: Jiri Olsa Cc: Mark Rutland Cc: Namhyung Kim Cc: Peter Zijlstra Link: https://lore.kernel.org/r/20220429090539.212448-1-yangjihong1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin --- tools/perf/util/data.h | 1 + 1 file changed, 1 insertion(+) diff --git a/tools/perf/util/data.h b/tools/perf/util/data.h index 75947ef6bc17..5b52ffedf0d5 100644 --- a/tools/perf/util/data.h +++ b/tools/perf/util/data.h @@ -3,6 +3,7 @@ #define __PERF_DATA_H #include +#include enum perf_data_mode { PERF_DATA_MODE_WRITE, From 134760263f6441741db0b2970e7face6b34b6d1c Mon Sep 17 00:00:00 2001 From: Vinod Polimera Date: Mon, 25 Apr 2022 08:56:53 +0530 Subject: [PATCH 0485/1842] drm/msm/disp/dpu1: set vbif hw config to NULL to avoid use after memory free during pm runtime resume [ Upstream commit fa5186b279ecf44b14fb435540d2065be91cb1ed ] BUG: Unable to handle kernel paging request at virtual address 006b6b6b6b6b6be3 Call trace: dpu_vbif_init_memtypes+0x40/0xb8 dpu_runtime_resume+0xcc/0x1c0 pm_generic_runtime_resume+0x30/0x44 __genpd_runtime_resume+0x68/0x7c genpd_runtime_resume+0x134/0x258 __rpm_callback+0x98/0x138 rpm_callback+0x30/0x88 rpm_resume+0x36c/0x49c __pm_runtime_resume+0x80/0xb0 dpu_core_irq_uninstall+0x30/0xb0 dpu_irq_uninstall+0x18/0x24 msm_drm_uninit+0xd8/0x16c Fixes: 25fdd5933e4c ("drm/msm: Add SDM845 DPU support") Signed-off-by: Vinod Polimera Reviewed-by: Dmitry Baryshkov Patchwork: https://patchwork.freedesktop.org/patch/483255/ Link: https://lore.kernel.org/r/1650857213-30075-1-git-send-email-quic_vpolimer@quicinc.com [DB: fixed Fixes tag] Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 
08e082d0443a..b05ff46d773d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -678,8 +678,10 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms) for (i = 0; i < dpu_kms->catalog->vbif_count; i++) { u32 vbif_idx = dpu_kms->catalog->vbif[i].id; - if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) + if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) { dpu_hw_vbif_destroy(dpu_kms->hw_vbif[vbif_idx]); + dpu_kms->hw_vbif[vbif_idx] = NULL; + } } } From 04204612dd87f0e5fce48b98d36619f6172960cd Mon Sep 17 00:00:00 2001 From: Kuogee Hsieh Date: Mon, 18 Apr 2022 14:56:28 -0700 Subject: [PATCH 0486/1842] drm/msm/dp: stop event kernel thread when DP unbind [ Upstream commit 570d3e5d28db7a94557fa179167a9fb8642fb8a1 ] Current DP driver implementation, event thread is kept running after DP display is unbind. This patch fix this problem by disabling DP irq and stop event thread to exit gracefully at dp_display_unbind(). Changes in v2: -- start event thread at dp_display_bind() Changes in v3: -- disable all HDP interrupts at unbind -- replace dp_hpd_event_setup() with dp_hpd_event_thread_start() -- replace dp_hpd_event_stop() with dp_hpd_event_thread_stop() -- move init_waitqueue_head(&dp->event_q) to probe() -- move spin_lock_init(&dp->event_lock) to probe() Changes in v4: -- relocate both dp_display_bind() and dp_display_unbind() to bottom of file Changes in v5: -- cancel relocation of both dp_display_bind() and dp_display_unbind() Changes in v6: -- move empty event q to dp_event_thread_start() Changes in v7: -- call ktheread_stop() directly instead of dp_hpd_event_thread_stop() function Changes in v8: -- return error immediately if audio registration failed. Changes in v9: -- return error immediately if event thread create failed. 
Changes in v10: -- delete extra DRM_ERROR("failed to create DP event thread\n"); Fixes: 8ede2ecc3e5e ("drm/msm/dp: Add DP compliance tests on Snapdragon Chipsets") Signed-off-by: Kuogee Hsieh Reported-by: Dmitry Baryshkov Reviewed-by: Stephen Boyd Patchwork: https://patchwork.freedesktop.org/patch/482399/ Link: https://lore.kernel.org/r/1650318988-17580-1-git-send-email-quic_khsieh@quicinc.com [DB: fixed Fixes tag] Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dp/dp_display.c | 39 +++++++++++++++++++++++------ 1 file changed, 31 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index 6cd6934c8c9f..36caf3d5a9f9 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -111,6 +111,7 @@ struct dp_display_private { u32 hpd_state; u32 event_pndx; u32 event_gndx; + struct task_struct *ev_tsk; struct dp_event event_list[DP_EVENT_Q_MAX]; spinlock_t event_lock; @@ -194,6 +195,8 @@ void dp_display_signal_audio_complete(struct msm_dp *dp_display) complete_all(&dp->audio_comp); } +static int dp_hpd_event_thread_start(struct dp_display_private *dp_priv); + static int dp_display_bind(struct device *dev, struct device *master, void *data) { @@ -234,9 +237,18 @@ static int dp_display_bind(struct device *dev, struct device *master, } rc = dp_register_audio_driver(dev, dp->audio); - if (rc) + if (rc) { DRM_ERROR("Audio registration Dp failed\n"); + goto end; + } + rc = dp_hpd_event_thread_start(dp); + if (rc) { + DRM_ERROR("Event thread create failed\n"); + goto end; + } + + return 0; end: return rc; } @@ -255,6 +267,11 @@ static void dp_display_unbind(struct device *dev, struct device *master, return; } + /* disable all HPD interrupts */ + dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false); + + kthread_stop(dp->ev_tsk); + dp_power_client_deinit(dp->power); dp_aux_unregister(dp->aux); priv->dp = NULL; @@ -981,7 +998,7 @@ static int hpd_event_thread(void *data) dp_priv = (struct dp_display_private *)data; - while (1) { + while (!kthread_should_stop()) { if (timeout_mode) { wait_event_timeout(dp_priv->event_q, (dp_priv->event_pndx == dp_priv->event_gndx), @@ -1062,12 +1079,17 @@ static int hpd_event_thread(void *data) return 0; } -static void dp_hpd_event_setup(struct dp_display_private *dp_priv) +static int dp_hpd_event_thread_start(struct dp_display_private *dp_priv) { - init_waitqueue_head(&dp_priv->event_q); - spin_lock_init(&dp_priv->event_lock); + /* set event q to empty */ + dp_priv->event_gndx = 0; + dp_priv->event_pndx = 0; - kthread_run(hpd_event_thread, dp_priv, "dp_hpd_handler"); + dp_priv->ev_tsk = kthread_run(hpd_event_thread, dp_priv, "dp_hpd_handler"); + if (IS_ERR(dp_priv->ev_tsk)) + return PTR_ERR(dp_priv->ev_tsk); + + return 0; } static irqreturn_t dp_display_irq_handler(int irq, void *dev_id) @@ -1167,8 +1189,11 @@ static int dp_display_probe(struct platform_device *pdev) return -EPROBE_DEFER; } + /* setup event q */ mutex_init(&dp->event_mutex); g_dp_display = &dp->dp_display; + init_waitqueue_head(&dp->event_q); + spin_lock_init(&dp->event_lock); /* Store DP audio handle inside DP display */ g_dp_display->dp_audio = dp->audio; @@ -1308,8 +1333,6 @@ void msm_dp_irq_postinstall(struct msm_dp *dp_display) dp = container_of(dp_display, struct dp_display_private, dp_display); - dp_hpd_event_setup(dp); - dp_add_event(dp, EV_HPD_INIT_SETUP, 0, 100); } From 3574e0b2904c73beea4e3187fda3dce9bd25af5b Mon Sep 17 00:00:00 2001 From: Lv Ruyi Date: 
Sun, 24 Apr 2022 03:24:18 +0000 Subject: [PATCH 0487/1842] drm/msm/dp: fix error check return value of irq_of_parse_and_map() [ Upstream commit e92d0d93f86699b7b25c7906613fdc374d66c8ca ] The irq_of_parse_and_map() function returns 0 on failure, and does not return an negative value. Fixes: 8ede2ecc3e5e ("drm/msm/dp: Add DP compliance tests on Snapdragon Chipsets") Reported-by: Zeal Robot Signed-off-by: Lv Ruyi Reviewed-by: Dmitry Baryshkov Patchwork: https://patchwork.freedesktop.org/patch/483176/ Link: https://lore.kernel.org/r/20220424032418.3173632-1-lv.ruyi@zte.com.cn Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dp/dp_display.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index 36caf3d5a9f9..09c8e50da68d 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -1147,10 +1147,9 @@ int dp_display_request_irq(struct msm_dp *dp_display) dp = container_of(dp_display, struct dp_display_private, dp_display); dp->irq = irq_of_parse_and_map(dp->pdev->dev.of_node, 0); - if (dp->irq < 0) { - rc = dp->irq; - DRM_ERROR("failed to get irq: %d\n", rc); - return rc; + if (!dp->irq) { + DRM_ERROR("failed to get irq\n"); + return -EINVAL; } rc = devm_request_irq(&dp->pdev->dev, dp->irq, From e99755e6a99293c7cb88a71ad2dc4e02eba6354b Mon Sep 17 00:00:00 2001 From: Dmitry Baryshkov Date: Sat, 2 Apr 2022 02:11:04 +0300 Subject: [PATCH 0488/1842] drm/msm/dsi: fix error checks and return values for DSI xmit functions [ Upstream commit f0e7e9ed379c012c4d6b09a09b868accc426223c ] As noticed by Dan ([1] an the followup thread) there are multiple issues with the return values for MSM DSI command transmission callback. In the error case it can easily return a positive value when it should have returned a proper error code. This commits attempts to fix these issues both in TX and in RX paths. 
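For context, a minimal sketch of the return-value convention being restored, with hypothetical helper names: a negative errno means failure, a non-negative value is the transferred length, and a short transfer is turned into an explicit error:

	len = example_build_cmd(buf, msg);
	if (len < 0)
		return len;		/* propagate the errno, never a bogus length */

	ret = example_send_cmd(buf, len);
	if (ret < 0)
		return ret;		/* transport failure */
	if (ret < len)
		return -EIO;		/* short transfer is also an error */

	return len;			/* success: report how much was sent */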
[1]: https://lore.kernel.org/linux-arm-msm/20211001123617.GH2283@kili/ Fixes: a689554ba6ed ("drm/msm: Initial add DSI connector support") Reported-by: Dan Carpenter Signed-off-by: Dmitry Baryshkov Reviewed-by: Abhinav Kumar Tested-by: Marijn Suijten Patchwork: https://patchwork.freedesktop.org/patch/480501/ Link: https://lore.kernel.org/r/20220401231104.967193-1-dmitry.baryshkov@linaro.org Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dsi/dsi_host.c | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index 64454a63bbac..51e8318cc8ff 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -1371,10 +1371,10 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host, dsi_get_bpp(msm_host->format) / 8; len = dsi_cmd_dma_add(msm_host, msg); - if (!len) { + if (len < 0) { pr_err("%s: failed to add cmd type = 0x%x\n", __func__, msg->type); - return -EINVAL; + return len; } /* for video mode, do not send cmds more than @@ -1393,10 +1393,14 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host, } ret = dsi_cmd_dma_tx(msm_host, len); - if (ret < len) { - pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d\n", - __func__, msg->type, (*(u8 *)(msg->tx_buf)), len); - return -ECOMM; + if (ret < 0) { + pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d, ret=%d\n", + __func__, msg->type, (*(u8 *)(msg->tx_buf)), len, ret); + return ret; + } else if (ret < len) { + pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, ret=%d len=%d\n", + __func__, msg->type, (*(u8 *)(msg->tx_buf)), ret, len); + return -EIO; } return len; @@ -2139,9 +2143,12 @@ int msm_dsi_host_cmd_rx(struct mipi_dsi_host *host, } ret = dsi_cmds2buf_tx(msm_host, msg); - if (ret < msg->tx_len) { + if (ret < 0) { pr_err("%s: Read cmd Tx failed, %d\n", __func__, ret); return ret; + } else if (ret < msg->tx_len) { + pr_err("%s: Read cmd Tx failed, too short: %d\n", __func__, ret); + return -ECOMM; } /* From d9cb951d11a4ace4de5c50b1178ad211de17079e Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Fri, 22 Apr 2022 11:22:27 +0800 Subject: [PATCH 0489/1842] drm/msm/hdmi: check return value after calling platform_get_resource_byname() [ Upstream commit a36e506711548df923ceb7ec9f6001375be799a5 ] It will cause null-ptr-deref if platform_get_resource_byname() returns NULL, we need check the return value. 
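For context, a minimal sketch of the check involved (the resource name is illustrative): platform_get_resource_byname() reports a missing region with NULL rather than an ERR_PTR, so the result must be tested before its fields are read:

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "example_mmio");
	if (!res)
		return -EINVAL;		/* named region absent: bail out cleanly */

	phys_base = res->start;		/* safe only after the NULL check */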
Fixes: c6a57a50ad56 ("drm/msm/hdmi: add hdmi hdcp support (V3)") Signed-off-by: Yang Yingliang Reviewed-by: Dmitry Baryshkov Patchwork: https://patchwork.freedesktop.org/patch/482992/ Link: https://lore.kernel.org/r/20220422032227.2991553-1-yangyingliang@huawei.com Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/hdmi/hdmi.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c index 94f948ef279d..2758b51aa4e0 100644 --- a/drivers/gpu/drm/msm/hdmi/hdmi.c +++ b/drivers/gpu/drm/msm/hdmi/hdmi.c @@ -142,6 +142,10 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev) /* HDCP needs physical address of hdmi register */ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, config->mmio_name); + if (!res) { + ret = -EINVAL; + goto fail; + } hdmi->mmio_phy_addr = res->start; hdmi->qfprom_mmio = msm_ioremap(pdev, From 7b815e91ff51139c85f7a7de5e54b11cebd2e8ca Mon Sep 17 00:00:00 2001 From: Lv Ruyi Date: Mon, 25 Apr 2022 09:18:31 +0000 Subject: [PATCH 0490/1842] drm/msm/hdmi: fix error check return value of irq_of_parse_and_map() [ Upstream commit 03371e4fbdeb7f596cbceacb59e474248b6d95ac ] The irq_of_parse_and_map() function returns 0 on failure, and does not return a negative value anyhow, so never enter this conditional branch. Fixes: f6a8eaca0ea1 ("drm/msm/mdp5: use irqdomains") Reported-by: Zeal Robot Signed-off-by: Lv Ruyi Reviewed-by: Stephen Boyd Patchwork: https://patchwork.freedesktop.org/patch/483294/ Link: https://lore.kernel.org/r/20220425091831.3500487-1-lv.ruyi@zte.com.cn Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/hdmi/hdmi.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c index 2758b51aa4e0..28b33b35a30c 100644 --- a/drivers/gpu/drm/msm/hdmi/hdmi.c +++ b/drivers/gpu/drm/msm/hdmi/hdmi.c @@ -315,9 +315,9 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi, } hdmi->irq = irq_of_parse_and_map(pdev->dev.of_node, 0); - if (hdmi->irq < 0) { - ret = hdmi->irq; - DRM_DEV_ERROR(dev->dev, "failed to get irq: %d\n", ret); + if (!hdmi->irq) { + ret = -EINVAL; + DRM_DEV_ERROR(dev->dev, "failed to get irq\n"); goto fail; } From 5a26a4947031e6ead8a1c586f8bc72115faf473b Mon Sep 17 00:00:00 2001 From: Dmitry Baryshkov Date: Sat, 30 Apr 2022 21:09:17 +0300 Subject: [PATCH 0491/1842] drm/msm: add missing include to msm_drv.c [ Upstream commit 8123fe83c3a3448bbfa5b5b1cacfdfe7d076fca6 ] Add explicit include of drm_bridge.h to the msm_drv.c to fix the following warning: drivers/gpu/drm/msm/msm_drv.c:236:17: error: implicit declaration of function 'drm_bridge_remove'; did you mean 'drm_bridge_detach'? 
[-Werror=implicit-function-declaration] Fixes: d28ea556267c ("drm/msm: properly add and remove internal bridges") Reported-by: kernel test robot Reviewed-by: Abhinav Kumar Signed-off-by: Dmitry Baryshkov Patchwork: https://patchwork.freedesktop.org/patch/484310/ Link: https://lore.kernel.org/r/20220430180917.3819294-1-dmitry.baryshkov@linaro.org Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/msm_drv.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index e37e5afc680a..087efcb1f34c 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -10,6 +10,7 @@ #include #include +#include #include #include #include From e19ece6f248ac433dc5836d3fe20d39b49f40848 Mon Sep 17 00:00:00 2001 From: Jagan Teki Date: Thu, 11 Nov 2021 15:11:03 +0530 Subject: [PATCH 0492/1842] drm/panel: panel-simple: Fix proper bpc for AM-1280800N3TZQW-T00H [ Upstream commit 7eafbecd2288c542ea15ea20cf1a7e64a25c21bc ] AM-1280800N3TZQW-T00H panel support 8 bpc not 6 bpc as per recent testing in i.MX8MM platform. Fix it. Fixes: bca684e69c4c ("drm/panel: simple: Add AM-1280800N3TZQW-T00H") Signed-off-by: Jagan Teki Reviewed-by: Robert Foss Signed-off-by: Robert Foss Link: https://patchwork.freedesktop.org/patch/msgid/20211111094103.494831-1-jagan@amarulasolutions.com Signed-off-by: Sasha Levin --- drivers/gpu/drm/panel/panel-simple.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c index 18850439a2ab..bf2c845ef3a2 100644 --- a/drivers/gpu/drm/panel/panel-simple.c +++ b/drivers/gpu/drm/panel/panel-simple.c @@ -676,7 +676,7 @@ static const struct drm_display_mode ampire_am_1280800n3tzqw_t00h_mode = { static const struct panel_desc ampire_am_1280800n3tzqw_t00h = { .modes = &ire_am_1280800n3tzqw_t00h_mode, .num_modes = 1, - .bpc = 6, + .bpc = 8, .size = { .width = 217, .height = 136, From 3451852312303d54a003c73bd0ae39cebb960bd5 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Fri, 22 Apr 2022 11:28:54 +0800 Subject: [PATCH 0493/1842] drm/rockchip: vop: fix possible null-ptr-deref in vop_bind() [ Upstream commit f8c242908ad15bbd604d3bcb54961b7d454c43f8 ] It will cause null-ptr-deref in resource_size(), if platform_get_resource() returns NULL, move calling resource_size() after devm_ioremap_resource() that will check 'res' to avoid null-ptr-deref. 
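For context, a minimal, abridged sketch of the ordering the fix relies on, echoing the hunk in the diff below: devm_ioremap_resource() itself rejects a NULL resource, so any direct use of the resource, such as resource_size(), has to come after that call succeeds:

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	regs = devm_ioremap_resource(dev, res);	/* returns an ERR_PTR if res is NULL */
	if (IS_ERR(regs))
		return PTR_ERR(regs);

	len = resource_size(res);		/* safe: res was validated above */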
Fixes: 2048e3286f34 ("drm: rockchip: Add basic drm driver") Signed-off-by: Yang Yingliang Signed-off-by: Heiko Stuebner Link: https://patchwork.freedesktop.org/patch/msgid/20220422032854.2995175-1-yangyingliang@huawei.com Signed-off-by: Sasha Levin --- drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c index 0f23144491e4..91568f166a8a 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c @@ -2097,10 +2097,10 @@ static int vop_bind(struct device *dev, struct device *master, void *data) vop_win_init(vop); res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - vop->len = resource_size(res); vop->regs = devm_ioremap_resource(dev, res); if (IS_ERR(vop->regs)) return PTR_ERR(vop->regs); + vop->len = resource_size(res); res = platform_get_resource(pdev, IORESOURCE_MEM, 1); if (res) { From 23716d76141589071a4f1efbbc5ed126a60525bd Mon Sep 17 00:00:00 2001 From: James Clark Date: Wed, 9 Mar 2022 19:43:13 +0000 Subject: [PATCH 0494/1842] perf tools: Use Python devtools for version autodetection rather than runtime [ Upstream commit 630af16eee495f583db5202c3613d1b191f10694 ] This fixes the issue where the build will fail if only the Python2 runtime is installed but the Python3 devtools are installed. Currently the workaround is 'make PYTHON=python3'. Fix it by autodetecting Python based on whether python[x]-config exists rather than just python[x] because both are needed for the build. Then -config is stripped to find the Python runtime. Testing ======= * Auto detect links with Python3 when the v3 devtools are installed and only Python 2 runtime is installed * Auto detect links with Python2 when both devtools are installed * Sensible warning is printed if no Python devtools are installed * 'make PYTHON=x' still automatically sets PYTHON_CONFIG=x-config * 'make PYTHON=x' fails if x-config doesn't exist * 'make PYTHON=python3' overrides Python2 devtools * 'make PYTHON=python2' overrides Python3 devtools * 'make PYTHON_CONFIG=x-config' works * 'make PYTHON=x PYTHON_CONFIG=x' works * 'make PYTHON=missing' reports an error * 'make PYTHON_CONFIG=missing' reports an error Fixes: 79373082fa9de8be ("perf python: Autodetect python3 binary") Signed-off-by: James Clark Cc: Alexander Shishkin Cc: James Clark Cc: Jiri Olsa Cc: Mark Rutland Cc: Namhyung Kim Link: https://lore.kernel.org/r/20220309194313.3350126-2-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin --- tools/perf/Makefile.config | 39 ++++++++++++++++++++++++++------------ 1 file changed, 27 insertions(+), 12 deletions(-) diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config index 41dff8d38448..5ee3c4d1fbb2 100644 --- a/tools/perf/Makefile.config +++ b/tools/perf/Makefile.config @@ -222,18 +222,33 @@ ifdef PARSER_DEBUG endif # Try different combinations to accommodate systems that only have -# python[2][-config] in weird combinations but always preferring -# python2 and python2-config as per pep-0394. If python2 or python -# aren't found, then python3 is used. 
-PYTHON_AUTO := python -PYTHON_AUTO := $(if $(call get-executable,python3),python3,$(PYTHON_AUTO)) -PYTHON_AUTO := $(if $(call get-executable,python),python,$(PYTHON_AUTO)) -PYTHON_AUTO := $(if $(call get-executable,python2),python2,$(PYTHON_AUTO)) -override PYTHON := $(call get-executable-or-default,PYTHON,$(PYTHON_AUTO)) -PYTHON_AUTO_CONFIG := \ - $(if $(call get-executable,$(PYTHON)-config),$(PYTHON)-config,python-config) -override PYTHON_CONFIG := \ - $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO_CONFIG)) +# python[2][3]-config in weird combinations in the following order of +# priority from lowest to highest: +# * python3-config +# * python-config +# * python2-config as per pep-0394. +# * $(PYTHON)-config (If PYTHON is user supplied but PYTHON_CONFIG isn't) +# +PYTHON_AUTO := python-config +PYTHON_AUTO := $(if $(call get-executable,python3-config),python3-config,$(PYTHON_AUTO)) +PYTHON_AUTO := $(if $(call get-executable,python-config),python-config,$(PYTHON_AUTO)) +PYTHON_AUTO := $(if $(call get-executable,python2-config),python2-config,$(PYTHON_AUTO)) + +# If PYTHON is defined but PYTHON_CONFIG isn't, then take $(PYTHON)-config as if it was the user +# supplied value for PYTHON_CONFIG. Because it's "user supplied", error out if it doesn't exist. +ifdef PYTHON + ifndef PYTHON_CONFIG + PYTHON_CONFIG_AUTO := $(call get-executable,$(PYTHON)-config) + PYTHON_CONFIG := $(if $(PYTHON_CONFIG_AUTO),$(PYTHON_CONFIG_AUTO),\ + $(call $(error $(PYTHON)-config not found))) + endif +endif + +# Select either auto detected python and python-config or use user supplied values if they are +# defined. get-executable-or-default fails with an error if the first argument is supplied but +# doesn't exist. +override PYTHON_CONFIG := $(call get-executable-or-default,PYTHON_CONFIG,$(PYTHON_AUTO)) +override PYTHON := $(call get-executable-or-default,PYTHON,$(subst -config,,$(PYTHON_AUTO))) grep-libs = $(filter -l%,$(1)) strip-libs = $(filter-out -l%,$(1)) From 940b12435bffc975684021e1db366d902e245ed8 Mon Sep 17 00:00:00 2001 From: Christoph Hellwig Date: Mon, 18 Apr 2022 06:53:07 +0200 Subject: [PATCH 0495/1842] virtio_blk: fix the discard_granularity and discard_alignment queue limits [ Upstream commit 62952cc5bccd89b76d710de1d0b43244af0f2903 ] The discard_alignment queue limit is named a bit misleading means the offset into the block device at which the discard granularity starts. On the other hand the discard_sector_alignment from the virtio 1.1 looks similar to what Linux uses as discard granularity (even if not very well described): "discard_sector_alignment can be used by OS when splitting a request based on alignment. " And at least qemu does set it to the discard granularity. So stop setting the discard_alignment and use the virtio discard_sector_alignment to set the discard granularity. Fixes: 1f23816b8eb8 ("virtio_blk: add discard and write zeroes support") Signed-off-by: Christoph Hellwig Reviewed-by: Martin K. 
Petersen Link: https://lore.kernel.org/r/20220418045314.360785-5-hch@lst.de Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- drivers/block/virtio_blk.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index 02e2056780ad..9b54eec9b17e 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -865,11 +865,12 @@ static int virtblk_probe(struct virtio_device *vdev) blk_queue_io_opt(q, blk_size * opt_io_size); if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) { - q->limits.discard_granularity = blk_size; - virtio_cread(vdev, struct virtio_blk_config, discard_sector_alignment, &v); - q->limits.discard_alignment = v ? v << SECTOR_SHIFT : 0; + if (v) + q->limits.discard_granularity = v << SECTOR_SHIFT; + else + q->limits.discard_granularity = blk_size; virtio_cread(vdev, struct virtio_blk_config, max_discard_sectors, &v); From 35abf2081fa9fa6cd23f5975f35b41a3f0647cc4 Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Sun, 13 Mar 2022 18:27:25 -0700 Subject: [PATCH 0496/1842] x86: Fix return value of __setup handlers [ Upstream commit 12441ccdf5e2f5a01a46e344976cbbd3d46845c9 ] __setup() handlers should return 1 to obsolete_checksetup() in init/main.c to indicate that the boot option has been handled. A return of 0 causes the boot option/value to be listed as an Unknown kernel parameter and added to init's (limited) argument (no '=') or environment (with '=') strings. So return 1 from these x86 __setup handlers. Examples: Unknown kernel command line parameters "apicpmtimer BOOT_IMAGE=/boot/bzImage-517rc8 vdso=1 ring3mwait=disable", will be passed to user space. Run /sbin/init as init process with arguments: /sbin/init apicpmtimer with environment: HOME=/ TERM=linux BOOT_IMAGE=/boot/bzImage-517rc8 vdso=1 ring3mwait=disable Fixes: 2aae950b21e4 ("x86_64: Add vDSO for x86-64 with gettimeofday/clock_gettime/getcpu") Fixes: 77b52b4c5c66 ("x86: add "debugpat" boot option") Fixes: e16fd002afe2 ("x86/cpufeature: Enable RING3MWAIT for Knights Landing") Fixes: b8ce33590687 ("x86_64: convert to clock events") Reported-by: Igor Zhbanov Signed-off-by: Randy Dunlap Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/64644a2f-4a20-bab3-1e15-3b2cdd0defe3@omprussia.ru Link: https://lore.kernel.org/r/20220314012725.26661-1-rdunlap@infradead.org Signed-off-by: Sasha Levin --- arch/x86/entry/vdso/vma.c | 2 +- arch/x86/kernel/apic/apic.c | 2 +- arch/x86/kernel/cpu/intel.c | 2 +- arch/x86/mm/pat/memtype.c | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c index 9185cb1d13b9..5876289e48d8 100644 --- a/arch/x86/entry/vdso/vma.c +++ b/arch/x86/entry/vdso/vma.c @@ -440,7 +440,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) static __init int vdso_setup(char *s) { vdso64_enabled = simple_strtoul(s, NULL, 0); - return 0; + return 1; } __setup("vdso=", vdso_setup); diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 24539a05c58c..1c96f2425eaf 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -168,7 +168,7 @@ static __init int setup_apicpmtimer(char *s) { apic_calibrate_pmtmr = 1; notsc_setup(NULL); - return 0; + return 1; } __setup("apicpmtimer", setup_apicpmtimer); #endif diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index 816fdbec795a..c6ad53e38f65 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -88,7 
+88,7 @@ static bool ring3mwait_disabled __read_mostly; static int __init ring3mwait_disable(char *__unused) { ring3mwait_disabled = true; - return 0; + return 1; } __setup("ring3mwait=disable", ring3mwait_disable); diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c index 232932bda4e5..f9c53a710740 100644 --- a/arch/x86/mm/pat/memtype.c +++ b/arch/x86/mm/pat/memtype.c @@ -101,7 +101,7 @@ int pat_debug_enable; static int __init pat_debug_setup(char *str) { pat_debug_enable = 1; - return 0; + return 1; } __setup("debugpat", pat_debug_setup); From 5e76e5163392ace02371f6f3a08d4818c3ca67bf Mon Sep 17 00:00:00 2001 From: Daniel Thompson Date: Tue, 3 May 2022 14:45:41 +0100 Subject: [PATCH 0497/1842] irqchip/exiu: Fix acknowledgment of edge triggered interrupts [ Upstream commit 4efc851c36e389f7ed432edac0149acc5f94b0c7 ] Currently the EXIU uses the fasteoi interrupt flow that is configured by it's parent (irq-gic-v3.c). With this flow the only chance to clear the interrupt request happens during .irq_eoi() and (obviously) this happens after the interrupt handler has run. EXIU requires edge triggered interrupts to be acked prior to interrupt handling. Without this we risk incorrect interrupt dismissal when a new interrupt is delivered after the handler reads and acknowledges the peripheral but before the irq_eoi() takes place. Fix this by clearing the interrupt request from .irq_ack() if we are configured for edge triggered interrupts. This requires adopting the fasteoi-ack flow instead of the fasteoi to ensure the ack gets called. These changes have been tested using the power button on a Developerbox/SC2A11 combined with some hackery in gpio-keys so I can play with the different trigger mode [and an mdelay(500) so I can can check what happens on a double click in both modes]. Fixes: 706cffc1b912 ("irqchip/exiu: Add support for Socionext Synquacer EXIU controller") Signed-off-by: Daniel Thompson Reviewed-by: Ard Biesheuvel Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20220503134541.2566457-1-daniel.thompson@linaro.org Signed-off-by: Sasha Levin --- arch/arm64/Kconfig.platforms | 1 + drivers/irqchip/irq-sni-exiu.c | 25 ++++++++++++++++++++++--- 2 files changed, 23 insertions(+), 3 deletions(-) diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms index 5c4ac1c9f4e0..889e78f40a25 100644 --- a/arch/arm64/Kconfig.platforms +++ b/arch/arm64/Kconfig.platforms @@ -250,6 +250,7 @@ config ARCH_STRATIX10 config ARCH_SYNQUACER bool "Socionext SynQuacer SoC Family" + select IRQ_FASTEOI_HIERARCHY_HANDLERS config ARCH_TEGRA bool "NVIDIA Tegra SoC Family" diff --git a/drivers/irqchip/irq-sni-exiu.c b/drivers/irqchip/irq-sni-exiu.c index abd011fcecf4..c7db617e1a2f 100644 --- a/drivers/irqchip/irq-sni-exiu.c +++ b/drivers/irqchip/irq-sni-exiu.c @@ -37,11 +37,26 @@ struct exiu_irq_data { u32 spi_base; }; -static void exiu_irq_eoi(struct irq_data *d) +static void exiu_irq_ack(struct irq_data *d) { struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); writel(BIT(d->hwirq), data->base + EIREQCLR); +} + +static void exiu_irq_eoi(struct irq_data *d) +{ + struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); + + /* + * Level triggered interrupts are latched and must be cleared during + * EOI or the interrupt will be jammed on. Of course if a level + * triggered interrupt is still asserted then the write will not clear + * the interrupt. 
+ */ + if (irqd_is_level_type(d)) + writel(BIT(d->hwirq), data->base + EIREQCLR); + irq_chip_eoi_parent(d); } @@ -91,10 +106,13 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type) writel_relaxed(val, data->base + EILVL); val = readl_relaxed(data->base + EIEDG); - if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) + if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) { val &= ~BIT(d->hwirq); - else + irq_set_handler_locked(d, handle_fasteoi_irq); + } else { val |= BIT(d->hwirq); + irq_set_handler_locked(d, handle_fasteoi_ack_irq); + } writel_relaxed(val, data->base + EIEDG); writel_relaxed(BIT(d->hwirq), data->base + EIREQCLR); @@ -104,6 +122,7 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type) static struct irq_chip exiu_irq_chip = { .name = "EXIU", + .irq_ack = exiu_irq_ack, .irq_eoi = exiu_irq_eoi, .irq_enable = exiu_irq_enable, .irq_mask = exiu_irq_mask, From f4b503b4ef59bc9e8ad8db1debcab4fa61f074ef Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Sat, 23 Apr 2022 11:42:26 +0200 Subject: [PATCH 0498/1842] irqchip/aspeed-i2c-ic: Fix irq_of_parse_and_map() return value [ Upstream commit 50f0f26e7c8665763d0d7d3372dbcf191f94d077 ] The irq_of_parse_and_map() returns 0 on failure, not a negative ERRNO. Fixes: f48e699ddf70 ("irqchip/aspeed-i2c-ic: Add I2C IRQ controller for Aspeed") Signed-off-by: Krzysztof Kozlowski Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20220423094227.33148-1-krzysztof.kozlowski@linaro.org Signed-off-by: Sasha Levin --- drivers/irqchip/irq-aspeed-i2c-ic.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/irqchip/irq-aspeed-i2c-ic.c b/drivers/irqchip/irq-aspeed-i2c-ic.c index 8d591c179f81..3d3210828e9b 100644 --- a/drivers/irqchip/irq-aspeed-i2c-ic.c +++ b/drivers/irqchip/irq-aspeed-i2c-ic.c @@ -79,8 +79,8 @@ static int __init aspeed_i2c_ic_of_init(struct device_node *node, } i2c_ic->parent_irq = irq_of_parse_and_map(node, 0); - if (i2c_ic->parent_irq < 0) { - ret = i2c_ic->parent_irq; + if (!i2c_ic->parent_irq) { + ret = -EINVAL; goto err_iounmap; } From 0d5c8ac9229a4c54e0abf0b0441e5c6345f45d0c Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Sat, 23 Apr 2022 11:42:27 +0200 Subject: [PATCH 0499/1842] irqchip/aspeed-scu-ic: Fix irq_of_parse_and_map() return value [ Upstream commit f03a9670d27d23fe734a456f16e2579b21ec02b4 ] The irq_of_parse_and_map() returns 0 on failure, not a negative ERRNO. Fixes: 04f605906ff0 ("irqchip: Add Aspeed SCU interrupt controller") Signed-off-by: Krzysztof Kozlowski Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20220423094227.33148-2-krzysztof.kozlowski@linaro.org Signed-off-by: Sasha Levin --- drivers/irqchip/irq-aspeed-scu-ic.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/irqchip/irq-aspeed-scu-ic.c b/drivers/irqchip/irq-aspeed-scu-ic.c index 0f0aac7cc114..7cb13364ecfa 100644 --- a/drivers/irqchip/irq-aspeed-scu-ic.c +++ b/drivers/irqchip/irq-aspeed-scu-ic.c @@ -159,8 +159,8 @@ static int aspeed_scu_ic_of_init_common(struct aspeed_scu_ic *scu_ic, } irq = irq_of_parse_and_map(node, 0); - if (irq < 0) { - rc = irq; + if (!irq) { + rc = -EINVAL; goto err; } From 49bfbaf6a039b48dbf57b9071dd7f2aa6796087d Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Mon, 14 Mar 2022 17:10:45 -0700 Subject: [PATCH 0500/1842] x86/mm: Cleanup the control_va_addr_alignment() __setup handler [ Upstream commit 1ef64b1e89e6d4018da46e08ffc32779a31160c7 ] Clean up control_va_addr_alignment(): a. 
Make '=' required instead of optional (as documented). b. Print a warning if an invalid option value is used. c. Return 1 from the __setup handler when an invalid option value is used. This prevents the kernel from polluting init's (limited) environment space with the entire string. Fixes: dfb09f9b7ab0 ("x86, amd: Avoid cache aliasing penalties on AMD family 15h") Reported-by: Igor Zhbanov Signed-off-by: Randy Dunlap Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/64644a2f-4a20-bab3-1e15-3b2cdd0defe3@omprussia.ru Link: https://lore.kernel.org/r/20220315001045.7680-1-rdunlap@infradead.org Signed-off-by: Sasha Levin --- arch/x86/kernel/sys_x86_64.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c index 504fa5425bce..3fd1c81eb5e3 100644 --- a/arch/x86/kernel/sys_x86_64.c +++ b/arch/x86/kernel/sys_x86_64.c @@ -68,9 +68,6 @@ static int __init control_va_addr_alignment(char *str) if (*str == 0) return 1; - if (*str == '=') - str++; - if (!strcmp(str, "32")) va_align.flags = ALIGN_VA_32; else if (!strcmp(str, "64")) @@ -80,11 +77,11 @@ static int __init control_va_addr_alignment(char *str) else if (!strcmp(str, "on")) va_align.flags = ALIGN_VA_32 | ALIGN_VA_64; else - return 0; + pr_warn("invalid option value: 'align_va_addr=%s'\n", str); return 1; } -__setup("align_va_addr", control_va_addr_alignment); +__setup("align_va_addr=", control_va_addr_alignment); SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len, unsigned long, prot, unsigned long, flags, From 018ebe4c18107ef155ffbedcf81f2077997efde9 Mon Sep 17 00:00:00 2001 From: Tong Tiangen Date: Wed, 20 Apr 2022 03:04:13 +0000 Subject: [PATCH 0501/1842] arm64: fix types in copy_highpage() [ Upstream commit 921d161f15d6b090599f6a8c23f131969edbd1fa ] In copy_highpage() the `kto` and `kfrom` local variables are pointers to struct page, but these are used to hold arbitrary pointers to kernel memory . Each call to page_address() returns a void pointer to memory associated with the relevant page, and copy_page() expects void pointers to this memory. This inconsistency was introduced in commit 2563776b41c3 ("arm64: mte: Tags-aware copy_{user_,}highpage() implementations") and while this doesn't appear to be harmful in practice it is clearly wrong. Correct this by making `kto` and `kfrom` void pointers. 
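To illustrate why the wrong pointer type went unnoticed, here is a small userspace analogue (purely illustrative, not kernel code; all names are made up): the accessor hands back a void pointer, and because that value is only ever passed on to another void pointer parameter, declaring the local with an unrelated struct pointer type would still compile, it would just mislead the reader about what the variable holds.

  #include <string.h>

  #define DEMO_PAGE_SIZE 4096

  struct demo_page { unsigned char data[DEMO_PAGE_SIZE]; };

  /* Stand-in for page_address(): returns where the page's bytes live. */
  static void *demo_page_address(struct demo_page *page)
  {
          return page->data;
  }

  /* Stand-in for copy_page(): takes plain void pointers, like memcpy(). */
  static void demo_copy_page(void *to, const void *from)
  {
          memcpy(to, from, DEMO_PAGE_SIZE);
  }

  static void demo_copy_highpage(struct demo_page *to, struct demo_page *from)
  {
          /* Correct: the locals hold addresses of page contents, not pages.
           * Declaring them as struct demo_page * would still compile here,
           * since they are only converted back to void *, but it would be
           * the wrong type for what they actually hold. */
          void *kto   = demo_page_address(to);
          void *kfrom = demo_page_address(from);

          demo_copy_page(kto, kfrom);
  }
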
Fixes: 2563776b41c3 ("arm64: mte: Tags-aware copy_{user_,}highpage() implementations") Signed-off-by: Tong Tiangen Acked-by: Mark Rutland Reviewed-by: Kefeng Wang Link: https://lore.kernel.org/r/20220420030418.3189040-3-tongtiangen@huawei.com Signed-off-by: Catalin Marinas Signed-off-by: Sasha Levin --- arch/arm64/mm/copypage.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c index 70a71f38b6a9..24913271e898 100644 --- a/arch/arm64/mm/copypage.c +++ b/arch/arm64/mm/copypage.c @@ -16,8 +16,8 @@ void copy_highpage(struct page *to, struct page *from) { - struct page *kto = page_address(to); - struct page *kfrom = page_address(from); + void *kto = page_address(to); + void *kfrom = page_address(from); copy_page(kto, kfrom); From 369a712442f91fba0bf8ba84b0e88ffb32475dd4 Mon Sep 17 00:00:00 2001 From: Zev Weiss Date: Wed, 4 May 2022 21:31:52 -0700 Subject: [PATCH 0502/1842] regulator: core: Fix enable_count imbalance with EXCLUSIVE_GET [ Upstream commit c3e3ca05dae37f8f74bb80358efd540911cbc2c8 ] Since the introduction of regulator->enable_count, a driver that did an exclusive get on an already-enabled regulator would end up with enable_count initialized to 0 but rdev->use_count initialized to 1. With that starting point the regulator is effectively stuck enabled, because if the driver attempted to disable it it would fail the enable_count underflow check in _regulator_handle_consumer_disable(). The EXCLUSIVE_GET path in _regulator_get() now initializes enable_count along with rdev->use_count so that the regulator can be disabled without underflowing the former. Signed-off-by: Zev Weiss Fixes: 5451781dadf85 ("regulator: core: Only count load for enabled consumers") Link: https://lore.kernel.org/r/20220505043152.12933-1-zev@bewilderbeest.net Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/regulator/core.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c index 2c48e55c4104..6e3f3511e7dd 100644 --- a/drivers/regulator/core.c +++ b/drivers/regulator/core.c @@ -2027,10 +2027,13 @@ struct regulator *_regulator_get(struct device *dev, const char *id, rdev->exclusive = 1; ret = _regulator_is_enabled(rdev); - if (ret > 0) + if (ret > 0) { rdev->use_count = 1; - else + regulator->enable_count = 1; + } else { rdev->use_count = 0; + regulator->enable_count = 0; + } } link = device_link_add(dev, &rdev->dev, DL_FLAG_STATELESS); From a10092dabae6a3536c60c9c0b16d1135aba0ef74 Mon Sep 17 00:00:00 2001 From: Kuogee Hsieh Date: Tue, 3 May 2022 09:25:36 -0700 Subject: [PATCH 0503/1842] drm/msm/dp: fix event thread stuck in wait_event after kthread_stop() [ Upstream commit 2f9b5b3ae2eb625b75a898212a76f3b8c6d0d2b0 ] Event thread supposed to exit from its while loop after kthread_stop(). However there may has possibility that event thread is pending in the middle of wait_event due to condition checking never become true. To make sure event thread exit its loop after kthread_stop(), this patch OR kthread_should_stop() into wait_event's condition checking so that event thread will exit its loop after kernal_stop(). 
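As a generic sketch of the resulting pattern (simplified, with made-up names; this is not the dp_display code itself): the stop condition is OR-ed into the wait condition so kthread_stop() can always wake the thread out of the wait, and the thread checks kthread_should_stop() immediately after waking so it exits before touching state that the stopping path may be tearing down.

  #include <linux/kthread.h>
  #include <linux/wait.h>

  static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
  static bool demo_work_pending;          /* real code would protect this */

  static int demo_event_thread(void *data)
  {
          while (1) {
                  /* kthread_stop() sets the should-stop flag and wakes the
                   * task, so the flag must be part of the wait condition or
                   * the thread can go straight back to sleep. */
                  wait_event_interruptible(demo_wq,
                                           demo_work_pending ||
                                           kthread_should_stop());

                  if (kthread_should_stop())
                          break;

                  /* ... dequeue and handle the pending event ... */
                  demo_work_pending = false;
          }

          return 0;
  }
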
Changes in v2: -- correct spelling error at commit title Changes in v3: -- remove unnecessary parenthesis -- while(1) to replace while (!kthread_should_stop()) Reported-by: Dmitry Baryshkov Fixes: 570d3e5d28db ("drm/msm/dp: stop event kernel thread when DP unbind") Signed-off-by: Kuogee Hsieh Reviewed-by: Dmitry Baryshkov Reviewed-by: Stephen Boyd Patchwork: https://patchwork.freedesktop.org/patch/484576/ Link: https://lore.kernel.org/r/1651595136-24312-1-git-send-email-quic_khsieh@quicinc.com Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dp/dp_display.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index 09c8e50da68d..ebd05678a27b 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -998,15 +998,20 @@ static int hpd_event_thread(void *data) dp_priv = (struct dp_display_private *)data; - while (!kthread_should_stop()) { + while (1) { if (timeout_mode) { wait_event_timeout(dp_priv->event_q, - (dp_priv->event_pndx == dp_priv->event_gndx), - EVENT_TIMEOUT); + (dp_priv->event_pndx == dp_priv->event_gndx) || + kthread_should_stop(), EVENT_TIMEOUT); } else { wait_event_interruptible(dp_priv->event_q, - (dp_priv->event_pndx != dp_priv->event_gndx)); + (dp_priv->event_pndx != dp_priv->event_gndx) || + kthread_should_stop()); } + + if (kthread_should_stop()) + break; + spin_lock_irqsave(&dp_priv->event_lock, flag); todo = &dp_priv->event_list[dp_priv->event_gndx]; if (todo->delay) { From 49dc28b4b2e28ef7564e355c91487996c1cbebd7 Mon Sep 17 00:00:00 2001 From: Jessica Zhang Date: Thu, 5 May 2022 14:40:50 -0700 Subject: [PATCH 0504/1842] drm/msm/mdp5: Return error code in mdp5_pipe_release when deadlock is detected [ Upstream commit d59be579fa932c46b908f37509f319cbd4ca9a68 ] mdp5_get_global_state runs the risk of hitting a -EDEADLK when acquiring the modeset lock, but currently mdp5_pipe_release doesn't check for if an error is returned. Because of this, there is a possibility of mdp5_pipe_release hitting a NULL dereference error. To avoid this, let's have mdp5_pipe_release check if mdp5_get_global_state returns an error and propogate that error. 
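The shape of the fix, reduced to a sketch with made-up names (not the mdp5 code itself): a release helper that used to return void now returns int, checks the possibly-failing state lookup with IS_ERR() before touching it, and hands the error (typically -EDEADLK from the modeset lock) back to the caller so the atomic framework can back off and retry.

  #include <linux/err.h>

  struct demo_ctx;
  struct demo_resource;
  struct demo_state;

  /* Stand-in for a global-state lookup that may return ERR_PTR(-EDEADLK). */
  struct demo_state *demo_get_state(struct demo_ctx *ctx);

  static int demo_release(struct demo_ctx *ctx, struct demo_resource *res)
  {
          struct demo_state *state;

          if (!res)
                  return 0;               /* nothing to release, not an error */

          state = demo_get_state(ctx);
          if (IS_ERR(state))
                  return PTR_ERR(state);  /* e.g. -EDEADLK: caller must retry */

          /* ... drop res from state ... */
          return 0;
  }
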
Changes since v1: - Separated declaration and initialization of *new_state to avoid compiler warning - Fixed some spelling mistakes in commit message Changes since v2: - Return 0 in case where hwpipe is NULL as this is considered normal behavior - Added 2nd patch in series to fix a similar NULL dereference issue in mdp5_mixer_release Reported-by: Tomeu Vizoso Signed-off-by: Jessica Zhang Fixes: 7907a0d77cb4 ("drm/msm/mdp5: Use the new private_obj state") Reviewed-by: Rob Clark Reviewed-by: Dmitry Baryshkov Patchwork: https://patchwork.freedesktop.org/patch/485179/ Link: https://lore.kernel.org/r/20220505214051.155-1-quic_jesszhan@quicinc.com Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c | 15 +++++++++++---- drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h | 2 +- drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 20 ++++++++++++++++---- 3 files changed, 28 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c index ba6695963aa6..a4f5cb90f3e8 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c @@ -119,18 +119,23 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane, return 0; } -void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe) +int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe) { struct msm_drm_private *priv = s->dev->dev_private; struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms)); struct mdp5_global_state *state = mdp5_get_global_state(s); - struct mdp5_hw_pipe_state *new_state = &state->hwpipe; + struct mdp5_hw_pipe_state *new_state; if (!hwpipe) - return; + return 0; + + if (IS_ERR(state)) + return PTR_ERR(state); + + new_state = &state->hwpipe; if (WARN_ON(!new_state->hwpipe_to_plane[hwpipe->idx])) - return; + return -EINVAL; DBG("%s: release from plane %s", hwpipe->name, new_state->hwpipe_to_plane[hwpipe->idx]->name); @@ -141,6 +146,8 @@ void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe) } new_state->hwpipe_to_plane[hwpipe->idx] = NULL; + + return 0; } void mdp5_pipe_destroy(struct mdp5_hw_pipe *hwpipe) diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h index 9b26d0761bd4..cca67938cab2 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h @@ -37,7 +37,7 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane, uint32_t caps, uint32_t blkcfg, struct mdp5_hw_pipe **hwpipe, struct mdp5_hw_pipe **r_hwpipe); -void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe); +int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe); struct mdp5_hw_pipe *mdp5_pipe_init(enum mdp5_pipe pipe, uint32_t reg_offset, uint32_t caps); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c index da0799333970..0dc23c86747e 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c @@ -393,12 +393,24 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, mdp5_state->r_hwpipe = NULL; - mdp5_pipe_release(state->state, old_hwpipe); - mdp5_pipe_release(state->state, old_right_hwpipe); + ret = mdp5_pipe_release(state->state, old_hwpipe); + if (ret) + return ret; + + ret = mdp5_pipe_release(state->state, old_right_hwpipe); + if (ret) + return ret; + } } 
else { - mdp5_pipe_release(state->state, mdp5_state->hwpipe); - mdp5_pipe_release(state->state, mdp5_state->r_hwpipe); + ret = mdp5_pipe_release(state->state, mdp5_state->hwpipe); + if (ret) + return ret; + + ret = mdp5_pipe_release(state->state, mdp5_state->r_hwpipe); + if (ret) + return ret; + mdp5_state->hwpipe = mdp5_state->r_hwpipe = NULL; } From 883f1d52a57bf51e1d7a80c432345e2c6222477e Mon Sep 17 00:00:00 2001 From: Jessica Zhang Date: Thu, 5 May 2022 14:40:51 -0700 Subject: [PATCH 0505/1842] drm/msm/mdp5: Return error code in mdp5_mixer_release when deadlock is detected [ Upstream commit ca75f6f7c6f89365e40f10f641b15981b1f07c31 ] There is a possibility for mdp5_get_global_state to return -EDEADLK when acquiring the modeset lock, but currently global_state in mdp5_mixer_release doesn't check for if an error is returned. To avoid a NULL dereference error, let's have mdp5_mixer_release check if an error is returned and propagate that error. Reported-by: Tomeu Vizoso Signed-off-by: Jessica Zhang Fixes: 7907a0d77cb4 ("drm/msm/mdp5: Use the new private_obj state") Reviewed-by: Rob Clark Reviewed-by: Dmitry Baryshkov Patchwork: https://patchwork.freedesktop.org/patch/485181/ Link: https://lore.kernel.org/r/20220505214051.155-2-quic_jesszhan@quicinc.com Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 10 ++++++++-- drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c | 15 +++++++++++---- drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h | 4 ++-- 3 files changed, 21 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c index a8fa084dfa49..06f19ef5dbf3 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -608,9 +608,15 @@ int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc, if (ret) return ret; - mdp5_mixer_release(new_crtc_state->state, old_mixer); + ret = mdp5_mixer_release(new_crtc_state->state, old_mixer); + if (ret) + return ret; + if (old_r_mixer) { - mdp5_mixer_release(new_crtc_state->state, old_r_mixer); + ret = mdp5_mixer_release(new_crtc_state->state, old_r_mixer); + if (ret) + return ret; + if (!need_right_mixer) pipeline->r_mixer = NULL; } diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c index 954db683ae44..2536def2a000 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c @@ -116,21 +116,28 @@ int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc, return 0; } -void mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer) +int mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer) { struct mdp5_global_state *global_state = mdp5_get_global_state(s); - struct mdp5_hw_mixer_state *new_state = &global_state->hwmixer; + struct mdp5_hw_mixer_state *new_state; if (!mixer) - return; + return 0; + + if (IS_ERR(global_state)) + return PTR_ERR(global_state); + + new_state = &global_state->hwmixer; if (WARN_ON(!new_state->hwmixer_to_crtc[mixer->idx])) - return; + return -EINVAL; DBG("%s: release from crtc %s", mixer->name, new_state->hwmixer_to_crtc[mixer->idx]->name); new_state->hwmixer_to_crtc[mixer->idx] = NULL; + + return 0; } void mdp5_mixer_destroy(struct mdp5_hw_mixer *mixer) diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h index 43c9ba43ce18..545ee223b9d7 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h 
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h @@ -30,7 +30,7 @@ void mdp5_mixer_destroy(struct mdp5_hw_mixer *lm); int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc, uint32_t caps, struct mdp5_hw_mixer **mixer, struct mdp5_hw_mixer **r_mixer); -void mdp5_mixer_release(struct drm_atomic_state *s, - struct mdp5_hw_mixer *mixer); +int mdp5_mixer_release(struct drm_atomic_state *s, + struct mdp5_hw_mixer *mixer); #endif /* __MDP5_LM_H__ */ From d50b26221fbae540a09ab6314b93c91b2e26abb7 Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Thu, 5 May 2022 13:28:05 +0300 Subject: [PATCH 0506/1842] drm/msm: return an error pointer in msm_gem_prime_get_sg_table() [ Upstream commit cf575e31611eb6dccf08fad02e57e35b2187704d ] The msm_gem_prime_get_sg_table() needs to return error pointers on error. This is called from drm_gem_map_dma_buf() and returning a NULL will lead to a crash in that function. Fixes: ac45146733b0 ("drm/msm: fix msm_gem_prime_get_sg_table()") Signed-off-by: Dan Carpenter Reviewed-by: Dmitry Baryshkov Patchwork: https://patchwork.freedesktop.org/patch/485023/ Link: https://lore.kernel.org/r/YnOmtS5tfENywR9m@kili Signed-off-by: Dmitry Baryshkov Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/msm_gem_prime.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c index 515ef80816a0..8c64ce7288f1 100644 --- a/drivers/gpu/drm/msm/msm_gem_prime.c +++ b/drivers/gpu/drm/msm/msm_gem_prime.c @@ -17,7 +17,7 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) int npages = obj->size >> PAGE_SHIFT; if (WARN_ON(!msm_obj->pages)) /* should have already pinned! */ - return NULL; + return ERR_PTR(-ENOMEM); return drm_prime_pages_to_sg(obj->dev, msm_obj->pages, npages); } From 7a79ab25968408e6ce08770015cc9ba758827fd9 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Sat, 19 Mar 2022 11:22:22 +0100 Subject: [PATCH 0507/1842] media: uvcvideo: Fix missing check to determine if element is found in list [ Upstream commit 261f33388c29f6f3c12a724e6d89172b7f6d5996 ] The list iterator will point to a bogus position containing HEAD if the list is empty or the element is not found in list. This case should be checked before any use of the iterator, otherwise it will lead to a invalid memory access. The missing check here is before "pin = iterm->id;", just add check here to fix the security bug. In addition, the list iterator value will *always* be set and non-NULL by list_for_each_entry(), so it is incorrect to assume that the iterator value will be NULL if the element is not found in list, considering the (mis)use here: "if (iterm == NULL". Use a new value 'it' as the list iterator, while use the old value 'iterm' as a dedicated pointer to point to the found element, which 1. can fix this bug, due to 'iterm' is NULL only if it's not found. 2. do not need to change all the uses of 'iterm' after the loop. 3. can also limit the scope of the list iterator 'it' *only inside* the traversal loop by simply declaring 'it' inside the loop in the future, as usage of the iterator outside of the list_for_each_entry is considered harmful. 
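The safe shape of this idiom, as a minimal sketch with made-up names: the loop variable is only a scratch iterator, and a separate pointer is assigned inside the loop on a match, so the post-loop NULL check is meaningful.

  #include <linux/list.h>

  struct demo_entity {
          int id;
          struct list_head chain;
  };

  /* Returns the matching entity, or NULL if the list has no such element. */
  static struct demo_entity *demo_find(struct list_head *entities, int id)
  {
          struct demo_entity *found = NULL;
          struct demo_entity *it;

          list_for_each_entry(it, entities, chain) {
                  if (it->id == id) {
                          found = it;
                          break;
                  }
          }

          /* 'it' must not be used here: after an unsuccessful walk it points
           * at memory computed from the list head, not at a real entity. */
          return found;
  }
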
https://lkml.org/lkml/2022/2/17/1032 Fixes: d5e90b7a6cd1c ("[media] uvcvideo: Move to video_ioctl2") Signed-off-by: Xiaomeng Tong Signed-off-by: Laurent Pinchart Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/usb/uvc/uvc_v4l2.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c index 753b8a99e08f..b40a2b904ace 100644 --- a/drivers/media/usb/uvc/uvc_v4l2.c +++ b/drivers/media/usb/uvc/uvc_v4l2.c @@ -863,29 +863,31 @@ static int uvc_ioctl_enum_input(struct file *file, void *fh, struct uvc_video_chain *chain = handle->chain; const struct uvc_entity *selector = chain->selector; struct uvc_entity *iterm = NULL; + struct uvc_entity *it; u32 index = input->index; - int pin = 0; if (selector == NULL || (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) { if (index != 0) return -EINVAL; - list_for_each_entry(iterm, &chain->entities, chain) { - if (UVC_ENTITY_IS_ITERM(iterm)) + list_for_each_entry(it, &chain->entities, chain) { + if (UVC_ENTITY_IS_ITERM(it)) { + iterm = it; break; + } } - pin = iterm->id; } else if (index < selector->bNrInPins) { - pin = selector->baSourceID[index]; - list_for_each_entry(iterm, &chain->entities, chain) { - if (!UVC_ENTITY_IS_ITERM(iterm)) + list_for_each_entry(it, &chain->entities, chain) { + if (!UVC_ENTITY_IS_ITERM(it)) continue; - if (iterm->id == pin) + if (it->id == selector->baSourceID[index]) { + iterm = it; break; + } } } - if (iterm == NULL || iterm->id != pin) + if (iterm == NULL) return -EINVAL; memset(input, 0, sizeof(*input)); From f40549ce20e81c8e1df28db41ba1a76a4b8ab034 Mon Sep 17 00:00:00 2001 From: Andreas Gruenbacher Date: Thu, 5 May 2022 18:19:13 -0700 Subject: [PATCH 0508/1842] iomap: iomap_write_failed fix [ Upstream commit b71450e2cc4b3c79f33c5bd276d152af9bd54f79 ] The @lend parameter of truncate_pagecache_range() should be the offset of the last byte of the hole, not the first byte beyond it. Fixes: ae259a9c8593 ("fs: introduce iomap infrastructure") Signed-off-by: Andreas Gruenbacher Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Sasha Levin --- fs/iomap/buffered-io.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index cd9f7baa5bb7..dd33b31b0a82 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -528,7 +528,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len) * write started inside the existing inode size. */ if (pos + len > i_size) - truncate_pagecache_range(inode, max(pos, i_size), pos + len); + truncate_pagecache_range(inode, max(pos, i_size), + pos + len - 1); } static int From 560dcbe1c7a78f597f2167371ebdbe2bca3d0735 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Thu, 5 May 2022 17:39:54 +0800 Subject: [PATCH 0509/1842] spi: spi-fsl-qspi: check return value after calling platform_get_resource_byname() [ Upstream commit a2b331ac11e1cac56f5b7d367e9f3c5796deaaed ] It will cause null-ptr-deref if platform_get_resource_byname() returns NULL, we need check the return value. 
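The rule being applied, as a short sketch (hypothetical probe function and resource name): platform_get_resource_byname() signals failure by returning NULL, not an ERR_PTR, so the result has to be NULL-checked before any of its fields are read.

  #include <linux/platform_device.h>
  #include <linux/errno.h>

  static int demo_probe(struct platform_device *pdev)
  {
          struct resource *res;

          res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
                                             "demo-memory");
          if (!res)                       /* NULL on failure, not ERR_PTR */
                  return -EINVAL;

          /* res->start and resource_size(res) are safe to use from here. */
          return 0;
  }
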
Fixes: 858e26a515c2 ("spi: spi-fsl-qspi: Reduce devm_ioremap size to 4 times AHB buffer size") Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220505093954.1285615-1-yangyingliang@huawei.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/spi/spi-fsl-qspi.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/spi/spi-fsl-qspi.c b/drivers/spi/spi-fsl-qspi.c index 9851551ebbe0..46ae46a944c5 100644 --- a/drivers/spi/spi-fsl-qspi.c +++ b/drivers/spi/spi-fsl-qspi.c @@ -876,6 +876,10 @@ static int fsl_qspi_probe(struct platform_device *pdev) res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "QuadSPI-memory"); + if (!res) { + ret = -EINVAL; + goto err_put_ctrl; + } q->memmap_phy = res->start; /* Since there are 4 cs, map size required is 4 times ahb_buf_size */ q->ahb_addr = devm_ioremap(dev, q->memmap_phy, From e2786db0a7eb94207a1f7dc1e21f748d5d7bc7a6 Mon Sep 17 00:00:00 2001 From: Viresh Kumar Date: Mon, 9 May 2022 09:27:37 +0530 Subject: [PATCH 0510/1842] Revert "cpufreq: Fix possible race in cpufreq online error path" [ Upstream commit 85f0e42bd65d01b351d561efb38e584d4c596553 ] This reverts commit f346e96267cd76175d6c201b40f770c0116a8a04. The commit tried to fix a possible real bug but it made it even worse. The fix was simply buggy as now an error out to out_offline_policy or out_exit_policy will try to release a semaphore which was never taken in the first place. This works fine only if we failed late, i.e. via out_destroy_policy. Fixes: f346e96267cd ("cpufreq: Fix possible race in cpufreq online error path") Signed-off-by: Viresh Kumar Signed-off-by: Rafael J. Wysocki Signed-off-by: Sasha Levin --- drivers/cpufreq/cpufreq.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 3540ea93b6f1..30dafe8fc505 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1515,6 +1515,8 @@ static int cpufreq_online(unsigned int cpu) for_each_cpu(j, policy->real_cpus) remove_cpu_dev_symlink(policy, get_cpu_device(j)); + up_write(&policy->rwsem); + out_offline_policy: if (cpufreq_driver->offline) cpufreq_driver->offline(policy); @@ -1523,9 +1525,6 @@ static int cpufreq_online(unsigned int cpu) if (cpufreq_driver->exit) cpufreq_driver->exit(policy); - cpumask_clear(policy->cpus); - up_write(&policy->rwsem); - out_free_policy: cpufreq_policy_free(policy); return ret; From 37a9db0ee7e73de9ed8abef7e6dca7de600b8461 Mon Sep 17 00:00:00 2001 From: Konrad Dybcio Date: Sat, 30 Apr 2022 18:37:52 +0200 Subject: [PATCH 0511/1842] regulator: qcom_smd: Fix up PM8950 regulator configuration [ Upstream commit b11b3d21a94d66bc05d1142e0b210bfa316c62be ] Following changes have been made: - S5, L4, L18, L20 and L21 were removed (S5 is managed by SPMI, whereas the rest seems not to exist [or at least it's blocked by Sony Loire /MSM8956/ RPM firmware]) - Supply maps have were adjusted to reflect regulator changes. 
Fixes: e44adca5fa25 ("regulator: qcom_smd: Add PM8950 regulators") Signed-off-by: Konrad Dybcio Link: https://lore.kernel.org/r/20220430163753.609909-1-konrad.dybcio@somainline.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/regulator/qcom_smd-regulator.c | 35 +++++++++++++------------- 1 file changed, 17 insertions(+), 18 deletions(-) diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c index 8d784a2a09d8..05d227f9d2f2 100644 --- a/drivers/regulator/qcom_smd-regulator.c +++ b/drivers/regulator/qcom_smd-regulator.c @@ -844,32 +844,31 @@ static const struct rpm_regulator_data rpm_pm8950_regulators[] = { { "s2", QCOM_SMD_RPM_SMPA, 2, &pm8950_hfsmps, "vdd_s2" }, { "s3", QCOM_SMD_RPM_SMPA, 3, &pm8950_hfsmps, "vdd_s3" }, { "s4", QCOM_SMD_RPM_SMPA, 4, &pm8950_hfsmps, "vdd_s4" }, - { "s5", QCOM_SMD_RPM_SMPA, 5, &pm8950_ftsmps2p5, "vdd_s5" }, + /* S5 is managed via SPMI. */ { "s6", QCOM_SMD_RPM_SMPA, 6, &pm8950_hfsmps, "vdd_s6" }, { "l1", QCOM_SMD_RPM_LDOA, 1, &pm8950_ult_nldo, "vdd_l1_l19" }, { "l2", QCOM_SMD_RPM_LDOA, 2, &pm8950_ult_nldo, "vdd_l2_l23" }, { "l3", QCOM_SMD_RPM_LDOA, 3, &pm8950_ult_nldo, "vdd_l3" }, - { "l4", QCOM_SMD_RPM_LDOA, 4, &pm8950_ult_pldo, "vdd_l4_l5_l6_l7_l16" }, - { "l5", QCOM_SMD_RPM_LDOA, 5, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" }, - { "l6", QCOM_SMD_RPM_LDOA, 6, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" }, - { "l7", QCOM_SMD_RPM_LDOA, 7, &pm8950_pldo_lv, "vdd_l4_l5_l6_l7_l16" }, + /* L4 seems not to exist. */ + { "l5", QCOM_SMD_RPM_LDOA, 5, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" }, + { "l6", QCOM_SMD_RPM_LDOA, 6, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" }, + { "l7", QCOM_SMD_RPM_LDOA, 7, &pm8950_pldo_lv, "vdd_l5_l6_l7_l16" }, { "l8", QCOM_SMD_RPM_LDOA, 8, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" }, { "l9", QCOM_SMD_RPM_LDOA, 9, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" }, { "l10", QCOM_SMD_RPM_LDOA, 10, &pm8950_ult_nldo, "vdd_l9_l10_l13_l14_l15_l18"}, - { "l11", QCOM_SMD_RPM_LDOA, 11, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"}, - { "l12", QCOM_SMD_RPM_LDOA, 12, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"}, - { "l13", QCOM_SMD_RPM_LDOA, 13, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"}, - { "l14", QCOM_SMD_RPM_LDOA, 14, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"}, - { "l15", QCOM_SMD_RPM_LDOA, 15, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"}, - { "l16", QCOM_SMD_RPM_LDOA, 16, &pm8950_ult_pldo, "vdd_l4_l5_l6_l7_l16"}, - { "l17", QCOM_SMD_RPM_LDOA, 17, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22"}, - { "l18", QCOM_SMD_RPM_LDOA, 18, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18"}, - { "l19", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l1_l19"}, - { "l20", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l20"}, - { "l21", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l21"}, - { "l22", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l8_l11_l12_l17_l22"}, - { "l23", QCOM_SMD_RPM_LDOA, 18, &pm8950_pldo, "vdd_l2_l23"}, + { "l11", QCOM_SMD_RPM_LDOA, 11, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" }, + { "l12", QCOM_SMD_RPM_LDOA, 12, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" }, + { "l13", QCOM_SMD_RPM_LDOA, 13, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" }, + { "l14", QCOM_SMD_RPM_LDOA, 14, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" }, + { "l15", QCOM_SMD_RPM_LDOA, 15, &pm8950_ult_pldo, "vdd_l9_l10_l13_l14_l15_l18" }, + { "l16", QCOM_SMD_RPM_LDOA, 16, &pm8950_ult_pldo, "vdd_l5_l6_l7_l16" }, + { "l17", QCOM_SMD_RPM_LDOA, 17, &pm8950_ult_pldo, "vdd_l8_l11_l12_l17_l22" }, + /* L18 seems not to exist. 
*/ + { "l19", QCOM_SMD_RPM_LDOA, 19, &pm8950_pldo, "vdd_l1_l19" }, + /* L20 & L21 seem not to exist. */ + { "l22", QCOM_SMD_RPM_LDOA, 22, &pm8950_pldo, "vdd_l8_l11_l12_l17_l22" }, + { "l23", QCOM_SMD_RPM_LDOA, 23, &pm8950_pldo, "vdd_l2_l23" }, {} }; From 779d41c80b10043cfb6d2cd70a7b45c4e5ac883c Mon Sep 17 00:00:00 2001 From: Ravi Bangoria Date: Fri, 29 Apr 2022 10:44:41 +0530 Subject: [PATCH 0512/1842] perf/amd/ibs: Use interrupt regs ip for stack unwinding [ Upstream commit 3d47083b9ff46863e8374ad3bb5edb5e464c75f8 ] IbsOpRip is recorded when IBS interrupt is triggered. But there is a skid from the time IBS interrupt gets triggered to the time the interrupt is presented to the core. Meanwhile processor would have moved ahead and thus IbsOpRip will be inconsistent with rsp and rbp recorded as part of the interrupt regs. This causes issues while unwinding stack using the ORC unwinder as it needs consistent rip, rsp and rbp. Fix this by using rip from interrupt regs instead of IbsOpRip for stack unwinding. Fixes: ee9f8fce99640 ("x86/unwind: Add the ORC unwinder") Reported-by: Dmitry Monakhov Suggested-by: Peter Zijlstra Signed-off-by: Ravi Bangoria Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220429051441.14251-1-ravi.bangoria@amd.com Signed-off-by: Sasha Levin --- arch/x86/events/amd/ibs.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c index 780d89d2ae32..8a85658a24cc 100644 --- a/arch/x86/events/amd/ibs.c +++ b/arch/x86/events/amd/ibs.c @@ -312,6 +312,16 @@ static int perf_ibs_init(struct perf_event *event) hwc->config_base = perf_ibs->msr; hwc->config = config; + /* + * rip recorded by IbsOpRip will not be consistent with rsp and rbp + * recorded as part of interrupt regs. Thus we need to use rip from + * interrupt regs while unwinding call stack. Setting _EARLY flag + * makes sure we unwind call-stack before perf sample rip is set to + * IbsOpRip. + */ + if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) + event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY; + return 0; } @@ -692,6 +702,14 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs) data.raw = &raw; } + /* + * rip recorded by IbsOpRip will not be consistent with rsp and rbp + * recorded as part of interrupt regs. Thus we need to use rip from + * interrupt regs while unwinding call stack. + */ + if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) + data.callchain = perf_callchain(event, iregs); + throttle = perf_event_overflow(event, &data, ®s); out: if (throttle) { From 4024affd53e2feef332f03d9d95b9347377b129c Mon Sep 17 00:00:00 2001 From: Baochen Qiang Date: Mon, 9 May 2022 14:57:31 +0300 Subject: [PATCH 0513/1842] ath11k: Don't check arvif->is_started before sending management frames [ Upstream commit 355333a217541916576351446b5832fec7930566 ] Commit 66307ca04057 ("ath11k: fix mgmt_tx_wmi cmd sent to FW for deleted vdev") wants both of below two conditions are true before sending management frames: 1: ar->allocated_vdev_map & (1LL << arvif->vdev_id) 2: arvif->is_started Actually the second one is not necessary because with the first one we can make sure the vdev is present. Also use ar->conf_mutex to synchronize vdev delete and mgmt. TX. This issue is found in case of Passpoint scenario where ath11k needs to send action frames before vdev is started. Fix it by removing the second condition. 
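A reduced sketch of the locking idea (generic names, not the ath11k code): both the liveness check and the use of the object happen under the same mutex that the teardown path takes, so the object cannot be deleted between the check and the use.

  #include <linux/mutex.h>
  #include <linux/errno.h>

  struct demo_dev {
          struct mutex conf_mutex;        /* also taken by the delete path */
          unsigned long long active_map;  /* bit N set while object N exists */
  };

  static int demo_send(struct demo_dev *dev, unsigned int id)
  {
          int ret = -ENOENT;

          mutex_lock(&dev->conf_mutex);
          if (dev->active_map & (1ULL << id)) {
                  /* ... send the frame; the object cannot go away here
                   * because deletion also takes conf_mutex ... */
                  ret = 0;
          }
          mutex_unlock(&dev->conf_mutex);

          return ret;
  }
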
Tested-on: WCN6855 hw2.0 PCI WLAN.HSP.1.1-01720.1-QCAHSPSWPL_V1_V2_SILICONZ_LITE-1 Fixes: 66307ca04057 ("ath11k: fix mgmt_tx_wmi cmd sent to FW for deleted vdev") Signed-off-by: Baochen Qiang Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220506013614.1580274-3-quic_bqiang@quicinc.com Signed-off-by: Sasha Levin --- drivers/net/wireless/ath/ath11k/mac.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c index b2928a5cf72e..44282aec069d 100644 --- a/drivers/net/wireless/ath/ath11k/mac.c +++ b/drivers/net/wireless/ath/ath11k/mac.c @@ -4008,8 +4008,8 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work) } arvif = ath11k_vif_to_arvif(skb_cb->vif); - if (ar->allocated_vdev_map & (1LL << arvif->vdev_id) && - arvif->is_started) { + mutex_lock(&ar->conf_mutex); + if (ar->allocated_vdev_map & (1LL << arvif->vdev_id)) { ret = ath11k_mac_mgmt_tx_wmi(ar, arvif, skb); if (ret) { ath11k_warn(ar->ab, "failed to tx mgmt frame, vdev_id %d :%d\n", @@ -4025,6 +4025,7 @@ static void ath11k_mgmt_over_wmi_tx_work(struct work_struct *work) arvif->is_started); ieee80211_free_txskb(ar->hw, skb); } + mutex_unlock(&ar->conf_mutex); } } From e84aaf23ca82753d765bf84d05295d9d9c5fed29 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 11 May 2022 10:58:03 +0400 Subject: [PATCH 0514/1842] ASoC: fsl: Fix refcount leak in imx_sgtl5000_probe [ Upstream commit 41cd312dfe980af869c3503b4d38e62ed20dd3b7 ] of_find_i2c_device_by_node() takes a reference, In error paths, we should call put_device() to drop the reference to aviod refount leak. Fixes: 81e8e4926167 ("ASoC: fsl: add sgtl5000 clock support for imx-sgtl5000") Signed-off-by: Miaoqian Lin Reviewed-by: Fabio Estevam Link: https://lore.kernel.org/r/20220511065803.3957-1-linmq006@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/fsl/imx-sgtl5000.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c index f45cb4bbb6c4..5997bb5acb73 100644 --- a/sound/soc/fsl/imx-sgtl5000.c +++ b/sound/soc/fsl/imx-sgtl5000.c @@ -120,19 +120,19 @@ static int imx_sgtl5000_probe(struct platform_device *pdev) data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); if (!data) { ret = -ENOMEM; - goto fail; + goto put_device; } comp = devm_kzalloc(&pdev->dev, 3 * sizeof(*comp), GFP_KERNEL); if (!comp) { ret = -ENOMEM; - goto fail; + goto put_device; } data->codec_clk = clk_get(&codec_dev->dev, NULL); if (IS_ERR(data->codec_clk)) { ret = PTR_ERR(data->codec_clk); - goto fail; + goto put_device; } data->clk_frequency = clk_get_rate(data->codec_clk); @@ -158,10 +158,10 @@ static int imx_sgtl5000_probe(struct platform_device *pdev) data->card.dev = &pdev->dev; ret = snd_soc_of_parse_card_name(&data->card, "model"); if (ret) - goto fail; + goto put_device; ret = snd_soc_of_parse_audio_routing(&data->card, "audio-routing"); if (ret) - goto fail; + goto put_device; data->card.num_links = 1; data->card.owner = THIS_MODULE; data->card.dai_link = &data->dai; @@ -176,7 +176,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev) if (ret != -EPROBE_DEFER) dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", ret); - goto fail; + goto put_device; } of_node_put(ssi_np); @@ -184,6 +184,8 @@ static int imx_sgtl5000_probe(struct platform_device *pdev) return 0; +put_device: + put_device(&codec_dev->dev); fail: if (data && !IS_ERR(data->codec_clk)) 
clk_put(data->codec_clk); From 2a0da7641e1f17a744ac7b3f76471388c97b63dc Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 11 May 2022 17:37:22 +0400 Subject: [PATCH 0515/1842] ASoC: mxs-saif: Fix refcount leak in mxs_saif_probe [ Upstream commit 2be84f73785fa9ed6443e3c5b158730266f1c2ee ] of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. Fixes: 08641c7c74dd ("ASoC: mxs: add device tree support for mxs-saif") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220511133725.39039-1-linmq006@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/mxs/mxs-saif.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c index f2eda81985e2..d87ac26999cf 100644 --- a/sound/soc/mxs/mxs-saif.c +++ b/sound/soc/mxs/mxs-saif.c @@ -764,6 +764,7 @@ static int mxs_saif_probe(struct platform_device *pdev) saif->master_id = saif->id; } else { ret = of_alias_get_id(master, "saif"); + of_node_put(master); if (ret < 0) return ret; else From 9f564e29a51210a49df3d925117777c157a17d6d Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 11 May 2022 15:35:05 +0400 Subject: [PATCH 0516/1842] regulator: pfuze100: Fix refcount leak in pfuze_parse_regulators_dt [ Upstream commit afaa7b933ef00a2d3262f4d1252087613fb5c06d ] of_node_get() returns a node with refcount incremented. Calling of_node_put() to drop the reference when not needed anymore. Fixes: 3784b6d64dc5 ("regulator: pfuze100: add pfuze100 regulator driver") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220511113506.45185-1-linmq006@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/regulator/pfuze100-regulator.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c index 01a12cfcea7c..0a19500d3725 100644 --- a/drivers/regulator/pfuze100-regulator.c +++ b/drivers/regulator/pfuze100-regulator.c @@ -531,6 +531,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip) parent = of_get_child_by_name(np, "regulators"); if (!parent) { dev_err(dev, "regulators node not found\n"); + of_node_put(np); return -EINVAL; } @@ -560,6 +561,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip) } of_node_put(parent); + of_node_put(np); if (ret < 0) { dev_err(dev, "Error parsing regulator init data: %d\n", ret); From c1b08aa568e829b743affe5d3231e6de28b7609e Mon Sep 17 00:00:00 2001 From: Kuninori Morimoto Date: Tue, 14 Dec 2021 11:08:41 +0900 Subject: [PATCH 0517/1842] ASoC: samsung: Use dev_err_probe() helper [ Upstream commit 27c6eaebcf75e4fac145d17c7fa76bc64b60d24c ] Use the dev_err_probe() helper, instead of open-coding the same operation. 
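For readers unfamiliar with the helper, a minimal sketch of the conversion (hypothetical function; the real patch applies the same change across many board drivers): dev_err_probe() returns the error passed to it, logs real errors, and handles -EPROBE_DEFER without spamming the log, which removes the open-coded "ret != -EPROBE_DEFER" check.

  #include <linux/device.h>
  #include <linux/errno.h>

  static int demo_register(struct device *dev, int ret)
  {
          /* Open-coded form being replaced:
           *
           *      if (ret && ret != -EPROBE_DEFER)
           *              dev_err(dev, "register failed: %d\n", ret);
           *      return ret;
           */
          if (ret)
                  return dev_err_probe(dev, ret, "register failed\n");

          return 0;
  }
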
Signed-off-by: Kuninori Morimoto Link: https://lore.kernel.org/r/20211214020843.2225831-21-kuninori.morimoto.gx@renesas.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/samsung/aries_wm8994.c | 17 +++++++---------- sound/soc/samsung/arndale.c | 5 ++--- sound/soc/samsung/littlemill.c | 5 ++--- sound/soc/samsung/lowland.c | 5 ++--- sound/soc/samsung/odroid.c | 4 +--- sound/soc/samsung/smdk_wm8994.c | 4 ++-- sound/soc/samsung/smdk_wm8994pcm.c | 4 ++-- sound/soc/samsung/snow.c | 9 +++------ sound/soc/samsung/speyside.c | 5 ++--- sound/soc/samsung/tm2_wm5110.c | 3 +-- sound/soc/samsung/tobermory.c | 5 ++--- 11 files changed, 26 insertions(+), 40 deletions(-) diff --git a/sound/soc/samsung/aries_wm8994.c b/sound/soc/samsung/aries_wm8994.c index 0ac5956ba270..709336bbcc2f 100644 --- a/sound/soc/samsung/aries_wm8994.c +++ b/sound/soc/samsung/aries_wm8994.c @@ -585,19 +585,16 @@ static int aries_audio_probe(struct platform_device *pdev) extcon_np = of_parse_phandle(np, "extcon", 0); priv->usb_extcon = extcon_find_edev_by_node(extcon_np); - if (IS_ERR(priv->usb_extcon)) { - if (PTR_ERR(priv->usb_extcon) != -EPROBE_DEFER) - dev_err(dev, "Failed to get extcon device"); - return PTR_ERR(priv->usb_extcon); - } + if (IS_ERR(priv->usb_extcon)) + return dev_err_probe(dev, PTR_ERR(priv->usb_extcon), + "Failed to get extcon device"); of_node_put(extcon_np); priv->adc = devm_iio_channel_get(dev, "headset-detect"); - if (IS_ERR(priv->adc)) { - if (PTR_ERR(priv->adc) != -EPROBE_DEFER) - dev_err(dev, "Failed to get ADC channel"); - return PTR_ERR(priv->adc); - } + if (IS_ERR(priv->adc)) + return dev_err_probe(dev, PTR_ERR(priv->adc), + "Failed to get ADC channel"); + if (priv->adc->channel->type != IIO_VOLTAGE) return -EINVAL; diff --git a/sound/soc/samsung/arndale.c b/sound/soc/samsung/arndale.c index 28587375813a..35e34e534b8b 100644 --- a/sound/soc/samsung/arndale.c +++ b/sound/soc/samsung/arndale.c @@ -174,9 +174,8 @@ static int arndale_audio_probe(struct platform_device *pdev) ret = devm_snd_soc_register_card(card->dev, card); if (ret) { - if (ret != -EPROBE_DEFER) - dev_err(&pdev->dev, - "snd_soc_register_card() failed: %d\n", ret); + dev_err_probe(&pdev->dev, ret, + "snd_soc_register_card() failed\n"); goto err_put_of_nodes; } return 0; diff --git a/sound/soc/samsung/littlemill.c b/sound/soc/samsung/littlemill.c index a1ff1400857e..e73356a66087 100644 --- a/sound/soc/samsung/littlemill.c +++ b/sound/soc/samsung/littlemill.c @@ -325,9 +325,8 @@ static int littlemill_probe(struct platform_device *pdev) card->dev = &pdev->dev; ret = devm_snd_soc_register_card(&pdev->dev, card); - if (ret && ret != -EPROBE_DEFER) - dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", - ret); + if (ret) + dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n"); return ret; } diff --git a/sound/soc/samsung/lowland.c b/sound/soc/samsung/lowland.c index 998d10cf8c94..7b12ccd2a9b2 100644 --- a/sound/soc/samsung/lowland.c +++ b/sound/soc/samsung/lowland.c @@ -183,9 +183,8 @@ static int lowland_probe(struct platform_device *pdev) card->dev = &pdev->dev; ret = devm_snd_soc_register_card(&pdev->dev, card); - if (ret && ret != -EPROBE_DEFER) - dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", - ret); + if (ret) + dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n"); return ret; } diff --git a/sound/soc/samsung/odroid.c b/sound/soc/samsung/odroid.c index ca643a488c3c..4ff12e2e704f 100644 --- a/sound/soc/samsung/odroid.c +++ b/sound/soc/samsung/odroid.c @@ -311,9 +311,7 @@ 
static int odroid_audio_probe(struct platform_device *pdev) ret = devm_snd_soc_register_card(dev, card); if (ret < 0) { - if (ret != -EPROBE_DEFER) - dev_err(dev, "snd_soc_register_card() failed: %d\n", - ret); + dev_err_probe(dev, ret, "snd_soc_register_card() failed\n"); goto err_put_clk_i2s; } diff --git a/sound/soc/samsung/smdk_wm8994.c b/sound/soc/samsung/smdk_wm8994.c index 64a1a64656ab..92cd9e8a2e61 100644 --- a/sound/soc/samsung/smdk_wm8994.c +++ b/sound/soc/samsung/smdk_wm8994.c @@ -178,8 +178,8 @@ static int smdk_audio_probe(struct platform_device *pdev) ret = devm_snd_soc_register_card(&pdev->dev, card); - if (ret && ret != -EPROBE_DEFER) - dev_err(&pdev->dev, "snd_soc_register_card() failed:%d\n", ret); + if (ret) + dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n"); return ret; } diff --git a/sound/soc/samsung/smdk_wm8994pcm.c b/sound/soc/samsung/smdk_wm8994pcm.c index a01640576f71..110a51a4f870 100644 --- a/sound/soc/samsung/smdk_wm8994pcm.c +++ b/sound/soc/samsung/smdk_wm8994pcm.c @@ -118,8 +118,8 @@ static int snd_smdk_probe(struct platform_device *pdev) smdk_pcm.dev = &pdev->dev; ret = devm_snd_soc_register_card(&pdev->dev, &smdk_pcm); - if (ret && ret != -EPROBE_DEFER) - dev_err(&pdev->dev, "snd_soc_register_card failed %d\n", ret); + if (ret) + dev_err_probe(&pdev->dev, ret, "snd_soc_register_card failed\n"); return ret; } diff --git a/sound/soc/samsung/snow.c b/sound/soc/samsung/snow.c index 07163f07c6d5..6aa2c66d8e8c 100644 --- a/sound/soc/samsung/snow.c +++ b/sound/soc/samsung/snow.c @@ -215,12 +215,9 @@ static int snow_probe(struct platform_device *pdev) snd_soc_card_set_drvdata(card, priv); ret = devm_snd_soc_register_card(dev, card); - if (ret) { - if (ret != -EPROBE_DEFER) - dev_err(&pdev->dev, - "snd_soc_register_card failed (%d)\n", ret); - return ret; - } + if (ret) + return dev_err_probe(&pdev->dev, ret, + "snd_soc_register_card failed\n"); return ret; } diff --git a/sound/soc/samsung/speyside.c b/sound/soc/samsung/speyside.c index f5f6ba00d073..37b1f4f60b21 100644 --- a/sound/soc/samsung/speyside.c +++ b/sound/soc/samsung/speyside.c @@ -330,9 +330,8 @@ static int speyside_probe(struct platform_device *pdev) card->dev = &pdev->dev; ret = devm_snd_soc_register_card(&pdev->dev, card); - if (ret && ret != -EPROBE_DEFER) - dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", - ret); + if (ret) + dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n"); return ret; } diff --git a/sound/soc/samsung/tm2_wm5110.c b/sound/soc/samsung/tm2_wm5110.c index 125e07f65d2b..ca1be7a7cb8a 100644 --- a/sound/soc/samsung/tm2_wm5110.c +++ b/sound/soc/samsung/tm2_wm5110.c @@ -611,8 +611,7 @@ static int tm2_probe(struct platform_device *pdev) ret = devm_snd_soc_register_card(dev, card); if (ret < 0) { - if (ret != -EPROBE_DEFER) - dev_err(dev, "Failed to register card: %d\n", ret); + dev_err_probe(dev, ret, "Failed to register card\n"); goto dai_node_put; } diff --git a/sound/soc/samsung/tobermory.c b/sound/soc/samsung/tobermory.c index c962d2c2a7f7..95c6267b0c0c 100644 --- a/sound/soc/samsung/tobermory.c +++ b/sound/soc/samsung/tobermory.c @@ -229,9 +229,8 @@ static int tobermory_probe(struct platform_device *pdev) card->dev = &pdev->dev; ret = devm_snd_soc_register_card(&pdev->dev, card); - if (ret && ret != -EPROBE_DEFER) - dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n", - ret); + if (ret) + dev_err_probe(&pdev->dev, ret, "snd_soc_register_card() failed\n"); return ret; } From cacea459f95be22b3750f3b25b7a1c5897a68206 Mon Sep 17 00:00:00 
2001 From: Miaoqian Lin Date: Thu, 12 May 2022 08:38:28 +0400 Subject: [PATCH 0518/1842] ASoC: samsung: Fix refcount leak in aries_audio_probe [ Upstream commit bf4a9b2467b775717d0e9034ad916888e19713a3 ] of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. If extcon_find_edev_by_node() fails, it doesn't call of_node_put() Calling of_node_put() after extcon_find_edev_by_node() to fix this. Fixes: 7a3a7671fa6c ("ASoC: samsung: Add driver for Aries boards") Signed-off-by: Miaoqian Lin Reviewed-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220512043828.496-1-linmq006@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/samsung/aries_wm8994.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sound/soc/samsung/aries_wm8994.c b/sound/soc/samsung/aries_wm8994.c index 709336bbcc2f..18458192aff1 100644 --- a/sound/soc/samsung/aries_wm8994.c +++ b/sound/soc/samsung/aries_wm8994.c @@ -585,10 +585,10 @@ static int aries_audio_probe(struct platform_device *pdev) extcon_np = of_parse_phandle(np, "extcon", 0); priv->usb_extcon = extcon_find_edev_by_node(extcon_np); + of_node_put(extcon_np); if (IS_ERR(priv->usb_extcon)) return dev_err_probe(dev, PTR_ERR(priv->usb_extcon), "Failed to get extcon device"); - of_node_put(extcon_np); priv->adc = devm_iio_channel_get(dev, "headset-detect"); if (IS_ERR(priv->adc)) From 1472fb1c744792c011569c07b79896c74324ee94 Mon Sep 17 00:00:00 2001 From: Phil Auld Date: Thu, 12 May 2022 10:34:39 -0400 Subject: [PATCH 0519/1842] kselftest/cgroup: fix test_stress.sh to use OUTPUT dir [ Upstream commit 54de76c0123915e7533ce352de30a1f2d80fe81f ] Running cgroup kselftest with O= fails to run the with_stress test due to hardcoded ./test_core. Find test_core binary using the OUTPUT directory. Fixes: 1a99fcc035fb ("selftests: cgroup: Run test_core under interfering stress") Signed-off-by: Phil Auld Signed-off-by: Tejun Heo Signed-off-by: Sasha Levin --- tools/testing/selftests/cgroup/test_stress.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/cgroup/test_stress.sh b/tools/testing/selftests/cgroup/test_stress.sh index 15d9d5896394..109c044f715f 100755 --- a/tools/testing/selftests/cgroup/test_stress.sh +++ b/tools/testing/selftests/cgroup/test_stress.sh @@ -1,4 +1,4 @@ #!/bin/bash # SPDX-License-Identifier: GPL-2.0 -./with_stress.sh -s subsys -s fork ./test_core +./with_stress.sh -s subsys -s fork ${OUTPUT}/test_core From 34feaea3aa4f741ded71542b516a55da63c730cc Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Thu, 12 May 2022 12:05:27 -0700 Subject: [PATCH 0520/1842] scripts/faddr2line: Fix overlapping text section failures [ Upstream commit 1d1a0e7c5100d332583e20b40aa8c0a8ed3d7849 ] There have been some recent reports of faddr2line failures: $ scripts/faddr2line sound/soundcore.ko sound_devnode+0x5/0x35 bad symbol size: base: 0x0000000000000000 end: 0x0000000000000000 $ ./scripts/faddr2line vmlinux.o enter_from_user_mode+0x24 bad symbol size: base: 0x0000000000005fe0 end: 0x0000000000005fe0 The problem is that faddr2line is based on 'nm', which has a major limitation: it doesn't know how to distinguish between different text sections. So if an offset exists in multiple text sections in the object, it may fail. Rewrite faddr2line to be section-aware, by basing it on readelf. 
Fixes: 67326666e2d4 ("scripts: add script for translating stack dump function offsets") Reported-by: Kaiwan N Billimoria Reported-by: Peter Zijlstra Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/29ff99f86e3da965b6e46c1cc2d72ce6528c17c3.1652382321.git.jpoimboe@kernel.org Signed-off-by: Sasha Levin --- scripts/faddr2line | 150 +++++++++++++++++++++++++++++---------------- 1 file changed, 97 insertions(+), 53 deletions(-) diff --git a/scripts/faddr2line b/scripts/faddr2line index 6c6439f69a72..0e6268d59883 100755 --- a/scripts/faddr2line +++ b/scripts/faddr2line @@ -44,17 +44,6 @@ set -o errexit set -o nounset -READELF="${CROSS_COMPILE:-}readelf" -ADDR2LINE="${CROSS_COMPILE:-}addr2line" -SIZE="${CROSS_COMPILE:-}size" -NM="${CROSS_COMPILE:-}nm" - -command -v awk >/dev/null 2>&1 || die "awk isn't installed" -command -v ${READELF} >/dev/null 2>&1 || die "readelf isn't installed" -command -v ${ADDR2LINE} >/dev/null 2>&1 || die "addr2line isn't installed" -command -v ${SIZE} >/dev/null 2>&1 || die "size isn't installed" -command -v ${NM} >/dev/null 2>&1 || die "nm isn't installed" - usage() { echo "usage: faddr2line [--list] ..." >&2 exit 1 @@ -69,6 +58,14 @@ die() { exit 1 } +READELF="${CROSS_COMPILE:-}readelf" +ADDR2LINE="${CROSS_COMPILE:-}addr2line" +AWK="awk" + +command -v ${AWK} >/dev/null 2>&1 || die "${AWK} isn't installed" +command -v ${READELF} >/dev/null 2>&1 || die "${READELF} isn't installed" +command -v ${ADDR2LINE} >/dev/null 2>&1 || die "${ADDR2LINE} isn't installed" + # Try to figure out the source directory prefix so we can remove it from the # addr2line output. HACK ALERT: This assumes that start_kernel() is in # init/main.c! This only works for vmlinux. Otherwise it falls back to @@ -76,7 +73,7 @@ die() { find_dir_prefix() { local objfile=$1 - local start_kernel_addr=$(${READELF} -sW $objfile | awk '$8 == "start_kernel" {printf "0x%s", $2}') + local start_kernel_addr=$(${READELF} --symbols --wide $objfile | ${AWK} '$8 == "start_kernel" {printf "0x%s", $2}') [[ -z $start_kernel_addr ]] && return local file_line=$(${ADDR2LINE} -e $objfile $start_kernel_addr) @@ -97,86 +94,133 @@ __faddr2line() { local dir_prefix=$3 local print_warnings=$4 - local func=${func_addr%+*} + local sym_name=${func_addr%+*} local offset=${func_addr#*+} offset=${offset%/*} - local size= - [[ $func_addr =~ "/" ]] && size=${func_addr#*/} + local user_size= + [[ $func_addr =~ "/" ]] && user_size=${func_addr#*/} - if [[ -z $func ]] || [[ -z $offset ]] || [[ $func = $func_addr ]]; then + if [[ -z $sym_name ]] || [[ -z $offset ]] || [[ $sym_name = $func_addr ]]; then warn "bad func+offset $func_addr" DONE=1 return fi # Go through each of the object's symbols which match the func name. - # In rare cases there might be duplicates. - file_end=$(${SIZE} -Ax $objfile | awk '$1 == ".text" {print $2}') - while read symbol; do - local fields=($symbol) - local sym_base=0x${fields[0]} - local sym_type=${fields[1]} - local sym_end=${fields[3]} + # In rare cases there might be duplicates, in which case we print all + # matches. 
+ while read line; do + local fields=($line) + local sym_addr=0x${fields[1]} + local sym_elf_size=${fields[2]} + local sym_sec=${fields[6]} - # calculate the size - local sym_size=$(($sym_end - $sym_base)) - if [[ -z $sym_size ]] || [[ $sym_size -le 0 ]]; then - warn "bad symbol size: base: $sym_base end: $sym_end" + # Get the section size: + local sec_size=$(${READELF} --section-headers --wide $objfile | + sed 's/\[ /\[/' | + ${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print "0x" $6; exit }') + + if [[ -z $sec_size ]]; then + warn "bad section size: section: $sym_sec" DONE=1 return fi + + # Calculate the symbol size. + # + # Unfortunately we can't use the ELF size, because kallsyms + # also includes the padding bytes in its size calculation. For + # kallsyms, the size calculation is the distance between the + # symbol and the next symbol in a sorted list. + local sym_size + local cur_sym_addr + local found=0 + while read line; do + local fields=($line) + cur_sym_addr=0x${fields[1]} + local cur_sym_elf_size=${fields[2]} + local cur_sym_name=${fields[7]:-} + + if [[ $cur_sym_addr = $sym_addr ]] && + [[ $cur_sym_elf_size = $sym_elf_size ]] && + [[ $cur_sym_name = $sym_name ]]; then + found=1 + continue + fi + + if [[ $found = 1 ]]; then + sym_size=$(($cur_sym_addr - $sym_addr)) + [[ $sym_size -lt $sym_elf_size ]] && continue; + found=2 + break + fi + done < <(${READELF} --symbols --wide $objfile | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2) + + if [[ $found = 0 ]]; then + warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size" + DONE=1 + return + fi + + # If nothing was found after the symbol, assume it's the last + # symbol in the section. + [[ $found = 1 ]] && sym_size=$(($sec_size - $sym_addr)) + + if [[ -z $sym_size ]] || [[ $sym_size -le 0 ]]; then + warn "bad symbol size: sym_addr: $sym_addr cur_sym_addr: $cur_sym_addr" + DONE=1 + return + fi + sym_size=0x$(printf %x $sym_size) - # calculate the address - local addr=$(($sym_base + $offset)) + # Calculate the section address from user-supplied offset: + local addr=$(($sym_addr + $offset)) if [[ -z $addr ]] || [[ $addr = 0 ]]; then - warn "bad address: $sym_base + $offset" + warn "bad address: $sym_addr + $offset" DONE=1 return fi addr=0x$(printf %x $addr) - # weed out non-function symbols - if [[ $sym_type != t ]] && [[ $sym_type != T ]]; then + # If the user provided a size, make sure it matches the symbol's size: + if [[ -n $user_size ]] && [[ $user_size -ne $sym_size ]]; then [[ $print_warnings = 1 ]] && - echo "skipping $func address at $addr due to non-function symbol of type '$sym_type'" - continue - fi - - # if the user provided a size, make sure it matches the symbol's size - if [[ -n $size ]] && [[ $size -ne $sym_size ]]; then - [[ $print_warnings = 1 ]] && - echo "skipping $func address at $addr due to size mismatch ($size != $sym_size)" + echo "skipping $sym_name address at $addr due to size mismatch ($user_size != $sym_size)" continue; fi - # make sure the provided offset is within the symbol's range + # Make sure the provided offset is within the symbol's range: if [[ $offset -gt $sym_size ]]; then [[ $print_warnings = 1 ]] && - echo "skipping $func address at $addr due to size mismatch ($offset > $sym_size)" + echo "skipping $sym_name address at $addr due to size mismatch ($offset > $sym_size)" continue fi - # separate multiple entries with a blank line + # In case of duplicates or multiple addresses specified on the + # cmdline, separate multiple entries with 
a blank line: [[ $FIRST = 0 ]] && echo FIRST=0 - # pass real address to addr2line - echo "$func+$offset/$sym_size:" - local file_lines=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;") - [[ -z $file_lines ]] && return + echo "$sym_name+$offset/$sym_size:" + # Pass section address to addr2line and strip absolute paths + # from the output: + local output=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;") + [[ -z $output ]] && continue + + # Default output (non --list): if [[ $LIST = 0 ]]; then - echo "$file_lines" | while read -r line + echo "$output" | while read -r line do echo $line done DONE=1; - return + continue fi - # show each line with context - echo "$file_lines" | while read -r line + # For --list, show each line with its corresponding source code: + echo "$output" | while read -r line do echo echo $line @@ -184,12 +228,12 @@ __faddr2line() { n1=$[$n-5] n2=$[$n+5] f=$(echo $line | sed 's/.*at \(.\+\):.*/\1/g') - awk 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f + ${AWK} 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f done DONE=1 - done < <(${NM} -n $objfile | awk -v fn=$func -v end=$file_end '$3 == fn { found=1; line=$0; start=$1; next } found == 1 { found=0; print line, "0x"$1 } END {if (found == 1) print line, end; }') + done < <(${READELF} --symbols --wide $objfile | ${AWK} -v fn=$sym_name '$4 == "FUNC" && $8 == fn') } [[ $# -lt 2 ]] && usage From 0572a5bd38e3209db151116fd13b36e7d832d512 Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Sun, 6 Mar 2022 19:08:07 +0100 Subject: [PATCH 0521/1842] media: aspeed: Fix an error handling path in aspeed_video_probe() [ Upstream commit 310fda622bbd38be17fb444f7f049b137af3bc0d ] A dma_free_coherent() call is missing in the error handling path of the probe, as already done in the remove function. In fact, this call is included in aspeed_video_free_buf(). So use the latter both in the error handling path of the probe and in the remove function. It is easier to see the relation with aspeed_video_alloc_buf() this way. 
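To make the shape of that fix concrete: the allocation and its undo live in a single pair of helpers, and the probe error path calls the same free helper that remove already uses, so the two stay in sync. A minimal sketch with invented example_* names, assuming a wrapper around dma_free_coherent() in the spirit of aspeed_video_free_buf(); this is not the aspeed-video code itself.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/platform_device.h>

struct example_buf {
	size_t size;
	void *virt;
	dma_addr_t dma;
};

/* Single place that knows how to undo the coherent allocation. */
static void example_free_buf(struct device *dev, struct example_buf *buf)
{
	dma_free_coherent(dev, buf->size, buf->virt, buf->dma);
}

static int example_setup(struct platform_device *pdev)
{
	return 0;	/* stand-in for later probe steps that may fail */
}

static int example_probe(struct platform_device *pdev)
{
	struct example_buf jpeg = { .size = 4096 };
	int rc;

	jpeg.virt = dma_alloc_coherent(&pdev->dev, jpeg.size, &jpeg.dma, GFP_KERNEL);
	if (!jpeg.virt)
		return -ENOMEM;

	rc = example_setup(pdev);
	if (rc) {
		/* same helper the remove path uses, so nothing is forgotten */
		example_free_buf(&pdev->dev, &jpeg);
		return rc;
	}
	return 0;
}
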
Fixes: d2b4387f3bdf ("media: platform: Add Aspeed Video Engine driver") Signed-off-by: Christophe JAILLET Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/aspeed-video.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c index 757a58829a51..9d9124308f6a 100644 --- a/drivers/media/platform/aspeed-video.c +++ b/drivers/media/platform/aspeed-video.c @@ -1723,6 +1723,7 @@ static int aspeed_video_probe(struct platform_device *pdev) rc = aspeed_video_setup_video(video); if (rc) { + aspeed_video_free_buf(video, &video->jpeg); clk_unprepare(video->vclk); clk_unprepare(video->eclk); return rc; @@ -1748,8 +1749,7 @@ static int aspeed_video_remove(struct platform_device *pdev) v4l2_device_unregister(v4l2_dev); - dma_free_coherent(video->dev, VE_JPEG_HEADER_SIZE, video->jpeg.virt, - video->jpeg.dma); + aspeed_video_free_buf(video, &video->jpeg); of_reserved_mem_device_release(dev); From 8e4e0c4ac55e0afa780817acc4fd3accdacdcc64 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 7 Mar 2022 08:52:06 +0100 Subject: [PATCH 0522/1842] media: exynos4-is: Fix PM disable depth imbalance in fimc_is_probe [ Upstream commit 5c0db68ce0faeb000c3540d095eb272d671a6e03 ] If probe fails then we need to call pm_runtime_disable() to balance out the previous pm_runtime_enable() call. Fixes: 9a761e436843 ("[media] exynos4-is: Add Exynos4x12 FIMC-IS driver") Signed-off-by: Miaoqian Lin Reviewed-by: Krzysztof Kozlowski Reviewed-by: Alim Akhtar Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/exynos4-is/fimc-is.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c index d26fa5967d82..d4b31b3c9282 100644 --- a/drivers/media/platform/exynos4-is/fimc-is.c +++ b/drivers/media/platform/exynos4-is/fimc-is.c @@ -830,7 +830,7 @@ static int fimc_is_probe(struct platform_device *pdev) ret = pm_runtime_resume_and_get(dev); if (ret < 0) - goto err_irq; + goto err_pm_disable; vb2_dma_contig_set_max_seg_size(dev, DMA_BIT_MASK(32)); @@ -864,6 +864,8 @@ static int fimc_is_probe(struct platform_device *pdev) pm_runtime_put_noidle(dev); if (!pm_runtime_enabled(dev)) fimc_is_runtime_suspend(dev); +err_pm_disable: + pm_runtime_disable(dev); err_irq: free_irq(is->irq, is); err_clk: From b3e483735847c7ae2c10a685624eb13c1b6bef63 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 7 Mar 2022 09:08:59 +0100 Subject: [PATCH 0523/1842] media: st-delta: Fix PM disable depth imbalance in delta_probe [ Upstream commit 94e3dba710fe0afc772172305444250023fc2d30 ] The pm_runtime_enable will decrease power disable depth. If the probe fails, we should use pm_runtime_disable() to balance pm_runtime_enable(). 
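Put differently, once pm_runtime_enable() has run in probe, every later failure exit needs a pm_runtime_disable() before returning, which is what the new err_pm_disable label provides. A rough sketch of that goto ladder with made-up example_* helper names, not the actual st-delta probe:

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int example_init_ipc(struct platform_device *pdev)
{
	return 0;	/* placeholder for a step that may fail */
}

static int example_register_decoders(struct platform_device *pdev)
{
	return 0;	/* placeholder for a step that may fail */
}

static int example_probe(struct platform_device *pdev)
{
	int ret;

	pm_runtime_enable(&pdev->dev);

	ret = example_init_ipc(pdev);
	if (ret)
		goto err_pm_disable;

	ret = example_register_decoders(pdev);
	if (ret)
		goto err_pm_disable;

	return 0;

err_pm_disable:
	/* balances the pm_runtime_enable() above */
	pm_runtime_disable(&pdev->dev);
	return ret;
}
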
Fixes: f386509e4959 ("[media] st-delta: STiH4xx multi-format video decoder v4l2 driver") Signed-off-by: Miaoqian Lin Acked-by: Hugues Fruchet Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/sti/delta/delta-v4l2.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/media/platform/sti/delta/delta-v4l2.c b/drivers/media/platform/sti/delta/delta-v4l2.c index c691b3d81549..5da49a0ab70b 100644 --- a/drivers/media/platform/sti/delta/delta-v4l2.c +++ b/drivers/media/platform/sti/delta/delta-v4l2.c @@ -1862,7 +1862,7 @@ static int delta_probe(struct platform_device *pdev) if (ret) { dev_err(delta->dev, "%s failed to initialize firmware ipc channel\n", DELTA_PREFIX); - goto err; + goto err_pm_disable; } /* register all available decoders */ @@ -1876,7 +1876,7 @@ static int delta_probe(struct platform_device *pdev) if (ret) { dev_err(delta->dev, "%s failed to register V4L2 device\n", DELTA_PREFIX); - goto err; + goto err_pm_disable; } delta->work_queue = create_workqueue(DELTA_NAME); @@ -1901,6 +1901,8 @@ static int delta_probe(struct platform_device *pdev) destroy_workqueue(delta->work_queue); err_v4l2: v4l2_device_unregister(&delta->v4l2_dev); +err_pm_disable: + pm_runtime_disable(dev); err: return ret; } From 7d792640d3e91752b067e48fb9aee8b09f83e8e8 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Fri, 18 Mar 2022 12:01:01 +0100 Subject: [PATCH 0524/1842] media: exynos4-is: Change clk_disable to clk_disable_unprepare [ Upstream commit 9fadab72a6916c7507d7fedcd644859eef995078 ] The corresponding API for clk_prepare_enable is clk_disable_unprepare, other than clk_disable. Fix this by changing clk_disable to clk_disable_unprepare. Fixes: b4155d7d5b2c ("[media] exynos4-is: Ensure fimc-is clocks are not enabled until properly configured") Signed-off-by: Miaoqian Lin Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/exynos4-is/fimc-is.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c index d4b31b3c9282..dc2a144cd29b 100644 --- a/drivers/media/platform/exynos4-is/fimc-is.c +++ b/drivers/media/platform/exynos4-is/fimc-is.c @@ -140,7 +140,7 @@ static int fimc_is_enable_clocks(struct fimc_is *is) dev_err(&is->pdev->dev, "clock %s enable failed\n", fimc_is_clocks[i]); for (--i; i >= 0; i--) - clk_disable(is->clocks[i]); + clk_disable_unprepare(is->clocks[i]); return ret; } pr_debug("enabled clock: %s\n", fimc_is_clocks[i]); From a3304766d9384886e6d3092c776273526947a2e9 Mon Sep 17 00:00:00 2001 From: Pavel Skripkin Date: Fri, 15 Apr 2022 23:24:48 +0200 Subject: [PATCH 0525/1842] media: pvrusb2: fix array-index-out-of-bounds in pvr2_i2c_core_init [ Upstream commit 471bec68457aaf981add77b4f590d65dd7da1059 ] Syzbot reported that -1 is used as array index. The problem was in missing validation check. hdw->unit_number is initialized with -1 and then if init table walk fails this value remains unchanged. Since code blindly uses this member for array indexing adding sanity check is the easiest fix for that. hdw->workpoll initialization moved upper to prevent warning in __flush_work. 
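The general lesson is about sentinel-initialized indices: if a search can leave the index at -1, validate it before it is ever used to index an array. A small stand-alone C illustration of the pattern, with invented names rather than the pvrusb2 code:

#include <stdio.h>

#define MAX_UNITS 8

static int find_free_unit(const int *in_use)
{
	int unit = -1;			/* sentinel: "not found" */

	for (int i = 0; i < MAX_UNITS; i++) {
		if (!in_use[i]) {
			unit = i;
			break;
		}
	}
	return unit;
}

int main(void)
{
	int in_use[MAX_UNITS] = { 1, 1, 1, 1, 1, 1, 1, 1 };	/* table walk will fail */
	char names[MAX_UNITS][16];
	int unit = find_free_unit(in_use);

	if (unit == -1) {	/* the sanity check: bail out before indexing */
		fprintf(stderr, "no free unit\n");
		return 1;
	}
	snprintf(names[unit], sizeof(names[unit]), "pvr%d", unit);
	printf("%s\n", names[unit]);
	return 0;
}
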
Reported-and-tested-by: syzbot+1a247e36149ffd709a9b@syzkaller.appspotmail.com Fixes: d855497edbfb ("V4L/DVB (4228a): pvrusb2 to kernel 2.6.18") Signed-off-by: Pavel Skripkin Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/usb/pvrusb2/pvrusb2-hdw.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c index 3915d551d59e..fccd1798445d 100644 --- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c +++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c @@ -2569,6 +2569,11 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf, } while (0); mutex_unlock(&pvr2_unit_mtx); + INIT_WORK(&hdw->workpoll, pvr2_hdw_worker_poll); + + if (hdw->unit_number == -1) + goto fail; + cnt1 = 0; cnt2 = scnprintf(hdw->name+cnt1,sizeof(hdw->name)-cnt1,"pvrusb2"); cnt1 += cnt2; @@ -2580,8 +2585,6 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf, if (cnt1 >= sizeof(hdw->name)) cnt1 = sizeof(hdw->name)-1; hdw->name[cnt1] = 0; - INIT_WORK(&hdw->workpoll,pvr2_hdw_worker_poll); - pvr2_trace(PVR2_TRACE_INIT,"Driver unit number is %d, name is %s", hdw->unit_number,hdw->name); From fc68385fcbac2838ee86562b63fcfdf86ca42627 Mon Sep 17 00:00:00 2001 From: Michael Rodin Date: Tue, 23 Nov 2021 12:50:36 +0100 Subject: [PATCH 0526/1842] media: vsp1: Fix offset calculation for plane cropping [ Upstream commit 5f25abec8f21b7527c1223a354d23c270befddb3 ] The vertical subsampling factor is currently not considered in the offset calculation for plane cropping done in rpf_configure_partition. This causes a distortion (shift of the color plane) when formats with the vsub factor larger than 1 are used (e.g. NV12, see vsp1_video_formats in vsp1_pipe.c). This commit considers vsub factor for all planes except plane 0 (luminance). Drop generalization of the offset calculation to reduce the binary size. Fixes: e5ad37b64de9 ("[media] v4l: vsp1: Add cropping support") Signed-off-by: Michael Rodin Signed-off-by: LUU HOAI Signed-off-by: Laurent Pinchart Reviewed-by: Kieran Bingham Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/platform/vsp1/vsp1_rpf.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/media/platform/vsp1/vsp1_rpf.c b/drivers/media/platform/vsp1/vsp1_rpf.c index 85587c1b6a37..75083cb234fe 100644 --- a/drivers/media/platform/vsp1/vsp1_rpf.c +++ b/drivers/media/platform/vsp1/vsp1_rpf.c @@ -291,11 +291,11 @@ static void rpf_configure_partition(struct vsp1_entity *entity, + crop.left * fmtinfo->bpp[0] / 8; if (format->num_planes > 1) { + unsigned int bpl = format->plane_fmt[1].bytesperline; unsigned int offset; - offset = crop.top * format->plane_fmt[1].bytesperline - + crop.left / fmtinfo->hsub - * fmtinfo->bpp[1] / 8; + offset = crop.top / fmtinfo->vsub * bpl + + crop.left / fmtinfo->hsub * fmtinfo->bpp[1] / 8; mem.addr[1] += offset; mem.addr[2] += offset; } From 36c644c63bfcaee2d3a426f45e89a9cd09799318 Mon Sep 17 00:00:00 2001 From: Ying Hsu Date: Sat, 26 Mar 2022 07:09:28 +0000 Subject: [PATCH 0527/1842] Bluetooth: fix dangling sco_conn and use-after-free in sco_sock_timeout [ Upstream commit 7aa1e7d15f8a5b65f67bacb100d8fc033b21efa2 ] Connecting the same socket twice consecutively in sco_sock_connect() could lead to a race condition where two sco_conn objects are created but only one is associated with the socket. 
If the socket is closed before the SCO connection is established, the timer associated with the dangling sco_conn object won't be canceled. As the sock object is being freed, the use-after-free problem happens when the timer callback function sco_sock_timeout() accesses the socket. Here's the call trace: dump_stack+0x107/0x163 ? refcount_inc+0x1c/ print_address_description.constprop.0+0x1c/0x47e ? refcount_inc+0x1c/0x7b kasan_report+0x13a/0x173 ? refcount_inc+0x1c/0x7b check_memory_region+0x132/0x139 refcount_inc+0x1c/0x7b sco_sock_timeout+0xb2/0x1ba process_one_work+0x739/0xbd1 ? cancel_delayed_work+0x13f/0x13f ? __raw_spin_lock_init+0xf0/0xf0 ? to_kthread+0x59/0x85 worker_thread+0x593/0x70e kthread+0x346/0x35a ? drain_workqueue+0x31a/0x31a ? kthread_bind+0x4b/0x4b ret_from_fork+0x1f/0x30 Link: https://syzkaller.appspot.com/bug?extid=2bef95d3ab4daa10155b Reported-by: syzbot+2bef95d3ab4daa10155b@syzkaller.appspotmail.com Fixes: e1dee2c1de2b ("Bluetooth: fix repeated calls to sco_sock_kill") Signed-off-by: Ying Hsu Reviewed-by: Joseph Hwang Signed-off-by: Marcel Holtmann Signed-off-by: Sasha Levin --- net/bluetooth/sco.c | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c index 2f2b8ddc4dd5..df254c7de2dd 100644 --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -575,19 +575,24 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen addr->sa_family != AF_BLUETOOTH) return -EINVAL; - if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) - return -EBADFD; + lock_sock(sk); + if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) { + err = -EBADFD; + goto done; + } - if (sk->sk_type != SOCK_SEQPACKET) - return -EINVAL; + if (sk->sk_type != SOCK_SEQPACKET) { + err = -EINVAL; + goto done; + } hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR); - if (!hdev) - return -EHOSTUNREACH; + if (!hdev) { + err = -EHOSTUNREACH; + goto done; + } hci_dev_lock(hdev); - lock_sock(sk); - /* Set destination address and psm */ bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr); From 5702c3c6576d753f405aa5863fdc079e7f6fb779 Mon Sep 17 00:00:00 2001 From: Howard Chung Date: Thu, 26 Nov 2020 12:22:21 +0800 Subject: [PATCH 0528/1842] Bluetooth: Interleave with allowlist scan [ Upstream commit c4f1f408168cd6a83d973e98e1cd1888e4d3d907 ] This patch implements the interleaving between allowlist scan and no-filter scan. It'll be used to save power when at least one monitor is registered and at least one pending connection or one device to be scanned for. The durations of the allowlist scan and the no-filter scan are controlled by MGMT command: Set Default System Configuration. The default values are set randomly for now. 
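Mechanically, the interleaving is a small state machine driven by a delayed work item: program the scan mode that matches the current state, arm the work again with that mode's duration, and flip the state for the next run. A stripped-down sketch of that shape, using hypothetical example_* names rather than the hci_request.c implementation:

#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/workqueue.h>

enum example_scan_state { EXAMPLE_SCAN_NO_FILTER, EXAMPLE_SCAN_ALLOWLIST };

struct example_scanner {
	enum example_scan_state state;
	u16 allowlist_ms;		/* e.g. 300, settable via mgmt */
	u16 no_filter_ms;		/* e.g. 500, settable via mgmt */
	struct workqueue_struct *wq;
	struct delayed_work work;
};

static void example_interleave_work(struct work_struct *work)
{
	struct example_scanner *s = container_of(work, struct example_scanner,
						 work.work);
	unsigned long timeout;

	if (s->state == EXAMPLE_SCAN_ALLOWLIST) {
		/* program an allowlist-filtered scan for this interval */
		timeout = msecs_to_jiffies(s->allowlist_ms);
		s->state = EXAMPLE_SCAN_NO_FILTER;	/* next run: no filter */
	} else {
		/* program an unfiltered scan for this interval */
		timeout = msecs_to_jiffies(s->no_filter_ms);
		s->state = EXAMPLE_SCAN_ALLOWLIST;	/* next run: allowlist */
	}

	queue_delayed_work(s->wq, &s->work, timeout);
}
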
Signed-off-by: Howard Chung Reviewed-by: Alain Michaud Reviewed-by: Manish Mandlik Signed-off-by: Marcel Holtmann Signed-off-by: Johan Hedberg Signed-off-by: Sasha Levin --- include/net/bluetooth/hci_core.h | 10 +++ net/bluetooth/hci_core.c | 3 + net/bluetooth/hci_request.c | 128 +++++++++++++++++++++++++++++-- net/bluetooth/mgmt_config.c | 10 +++ 4 files changed, 144 insertions(+), 7 deletions(-) diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index 09104ca14a3e..0520c550086e 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -364,6 +364,8 @@ struct hci_dev { __u8 ssp_debug_mode; __u8 hw_error_code; __u32 clock; + __u16 advmon_allowlist_duration; + __u16 advmon_no_filter_duration; __u16 devid_source; __u16 devid_vendor; @@ -545,6 +547,14 @@ struct hci_dev { struct delayed_work rpa_expired; bdaddr_t rpa; + enum { + INTERLEAVE_SCAN_NONE, + INTERLEAVE_SCAN_NO_FILTER, + INTERLEAVE_SCAN_ALLOWLIST + } interleave_scan_state; + + struct delayed_work interleave_scan; + #if IS_ENABLED(CONFIG_BT_LEDS) struct led_trigger *power_led; #endif diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c index c331b4176de7..99657c1e66ba 100644 --- a/net/bluetooth/hci_core.c +++ b/net/bluetooth/hci_core.c @@ -3606,6 +3606,9 @@ struct hci_dev *hci_alloc_dev(void) hdev->cur_adv_instance = 0x00; hdev->adv_instance_timeout = 0; + hdev->advmon_allowlist_duration = 300; + hdev->advmon_no_filter_duration = 500; + hdev->sniff_max_interval = 800; hdev->sniff_min_interval = 80; diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c index d965b7c66bd6..2405e1ffebbd 100644 --- a/net/bluetooth/hci_request.c +++ b/net/bluetooth/hci_request.c @@ -382,6 +382,53 @@ void __hci_req_write_fast_connectable(struct hci_request *req, bool enable) hci_req_add(req, HCI_OP_WRITE_PAGE_SCAN_TYPE, 1, &type); } +static void start_interleave_scan(struct hci_dev *hdev) +{ + hdev->interleave_scan_state = INTERLEAVE_SCAN_NO_FILTER; + queue_delayed_work(hdev->req_workqueue, + &hdev->interleave_scan, 0); +} + +static bool is_interleave_scanning(struct hci_dev *hdev) +{ + return hdev->interleave_scan_state != INTERLEAVE_SCAN_NONE; +} + +static void cancel_interleave_scan(struct hci_dev *hdev) +{ + bt_dev_dbg(hdev, "cancelling interleave scan"); + + cancel_delayed_work_sync(&hdev->interleave_scan); + + hdev->interleave_scan_state = INTERLEAVE_SCAN_NONE; +} + +/* Return true if interleave_scan wasn't started until exiting this function, + * otherwise, return false + */ +static bool __hci_update_interleaved_scan(struct hci_dev *hdev) +{ + /* If there is at least one ADV monitors and one pending LE connection + * or one device to be scanned for, we should alternate between + * allowlist scan and one without any filters to save power. + */ + bool use_interleaving = hci_is_adv_monitoring(hdev) && + !(list_empty(&hdev->pend_le_conns) && + list_empty(&hdev->pend_le_reports)); + bool is_interleaving = is_interleave_scanning(hdev); + + if (use_interleaving && !is_interleaving) { + start_interleave_scan(hdev); + bt_dev_dbg(hdev, "starting interleave scan"); + return true; + } + + if (!use_interleaving && is_interleaving) + cancel_interleave_scan(hdev); + + return false; +} + /* This function controls the background scanning based on hdev->pend_le_conns * list. If there are pending LE connection we start the background scanning, * otherwise we stop it. 
@@ -454,8 +501,7 @@ static void __hci_update_background_scan(struct hci_request *req) hci_req_add_le_scan_disable(req, false); hci_req_add_le_passive_scan(req); - - BT_DBG("%s starting background scanning", hdev->name); + bt_dev_dbg(hdev, "starting background scanning"); } } @@ -852,12 +898,17 @@ static u8 update_white_list(struct hci_request *req) return 0x00; } - /* Once the controller offloading of advertisement monitor is in place, - * the if condition should include the support of MSFT extension - * support. If suspend is ongoing, whitelist should be the default to - * prevent waking by random advertisements. + /* Use the allowlist unless the following conditions are all true: + * - We are not currently suspending + * - There are 1 or more ADV monitors registered + * - Interleaved scanning is not currently using the allowlist + * + * Once the controller offloading of advertisement monitor is in place, + * the above condition should include the support of MSFT extension + * support. */ - if (!idr_is_empty(&hdev->adv_monitors_idr) && !hdev->suspended) + if (!idr_is_empty(&hdev->adv_monitors_idr) && !hdev->suspended && + hdev->interleave_scan_state != INTERLEAVE_SCAN_ALLOWLIST) return 0x00; /* Select filter policy to use white list */ @@ -1010,6 +1061,10 @@ void hci_req_add_le_passive_scan(struct hci_request *req) &own_addr_type)) return; + if (__hci_update_interleaved_scan(hdev)) + return; + + bt_dev_dbg(hdev, "interleave state %d", hdev->interleave_scan_state); /* Adding or removing entries from the white list must * happen before enabling scanning. The controller does * not allow white list modification while scanning. @@ -1884,6 +1939,62 @@ static void adv_timeout_expire(struct work_struct *work) hci_dev_unlock(hdev); } +static int hci_req_add_le_interleaved_scan(struct hci_request *req, + unsigned long opt) +{ + struct hci_dev *hdev = req->hdev; + int ret = 0; + + hci_dev_lock(hdev); + + if (hci_dev_test_flag(hdev, HCI_LE_SCAN)) + hci_req_add_le_scan_disable(req, false); + hci_req_add_le_passive_scan(req); + + switch (hdev->interleave_scan_state) { + case INTERLEAVE_SCAN_ALLOWLIST: + bt_dev_dbg(hdev, "next state: allowlist"); + hdev->interleave_scan_state = INTERLEAVE_SCAN_NO_FILTER; + break; + case INTERLEAVE_SCAN_NO_FILTER: + bt_dev_dbg(hdev, "next state: no filter"); + hdev->interleave_scan_state = INTERLEAVE_SCAN_ALLOWLIST; + break; + case INTERLEAVE_SCAN_NONE: + BT_ERR("unexpected error"); + ret = -1; + } + + hci_dev_unlock(hdev); + + return ret; +} + +static void interleave_scan_work(struct work_struct *work) +{ + struct hci_dev *hdev = container_of(work, struct hci_dev, + interleave_scan.work); + u8 status; + unsigned long timeout; + + if (hdev->interleave_scan_state == INTERLEAVE_SCAN_ALLOWLIST) { + timeout = msecs_to_jiffies(hdev->advmon_allowlist_duration); + } else if (hdev->interleave_scan_state == INTERLEAVE_SCAN_NO_FILTER) { + timeout = msecs_to_jiffies(hdev->advmon_no_filter_duration); + } else { + bt_dev_err(hdev, "unexpected error"); + return; + } + + hci_req_sync(hdev, hci_req_add_le_interleaved_scan, 0, + HCI_CMD_TIMEOUT, &status); + + /* Don't continue interleaving if it was canceled */ + if (is_interleave_scanning(hdev)) + queue_delayed_work(hdev->req_workqueue, + &hdev->interleave_scan, timeout); +} + int hci_get_random_address(struct hci_dev *hdev, bool require_privacy, bool use_rpa, struct adv_info *adv_instance, u8 *own_addr_type, bdaddr_t *rand_addr) @@ -3311,6 +3422,7 @@ void hci_request_setup(struct hci_dev *hdev) INIT_DELAYED_WORK(&hdev->le_scan_disable, 
le_scan_disable_work); INIT_DELAYED_WORK(&hdev->le_scan_restart, le_scan_restart_work); INIT_DELAYED_WORK(&hdev->adv_instance_expire, adv_timeout_expire); + INIT_DELAYED_WORK(&hdev->interleave_scan, interleave_scan_work); } void hci_request_cancel_all(struct hci_dev *hdev) @@ -3330,4 +3442,6 @@ void hci_request_cancel_all(struct hci_dev *hdev) cancel_delayed_work_sync(&hdev->adv_instance_expire); hdev->adv_instance_timeout = 0; } + + cancel_interleave_scan(hdev); } diff --git a/net/bluetooth/mgmt_config.c b/net/bluetooth/mgmt_config.c index b30b571f8caf..2d3ad288c78a 100644 --- a/net/bluetooth/mgmt_config.c +++ b/net/bluetooth/mgmt_config.c @@ -67,6 +67,8 @@ int read_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data, HDEV_PARAM_U16(0x001a, le_supv_timeout), HDEV_PARAM_U16_JIFFIES_TO_MSECS(0x001b, def_le_autoconnect_timeout), + HDEV_PARAM_U16(0x001d, advmon_allowlist_duration), + HDEV_PARAM_U16(0x001e, advmon_no_filter_duration), }; struct mgmt_rp_read_def_system_config *rp = (void *)params; @@ -138,6 +140,8 @@ int set_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data, case 0x0019: case 0x001a: case 0x001b: + case 0x001d: + case 0x001e: if (len != sizeof(u16)) { bt_dev_warn(hdev, "invalid length %d, exp %zu for type %d", len, sizeof(u16), type); @@ -251,6 +255,12 @@ int set_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data, hdev->def_le_autoconnect_timeout = msecs_to_jiffies(TLV_GET_LE16(buffer)); break; + case 0x0001d: + hdev->advmon_allowlist_duration = TLV_GET_LE16(buffer); + break; + case 0x0001e: + hdev->advmon_no_filter_duration = TLV_GET_LE16(buffer); + break; default: bt_dev_warn(hdev, "unsupported parameter %u", type); break; From 394df9f17e1577871d22db2837371e4937895282 Mon Sep 17 00:00:00 2001 From: Bhaskar Chowdhury Date: Thu, 25 Mar 2021 10:05:44 +0530 Subject: [PATCH 0529/1842] Bluetooth: L2CAP: Rudimentary typo fixes [ Upstream commit 5153ceb9e622f4e27de461404edc73324da70f8c ] s/minium/minimum/ s/procdure/procedure/ Signed-off-by: Bhaskar Chowdhury Acked-by: Randy Dunlap Signed-off-by: Marcel Holtmann Signed-off-by: Sasha Levin --- net/bluetooth/l2cap_core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 012c1a0abda8..ad33c592cde4 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -1689,7 +1689,7 @@ static void l2cap_le_conn_ready(struct l2cap_conn *conn) smp_conn_security(hcon, hcon->pending_sec_level); /* For LE slave connections, make sure the connection interval - * is in the range of the minium and maximum interval that has + * is in the range of the minimum and maximum interval that has * been configured for this connection. If not, then trigger * the connection update procedure. */ @@ -7540,7 +7540,7 @@ static void l2cap_data_channel(struct l2cap_conn *conn, u16 cid, BT_DBG("chan %p, len %d", chan, skb->len); /* If we receive data on a fixed channel before the info req/rsp - * procdure is done simply assume that the channel is supported + * procedure is done simply assume that the channel is supported * and mark it as ready. 
*/ if (chan->chan_type == L2CAP_CHAN_FIXED) From c024f6f11d4d2d9b4fa2be2ba4bc01a5bbe6ee4b Mon Sep 17 00:00:00 2001 From: Sathish Narasimman Date: Mon, 5 Apr 2021 20:00:41 +0530 Subject: [PATCH 0530/1842] Bluetooth: LL privacy allow RPA [ Upstream commit 8ce85ada0a05e21a5386ba5c417c52ab00fcd0d1 ] allow RPA to add bd address to whitelist Signed-off-by: Sathish Narasimman Signed-off-by: Marcel Holtmann Signed-off-by: Sasha Levin --- net/bluetooth/hci_request.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c index 2405e1ffebbd..eb4c1c18eb01 100644 --- a/net/bluetooth/hci_request.c +++ b/net/bluetooth/hci_request.c @@ -842,6 +842,10 @@ static u8 update_white_list(struct hci_request *req) */ bool allow_rpa = hdev->suspended; + if (use_ll_privacy(hdev) && + hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY)) + allow_rpa = true; + /* Go through the current white list programmed into the * controller one by one and check if that address is still * in the list of pending connections or list of devices to From d763aa352cfc242f395d2d4cfeb288b54d2b979b Mon Sep 17 00:00:00 2001 From: Archie Pusaka Date: Fri, 4 Jun 2021 16:26:25 +0800 Subject: [PATCH 0531/1842] Bluetooth: use inclusive language in HCI role comments [ Upstream commit 74be523ce6bed0531e4f31c3e1387909589e9bfe ] This patch replaces some non-inclusive terms based on the appropriate language mapping table compiled by the Bluetooth SIG: https://specificationrefs.bluetooth.com/language-mapping/Appropriate_Language_Mapping_Table.pdf Specifically, these terms are replaced: master -> initiator (for smp) or central (everything else) slave -> responder (for smp) or peripheral (everything else) The #define preprocessor terms are unchanged for now to not disturb dependent APIs. Signed-off-by: Archie Pusaka Signed-off-by: Marcel Holtmann Signed-off-by: Sasha Levin --- net/bluetooth/hci_conn.c | 8 ++++---- net/bluetooth/hci_event.c | 6 +++--- net/bluetooth/l2cap_core.c | 2 +- net/bluetooth/smp.c | 6 +++--- 4 files changed, 11 insertions(+), 11 deletions(-) diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index ecd2ffcf2ba2..140d9764c77e 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -240,7 +240,7 @@ int hci_disconnect(struct hci_conn *conn, __u8 reason) { BT_DBG("hcon %p", conn); - /* When we are master of an established connection and it enters + /* When we are central of an established connection and it enters * the disconnect timeout, then go ahead and try to read the * current clock offset. Processing of the result is done * within the event handling and hci_clock_offset_evt function. @@ -1065,16 +1065,16 @@ struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst, hci_req_init(&req, hdev); - /* Disable advertising if we're active. For master role + /* Disable advertising if we're active. For central role * connections most controllers will refuse to connect if - * advertising is enabled, and for slave role connections we + * advertising is enabled, and for peripheral role connections we * anyway have to disable it in order to start directed * advertising. */ if (hci_dev_test_flag(hdev, HCI_LE_ADV)) __hci_req_disable_advertising(&req); - /* If requested to connect as slave use directed advertising */ + /* If requested to connect as peripheral use directed advertising */ if (conn->role == HCI_ROLE_SLAVE) { /* If we're active scanning most controllers are unable * to initiate advertising. Simply reject the attempt. 
diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index e926e80d9731..061ef20e135e 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -2759,9 +2759,9 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) bacpy(&cp.bdaddr, &ev->bdaddr); if (lmp_rswitch_capable(hdev) && (mask & HCI_LM_MASTER)) - cp.role = 0x00; /* Become master */ + cp.role = 0x00; /* Become central */ else - cp.role = 0x01; /* Remain slave */ + cp.role = 0x01; /* Remain peripheral */ hci_send_cmd(hdev, HCI_OP_ACCEPT_CONN_REQ, sizeof(cp), &cp); } else if (!(flags & HCI_PROTO_DEFER)) { @@ -5153,7 +5153,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status, conn->dst_type = bdaddr_type; /* If we didn't have a hci_conn object previously - * but we're in master role this must be something + * but we're in central role this must be something * initiated using a white list. Since white list based * connections are not "first class citizens" we don't * have full tracking of them. Therefore, we go ahead diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index ad33c592cde4..6c80e62cea0c 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -1688,7 +1688,7 @@ static void l2cap_le_conn_ready(struct l2cap_conn *conn) if (hcon->out) smp_conn_security(hcon, hcon->pending_sec_level); - /* For LE slave connections, make sure the connection interval + /* For LE peripheral connections, make sure the connection interval * is in the range of the minimum and maximum interval that has * been configured for this connection. If not, then trigger * the connection update procedure. diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c index 2b7879afc333..b7374dbee23a 100644 --- a/net/bluetooth/smp.c +++ b/net/bluetooth/smp.c @@ -909,8 +909,8 @@ static int tk_request(struct l2cap_conn *conn, u8 remote_oob, u8 auth, hcon->pending_sec_level = BT_SECURITY_HIGH; } - /* If both devices have Keyoard-Display I/O, the master - * Confirms and the slave Enters the passkey. + /* If both devices have Keyboard-Display I/O, the initiator + * Confirms and the responder Enters the passkey. 
*/ if (smp->method == OVERLAP) { if (hcon->role == HCI_ROLE_MASTER) @@ -3076,7 +3076,7 @@ static void bredr_pairing(struct l2cap_chan *chan) if (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags)) return; - /* Only master may initiate SMP over BR/EDR */ + /* Only initiator may initiate SMP over BR/EDR */ if (hcon->role != HCI_ROLE_MASTER) return; From 792f8b0e748c9ef8f19f27dbf5ce656e843c7b1f Mon Sep 17 00:00:00 2001 From: Archie Pusaka Date: Fri, 4 Jun 2021 16:26:27 +0800 Subject: [PATCH 0532/1842] Bluetooth: use inclusive language when filtering devices [ Upstream commit 3d4f9c00492b4e21641e5140a5e78cb50b58d60b ] This patch replaces some non-inclusive terms based on the appropriate language mapping table compiled by the Bluetooth SIG: https://specificationrefs.bluetooth.com/language-mapping/Appropriate_Language_Mapping_Table.pdf Specifically, these terms are replaced: blacklist -> reject list whitelist -> accept list Signed-off-by: Archie Pusaka Reviewed-by: Miao-chen Chou Signed-off-by: Marcel Holtmann Signed-off-by: Sasha Levin --- include/net/bluetooth/hci.h | 16 +++--- include/net/bluetooth/hci_core.h | 8 +-- net/bluetooth/hci_core.c | 24 ++++---- net/bluetooth/hci_debugfs.c | 8 +-- net/bluetooth/hci_event.c | 94 ++++++++++++++++---------------- net/bluetooth/hci_request.c | 89 +++++++++++++++--------------- net/bluetooth/hci_sock.c | 12 ++-- net/bluetooth/l2cap_core.c | 4 +- net/bluetooth/mgmt.c | 14 ++--- 9 files changed, 135 insertions(+), 134 deletions(-) diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index 243de74e118e..ede7a153c69a 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -1503,7 +1503,7 @@ struct hci_cp_le_set_scan_enable { } __packed; #define HCI_LE_USE_PEER_ADDR 0x00 -#define HCI_LE_USE_WHITELIST 0x01 +#define HCI_LE_USE_ACCEPT_LIST 0x01 #define HCI_OP_LE_CREATE_CONN 0x200d struct hci_cp_le_create_conn { @@ -1523,22 +1523,22 @@ struct hci_cp_le_create_conn { #define HCI_OP_LE_CREATE_CONN_CANCEL 0x200e -#define HCI_OP_LE_READ_WHITE_LIST_SIZE 0x200f -struct hci_rp_le_read_white_list_size { +#define HCI_OP_LE_READ_ACCEPT_LIST_SIZE 0x200f +struct hci_rp_le_read_accept_list_size { __u8 status; __u8 size; } __packed; -#define HCI_OP_LE_CLEAR_WHITE_LIST 0x2010 +#define HCI_OP_LE_CLEAR_ACCEPT_LIST 0x2010 -#define HCI_OP_LE_ADD_TO_WHITE_LIST 0x2011 -struct hci_cp_le_add_to_white_list { +#define HCI_OP_LE_ADD_TO_ACCEPT_LIST 0x2011 +struct hci_cp_le_add_to_accept_list { __u8 bdaddr_type; bdaddr_t bdaddr; } __packed; -#define HCI_OP_LE_DEL_FROM_WHITE_LIST 0x2012 -struct hci_cp_le_del_from_white_list { +#define HCI_OP_LE_DEL_FROM_ACCEPT_LIST 0x2012 +struct hci_cp_le_del_from_accept_list { __u8 bdaddr_type; bdaddr_t bdaddr; } __packed; diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index 0520c550086e..11a92bb4d7a9 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -308,7 +308,7 @@ struct hci_dev { __u8 max_page; __u8 features[HCI_MAX_PAGES][8]; __u8 le_features[8]; - __u8 le_white_list_size; + __u8 le_accept_list_size; __u8 le_resolv_list_size; __u8 le_num_of_adv_sets; __u8 le_states[8]; @@ -499,14 +499,14 @@ struct hci_dev { struct hci_conn_hash conn_hash; struct list_head mgmt_pending; - struct list_head blacklist; - struct list_head whitelist; + struct list_head reject_list; + struct list_head accept_list; struct list_head uuids; struct list_head link_keys; struct list_head long_term_keys; struct list_head identity_resolving_keys; struct list_head remote_oob_data; - 
struct list_head le_white_list; + struct list_head le_accept_list; struct list_head le_resolv_list; struct list_head le_conn_params; struct list_head pend_le_conns; diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c index 99657c1e66ba..2cb0cf035476 100644 --- a/net/bluetooth/hci_core.c +++ b/net/bluetooth/hci_core.c @@ -742,14 +742,14 @@ static int hci_init3_req(struct hci_request *req, unsigned long opt) } if (hdev->commands[26] & 0x40) { - /* Read LE White List Size */ - hci_req_add(req, HCI_OP_LE_READ_WHITE_LIST_SIZE, + /* Read LE Accept List Size */ + hci_req_add(req, HCI_OP_LE_READ_ACCEPT_LIST_SIZE, 0, NULL); } if (hdev->commands[26] & 0x80) { - /* Clear LE White List */ - hci_req_add(req, HCI_OP_LE_CLEAR_WHITE_LIST, 0, NULL); + /* Clear LE Accept List */ + hci_req_add(req, HCI_OP_LE_CLEAR_ACCEPT_LIST, 0, NULL); } if (hdev->commands[34] & 0x40) { @@ -3548,13 +3548,13 @@ static int hci_suspend_notifier(struct notifier_block *nb, unsigned long action, /* Suspend consists of two actions: * - First, disconnect everything and make the controller not * connectable (disabling scanning) - * - Second, program event filter/whitelist and enable scan + * - Second, program event filter/accept list and enable scan */ ret = hci_change_suspend_state(hdev, BT_SUSPEND_DISCONNECT); if (!ret) state = BT_SUSPEND_DISCONNECT; - /* Only configure whitelist if disconnect succeeded and wake + /* Only configure accept list if disconnect succeeded and wake * isn't being prevented. */ if (!ret && !(hdev->prevent_wake && hdev->prevent_wake(hdev))) { @@ -3657,14 +3657,14 @@ struct hci_dev *hci_alloc_dev(void) mutex_init(&hdev->req_lock); INIT_LIST_HEAD(&hdev->mgmt_pending); - INIT_LIST_HEAD(&hdev->blacklist); - INIT_LIST_HEAD(&hdev->whitelist); + INIT_LIST_HEAD(&hdev->reject_list); + INIT_LIST_HEAD(&hdev->accept_list); INIT_LIST_HEAD(&hdev->uuids); INIT_LIST_HEAD(&hdev->link_keys); INIT_LIST_HEAD(&hdev->long_term_keys); INIT_LIST_HEAD(&hdev->identity_resolving_keys); INIT_LIST_HEAD(&hdev->remote_oob_data); - INIT_LIST_HEAD(&hdev->le_white_list); + INIT_LIST_HEAD(&hdev->le_accept_list); INIT_LIST_HEAD(&hdev->le_resolv_list); INIT_LIST_HEAD(&hdev->le_conn_params); INIT_LIST_HEAD(&hdev->pend_le_conns); @@ -3880,8 +3880,8 @@ void hci_cleanup_dev(struct hci_dev *hdev) destroy_workqueue(hdev->req_workqueue); hci_dev_lock(hdev); - hci_bdaddr_list_clear(&hdev->blacklist); - hci_bdaddr_list_clear(&hdev->whitelist); + hci_bdaddr_list_clear(&hdev->reject_list); + hci_bdaddr_list_clear(&hdev->accept_list); hci_uuids_clear(hdev); hci_link_keys_clear(hdev); hci_smp_ltks_clear(hdev); @@ -3889,7 +3889,7 @@ void hci_cleanup_dev(struct hci_dev *hdev) hci_remote_oob_data_clear(hdev); hci_adv_instances_clear(hdev); hci_adv_monitors_clear(hdev); - hci_bdaddr_list_clear(&hdev->le_white_list); + hci_bdaddr_list_clear(&hdev->le_accept_list); hci_bdaddr_list_clear(&hdev->le_resolv_list); hci_conn_params_clear_all(hdev); hci_discovery_filter_clear(hdev); diff --git a/net/bluetooth/hci_debugfs.c b/net/bluetooth/hci_debugfs.c index 5e8af2658e44..338833f12365 100644 --- a/net/bluetooth/hci_debugfs.c +++ b/net/bluetooth/hci_debugfs.c @@ -125,7 +125,7 @@ static int device_list_show(struct seq_file *f, void *ptr) struct bdaddr_list *b; hci_dev_lock(hdev); - list_for_each_entry(b, &hdev->whitelist, list) + list_for_each_entry(b, &hdev->accept_list, list) seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type); list_for_each_entry(p, &hdev->le_conn_params, list) { seq_printf(f, "%pMR (type %u) %u\n", &p->addr, p->addr_type, @@ 
-144,7 +144,7 @@ static int blacklist_show(struct seq_file *f, void *p) struct bdaddr_list *b; hci_dev_lock(hdev); - list_for_each_entry(b, &hdev->blacklist, list) + list_for_each_entry(b, &hdev->reject_list, list) seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type); hci_dev_unlock(hdev); @@ -734,7 +734,7 @@ static int white_list_show(struct seq_file *f, void *ptr) struct bdaddr_list *b; hci_dev_lock(hdev); - list_for_each_entry(b, &hdev->le_white_list, list) + list_for_each_entry(b, &hdev->le_accept_list, list) seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type); hci_dev_unlock(hdev); @@ -1145,7 +1145,7 @@ void hci_debugfs_create_le(struct hci_dev *hdev) &force_static_address_fops); debugfs_create_u8("white_list_size", 0444, hdev->debugfs, - &hdev->le_white_list_size); + &hdev->le_accept_list_size); debugfs_create_file("white_list", 0444, hdev->debugfs, hdev, &white_list_fops); debugfs_create_u8("resolv_list_size", 0444, hdev->debugfs, diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 061ef20e135e..f75869835e3e 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -236,7 +236,7 @@ static void hci_cc_reset(struct hci_dev *hdev, struct sk_buff *skb) hdev->ssp_debug_mode = 0; - hci_bdaddr_list_clear(&hdev->le_white_list); + hci_bdaddr_list_clear(&hdev->le_accept_list); hci_bdaddr_list_clear(&hdev->le_resolv_list); } @@ -1456,36 +1456,22 @@ static void hci_cc_le_read_num_adv_sets(struct hci_dev *hdev, hdev->le_num_of_adv_sets = rp->num_of_sets; } -static void hci_cc_le_read_white_list_size(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_cc_le_read_accept_list_size(struct hci_dev *hdev, + struct sk_buff *skb) { - struct hci_rp_le_read_white_list_size *rp = (void *) skb->data; + struct hci_rp_le_read_accept_list_size *rp = (void *)skb->data; BT_DBG("%s status 0x%2.2x size %u", hdev->name, rp->status, rp->size); if (rp->status) return; - hdev->le_white_list_size = rp->size; + hdev->le_accept_list_size = rp->size; } -static void hci_cc_le_clear_white_list(struct hci_dev *hdev, - struct sk_buff *skb) -{ - __u8 status = *((__u8 *) skb->data); - - BT_DBG("%s status 0x%2.2x", hdev->name, status); - - if (status) - return; - - hci_bdaddr_list_clear(&hdev->le_white_list); -} - -static void hci_cc_le_add_to_white_list(struct hci_dev *hdev, +static void hci_cc_le_clear_accept_list(struct hci_dev *hdev, struct sk_buff *skb) { - struct hci_cp_le_add_to_white_list *sent; __u8 status = *((__u8 *) skb->data); BT_DBG("%s status 0x%2.2x", hdev->name, status); @@ -1493,18 +1479,13 @@ static void hci_cc_le_add_to_white_list(struct hci_dev *hdev, if (status) return; - sent = hci_sent_cmd_data(hdev, HCI_OP_LE_ADD_TO_WHITE_LIST); - if (!sent) - return; - - hci_bdaddr_list_add(&hdev->le_white_list, &sent->bdaddr, - sent->bdaddr_type); + hci_bdaddr_list_clear(&hdev->le_accept_list); } -static void hci_cc_le_del_from_white_list(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_cc_le_add_to_accept_list(struct hci_dev *hdev, + struct sk_buff *skb) { - struct hci_cp_le_del_from_white_list *sent; + struct hci_cp_le_add_to_accept_list *sent; __u8 status = *((__u8 *) skb->data); BT_DBG("%s status 0x%2.2x", hdev->name, status); @@ -1512,11 +1493,30 @@ static void hci_cc_le_del_from_white_list(struct hci_dev *hdev, if (status) return; - sent = hci_sent_cmd_data(hdev, HCI_OP_LE_DEL_FROM_WHITE_LIST); + sent = hci_sent_cmd_data(hdev, HCI_OP_LE_ADD_TO_ACCEPT_LIST); if (!sent) return; - hci_bdaddr_list_del(&hdev->le_white_list, &sent->bdaddr, + 
hci_bdaddr_list_add(&hdev->le_accept_list, &sent->bdaddr, + sent->bdaddr_type); +} + +static void hci_cc_le_del_from_accept_list(struct hci_dev *hdev, + struct sk_buff *skb) +{ + struct hci_cp_le_del_from_accept_list *sent; + __u8 status = *((__u8 *) skb->data); + + BT_DBG("%s status 0x%2.2x", hdev->name, status); + + if (status) + return; + + sent = hci_sent_cmd_data(hdev, HCI_OP_LE_DEL_FROM_ACCEPT_LIST); + if (!sent) + return; + + hci_bdaddr_list_del(&hdev->le_accept_list, &sent->bdaddr, sent->bdaddr_type); } @@ -2331,7 +2331,7 @@ static void cs_le_create_conn(struct hci_dev *hdev, bdaddr_t *peer_addr, /* We don't want the connection attempt to stick around * indefinitely since LE doesn't have a page timeout concept * like BR/EDR. Set a timer for any connection that doesn't use - * the white list for connecting. + * the accept list for connecting. */ if (filter_policy == HCI_LE_USE_PEER_ADDR) queue_delayed_work(conn->hdev->workqueue, @@ -2587,7 +2587,7 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) * only used during suspend. */ if (ev->link_type == ACL_LINK && - hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, + hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &ev->bdaddr, BDADDR_BREDR)) { conn = hci_conn_add(hdev, ev->link_type, &ev->bdaddr, @@ -2709,19 +2709,19 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) return; } - if (hci_bdaddr_list_lookup(&hdev->blacklist, &ev->bdaddr, + if (hci_bdaddr_list_lookup(&hdev->reject_list, &ev->bdaddr, BDADDR_BREDR)) { hci_reject_conn(hdev, &ev->bdaddr); return; } - /* Require HCI_CONNECTABLE or a whitelist entry to accept the + /* Require HCI_CONNECTABLE or an accept list entry to accept the * connection. These features are only touched through mgmt so * only do the checks if HCI_MGMT is set. */ if (hci_dev_test_flag(hdev, HCI_MGMT) && !hci_dev_test_flag(hdev, HCI_CONNECTABLE) && - !hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, &ev->bdaddr, + !hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &ev->bdaddr, BDADDR_BREDR)) { hci_reject_conn(hdev, &ev->bdaddr); return; @@ -3481,20 +3481,20 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb, hci_cc_le_set_scan_enable(hdev, skb); break; - case HCI_OP_LE_READ_WHITE_LIST_SIZE: - hci_cc_le_read_white_list_size(hdev, skb); + case HCI_OP_LE_READ_ACCEPT_LIST_SIZE: + hci_cc_le_read_accept_list_size(hdev, skb); break; - case HCI_OP_LE_CLEAR_WHITE_LIST: - hci_cc_le_clear_white_list(hdev, skb); + case HCI_OP_LE_CLEAR_ACCEPT_LIST: + hci_cc_le_clear_accept_list(hdev, skb); break; - case HCI_OP_LE_ADD_TO_WHITE_LIST: - hci_cc_le_add_to_white_list(hdev, skb); + case HCI_OP_LE_ADD_TO_ACCEPT_LIST: + hci_cc_le_add_to_accept_list(hdev, skb); break; - case HCI_OP_LE_DEL_FROM_WHITE_LIST: - hci_cc_le_del_from_white_list(hdev, skb); + case HCI_OP_LE_DEL_FROM_ACCEPT_LIST: + hci_cc_le_del_from_accept_list(hdev, skb); break; case HCI_OP_LE_READ_SUPPORTED_STATES: @@ -5154,7 +5154,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status, /* If we didn't have a hci_conn object previously * but we're in central role this must be something - * initiated using a white list. Since white list based + * initiated using an accept list. Since accept list based * connections are not "first class citizens" we don't * have full tracking of them. 
Therefore, we go ahead * with a "best effort" approach of determining the @@ -5204,7 +5204,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status, addr_type = BDADDR_LE_RANDOM; /* Drop the connection if the device is blocked */ - if (hci_bdaddr_list_lookup(&hdev->blacklist, &conn->dst, addr_type)) { + if (hci_bdaddr_list_lookup(&hdev->reject_list, &conn->dst, addr_type)) { hci_conn_drop(conn); goto unlock; } @@ -5372,7 +5372,7 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev, return NULL; /* Ignore if the device is blocked */ - if (hci_bdaddr_list_lookup(&hdev->blacklist, addr, addr_type)) + if (hci_bdaddr_list_lookup(&hdev->reject_list, addr, addr_type)) return NULL; /* Most controller will fail if we try to create new connections diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c index eb4c1c18eb01..a0f980e61505 100644 --- a/net/bluetooth/hci_request.c +++ b/net/bluetooth/hci_request.c @@ -736,17 +736,17 @@ void hci_req_add_le_scan_disable(struct hci_request *req, bool rpa_le_conn) } } -static void del_from_white_list(struct hci_request *req, bdaddr_t *bdaddr, - u8 bdaddr_type) +static void del_from_accept_list(struct hci_request *req, bdaddr_t *bdaddr, + u8 bdaddr_type) { - struct hci_cp_le_del_from_white_list cp; + struct hci_cp_le_del_from_accept_list cp; cp.bdaddr_type = bdaddr_type; bacpy(&cp.bdaddr, bdaddr); - bt_dev_dbg(req->hdev, "Remove %pMR (0x%x) from whitelist", &cp.bdaddr, + bt_dev_dbg(req->hdev, "Remove %pMR (0x%x) from accept list", &cp.bdaddr, cp.bdaddr_type); - hci_req_add(req, HCI_OP_LE_DEL_FROM_WHITE_LIST, sizeof(cp), &cp); + hci_req_add(req, HCI_OP_LE_DEL_FROM_ACCEPT_LIST, sizeof(cp), &cp); if (use_ll_privacy(req->hdev) && hci_dev_test_flag(req->hdev, HCI_ENABLE_LL_PRIVACY)) { @@ -765,31 +765,31 @@ static void del_from_white_list(struct hci_request *req, bdaddr_t *bdaddr, } } -/* Adds connection to white list if needed. On error, returns -1. */ -static int add_to_white_list(struct hci_request *req, - struct hci_conn_params *params, u8 *num_entries, - bool allow_rpa) +/* Adds connection to accept list if needed. On error, returns -1. 
*/ +static int add_to_accept_list(struct hci_request *req, + struct hci_conn_params *params, u8 *num_entries, + bool allow_rpa) { - struct hci_cp_le_add_to_white_list cp; + struct hci_cp_le_add_to_accept_list cp; struct hci_dev *hdev = req->hdev; - /* Already in white list */ - if (hci_bdaddr_list_lookup(&hdev->le_white_list, ¶ms->addr, + /* Already in accept list */ + if (hci_bdaddr_list_lookup(&hdev->le_accept_list, ¶ms->addr, params->addr_type)) return 0; /* Select filter policy to accept all advertising */ - if (*num_entries >= hdev->le_white_list_size) + if (*num_entries >= hdev->le_accept_list_size) return -1; - /* White list can not be used with RPAs */ + /* Accept list can not be used with RPAs */ if (!allow_rpa && !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY) && hci_find_irk_by_addr(hdev, ¶ms->addr, params->addr_type)) { return -1; } - /* During suspend, only wakeable devices can be in whitelist */ + /* During suspend, only wakeable devices can be in accept list */ if (hdev->suspended && !hci_conn_test_flag(HCI_CONN_FLAG_REMOTE_WAKEUP, params->current_flags)) return 0; @@ -798,9 +798,9 @@ static int add_to_white_list(struct hci_request *req, cp.bdaddr_type = params->addr_type; bacpy(&cp.bdaddr, ¶ms->addr); - bt_dev_dbg(hdev, "Add %pMR (0x%x) to whitelist", &cp.bdaddr, + bt_dev_dbg(hdev, "Add %pMR (0x%x) to accept list", &cp.bdaddr, cp.bdaddr_type); - hci_req_add(req, HCI_OP_LE_ADD_TO_WHITE_LIST, sizeof(cp), &cp); + hci_req_add(req, HCI_OP_LE_ADD_TO_ACCEPT_LIST, sizeof(cp), &cp); if (use_ll_privacy(hdev) && hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY)) { @@ -828,15 +828,15 @@ static int add_to_white_list(struct hci_request *req, return 0; } -static u8 update_white_list(struct hci_request *req) +static u8 update_accept_list(struct hci_request *req) { struct hci_dev *hdev = req->hdev; struct hci_conn_params *params; struct bdaddr_list *b; u8 num_entries = 0; bool pend_conn, pend_report; - /* We allow whitelisting even with RPAs in suspend. In the worst case, - * we won't be able to wake from devices that use the privacy1.2 + /* We allow usage of accept list even with RPAs in suspend. In the worst + * case, we won't be able to wake from devices that use the privacy1.2 * features. Additionally, once we support privacy1.2 and IRK * offloading, we can update this to also check for those conditions. */ @@ -846,13 +846,13 @@ static u8 update_white_list(struct hci_request *req) hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY)) allow_rpa = true; - /* Go through the current white list programmed into the + /* Go through the current accept list programmed into the * controller one by one and check if that address is still * in the list of pending connections or list of devices to * report. If not present in either list, then queue the * command to remove it from the controller. */ - list_for_each_entry(b, &hdev->le_white_list, list) { + list_for_each_entry(b, &hdev->le_accept_list, list) { pend_conn = hci_pend_le_action_lookup(&hdev->pend_le_conns, &b->bdaddr, b->bdaddr_type); @@ -861,14 +861,14 @@ static u8 update_white_list(struct hci_request *req) b->bdaddr_type); /* If the device is not likely to connect or report, - * remove it from the whitelist. + * remove it from the accept list. 
*/ if (!pend_conn && !pend_report) { - del_from_white_list(req, &b->bdaddr, b->bdaddr_type); + del_from_accept_list(req, &b->bdaddr, b->bdaddr_type); continue; } - /* White list can not be used with RPAs */ + /* Accept list can not be used with RPAs */ if (!allow_rpa && !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY) && hci_find_irk_by_addr(hdev, &b->bdaddr, b->bdaddr_type)) { @@ -878,27 +878,27 @@ static u8 update_white_list(struct hci_request *req) num_entries++; } - /* Since all no longer valid white list entries have been + /* Since all no longer valid accept list entries have been * removed, walk through the list of pending connections * and ensure that any new device gets programmed into * the controller. * * If the list of the devices is larger than the list of - * available white list entries in the controller, then + * available accept list entries in the controller, then * just abort and return filer policy value to not use the - * white list. + * accept list. */ list_for_each_entry(params, &hdev->pend_le_conns, action) { - if (add_to_white_list(req, params, &num_entries, allow_rpa)) + if (add_to_accept_list(req, params, &num_entries, allow_rpa)) return 0x00; } /* After adding all new pending connections, walk through * the list of pending reports and also add these to the - * white list if there is still space. Abort if space runs out. + * accept list if there is still space. Abort if space runs out. */ list_for_each_entry(params, &hdev->pend_le_reports, action) { - if (add_to_white_list(req, params, &num_entries, allow_rpa)) + if (add_to_accept_list(req, params, &num_entries, allow_rpa)) return 0x00; } @@ -915,7 +915,7 @@ static u8 update_white_list(struct hci_request *req) hdev->interleave_scan_state != INTERLEAVE_SCAN_ALLOWLIST) return 0x00; - /* Select filter policy to use white list */ + /* Select filter policy to use accept list */ return 0x01; } @@ -1069,20 +1069,20 @@ void hci_req_add_le_passive_scan(struct hci_request *req) return; bt_dev_dbg(hdev, "interleave state %d", hdev->interleave_scan_state); - /* Adding or removing entries from the white list must + /* Adding or removing entries from the accept list must * happen before enabling scanning. The controller does - * not allow white list modification while scanning. + * not allow accept list modification while scanning. */ - filter_policy = update_white_list(req); + filter_policy = update_accept_list(req); /* When the controller is using random resolvable addresses and * with that having LE privacy enabled, then controllers with * Extended Scanner Filter Policies support can now enable support * for handling directed advertising. * - * So instead of using filter polices 0x00 (no whitelist) - * and 0x01 (whitelist enabled) use the new filter policies - * 0x02 (no whitelist) and 0x03 (whitelist enabled). + * So instead of using filter polices 0x00 (no accept list) + * and 0x01 (accept list enabled) use the new filter policies + * 0x02 (no accept list) and 0x03 (accept list enabled). 
*/ if (hci_dev_test_flag(hdev, HCI_PRIVACY) && (hdev->le_features[0] & HCI_LE_EXT_SCAN_POLICY)) @@ -1102,7 +1102,8 @@ void hci_req_add_le_passive_scan(struct hci_request *req) interval = hdev->le_scan_interval; } - bt_dev_dbg(hdev, "LE passive scan with whitelist = %d", filter_policy); + bt_dev_dbg(hdev, "LE passive scan with accept list = %d", + filter_policy); hci_req_start_scan(req, LE_SCAN_PASSIVE, interval, window, own_addr_type, filter_policy, addr_resolv); } @@ -1150,7 +1151,7 @@ static void hci_req_set_event_filter(struct hci_request *req) /* Always clear event filter when starting */ hci_req_clear_event_filter(req); - list_for_each_entry(b, &hdev->whitelist, list) { + list_for_each_entry(b, &hdev->accept_list, list) { if (!hci_conn_test_flag(HCI_CONN_FLAG_REMOTE_WAKEUP, b->current_flags)) continue; @@ -2562,11 +2563,11 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy, return 0; } -static bool disconnected_whitelist_entries(struct hci_dev *hdev) +static bool disconnected_accept_list_entries(struct hci_dev *hdev) { struct bdaddr_list *b; - list_for_each_entry(b, &hdev->whitelist, list) { + list_for_each_entry(b, &hdev->accept_list, list) { struct hci_conn *conn; conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &b->bdaddr); @@ -2598,7 +2599,7 @@ void __hci_req_update_scan(struct hci_request *req) return; if (hci_dev_test_flag(hdev, HCI_CONNECTABLE) || - disconnected_whitelist_entries(hdev)) + disconnected_accept_list_entries(hdev)) scan = SCAN_PAGE; else scan = SCAN_DISABLED; @@ -3087,7 +3088,7 @@ static int active_scan(struct hci_request *req, unsigned long opt) uint16_t interval = opt; struct hci_dev *hdev = req->hdev; u8 own_addr_type; - /* White list is not used for discovery */ + /* Accept list is not used for discovery */ u8 filter_policy = 0x00; /* Discovery doesn't require controller address resolution */ bool addr_resolv = false; diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c index 53f85d7c5f9e..71d18d3295f5 100644 --- a/net/bluetooth/hci_sock.c +++ b/net/bluetooth/hci_sock.c @@ -897,7 +897,7 @@ static int hci_sock_release(struct socket *sock) return 0; } -static int hci_sock_blacklist_add(struct hci_dev *hdev, void __user *arg) +static int hci_sock_reject_list_add(struct hci_dev *hdev, void __user *arg) { bdaddr_t bdaddr; int err; @@ -907,14 +907,14 @@ static int hci_sock_blacklist_add(struct hci_dev *hdev, void __user *arg) hci_dev_lock(hdev); - err = hci_bdaddr_list_add(&hdev->blacklist, &bdaddr, BDADDR_BREDR); + err = hci_bdaddr_list_add(&hdev->reject_list, &bdaddr, BDADDR_BREDR); hci_dev_unlock(hdev); return err; } -static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg) +static int hci_sock_reject_list_del(struct hci_dev *hdev, void __user *arg) { bdaddr_t bdaddr; int err; @@ -924,7 +924,7 @@ static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg) hci_dev_lock(hdev); - err = hci_bdaddr_list_del(&hdev->blacklist, &bdaddr, BDADDR_BREDR); + err = hci_bdaddr_list_del(&hdev->reject_list, &bdaddr, BDADDR_BREDR); hci_dev_unlock(hdev); @@ -964,12 +964,12 @@ static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd, case HCIBLOCKADDR: if (!capable(CAP_NET_ADMIN)) return -EPERM; - return hci_sock_blacklist_add(hdev, (void __user *)arg); + return hci_sock_reject_list_add(hdev, (void __user *)arg); case HCIUNBLOCKADDR: if (!capable(CAP_NET_ADMIN)) return -EPERM; - return hci_sock_blacklist_del(hdev, (void __user *)arg); + return hci_sock_reject_list_del(hdev, (void __user *)arg); } return 
-ENOIOCTLCMD; diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 6c80e62cea0c..2557cd917f5e 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -7652,7 +7652,7 @@ static void l2cap_recv_frame(struct l2cap_conn *conn, struct sk_buff *skb) * at least ensure that we ignore incoming data from them. */ if (hcon->type == LE_LINK && - hci_bdaddr_list_lookup(&hcon->hdev->blacklist, &hcon->dst, + hci_bdaddr_list_lookup(&hcon->hdev->reject_list, &hcon->dst, bdaddr_dst_type(hcon))) { kfree_skb(skb); return; @@ -8108,7 +8108,7 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status) dst_type = bdaddr_dst_type(hcon); /* If device is blocked, do not create channels for it */ - if (hci_bdaddr_list_lookup(&hdev->blacklist, &hcon->dst, dst_type)) + if (hci_bdaddr_list_lookup(&hdev->reject_list, &hcon->dst, dst_type)) return; /* Find fixed channels and notify them of the new connection. We diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index 08f67f91d427..878bf7382244 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -4041,7 +4041,7 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data, memset(&rp, 0, sizeof(rp)); if (cp->addr.type == BDADDR_BREDR) { - br_params = hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, + br_params = hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &cp->addr.bdaddr, cp->addr.type); if (!br_params) @@ -4109,7 +4109,7 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data, hci_dev_lock(hdev); if (cp->addr.type == BDADDR_BREDR) { - br_params = hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, + br_params = hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &cp->addr.bdaddr, cp->addr.type); @@ -4979,7 +4979,7 @@ static int block_device(struct sock *sk, struct hci_dev *hdev, void *data, hci_dev_lock(hdev); - err = hci_bdaddr_list_add(&hdev->blacklist, &cp->addr.bdaddr, + err = hci_bdaddr_list_add(&hdev->reject_list, &cp->addr.bdaddr, cp->addr.type); if (err < 0) { status = MGMT_STATUS_FAILED; @@ -5015,7 +5015,7 @@ static int unblock_device(struct sock *sk, struct hci_dev *hdev, void *data, hci_dev_lock(hdev); - err = hci_bdaddr_list_del(&hdev->blacklist, &cp->addr.bdaddr, + err = hci_bdaddr_list_del(&hdev->reject_list, &cp->addr.bdaddr, cp->addr.type); if (err < 0) { status = MGMT_STATUS_INVALID_PARAMS; @@ -6506,7 +6506,7 @@ static int add_device(struct sock *sk, struct hci_dev *hdev, goto unlock; } - err = hci_bdaddr_list_add_with_flags(&hdev->whitelist, + err = hci_bdaddr_list_add_with_flags(&hdev->accept_list, &cp->addr.bdaddr, cp->addr.type, 0); if (err) @@ -6604,7 +6604,7 @@ static int remove_device(struct sock *sk, struct hci_dev *hdev, } if (cp->addr.type == BDADDR_BREDR) { - err = hci_bdaddr_list_del(&hdev->whitelist, + err = hci_bdaddr_list_del(&hdev->accept_list, &cp->addr.bdaddr, cp->addr.type); if (err) { @@ -6675,7 +6675,7 @@ static int remove_device(struct sock *sk, struct hci_dev *hdev, goto unlock; } - list_for_each_entry_safe(b, btmp, &hdev->whitelist, list) { + list_for_each_entry_safe(b, btmp, &hdev->accept_list, list) { device_removed(sk, hdev, &b->bdaddr, b->bdaddr_type); list_del(&b->list); kfree(b); From 8ace1e63550a4488f3eb4ce0fea7007898908f7d Mon Sep 17 00:00:00 2001 From: Niels Dossche Date: Tue, 5 Apr 2022 19:37:52 +0200 Subject: [PATCH 0533/1842] Bluetooth: use hdev lock for accept_list and reject_list in conn req [ Upstream commit fb048cae51bacdfbbda2954af3c213fdb1d484f4 ] All accesses (both reads and 
modifications) to hdev->{accept,reject}_list are protected by hdev lock, except the ones in hci_conn_request_evt. This can cause a race condition in the form of a list corruption. The solution is to protect these lists in hci_conn_request_evt as well. I was unable to find the exact commit that introduced the issue for the reject list, I was only able to find it for the accept list. Fixes: a55bd29d5227 ("Bluetooth: Add white list lookup for incoming connection requests") Signed-off-by: Niels Dossche Signed-off-by: Marcel Holtmann Signed-off-by: Sasha Levin --- net/bluetooth/hci_event.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index f75869835e3e..954b29605c94 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -2709,10 +2709,12 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) return; } + hci_dev_lock(hdev); + if (hci_bdaddr_list_lookup(&hdev->reject_list, &ev->bdaddr, BDADDR_BREDR)) { hci_reject_conn(hdev, &ev->bdaddr); - return; + goto unlock; } /* Require HCI_CONNECTABLE or an accept list entry to accept the @@ -2724,13 +2726,11 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) !hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &ev->bdaddr, BDADDR_BREDR)) { hci_reject_conn(hdev, &ev->bdaddr); - return; + goto unlock; } /* Connection accepted */ - hci_dev_lock(hdev); - ie = hci_inquiry_cache_lookup(hdev, &ev->bdaddr); if (ie) memcpy(ie->data.dev_class, ev->dev_class, 3); @@ -2742,8 +2742,7 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) HCI_ROLE_SLAVE); if (!conn) { bt_dev_err(hdev, "no memory for new connection"); - hci_dev_unlock(hdev); - return; + goto unlock; } } @@ -2783,6 +2782,10 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) conn->state = BT_CONNECT2; hci_connect_cfm(conn, 0); } + + return; +unlock: + hci_dev_unlock(hdev); } static u8 hci_to_mgmt_reason(u8 err) From 4dcae15ff84f26c28e56768c84947a5865e70fa1 Mon Sep 17 00:00:00 2001 From: Keith Busch Date: Wed, 4 May 2022 11:43:25 -0700 Subject: [PATCH 0534/1842] nvme: set dma alignment to dword [ Upstream commit 52fde2c07da606f3f120af4f734eadcfb52b04be ] The nvme specification only requires qword alignment for segment descriptors, and the driver already guarantees that. The spec has always allowed user data to be dword aligned, which is what the queue's attribute is for, so relax the alignment requirement to that value. While we could allow byte alignment for some controllers when using SGLs, we still need to support PRP, and that only allows dword. 
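[ Note: the sketch below is illustrative only and not part of this patch; the helper name is hypothetical. It shows that blk_queue_dma_alignment() takes an alignment mask (alignment - 1), so a mask of 7 demands qword (8-byte) aligned user buffers while a mask of 3 relaxes that to dword (4-byte) alignment, which is all the spec requires for user data.

#include <linux/blkdev.h>

/* Hypothetical helper: relax user-data alignment to dword. */
static void example_set_dword_alignment(struct request_queue *q)
{
	/* blk_queue_dma_alignment() expects a mask, i.e. alignment - 1 */
	blk_queue_dma_alignment(q, 4 - 1);
}
]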
Fixes: 3b2a1ebceba3 ("nvme: set dma alignment to qword") Signed-off-by: Keith Busch Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index e73a5c62a858..d301f0280ff6 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -2024,7 +2024,7 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl, blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX)); } blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1); - blk_queue_dma_alignment(q, 7); + blk_queue_dma_alignment(q, 3); blk_queue_write_cache(q, vwc, vwc); } From a67a1661cf8a79797f85043d73ebd4df16ebca06 Mon Sep 17 00:00:00 2001 From: Geert Uytterhoeven Date: Fri, 13 May 2022 14:50:28 +0200 Subject: [PATCH 0535/1842] m68k: math-emu: Fix dependencies of math emulation support [ Upstream commit ed6bc6bf0a7d75e80eb1df883c09975ebb74e590 ] If CONFIG_M54xx=y, CONFIG_MMU=y, and CONFIG_M68KFPU_EMU=y: {standard input}:272: Error: invalid instruction for this architecture; needs 68000 or higher (68000 [68ec000, 68hc000, 68hc001, 68008, 68302, 68306, 68307, 68322, 68356], 68010, 68020 [68k, 68ec020], 68030 [68ec030], 68040 [68ec040], 68060 [68ec060], cpu32 [68330, 68331, 68332, 68333, 68334, 68336, 68340, 68341, 68349, 68360], fidoa [fido]) -- statement `sub.b %d1,%d3' ignored {standard input}:609: Error: invalid instruction for this architecture; needs 68020 or higher (68020 [68k, 68ec020], 68030 [68ec030], 68040 [68ec040], 68060 [68ec060]) -- statement `bfextu 4(%a1){%d0,#8},%d0' ignored {standard input}:752: Error: operands mismatch -- statement `mulu.l 4(%a0),%d3:%d0' ignored {standard input}:1155: Error: operands mismatch -- statement `divu.l %d0,%d3:%d7' ignored The math emulation support code is intended for 68020 and higher, and uses several instructions or instruction modes not available on coldfire or 68000. Originally, the dependency of M68KFPU_EMU on MMU was fine, as MMU support was only available on 68020 or higher. But this assumption was broken by the introduction of MMU support for M547x and M548x. Drop the dependency on MMU, as the code should work fine on 68020 and up without MMU (which are not yet supported by Linux, though). Add dependencies on M68KCLASSIC (to rule out Coldfire) and FPU (kernel has some type of floating-point support --- be it hardware or software emulated, to rule out anything below 68020). 
Fixes: 1f7034b9616e6f14 ("m68k: allow ColdFire 547x and 548x CPUs to be built with MMU enabled") Reported-by: kernel test robot Signed-off-by: Geert Uytterhoeven Reviewed-by: Greg Ungerer Link: https://lore.kernel.org/r/18c34695b7c95107f60ccca82a4ff252f3edf477.1652446117.git.geert@linux-m68k.org Signed-off-by: Sasha Levin --- arch/m68k/Kconfig.cpu | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu index c17205da47fe..936cd9619bf0 100644 --- a/arch/m68k/Kconfig.cpu +++ b/arch/m68k/Kconfig.cpu @@ -312,7 +312,7 @@ comment "Processor Specific Options" config M68KFPU_EMU bool "Math emulation support" - depends on MMU + depends on M68KCLASSIC && FPU help At some point in the future, this will cause floating-point math instructions to be emulated by the kernel on machines that lack a From 6950ee32c1879818de03f13a9a5de1be41ad2782 Mon Sep 17 00:00:00 2001 From: Paul Moore Date: Sun, 27 Sep 2020 22:38:26 -0400 Subject: [PATCH 0536/1842] lsm,selinux: pass flowi_common instead of flowi to the LSM hooks [ Upstream commit 3df98d79215ace13d1e91ddfc5a67a0f5acbd83f ] As pointed out by Herbert in a recent related patch, the LSM hooks do not have the necessary address family information to use the flowi struct safely. As none of the LSMs currently use any of the protocol specific flowi information, replace the flowi pointers with pointers to the address family independent flowi_common struct. Reported-by: Herbert Xu Acked-by: James Morris Signed-off-by: Paul Moore Signed-off-by: Sasha Levin --- .../chelsio/inline_crypto/chtls/chtls_cm.c | 2 +- drivers/net/wireguard/socket.c | 4 ++-- include/linux/lsm_hook_defs.h | 4 ++-- include/linux/lsm_hooks.h | 2 +- include/linux/security.h | 23 +++++++++++-------- include/net/flow.h | 10 ++++++++ include/net/route.h | 6 ++--- net/dccp/ipv4.c | 2 +- net/dccp/ipv6.c | 6 ++--- net/ipv4/icmp.c | 4 ++-- net/ipv4/inet_connection_sock.c | 4 ++-- net/ipv4/ip_output.c | 2 +- net/ipv4/ping.c | 2 +- net/ipv4/raw.c | 2 +- net/ipv4/syncookies.c | 2 +- net/ipv4/udp.c | 2 +- net/ipv6/af_inet6.c | 2 +- net/ipv6/datagram.c | 2 +- net/ipv6/icmp.c | 6 ++--- net/ipv6/inet6_connection_sock.c | 4 ++-- net/ipv6/netfilter/nf_reject_ipv6.c | 2 +- net/ipv6/ping.c | 2 +- net/ipv6/raw.c | 2 +- net/ipv6/syncookies.c | 2 +- net/ipv6/tcp_ipv6.c | 4 ++-- net/ipv6/udp.c | 2 +- net/l2tp/l2tp_ip6.c | 2 +- net/netfilter/nf_synproxy_core.c | 2 +- net/xfrm/xfrm_state.c | 6 +++-- security/security.c | 17 +++++++------- security/selinux/hooks.c | 4 ++-- security/selinux/include/xfrm.h | 2 +- security/selinux/xfrm.c | 13 ++++++----- 33 files changed, 85 insertions(+), 66 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c index d6b6ebb3f1ec..51e071c20e39 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c @@ -1150,7 +1150,7 @@ static struct sock *chtls_recv_sock(struct sock *lsk, fl6.daddr = ip6h->saddr; fl6.fl6_dport = inet_rsk(oreq)->ir_rmt_port; fl6.fl6_sport = htons(inet_rsk(oreq)->ir_num); - security_req_classify_flow(oreq, flowi6_to_flowi(&fl6)); + security_req_classify_flow(oreq, flowi6_to_flowi_common(&fl6)); dst = ip6_dst_lookup_flow(sock_net(lsk), lsk, &fl6, NULL); if (IS_ERR(dst)) goto free_sk; diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c index 473221aa2236..eef5911fa210 100644 --- a/drivers/net/wireguard/socket.c +++ 
b/drivers/net/wireguard/socket.c @@ -49,7 +49,7 @@ static int send4(struct wg_device *wg, struct sk_buff *skb, rt = dst_cache_get_ip4(cache, &fl.saddr); if (!rt) { - security_sk_classify_flow(sock, flowi4_to_flowi(&fl)); + security_sk_classify_flow(sock, flowi4_to_flowi_common(&fl)); if (unlikely(!inet_confirm_addr(sock_net(sock), NULL, 0, fl.saddr, RT_SCOPE_HOST))) { endpoint->src4.s_addr = 0; @@ -129,7 +129,7 @@ static int send6(struct wg_device *wg, struct sk_buff *skb, dst = dst_cache_get_ip6(cache, &fl.saddr); if (!dst) { - security_sk_classify_flow(sock, flowi6_to_flowi(&fl)); + security_sk_classify_flow(sock, flowi6_to_flowi_common(&fl)); if (unlikely(!ipv6_addr_any(&fl.saddr) && !ipv6_chk_addr(sock_net(sock), &fl.saddr, NULL, 0))) { endpoint->src6 = fl.saddr = in6addr_any; diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h index a6a3d4ddfc2d..d13631a5e908 100644 --- a/include/linux/lsm_hook_defs.h +++ b/include/linux/lsm_hook_defs.h @@ -311,7 +311,7 @@ LSM_HOOK(int, 0, secmark_relabel_packet, u32 secid) LSM_HOOK(void, LSM_RET_VOID, secmark_refcount_inc, void) LSM_HOOK(void, LSM_RET_VOID, secmark_refcount_dec, void) LSM_HOOK(void, LSM_RET_VOID, req_classify_flow, const struct request_sock *req, - struct flowi *fl) + struct flowi_common *flic) LSM_HOOK(int, 0, tun_dev_alloc_security, void **security) LSM_HOOK(void, LSM_RET_VOID, tun_dev_free_security, void *security) LSM_HOOK(int, 0, tun_dev_create, void) @@ -351,7 +351,7 @@ LSM_HOOK(int, 0, xfrm_state_delete_security, struct xfrm_state *x) LSM_HOOK(int, 0, xfrm_policy_lookup, struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir) LSM_HOOK(int, 1, xfrm_state_pol_flow_match, struct xfrm_state *x, - struct xfrm_policy *xp, const struct flowi *fl) + struct xfrm_policy *xp, const struct flowi_common *flic) LSM_HOOK(int, 0, xfrm_decode_session, struct sk_buff *skb, u32 *secid, int ckall) #endif /* CONFIG_SECURITY_NETWORK_XFRM */ diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h index a8531b37e6f5..64cdf4d7bfb3 100644 --- a/include/linux/lsm_hooks.h +++ b/include/linux/lsm_hooks.h @@ -1105,7 +1105,7 @@ * @xfrm_state_pol_flow_match: * @x contains the state to match. * @xp contains the policy to check for a match. - * @fl contains the flow to check for a match. + * @flic contains the flowi_common struct to check for a match. * Return 1 if there is a match. * @xfrm_decode_session: * @skb points to skb to decode. 
diff --git a/include/linux/security.h b/include/linux/security.h index 330029ef7e89..e9b4b5410614 100644 --- a/include/linux/security.h +++ b/include/linux/security.h @@ -170,7 +170,7 @@ struct sk_buff; struct sock; struct sockaddr; struct socket; -struct flowi; +struct flowi_common; struct dst_entry; struct xfrm_selector; struct xfrm_policy; @@ -1363,8 +1363,9 @@ int security_socket_getpeersec_dgram(struct socket *sock, struct sk_buff *skb, u int security_sk_alloc(struct sock *sk, int family, gfp_t priority); void security_sk_free(struct sock *sk); void security_sk_clone(const struct sock *sk, struct sock *newsk); -void security_sk_classify_flow(struct sock *sk, struct flowi *fl); -void security_req_classify_flow(const struct request_sock *req, struct flowi *fl); +void security_sk_classify_flow(struct sock *sk, struct flowi_common *flic); +void security_req_classify_flow(const struct request_sock *req, + struct flowi_common *flic); void security_sock_graft(struct sock*sk, struct socket *parent); int security_inet_conn_request(struct sock *sk, struct sk_buff *skb, struct request_sock *req); @@ -1515,11 +1516,13 @@ static inline void security_sk_clone(const struct sock *sk, struct sock *newsk) { } -static inline void security_sk_classify_flow(struct sock *sk, struct flowi *fl) +static inline void security_sk_classify_flow(struct sock *sk, + struct flowi_common *flic) { } -static inline void security_req_classify_flow(const struct request_sock *req, struct flowi *fl) +static inline void security_req_classify_flow(const struct request_sock *req, + struct flowi_common *flic) { } @@ -1646,9 +1649,9 @@ void security_xfrm_state_free(struct xfrm_state *x); int security_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir); int security_xfrm_state_pol_flow_match(struct xfrm_state *x, struct xfrm_policy *xp, - const struct flowi *fl); + const struct flowi_common *flic); int security_xfrm_decode_session(struct sk_buff *skb, u32 *secid); -void security_skb_classify_flow(struct sk_buff *skb, struct flowi *fl); +void security_skb_classify_flow(struct sk_buff *skb, struct flowi_common *flic); #else /* CONFIG_SECURITY_NETWORK_XFRM */ @@ -1700,7 +1703,8 @@ static inline int security_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_s } static inline int security_xfrm_state_pol_flow_match(struct xfrm_state *x, - struct xfrm_policy *xp, const struct flowi *fl) + struct xfrm_policy *xp, + const struct flowi_common *flic) { return 1; } @@ -1710,7 +1714,8 @@ static inline int security_xfrm_decode_session(struct sk_buff *skb, u32 *secid) return 0; } -static inline void security_skb_classify_flow(struct sk_buff *skb, struct flowi *fl) +static inline void security_skb_classify_flow(struct sk_buff *skb, + struct flowi_common *flic) { } diff --git a/include/net/flow.h b/include/net/flow.h index b2531df3f65f..39d0cedcddee 100644 --- a/include/net/flow.h +++ b/include/net/flow.h @@ -195,11 +195,21 @@ static inline struct flowi *flowi4_to_flowi(struct flowi4 *fl4) return container_of(fl4, struct flowi, u.ip4); } +static inline struct flowi_common *flowi4_to_flowi_common(struct flowi4 *fl4) +{ + return &(flowi4_to_flowi(fl4)->u.__fl_common); +} + static inline struct flowi *flowi6_to_flowi(struct flowi6 *fl6) { return container_of(fl6, struct flowi, u.ip6); } +static inline struct flowi_common *flowi6_to_flowi_common(struct flowi6 *fl6) +{ + return &(flowi6_to_flowi(fl6)->u.__fl_common); +} + static inline struct flowi *flowidn_to_flowi(struct flowidn *fldn) { return container_of(fldn, struct flowi, 
u.dn); diff --git a/include/net/route.h b/include/net/route.h index a07c277cd33e..2551f3f03b37 100644 --- a/include/net/route.h +++ b/include/net/route.h @@ -165,7 +165,7 @@ static inline struct rtable *ip_route_output_ports(struct net *net, struct flowi sk ? inet_sk_flowi_flags(sk) : 0, daddr, saddr, dport, sport, sock_net_uid(net, sk)); if (sk) - security_sk_classify_flow(sk, flowi4_to_flowi(fl4)); + security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4)); return ip_route_output_flow(net, fl4, sk); } @@ -322,7 +322,7 @@ static inline struct rtable *ip_route_connect(struct flowi4 *fl4, ip_rt_put(rt); flowi4_update_output(fl4, oif, tos, fl4->daddr, fl4->saddr); } - security_sk_classify_flow(sk, flowi4_to_flowi(fl4)); + security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4)); return ip_route_output_flow(net, fl4, sk); } @@ -338,7 +338,7 @@ static inline struct rtable *ip_route_newports(struct flowi4 *fl4, struct rtable flowi4_update_output(fl4, sk->sk_bound_dev_if, RT_CONN_FLAGS(sk), fl4->daddr, fl4->saddr); - security_sk_classify_flow(sk, flowi4_to_flowi(fl4)); + security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4)); return ip_route_output_flow(sock_net(sk), fl4, sk); } return rt; diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c index b0b6e6a4784e..2455b0c0e486 100644 --- a/net/dccp/ipv4.c +++ b/net/dccp/ipv4.c @@ -464,7 +464,7 @@ static struct dst_entry* dccp_v4_route_skb(struct net *net, struct sock *sk, .fl4_dport = dccp_hdr(skb)->dccph_sport, }; - security_skb_classify_flow(skb, flowi4_to_flowi(&fl4)); + security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4)); rt = ip_route_output_flow(net, &fl4, sk); if (IS_ERR(rt)) { IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES); diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c index 49f4034bf126..2be5c69824f9 100644 --- a/net/dccp/ipv6.c +++ b/net/dccp/ipv6.c @@ -203,7 +203,7 @@ static int dccp_v6_send_response(const struct sock *sk, struct request_sock *req fl6.flowi6_oif = ireq->ir_iif; fl6.fl6_dport = ireq->ir_rmt_port; fl6.fl6_sport = htons(ireq->ir_num); - security_req_classify_flow(req, flowi6_to_flowi(&fl6)); + security_req_classify_flow(req, flowi6_to_flowi_common(&fl6)); rcu_read_lock(); @@ -279,7 +279,7 @@ static void dccp_v6_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb) fl6.flowi6_oif = inet6_iif(rxskb); fl6.fl6_dport = dccp_hdr(skb)->dccph_dport; fl6.fl6_sport = dccp_hdr(skb)->dccph_sport; - security_skb_classify_flow(rxskb, flowi6_to_flowi(&fl6)); + security_skb_classify_flow(rxskb, flowi6_to_flowi_common(&fl6)); /* sk = NULL, but it is safe for now. RST socket required. 
*/ dst = ip6_dst_lookup_flow(sock_net(ctl_sk), ctl_sk, &fl6, NULL); @@ -912,7 +912,7 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr, fl6.flowi6_oif = sk->sk_bound_dev_if; fl6.fl6_dport = usin->sin6_port; fl6.fl6_sport = inet->inet_sport; - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6)); opt = rcu_dereference_protected(np->opt, lockdep_sock_is_held(sk)); final_p = fl6_update_dst(&fl6, opt, &final); diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index b71b836cc7d1..cd65d3146c30 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -447,7 +447,7 @@ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb) fl4.flowi4_tos = RT_TOS(ip_hdr(skb)->tos); fl4.flowi4_proto = IPPROTO_ICMP; fl4.flowi4_oif = l3mdev_master_ifindex(skb->dev); - security_skb_classify_flow(skb, flowi4_to_flowi(&fl4)); + security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4)); rt = ip_route_output_key(net, &fl4); if (IS_ERR(rt)) goto out_unlock; @@ -503,7 +503,7 @@ static struct rtable *icmp_route_lookup(struct net *net, route_lookup_dev = icmp_get_route_lookup_dev(skb_in); fl4->flowi4_oif = l3mdev_master_ifindex(route_lookup_dev); - security_skb_classify_flow(skb_in, flowi4_to_flowi(fl4)); + security_skb_classify_flow(skb_in, flowi4_to_flowi_common(fl4)); rt = ip_route_output_key_hash(net, fl4, skb_in); if (IS_ERR(rt)) return rt; diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c index addd595bb3fe..7785a4775e58 100644 --- a/net/ipv4/inet_connection_sock.c +++ b/net/ipv4/inet_connection_sock.c @@ -602,7 +602,7 @@ struct dst_entry *inet_csk_route_req(const struct sock *sk, (opt && opt->opt.srr) ? opt->opt.faddr : ireq->ir_rmt_addr, ireq->ir_loc_addr, ireq->ir_rmt_port, htons(ireq->ir_num), sk->sk_uid); - security_req_classify_flow(req, flowi4_to_flowi(fl4)); + security_req_classify_flow(req, flowi4_to_flowi_common(fl4)); rt = ip_route_output_flow(net, fl4, sk); if (IS_ERR(rt)) goto no_route; @@ -640,7 +640,7 @@ struct dst_entry *inet_csk_route_child_sock(const struct sock *sk, (opt && opt->opt.srr) ? 
opt->opt.faddr : ireq->ir_rmt_addr, ireq->ir_loc_addr, ireq->ir_rmt_port, htons(ireq->ir_num), sk->sk_uid); - security_req_classify_flow(req, flowi4_to_flowi(fl4)); + security_req_classify_flow(req, flowi4_to_flowi_common(fl4)); rt = ip_route_output_flow(net, fl4, sk); if (IS_ERR(rt)) goto no_route; diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 5e48b3d3a00d..f77b0af3cb65 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -1712,7 +1712,7 @@ void ip_send_unicast_reply(struct sock *sk, struct sk_buff *skb, daddr, saddr, tcp_hdr(skb)->source, tcp_hdr(skb)->dest, arg->uid); - security_skb_classify_flow(skb, flowi4_to_flowi(&fl4)); + security_skb_classify_flow(skb, flowi4_to_flowi_common(&fl4)); rt = ip_route_output_key(net, &fl4); if (IS_ERR(rt)) return; diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c index 2853a3f0fc63..1bad851b3fc3 100644 --- a/net/ipv4/ping.c +++ b/net/ipv4/ping.c @@ -796,7 +796,7 @@ static int ping_v4_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) fl4.fl4_icmp_type = user_icmph.type; fl4.fl4_icmp_code = user_icmph.code; - security_sk_classify_flow(sk, flowi4_to_flowi(&fl4)); + security_sk_classify_flow(sk, flowi4_to_flowi_common(&fl4)); rt = ip_route_output_flow(net, &fl4, sk); if (IS_ERR(rt)) { err = PTR_ERR(rt); diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c index 5d95f80314f9..4899ebe569eb 100644 --- a/net/ipv4/raw.c +++ b/net/ipv4/raw.c @@ -640,7 +640,7 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) goto done; } - security_sk_classify_flow(sk, flowi4_to_flowi(&fl4)); + security_sk_classify_flow(sk, flowi4_to_flowi_common(&fl4)); rt = ip_route_output_flow(net, &fl4, sk); if (IS_ERR(rt)) { err = PTR_ERR(rt); diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c index 0b616094e794..10b469aee492 100644 --- a/net/ipv4/syncookies.c +++ b/net/ipv4/syncookies.c @@ -424,7 +424,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb) inet_sk_flowi_flags(sk), opt->srr ? 
opt->faddr : ireq->ir_rmt_addr, ireq->ir_loc_addr, th->source, th->dest, sk->sk_uid); - security_req_classify_flow(req, flowi4_to_flowi(&fl4)); + security_req_classify_flow(req, flowi4_to_flowi_common(&fl4)); rt = ip_route_output_key(sock_net(sk), &fl4); if (IS_ERR(rt)) { reqsk_free(req); diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c index e97a2dd206e1..6056d5609167 100644 --- a/net/ipv4/udp.c +++ b/net/ipv4/udp.c @@ -1204,7 +1204,7 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) faddr, saddr, dport, inet->inet_sport, sk->sk_uid); - security_sk_classify_flow(sk, flowi4_to_flowi(fl4)); + security_sk_classify_flow(sk, flowi4_to_flowi_common(fl4)); rt = ip_route_output_flow(net, fl4, sk); if (IS_ERR(rt)) { err = PTR_ERR(rt); diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c index 090575346daf..890a9cfc6ce2 100644 --- a/net/ipv6/af_inet6.c +++ b/net/ipv6/af_inet6.c @@ -819,7 +819,7 @@ int inet6_sk_rebuild_header(struct sock *sk) fl6.fl6_dport = inet->inet_dport; fl6.fl6_sport = inet->inet_sport; fl6.flowi6_uid = sk->sk_uid; - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6)); rcu_read_lock(); final_p = fl6_update_dst(&fl6, rcu_dereference(np->opt), diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c index cc8ad7ddecda..206f66310a88 100644 --- a/net/ipv6/datagram.c +++ b/net/ipv6/datagram.c @@ -60,7 +60,7 @@ static void ip6_datagram_flow_key_init(struct flowi6 *fl6, struct sock *sk) if (!fl6->flowi6_oif && ipv6_addr_is_multicast(&fl6->daddr)) fl6->flowi6_oif = np->mcast_oif; - security_sk_classify_flow(sk, flowi6_to_flowi(fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(fl6)); } int ip6_datagram_dst_update(struct sock *sk, bool fix_sk_saddr) diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c index cbab41d557b2..fd1f896115c1 100644 --- a/net/ipv6/icmp.c +++ b/net/ipv6/icmp.c @@ -573,7 +573,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, fl6.fl6_icmp_code = code; fl6.flowi6_uid = sock_net_uid(net, NULL); fl6.mp_hash = rt6_multipath_hash(net, &fl6, skb, NULL); - security_skb_classify_flow(skb, flowi6_to_flowi(&fl6)); + security_skb_classify_flow(skb, flowi6_to_flowi_common(&fl6)); np = inet6_sk(sk); @@ -755,7 +755,7 @@ static void icmpv6_echo_reply(struct sk_buff *skb) fl6.fl6_icmp_type = ICMPV6_ECHO_REPLY; fl6.flowi6_mark = mark; fl6.flowi6_uid = sock_net_uid(net, NULL); - security_skb_classify_flow(skb, flowi6_to_flowi(&fl6)); + security_skb_classify_flow(skb, flowi6_to_flowi_common(&fl6)); local_bh_disable(); sk = icmpv6_xmit_lock(net); @@ -1008,7 +1008,7 @@ void icmpv6_flow_init(struct sock *sk, struct flowi6 *fl6, fl6->fl6_icmp_type = type; fl6->fl6_icmp_code = 0; fl6->flowi6_oif = oif; - security_sk_classify_flow(sk, flowi6_to_flowi(fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(fl6)); } static void __net_exit icmpv6_sk_exit(struct net *net) diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c index e315526fa244..5a9f4d722f35 100644 --- a/net/ipv6/inet6_connection_sock.c +++ b/net/ipv6/inet6_connection_sock.c @@ -46,7 +46,7 @@ struct dst_entry *inet6_csk_route_req(const struct sock *sk, fl6->fl6_dport = ireq->ir_rmt_port; fl6->fl6_sport = htons(ireq->ir_num); fl6->flowi6_uid = sk->sk_uid; - security_req_classify_flow(req, flowi6_to_flowi(fl6)); + security_req_classify_flow(req, flowi6_to_flowi_common(fl6)); dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p); if (IS_ERR(dst)) @@ -95,7 +95,7 @@ static struct dst_entry 
*inet6_csk_route_socket(struct sock *sk, fl6->fl6_sport = inet->inet_sport; fl6->fl6_dport = inet->inet_dport; fl6->flowi6_uid = sk->sk_uid; - security_sk_classify_flow(sk, flowi6_to_flowi(fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(fl6)); rcu_read_lock(); final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final); diff --git a/net/ipv6/netfilter/nf_reject_ipv6.c b/net/ipv6/netfilter/nf_reject_ipv6.c index 4aef6baaa55e..bf95513736c9 100644 --- a/net/ipv6/netfilter/nf_reject_ipv6.c +++ b/net/ipv6/netfilter/nf_reject_ipv6.c @@ -179,7 +179,7 @@ void nf_send_reset6(struct net *net, struct sk_buff *oldskb, int hook) fl6.flowi6_oif = l3mdev_master_ifindex(skb_dst(oldskb)->dev); fl6.flowi6_mark = IP6_REPLY_MARK(net, oldskb->mark); - security_skb_classify_flow(oldskb, flowi6_to_flowi(&fl6)); + security_skb_classify_flow(oldskb, flowi6_to_flowi_common(&fl6)); dst = ip6_route_output(net, NULL, &fl6); if (dst->error) { dst_release(dst); diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c index 6caa062f68e7..6ac88fe24a8e 100644 --- a/net/ipv6/ping.c +++ b/net/ipv6/ping.c @@ -111,7 +111,7 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) fl6.flowi6_uid = sk->sk_uid; fl6.fl6_icmp_type = user_icmph.icmp6_type; fl6.fl6_icmp_code = user_icmph.icmp6_code; - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6)); ipcm6_init_sk(&ipc6, np); ipc6.sockc.mark = sk->sk_mark; diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c index 38349054e361..31eb54e92b3f 100644 --- a/net/ipv6/raw.c +++ b/net/ipv6/raw.c @@ -915,7 +915,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) fl6.flowi6_oif = np->mcast_oif; else if (!fl6.flowi6_oif) fl6.flowi6_oif = np->ucast_oif; - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6)); if (hdrincl) fl6.flowi6_flags |= FLOWI_FLAG_KNOWN_NH; diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c index 5fa791cf39ca..ca92dd6981de 100644 --- a/net/ipv6/syncookies.c +++ b/net/ipv6/syncookies.c @@ -234,7 +234,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb) fl6.fl6_dport = ireq->ir_rmt_port; fl6.fl6_sport = inet_sk(sk)->inet_sport; fl6.flowi6_uid = sk->sk_uid; - security_req_classify_flow(req, flowi6_to_flowi(&fl6)); + security_req_classify_flow(req, flowi6_to_flowi_common(&fl6)); dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p); if (IS_ERR(dst)) diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index df33145b876c..303b54414a6c 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -278,7 +278,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr, opt = rcu_dereference_protected(np->opt, lockdep_sock_is_held(sk)); final_p = fl6_update_dst(&fl6, opt, &final); - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6)); dst = ip6_dst_lookup_flow(sock_net(sk), sk, &fl6, final_p); if (IS_ERR(dst)) { @@ -975,7 +975,7 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32 fl6.fl6_dport = t1->dest; fl6.fl6_sport = t1->source; fl6.flowi6_uid = sock_net_uid(net, sk && sk_fullsock(sk) ? 
sk : NULL); - security_skb_classify_flow(skb, flowi6_to_flowi(&fl6)); + security_skb_classify_flow(skb, flowi6_to_flowi_common(&fl6)); /* Pass a socket to ip6_dst_lookup either it is for RST * Underlying function will use this to retrieve the network diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c index 10760164a80f..7745d8a40209 100644 --- a/net/ipv6/udp.c +++ b/net/ipv6/udp.c @@ -1497,7 +1497,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) } else if (!fl6.flowi6_oif) fl6.flowi6_oif = np->ucast_oif; - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6)); if (ipc6.tclass < 0) ipc6.tclass = np->tclass; diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c index e5e5036257b0..96f975777438 100644 --- a/net/l2tp/l2tp_ip6.c +++ b/net/l2tp/l2tp_ip6.c @@ -606,7 +606,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) else if (!fl6.flowi6_oif) fl6.flowi6_oif = np->ucast_oif; - security_sk_classify_flow(sk, flowi6_to_flowi(&fl6)); + security_sk_classify_flow(sk, flowi6_to_flowi_common(&fl6)); if (ipc6.tclass < 0) ipc6.tclass = np->tclass; diff --git a/net/netfilter/nf_synproxy_core.c b/net/netfilter/nf_synproxy_core.c index 2fc4ae960769..3d6d49420db8 100644 --- a/net/netfilter/nf_synproxy_core.c +++ b/net/netfilter/nf_synproxy_core.c @@ -854,7 +854,7 @@ synproxy_send_tcp_ipv6(struct net *net, fl6.fl6_sport = nth->source; fl6.fl6_dport = nth->dest; security_skb_classify_flow((struct sk_buff *)skb, - flowi6_to_flowi(&fl6)); + flowi6_to_flowi_common(&fl6)); err = nf_ip6_route(net, &dst, flowi6_to_flowi(&fl6), false); if (err) { goto free_nskb; diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c index 1befc6db723b..717db5ecd0bd 100644 --- a/net/xfrm/xfrm_state.c +++ b/net/xfrm/xfrm_state.c @@ -1020,7 +1020,8 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x, if ((x->sel.family && (x->sel.family != family || !xfrm_selector_match(&x->sel, fl, family))) || - !security_xfrm_state_pol_flow_match(x, pol, fl)) + !security_xfrm_state_pol_flow_match(x, pol, + &fl->u.__fl_common)) return; if (!*best || @@ -1035,7 +1036,8 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x, if ((!x->sel.family || (x->sel.family == family && xfrm_selector_match(&x->sel, fl, family))) && - security_xfrm_state_pol_flow_match(x, pol, fl)) + security_xfrm_state_pol_flow_match(x, pol, + &fl->u.__fl_common)) *error = -ESRCH; } } diff --git a/security/security.c b/security/security.c index 360706cdabab..8ea826ea6167 100644 --- a/security/security.c +++ b/security/security.c @@ -2223,15 +2223,16 @@ void security_sk_clone(const struct sock *sk, struct sock *newsk) } EXPORT_SYMBOL(security_sk_clone); -void security_sk_classify_flow(struct sock *sk, struct flowi *fl) +void security_sk_classify_flow(struct sock *sk, struct flowi_common *flic) { - call_void_hook(sk_getsecid, sk, &fl->flowi_secid); + call_void_hook(sk_getsecid, sk, &flic->flowic_secid); } EXPORT_SYMBOL(security_sk_classify_flow); -void security_req_classify_flow(const struct request_sock *req, struct flowi *fl) +void security_req_classify_flow(const struct request_sock *req, + struct flowi_common *flic) { - call_void_hook(req_classify_flow, req, fl); + call_void_hook(req_classify_flow, req, flic); } EXPORT_SYMBOL(security_req_classify_flow); @@ -2423,7 +2424,7 @@ int security_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir) int security_xfrm_state_pol_flow_match(struct xfrm_state *x, 
struct xfrm_policy *xp, - const struct flowi *fl) + const struct flowi_common *flic) { struct security_hook_list *hp; int rc = LSM_RET_DEFAULT(xfrm_state_pol_flow_match); @@ -2439,7 +2440,7 @@ int security_xfrm_state_pol_flow_match(struct xfrm_state *x, */ hlist_for_each_entry(hp, &security_hook_heads.xfrm_state_pol_flow_match, list) { - rc = hp->hook.xfrm_state_pol_flow_match(x, xp, fl); + rc = hp->hook.xfrm_state_pol_flow_match(x, xp, flic); break; } return rc; @@ -2450,9 +2451,9 @@ int security_xfrm_decode_session(struct sk_buff *skb, u32 *secid) return call_int_hook(xfrm_decode_session, 0, skb, secid, 1); } -void security_skb_classify_flow(struct sk_buff *skb, struct flowi *fl) +void security_skb_classify_flow(struct sk_buff *skb, struct flowi_common *flic) { - int rc = call_int_hook(xfrm_decode_session, 0, skb, &fl->flowi_secid, + int rc = call_int_hook(xfrm_decode_session, 0, skb, &flic->flowic_secid, 0); BUG_ON(rc); diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index 8c901ae05dd8..ee37ce2e2619 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -5448,9 +5448,9 @@ static void selinux_secmark_refcount_dec(void) } static void selinux_req_classify_flow(const struct request_sock *req, - struct flowi *fl) + struct flowi_common *flic) { - fl->flowi_secid = req->secid; + flic->flowic_secid = req->secid; } static int selinux_tun_dev_alloc_security(void **security) diff --git a/security/selinux/include/xfrm.h b/security/selinux/include/xfrm.h index a0b465316292..0a6f34a7a971 100644 --- a/security/selinux/include/xfrm.h +++ b/security/selinux/include/xfrm.h @@ -26,7 +26,7 @@ int selinux_xfrm_state_delete(struct xfrm_state *x); int selinux_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir); int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x, struct xfrm_policy *xp, - const struct flowi *fl); + const struct flowi_common *flic); #ifdef CONFIG_SECURITY_NETWORK_XFRM extern atomic_t selinux_xfrm_refcount; diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c index 00e95f8bd7c7..114245b6f7c7 100644 --- a/security/selinux/xfrm.c +++ b/security/selinux/xfrm.c @@ -175,9 +175,10 @@ int selinux_xfrm_policy_lookup(struct xfrm_sec_ctx *ctx, u32 fl_secid, u8 dir) */ int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x, struct xfrm_policy *xp, - const struct flowi *fl) + const struct flowi_common *flic) { u32 state_sid; + u32 flic_sid; if (!xp->security) if (x->security) @@ -196,17 +197,17 @@ int selinux_xfrm_state_pol_flow_match(struct xfrm_state *x, return 0; state_sid = x->security->ctx_sid; + flic_sid = flic->flowic_secid; - if (fl->flowi_secid != state_sid) + if (flic_sid != state_sid) return 0; /* We don't need a separate SA Vs. policy polmatch check since the SA * is now of the same label as the flow and a flow Vs. policy polmatch * check had already happened in selinux_xfrm_policy_lookup() above. */ - return (avc_has_perm(&selinux_state, - fl->flowi_secid, state_sid, - SECCLASS_ASSOCIATION, ASSOCIATION__SENDTO, - NULL) ? 0 : 1); + return (avc_has_perm(&selinux_state, flic_sid, state_sid, + SECCLASS_ASSOCIATION, ASSOCIATION__SENDTO, + NULL) ? 
0 : 1); } static u32 selinux_xfrm_skb_sid_egress(struct sk_buff *skb) From 8113eedbab85f28fd52ba54de0a3dfa59dacf086 Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Fri, 13 May 2022 11:55:42 -0700 Subject: [PATCH 0537/1842] sctp: read sk->sk_bound_dev_if once in sctp_rcv() [ Upstream commit a20ea298071f46effa3aaf965bf9bb34c901db3f ] sctp_rcv() reads sk->sk_bound_dev_if twice while the socket is not locked. Another cpu could change this field under us. Fixes: 0fd9a65a76e8 ("[SCTP] Support SO_BINDTODEVICE socket option on incoming packets.") Signed-off-by: Eric Dumazet Cc: Neil Horman Cc: Vlad Yasevich Cc: Marcelo Ricardo Leitner Acked-by: Marcelo Ricardo Leitner Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/sctp/input.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/net/sctp/input.c b/net/sctp/input.c index 34494a0b28bd..8f3aab6a4458 100644 --- a/net/sctp/input.c +++ b/net/sctp/input.c @@ -92,6 +92,7 @@ int sctp_rcv(struct sk_buff *skb) struct sctp_chunk *chunk; union sctp_addr src; union sctp_addr dest; + int bound_dev_if; int family; struct sctp_af *af; struct net *net = dev_net(skb->dev); @@ -169,7 +170,8 @@ int sctp_rcv(struct sk_buff *skb) * If a frame arrives on an interface and the receiving socket is * bound to another interface, via SO_BINDTODEVICE, treat it as OOTB */ - if (sk->sk_bound_dev_if && (sk->sk_bound_dev_if != af->skb_iif(skb))) { + bound_dev_if = READ_ONCE(sk->sk_bound_dev_if); + if (bound_dev_if && (bound_dev_if != af->skb_iif(skb))) { if (transport) { sctp_transport_put(transport); asoc = NULL; From 33411945c9ad8a6d58aa73806ef241ae563b0bc4 Mon Sep 17 00:00:00 2001 From: Zheng Bin Date: Fri, 13 May 2022 15:09:22 +0800 Subject: [PATCH 0538/1842] net: hinic: add missing destroy_workqueue in hinic_pf_to_mgmt_init [ Upstream commit 382d917bfc1e92339dae3c8a636b2730e8bb5132 ] hinic_pf_to_mgmt_init misses destroy_workqueue in error path, this patch fixes that. Fixes: 6dbb89014dc3 ("hinic: fix sending mailbox timeout in aeq event work") Signed-off-by: Zheng Bin Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c index 819fa13034c0..027dcc453506 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c @@ -647,6 +647,7 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt, err = alloc_msg_buf(pf_to_mgmt); if (err) { dev_err(&pdev->dev, "Failed to allocate msg buffers\n"); + destroy_workqueue(pf_to_mgmt->workq); hinic_health_reporters_destroy(hwdev->devlink_dev); return err; } @@ -654,6 +655,7 @@ int hinic_pf_to_mgmt_init(struct hinic_pf_to_mgmt *pf_to_mgmt, err = hinic_api_cmd_init(pf_to_mgmt->cmd_chain, hwif); if (err) { dev_err(&pdev->dev, "Failed to initialize cmd chains\n"); + destroy_workqueue(pf_to_mgmt->workq); hinic_health_reporters_destroy(hwdev->devlink_dev); return err; } From 510e879420b410d88c612aecc6ca15dc6fe77473 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 12 May 2022 15:13:30 +0400 Subject: [PATCH 0539/1842] ASoC: ti: j721e-evm: Fix refcount leak in j721e_soc_probe_* [ Upstream commit a34840c4eb3278a7c29c9c57a65ce7541c66f9f2 ] of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when not needed anymore. Add missing of_node_put() to avoid refcount leak. 
Fixes: 6748d0559059 ("ASoC: ti: Add custom machine driver for j721e EVM (CPB and IVI)") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220512111331.44774-1-linmq006@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/ti/j721e-evm.c | 44 ++++++++++++++++++++++++++++++---------- 1 file changed, 33 insertions(+), 11 deletions(-) diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c index 265bbc5a2f96..756cd9694cbe 100644 --- a/sound/soc/ti/j721e-evm.c +++ b/sound/soc/ti/j721e-evm.c @@ -631,17 +631,18 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx, codec_node = of_parse_phandle(node, "ti,cpb-codec", 0); if (!codec_node) { dev_err(priv->dev, "CPB codec node is not provided\n"); - return -EINVAL; + ret = -EINVAL; + goto put_dai_node; } domain = &priv->audio_domains[J721E_AUDIO_DOMAIN_CPB]; ret = j721e_get_clocks(priv->dev, &domain->codec, "cpb-codec-scki"); if (ret) - return ret; + goto put_codec_node; ret = j721e_get_clocks(priv->dev, &domain->mcasp, "cpb-mcasp-auxclk"); if (ret) - return ret; + goto put_codec_node; /* * Common Processor Board, two links @@ -651,8 +652,10 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx, comp_count = 6; compnent = devm_kzalloc(priv->dev, comp_count * sizeof(*compnent), GFP_KERNEL); - if (!compnent) - return -ENOMEM; + if (!compnent) { + ret = -ENOMEM; + goto put_codec_node; + } comp_idx = 0; priv->dai_links[*link_idx].cpus = &compnent[comp_idx++]; @@ -703,6 +706,12 @@ static int j721e_soc_probe_cpb(struct j721e_priv *priv, int *link_idx, (*conf_idx)++; return 0; + +put_codec_node: + of_node_put(codec_node); +put_dai_node: + of_node_put(dai_node); + return ret; } static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx, @@ -727,23 +736,25 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx, codeca_node = of_parse_phandle(node, "ti,ivi-codec-a", 0); if (!codeca_node) { dev_err(priv->dev, "IVI codec-a node is not provided\n"); - return -EINVAL; + ret = -EINVAL; + goto put_dai_node; } codecb_node = of_parse_phandle(node, "ti,ivi-codec-b", 0); if (!codecb_node) { dev_warn(priv->dev, "IVI codec-b node is not provided\n"); - return 0; + ret = 0; + goto put_codeca_node; } domain = &priv->audio_domains[J721E_AUDIO_DOMAIN_IVI]; ret = j721e_get_clocks(priv->dev, &domain->codec, "ivi-codec-scki"); if (ret) - return ret; + goto put_codecb_node; ret = j721e_get_clocks(priv->dev, &domain->mcasp, "ivi-mcasp-auxclk"); if (ret) - return ret; + goto put_codecb_node; /* * IVI extension, two links @@ -755,8 +766,10 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx, comp_count = 8; compnent = devm_kzalloc(priv->dev, comp_count * sizeof(*compnent), GFP_KERNEL); - if (!compnent) - return -ENOMEM; + if (!compnent) { + ret = -ENOMEM; + goto put_codecb_node; + } comp_idx = 0; priv->dai_links[*link_idx].cpus = &compnent[comp_idx++]; @@ -817,6 +830,15 @@ static int j721e_soc_probe_ivi(struct j721e_priv *priv, int *link_idx, (*conf_idx)++; return 0; + + +put_codecb_node: + of_node_put(codecb_node); +put_codeca_node: + of_node_put(codeca_node); +put_dai_node: + of_node_put(dai_node); + return ret; } static int j721e_soc_probe(struct platform_device *pdev) From 5c2456629433c0d05c3e3bd7b9ac5447a3f6222e Mon Sep 17 00:00:00 2001 From: Dongliang Mu Date: Fri, 22 Apr 2022 10:54:05 +0200 Subject: [PATCH 0540/1842] media: ov7670: remove ov7670_power_off from ov7670_remove [ Upstream commit 5bf19572e31375368f19edd2dbb2e0789518bb99 ] In 
ov7670_probe, ov7670_power_off() is always invoked, regardless of whether the probe succeeds or fails. So we must not invoke it again in ov7670_remove(). Fix this by removing ov7670_power_off from ov7670_remove. Fixes: 030f9f682e66 ("media: ov7670: control clock along with power") Signed-off-by: Dongliang Mu Signed-off-by: Sakari Ailus Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/media/i2c/ov7670.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c index b42b289faaef..154776d0069e 100644 --- a/drivers/media/i2c/ov7670.c +++ b/drivers/media/i2c/ov7670.c @@ -2000,7 +2000,6 @@ static int ov7670_remove(struct i2c_client *client) v4l2_async_unregister_subdev(sd); v4l2_ctrl_handler_free(&info->hdl); media_entity_cleanup(&info->sd.entity); - ov7670_power_off(sd); return 0; } From 82239e30ab04e1bc2c6b6a4519509e404d44ea15 Mon Sep 17 00:00:00 2001 From: Cai Huoqing Date: Wed, 8 Sep 2021 12:57:59 +0200 Subject: [PATCH 0541/1842] media: staging: media: rkvdec: Make use of the helper function devm_platform_ioremap_resource() [ Upstream commit 5a3683d60e56f4faa9552d3efafd87ef106dd393 ] Use the devm_platform_ioremap_resource() helper instead of calling platform_get_resource() and devm_ioremap_resource() separately. Signed-off-by: Cai Huoqing Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/staging/media/rkvdec/rkvdec.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/staging/media/rkvdec/rkvdec.c b/drivers/staging/media/rkvdec/rkvdec.c index a7788e7a9542..e384ea8d7280 100644 --- a/drivers/staging/media/rkvdec/rkvdec.c +++ b/drivers/staging/media/rkvdec/rkvdec.c @@ -1000,7 +1000,6 @@ static const char * const rkvdec_clk_names[] = { static int rkvdec_probe(struct platform_device *pdev) { struct rkvdec_dev *rkvdec; - struct resource *res; unsigned int i; int ret, irq; @@ -1032,8 +1031,7 @@ static int rkvdec_probe(struct platform_device *pdev) */ clk_set_rate(rkvdec->clocks[0].clk, 500 * 1000 * 1000); - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - rkvdec->regs = devm_ioremap_resource(&pdev->dev, res); + rkvdec->regs = devm_platform_ioremap_resource(pdev, 0); if (IS_ERR(rkvdec->regs)) return PTR_ERR(rkvdec->regs); From b4805a77d5258f07c4aa4088e96e49eb5b9b5124 Mon Sep 17 00:00:00 2001 From: Nicolas Dufresne Date: Fri, 13 May 2022 22:29:11 +0200 Subject: [PATCH 0542/1842] media: rkvdec: h264: Fix dpb_valid implementation [ Upstream commit 7ab889f09dfa70e8097ec1b9186fd228124112cb ] The ref builder only provided references that are marked as valid in the dpb. Thus the current implementation of dpb_valid would always set the flag to 1. This does not represent missing frames (these are called 'non-existing' pictures in the spec). In some contexts, these non-existing pictures still need to occupy a slot in the reference list according to the spec.
Fixes: cd33c830448ba ("media: rkvdec: Add the rkvdec driver") Signed-off-by: Nicolas Dufresne Reviewed-by: Sebastian Fricke Reviewed-by: Ezequiel Garcia Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/staging/media/rkvdec/rkvdec-h264.c | 33 ++++++++++++++++------ 1 file changed, 24 insertions(+), 9 deletions(-) diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c index 5487f6d0bcb6..52ffa31f08ac 100644 --- a/drivers/staging/media/rkvdec/rkvdec-h264.c +++ b/drivers/staging/media/rkvdec/rkvdec-h264.c @@ -112,6 +112,7 @@ struct rkvdec_h264_run { const struct v4l2_ctrl_h264_sps *sps; const struct v4l2_ctrl_h264_pps *pps; const struct v4l2_ctrl_h264_scaling_matrix *scaling_matrix; + int ref_buf_idx[V4L2_H264_NUM_DPB_ENTRIES]; }; struct rkvdec_h264_ctx { @@ -725,6 +726,26 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx, } } +static void lookup_ref_buf_idx(struct rkvdec_ctx *ctx, + struct rkvdec_h264_run *run) +{ + const struct v4l2_ctrl_h264_decode_params *dec_params = run->decode_params; + u32 i; + + for (i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) { + struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx; + const struct v4l2_h264_dpb_entry *dpb = run->decode_params->dpb; + struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q; + int buf_idx = -1; + + if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE) + buf_idx = vb2_find_timestamp(cap_q, + dpb[i].reference_ts, 0); + + run->ref_buf_idx[i] = buf_idx; + } +} + static void assemble_hw_rps(struct rkvdec_ctx *ctx, struct rkvdec_h264_run *run) { @@ -762,7 +783,7 @@ static void assemble_hw_rps(struct rkvdec_ctx *ctx, for (j = 0; j < RKVDEC_NUM_REFLIST; j++) { for (i = 0; i < h264_ctx->reflists.num_valid; i++) { - u8 dpb_valid = 0; + bool dpb_valid = run->ref_buf_idx[i] >= 0; u8 idx = 0; switch (j) { @@ -779,8 +800,6 @@ static void assemble_hw_rps(struct rkvdec_ctx *ctx, if (idx >= ARRAY_SIZE(dec_params->dpb)) continue; - dpb_valid = !!(dpb[idx].flags & - V4L2_H264_DPB_ENTRY_FLAG_ACTIVE); set_ps_field(hw_rps, DPB_INFO(i, j), idx | dpb_valid << 4); @@ -859,13 +878,8 @@ get_ref_buf(struct rkvdec_ctx *ctx, struct rkvdec_h264_run *run, unsigned int dpb_idx) { struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx; - const struct v4l2_h264_dpb_entry *dpb = run->decode_params->dpb; struct vb2_queue *cap_q = &m2m_ctx->cap_q_ctx.q; - int buf_idx = -1; - - if (dpb[dpb_idx].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE) - buf_idx = vb2_find_timestamp(cap_q, - dpb[dpb_idx].reference_ts, 0); + int buf_idx = run->ref_buf_idx[dpb_idx]; /* * If a DPB entry is unused or invalid, address of current destination @@ -1102,6 +1116,7 @@ static int rkvdec_h264_run(struct rkvdec_ctx *ctx) assemble_hw_scaling_list(ctx, &run); assemble_hw_pps(ctx, &run); + lookup_ref_buf_idx(ctx, &run); assemble_hw_rps(ctx, &run); config_registers(ctx, &run); From 63b7c0899564a8463ea699f8399b108a06140894 Mon Sep 17 00:00:00 2001 From: Jonas Karlman Date: Fri, 13 May 2022 22:29:12 +0200 Subject: [PATCH 0543/1842] media: rkvdec: h264: Fix bit depth wrap in pps packet [ Upstream commit a074aa4760d1dad0bd565c0f66e7250f5f219ab0 ] The luma and chroma bit depth fields in the pps packet are 3 bits wide. 8 is wrongly added to the bit depth values written to these 3 bit fields. Because only the 3 LSB are written, the hardware was configured correctly. Correct this by not adding 8 to the luma and chroma bit depth value. 
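[ Note: worked example only, not part of the patch. The PPS bit-depth fields are 3 bits wide, so only the low 3 bits of the written value survive; that is why the old "+ 8" still happened to program the intended value.

#include <stdio.h>

int main(void)
{
	unsigned int bit_depth_luma_minus8 = 2;                     /* e.g. 10-bit content */
	unsigned int old_field = (bit_depth_luma_minus8 + 8) & 0x7; /* old code, masked to 3 bits */
	unsigned int new_field = bit_depth_luma_minus8 & 0x7;       /* fixed code */

	printf("old=%u new=%u\n", old_field, new_field);            /* both print 2 */
	return 0;
}
]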
Fixes: cd33c830448ba ("media: rkvdec: Add the rkvdec driver") Signed-off-by: Jonas Karlman Signed-off-by: Nicolas Dufresne Reviewed-by: Ezequiel Garcia Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Sasha Levin --- drivers/staging/media/rkvdec/rkvdec-h264.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/staging/media/rkvdec/rkvdec-h264.c b/drivers/staging/media/rkvdec/rkvdec-h264.c index 52ffa31f08ac..7013f7ce3678 100644 --- a/drivers/staging/media/rkvdec/rkvdec-h264.c +++ b/drivers/staging/media/rkvdec/rkvdec-h264.c @@ -662,8 +662,8 @@ static void assemble_hw_pps(struct rkvdec_ctx *ctx, WRITE_PPS(0xff, PROFILE_IDC); WRITE_PPS(1, CONSTRAINT_SET3_FLAG); WRITE_PPS(sps->chroma_format_idc, CHROMA_FORMAT_IDC); - WRITE_PPS(sps->bit_depth_luma_minus8 + 8, BIT_DEPTH_LUMA); - WRITE_PPS(sps->bit_depth_chroma_minus8 + 8, BIT_DEPTH_CHROMA); + WRITE_PPS(sps->bit_depth_luma_minus8, BIT_DEPTH_LUMA); + WRITE_PPS(sps->bit_depth_chroma_minus8, BIT_DEPTH_CHROMA); WRITE_PPS(0, QPPRIME_Y_ZERO_TRANSFORM_BYPASS_FLAG); WRITE_PPS(sps->log2_max_frame_num_minus4, LOG2_MAX_FRAME_NUM_MINUS4); WRITE_PPS(sps->max_num_ref_frames, MAX_NUM_REF_FRAMES); From d54ac6ca48c1fa44868faffcba7d8aeab8937f2e Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Tue, 10 May 2022 11:32:32 -0700 Subject: [PATCH 0544/1842] ext4: reject the 'commit' option on ext2 filesystems [ Upstream commit cb8435dc8ba33bcafa41cf2aa253794320a3b8df ] The 'commit' option is only applicable for ext3 and ext4 filesystems, and has never been accepted by the ext2 filesystem driver, so the ext4 driver shouldn't allow it on ext2 filesystems. This fixes a failure in xfstest ext4/053. Fixes: 8dc0aa8cf0f7 ("ext4: check incompatible mount options while mounting ext2/3") Signed-off-by: Eric Biggers Reviewed-by: Ritesh Harjani Reviewed-by: Lukas Czerner Link: https://lore.kernel.org/r/20220510183232.172615-1-ebiggers@kernel.org Signed-off-by: Sasha Levin --- fs/ext4/super.c | 1 + 1 file changed, 1 insertion(+) diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 3e26edeca8c7..35d990adefc6 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -1960,6 +1960,7 @@ static const struct mount_opts { MOPT_EXT4_ONLY | MOPT_CLEAR}, {Opt_warn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_SET}, {Opt_nowarn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_CLEAR}, + {Opt_commit, 0, MOPT_NO_EXT2}, {Opt_nojournal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM, MOPT_EXT4_ONLY | MOPT_CLEAR}, {Opt_journal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM, From 48e82ce8cdb19c20a5020fa446b286d6a147450c Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 12 May 2022 16:19:50 +0400 Subject: [PATCH 0545/1842] drm/msm/a6xx: Fix refcount leak in a6xx_gpu_init [ Upstream commit c56de483093d7ad0782327f95dda7da97bc4c315 ] of_parse_phandle() returns a node pointer with its refcount incremented; we should call of_node_put() on it when it is no longer needed. a6xx_gmu_init() passes the node to of_find_device_by_node() and of_dma_configure(); of_find_device_by_node() takes its own reference, and of_dma_configure() does not need the node after use. Add the missing of_node_put() to avoid a refcount leak.
Fixes: 4b565ca5a2cb ("drm/msm: Add A6XX device support") Signed-off-by: Miaoqian Lin Reviewed-by: Akhil P Oommen Link: https://lore.kernel.org/r/20220512121955.56937-1-linmq006@gmail.com Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 39563daff4a0..dffc133b8b1c 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -1308,6 +1308,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) BUG_ON(!node); ret = a6xx_gmu_init(a6xx_gpu, node); + of_node_put(node); if (ret) { a6xx_destroy(&(a6xx_gpu->base.base)); return ERR_PTR(ret); From 656aa3c51fc662064f17179b38ec3ce43af53bca Mon Sep 17 00:00:00 2001 From: Hangyu Hua Date: Mon, 9 May 2022 14:11:25 +0800 Subject: [PATCH 0546/1842] drm: msm: fix possible memory leak in mdp5_crtc_cursor_set() [ Upstream commit 947a844bb3ebff0f4736d244d792ce129f6700d7 ] drm_gem_object_lookup will call drm_gem_object_get inside. So cursor_bo needs to be put when msm_gem_get_and_pin_iova fails. Fixes: e172d10a9c4a ("drm/msm/mdp5: Add hardware cursor support") Signed-off-by: Hangyu Hua Link: https://lore.kernel.org/r/20220509061125.18585-1-hbh25y@gmail.com Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c index 06f19ef5dbf3..ff4f207cbdea 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -983,8 +983,10 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &mdp5_crtc->cursor.iova); - if (ret) + if (ret) { + drm_gem_object_put(cursor_bo); return -EINVAL; + } pm_runtime_get_sync(&pdev->dev); From 2679de7d046fc0ee74aafed55b8ec1e13fd63129 Mon Sep 17 00:00:00 2001 From: Lai Jiangshan Date: Wed, 16 Mar 2022 12:16:12 +0800 Subject: [PATCH 0547/1842] x86/sev: Annotate stack change in the #VC handler [ Upstream commit c42b145181aafd59ed31ccd879493389e3ea5a08 ] In idtentry_vc(), vc_switch_off_ist() determines a safe stack to switch to, off of the IST stack. Annotate the new stack switch with ENCODE_FRAME_POINTER in case UNWINDER_FRAME_POINTER is used. A stack walk before looks like this: CPU: 0 PID: 0 Comm: swapper Not tainted 5.18.0-rc7+ #2 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Call Trace: dump_stack_lvl dump_stack kernel_exc_vmm_communication asm_exc_vmm_communication ? native_read_msr ? __x2apic_disable.part.0 ? x2apic_setup ? cpu_init ? trap_init ? start_kernel ? x86_64_start_reservations ? x86_64_start_kernel ? secondary_startup_64_no_verify and with the fix, the stack dump is exact: CPU: 0 PID: 0 Comm: swapper Not tainted 5.18.0-rc7+ #3 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Call Trace: dump_stack_lvl dump_stack kernel_exc_vmm_communication asm_exc_vmm_communication RIP: 0010:native_read_msr Code: ... < snipped regs > ? __x2apic_disable.part.0 x2apic_setup cpu_init trap_init start_kernel x86_64_start_reservations x86_64_start_kernel secondary_startup_64_no_verify [ bp: Test in a SEV-ES guest and rewrite the commit message to explain what exactly this does. 
] Fixes: a13644f3a53d ("x86/entry/64: Add entry code for #VC handler") Signed-off-by: Lai Jiangshan Signed-off-by: Borislav Petkov Acked-by: Josh Poimboeuf Link: https://lore.kernel.org/r/20220316041612.71357-1-jiangshanlai@gmail.com Signed-off-by: Sasha Levin --- arch/x86/entry/entry_64.S | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index a24ce5905ab8..2f2d52729e17 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -500,6 +500,7 @@ SYM_CODE_START(\asmsym) call vc_switch_off_ist movq %rax, %rsp /* Switch to new stack */ + ENCODE_FRAME_POINTER UNWIND_HINT_REGS /* Update pt_regs */ From ffbcfb1688f6141f5286e6f41501fe9fb7af788a Mon Sep 17 00:00:00 2001 From: Abhinav Kumar Date: Wed, 18 May 2022 15:34:07 -0700 Subject: [PATCH 0548/1842] drm/msm/dpu: handle pm_runtime_get_sync() errors in bind path [ Upstream commit 64b22a0da12adb571c01edd671ee43634ebd7e41 ] If there are errors while trying to enable the pm in the bind path, it will lead to unclocked access of hw revision register thereby crashing the device. This will not address why the pm_runtime_get_sync() fails but at the very least we should be able to prevent the crash by handling the error and bailing out earlier. changes in v2: - use pm_runtime_resume_and_get() instead of pm_runtime_get_sync() Fixes: 25fdd5933e4c ("drm/msm: Add SDM845 DPU support") Signed-off-by: Abhinav Kumar Reviewed-by: Rob Clark Reviewed-by: Stephen Boyd Patchwork: https://patchwork.freedesktop.org/patch/486721/ Link: https://lore.kernel.org/r/20220518223407.26147-1-quic_abhinavk@quicinc.com Signed-off-by: Abhinav Kumar Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index b05ff46d773d..7503f093f3b6 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -939,7 +939,9 @@ static int dpu_kms_hw_init(struct msm_kms *kms) dpu_kms_parse_data_bus_icc_path(dpu_kms); - pm_runtime_get_sync(&dpu_kms->pdev->dev); + rc = pm_runtime_resume_and_get(&dpu_kms->pdev->dev); + if (rc < 0) + goto error; dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0); From 3161044e75b71d191b2b859826cfbb0d5589de5a Mon Sep 17 00:00:00 2001 From: Nathan Chancellor Date: Fri, 13 May 2022 08:51:36 +0100 Subject: [PATCH 0549/1842] drm/i915: Fix CFI violation with show_dynamic_id() [ Upstream commit 58606220a2f1407a7516c547f09a1ba7b4350a73 ] When an attribute group is created with sysfs_create_group(), the ->sysfs_ops() callback is set to kobj_sysfs_ops, which sets the ->show() callback to kobj_attr_show(). kobj_attr_show() uses container_of() to get the ->show() callback from the attribute it was passed, meaning the ->show() callback needs to be the same type as the ->show() callback in 'struct kobj_attribute'. However, show_dynamic_id() has the type of the ->show() callback in 'struct device_attribute', which causes a CFI violation when opening the 'id' sysfs node under drm/card0/metrics. This happens to work because the layout of 'struct kobj_attribute' and 'struct device_attribute' are the same, so the container_of() cast happens to allow the ->show() callback to still work. Change the type of show_dynamic_id() to match the ->show() callback in 'struct kobj_attributes' and update the type of sysfs_metric_id to match, which resolves the CFI violation. 
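
The underlying rule: an attribute group registered through sysfs_create_group() with the default kobj_sysfs_ops ends up in kobj_attr_show(), which uses container_of() to recover a 'struct kobj_attribute' before invoking ->show(). With Control Flow Integrity enabled, a callback declared with the 'struct device_attribute' prototype is flagged even though the layouts happen to line up. A minimal sketch of a correctly typed attribute, not the i915 code, with made-up names:

  #include <linux/kobject.h>
  #include <linux/sysfs.h>

  /* Prototype matches what kobj_attr_show() expects. */
  static ssize_t example_id_show(struct kobject *kobj,
                                 struct kobj_attribute *attr, char *buf)
  {
          return sysfs_emit(buf, "%d\n", 42);
  }

  static struct kobj_attribute example_id_attr =
          __ATTR(id, 0444, example_id_show, NULL);

  static struct attribute *example_attrs[] = {
          &example_id_attr.attr,
          NULL,
  };

  static const struct attribute_group example_group = {
          .attrs = example_attrs,
  };
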
Fixes: f89823c21224 ("drm/i915/perf: Implement I915_PERF_ADD/REMOVE_CONFIG interface") Signed-off-by: Nathan Chancellor Reviewed-by: Kees Cook Reviewed-by: Sami Tolvanen Signed-off-by: Tvrtko Ursulin Link: https://patchwork.freedesktop.org/patch/msgid/20220513075136.1027007-1-tvrtko.ursulin@linux.intel.com (cherry picked from commit 18fb42db05a0b93ab5dd5eab5315e50eaa3ca620) Signed-off-by: Jani Nikula Signed-off-by: Sasha Levin --- drivers/gpu/drm/i915/i915_perf.c | 4 ++-- drivers/gpu/drm/i915/i915_perf_types.h | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c index 74e66dea5708..5e670aace59f 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -3964,8 +3964,8 @@ static struct i915_oa_reg *alloc_oa_regs(struct i915_perf *perf, return ERR_PTR(err); } -static ssize_t show_dynamic_id(struct device *dev, - struct device_attribute *attr, +static ssize_t show_dynamic_id(struct kobject *kobj, + struct kobj_attribute *attr, char *buf) { struct i915_oa_config *oa_config = diff --git a/drivers/gpu/drm/i915/i915_perf_types.h b/drivers/gpu/drm/i915/i915_perf_types.h index a36a455ae336..534951ff38bb 100644 --- a/drivers/gpu/drm/i915/i915_perf_types.h +++ b/drivers/gpu/drm/i915/i915_perf_types.h @@ -54,7 +54,7 @@ struct i915_oa_config { struct attribute_group sysfs_metric; struct attribute *attrs[2]; - struct device_attribute sysfs_metric_id; + struct kobj_attribute sysfs_metric_id; struct kref ref; struct rcu_head rcu; From 83603802954068ccd1b8a3f2ccbbaf5e0862acb0 Mon Sep 17 00:00:00 2001 From: Stefan Wahren Date: Tue, 12 Apr 2022 21:54:23 +0200 Subject: [PATCH 0550/1842] thermal/drivers/bcm2711: Don't clamp temperature at zero [ Upstream commit 106e0121e243de4da7d634338089a68a8da2abe9 ] The thermal sensor on BCM2711 is capable of negative temperatures, so don't clamp the measurements at zero. Since this was the only use for variable t, drop it. This change based on a patch by Dom Cobley, who also tested the fix. Fixes: 59b781352dc4 ("thermal: Add BCM2711 thermal driver") Signed-off-by: Stefan Wahren Acked-by: Florian Fainelli Link: https://lore.kernel.org/r/20220412195423.104511-1-stefan.wahren@i2se.com Signed-off-by: Daniel Lezcano Signed-off-by: Sasha Levin --- drivers/thermal/broadcom/bcm2711_thermal.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/drivers/thermal/broadcom/bcm2711_thermal.c b/drivers/thermal/broadcom/bcm2711_thermal.c index 67c2a737bc9d..7b536c8a59dc 100644 --- a/drivers/thermal/broadcom/bcm2711_thermal.c +++ b/drivers/thermal/broadcom/bcm2711_thermal.c @@ -38,7 +38,6 @@ static int bcm2711_get_temp(void *data, int *temp) int offset = thermal_zone_get_offset(priv->thermal); u32 val; int ret; - long t; ret = regmap_read(priv->regmap, AVS_RO_TEMP_STATUS, &val); if (ret) @@ -50,9 +49,7 @@ static int bcm2711_get_temp(void *data, int *temp) val &= AVS_RO_TEMP_STATUS_DATA_MSK; /* Convert a HW code to a temperature reading (millidegree celsius) */ - t = slope * val + offset; - - *temp = t < 0 ? 0 : t; + *temp = slope * val + offset; return 0; } From 79098339ac2065f4b4352ef5921628970b6f47e6 Mon Sep 17 00:00:00 2001 From: Zheng Yongjun Date: Mon, 25 Apr 2022 09:29:29 +0000 Subject: [PATCH 0551/1842] thermal/drivers/broadcom: Fix potential NULL dereference in sr_thermal_probe [ Upstream commit e20d136ec7d6f309989c447638365840d3424c8e ] platform_get_resource() may return NULL, add proper check to avoid potential NULL dereferencing. 
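
The pattern: platform_get_resource() returns NULL when the requested resource is absent, so the result must be checked before res->start or resource_size(res) is used. A minimal sketch, not the Stingray driver:

  #include <linux/errno.h>
  #include <linux/io.h>
  #include <linux/ioport.h>
  #include <linux/platform_device.h>

  static int example_probe(struct platform_device *pdev)
  {
          struct resource *res;
          void __iomem *regs;

          res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
          if (!res)               /* no matching "reg" entry */
                  return -ENOENT;

          regs = devm_ioremap(&pdev->dev, res->start, resource_size(res));
          if (!regs)
                  return -ENOMEM;

          /* ... use regs ... */
          return 0;
  }
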
Fixes: 250e211057c72 ("thermal: broadcom: Add Stingray thermal driver") Signed-off-by: Zheng Yongjun Link: https://lore.kernel.org/r/20220425092929.90412-1-zhengyongjun3@huawei.com Signed-off-by: Daniel Lezcano Signed-off-by: Sasha Levin --- drivers/thermal/broadcom/sr-thermal.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/thermal/broadcom/sr-thermal.c b/drivers/thermal/broadcom/sr-thermal.c index 475ce2900771..85ab9edd580c 100644 --- a/drivers/thermal/broadcom/sr-thermal.c +++ b/drivers/thermal/broadcom/sr-thermal.c @@ -60,6 +60,9 @@ static int sr_thermal_probe(struct platform_device *pdev) return -ENOMEM; res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) + return -ENOENT; + sr_thermal->regs = (void __iomem *)devm_memremap(&pdev->dev, res->start, resource_size(res), MEMREMAP_WB); From dcf5ffc91c91cf58c5f172e83daa34d5ea6b69fc Mon Sep 17 00:00:00 2001 From: Daniel Lezcano Date: Sun, 14 Mar 2021 12:13:29 +0100 Subject: [PATCH 0552/1842] thermal/drivers/core: Use a char pointer for the cooling device name [ Upstream commit 58483761810087e5ffdf36e84ac1bf26df909097 ] We want to have any kind of name for the cooling devices as we do no longer want to rely on auto-numbering. Let's replace the cooling device's fixed array by a char pointer to be allocated dynamically when registering the cooling device, so we don't limit the length of the name. Rework the error path at the same time as we have to rollback the allocations in case of error. Tested with a dummy device having the name: "Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch" A village on the island of Anglesey (Wales), known to have the longest name in Europe. Signed-off-by: Daniel Lezcano Reviewed-by: Lukasz Luba Tested-by: Ido Schimmel Link: https://lore.kernel.org/r/20210314111333.16551-1-daniel.lezcano@linaro.org Signed-off-by: Sasha Levin --- .../ethernet/mellanox/mlxsw/core_thermal.c | 2 +- drivers/thermal/thermal_core.c | 38 +++++++++++-------- include/linux/thermal.h | 2 +- 3 files changed, 24 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c index 7ec1d0ee9bee..ecd1856bef5e 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c @@ -133,7 +133,7 @@ static int mlxsw_get_cooling_device_idx(struct mlxsw_thermal *thermal, /* Allow mlxsw thermal zone binding to an external cooling device */ for (i = 0; i < ARRAY_SIZE(mlxsw_thermal_external_allowed_cdev); i++) { if (strnstr(cdev->type, mlxsw_thermal_external_allowed_cdev[i], - sizeof(cdev->type))) + strlen(cdev->type))) return 0; } diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index d9e34ac37662..1abef64ccb5f 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -1092,10 +1092,7 @@ __thermal_cooling_device_register(struct device_node *np, { struct thermal_cooling_device *cdev; struct thermal_zone_device *pos = NULL; - int result; - - if (type && strlen(type) >= THERMAL_NAME_LENGTH) - return ERR_PTR(-EINVAL); + int ret; if (!ops || !ops->get_max_state || !ops->get_cur_state || !ops->set_cur_state) @@ -1105,14 +1102,17 @@ __thermal_cooling_device_register(struct device_node *np, if (!cdev) return ERR_PTR(-ENOMEM); - result = ida_simple_get(&thermal_cdev_ida, 0, 0, GFP_KERNEL); - if (result < 0) { - kfree(cdev); - return ERR_PTR(result); + ret = ida_simple_get(&thermal_cdev_ida, 0, 0, GFP_KERNEL); + if (ret < 0) + goto 
out_kfree_cdev; + cdev->id = ret; + + cdev->type = kstrdup(type ? type : "", GFP_KERNEL); + if (!cdev->type) { + ret = -ENOMEM; + goto out_ida_remove; } - cdev->id = result; - strlcpy(cdev->type, type ? : "", sizeof(cdev->type)); mutex_init(&cdev->lock); INIT_LIST_HEAD(&cdev->thermal_instances); cdev->np = np; @@ -1122,12 +1122,9 @@ __thermal_cooling_device_register(struct device_node *np, cdev->devdata = devdata; thermal_cooling_device_setup_sysfs(cdev); dev_set_name(&cdev->device, "cooling_device%d", cdev->id); - result = device_register(&cdev->device); - if (result) { - ida_simple_remove(&thermal_cdev_ida, cdev->id); - put_device(&cdev->device); - return ERR_PTR(result); - } + ret = device_register(&cdev->device); + if (ret) + goto out_kfree_type; /* Add 'this' new cdev to the global cdev list */ mutex_lock(&thermal_list_lock); @@ -1145,6 +1142,14 @@ __thermal_cooling_device_register(struct device_node *np, mutex_unlock(&thermal_list_lock); return cdev; + +out_kfree_type: + kfree(cdev->type); + put_device(&cdev->device); +out_ida_remove: + ida_simple_remove(&thermal_cdev_ida, cdev->id); +out_kfree_cdev: + return ERR_PTR(ret); } /** @@ -1303,6 +1308,7 @@ void thermal_cooling_device_unregister(struct thermal_cooling_device *cdev) ida_simple_remove(&thermal_cdev_ida, cdev->id); device_del(&cdev->device); thermal_cooling_device_destroy_sysfs(cdev); + kfree(cdev->type); put_device(&cdev->device); } EXPORT_SYMBOL_GPL(thermal_cooling_device_unregister); diff --git a/include/linux/thermal.h b/include/linux/thermal.h index 176d9454e8f3..7097d4dcfdd0 100644 --- a/include/linux/thermal.h +++ b/include/linux/thermal.h @@ -92,7 +92,7 @@ struct thermal_cooling_device_ops { struct thermal_cooling_device { int id; - char type[THERMAL_NAME_LENGTH]; + char *type; struct device device; struct device_node *np; void *devdata; From 18530bedd221160823f63ccc20dd55c7a03edbcf Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Wed, 11 May 2022 10:06:05 +0800 Subject: [PATCH 0553/1842] thermal/core: Fix memory leak in __thermal_cooling_device_register() [ Upstream commit 98a160e898c0f4a979af9de3ab48b4b1d42d1dbb ] I got memory leak as follows when doing fault injection test: unreferenced object 0xffff888010080000 (size 264312): comm "182", pid 102533, jiffies 4296434960 (age 10.100s) hex dump (first 32 bytes): 00 00 00 00 ad 4e ad de ff ff ff ff 00 00 00 00 .....N.......... ff ff ff ff ff ff ff ff 40 7f 1f b9 ff ff ff ff ........@....... backtrace: [<0000000038b2f4fc>] kmalloc_order_trace+0x1d/0x110 mm/slab_common.c:969 [<00000000ebcb8da5>] __kmalloc+0x373/0x420 include/linux/slab.h:510 [<0000000084137f13>] thermal_cooling_device_setup_sysfs+0x15d/0x2d0 include/linux/slab.h:586 [<00000000352b8755>] __thermal_cooling_device_register+0x332/0xa60 drivers/thermal/thermal_core.c:927 [<00000000fb9f331b>] devm_thermal_of_cooling_device_register+0x6b/0xf0 drivers/thermal/thermal_core.c:1041 [<000000009b8012d2>] max6650_probe.cold+0x557/0x6aa drivers/hwmon/max6650.c:211 [<00000000da0b7e04>] i2c_device_probe+0x472/0xac0 drivers/i2c/i2c-core-base.c:561 If device_register() fails, thermal_cooling_device_destroy_sysfs() need be called to free the memory allocated in thermal_cooling_device_setup_sysfs(). 
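
The reworked registration path above follows the usual kernel convention: resources are released in the reverse order of acquisition, one goto label per step, so a failure at any point rolls back exactly what has already been set up. A minimal sketch of the shape only; grab_id(), dup_name() and friends are placeholders, not real APIs (the comments point at their rough analogues in the patch):

  static int grab_id(void)      { return 0; }  /* e.g. ida_simple_get()  */
  static int dup_name(void)     { return 0; }  /* e.g. kstrdup()         */
  static int register_dev(void) { return 0; }  /* e.g. device_register() */
  static void free_name(void)   { }
  static void release_id(void)  { }

  static int example_register(void)
  {
          int ret;

          ret = grab_id();
          if (ret < 0)
                  goto out;

          ret = dup_name();
          if (ret)
                  goto out_release_id;

          ret = register_dev();
          if (ret)
                  goto out_free_name;

          return 0;

  out_free_name:
          free_name();
  out_release_id:
          release_id();
  out:
          return ret;
  }

One wrinkle the real code has to respect: a failed device_register() still requires put_device() rather than a plain kfree(), which is why the patch keeps that call in its unwind path, and the follow-up fix below adds the missing thermal_cooling_device_destroy_sysfs() step to the same chain.
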
Fixes: 8ea229511e06 ("thermal: Add cooling device's statistics in sysfs") Reported-by: Hulk Robot Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220511020605.3096734-1-yangyingliang@huawei.com Signed-off-by: Daniel Lezcano Signed-off-by: Sasha Levin --- drivers/thermal/thermal_core.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index 1abef64ccb5f..5e48e9cfa044 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -1144,6 +1144,7 @@ __thermal_cooling_device_register(struct device_node *np, return cdev; out_kfree_type: + thermal_cooling_device_destroy_sysfs(cdev); kfree(cdev->type); put_device(&cdev->device); out_ida_remove: From 8bbf522a2c51ef939d0e8835e236bfcd252193af Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 17 May 2022 09:51:21 +0400 Subject: [PATCH 0554/1842] thermal/drivers/imx_sc_thermal: Fix refcount leak in imx_sc_thermal_probe [ Upstream commit 09700c504d8e63faffd2a2235074e8c5d130cb8f ] of_find_node_by_name() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. Add missing of_node_put() to avoid refcount leak. Fixes: e20db70dba1c ("thermal: imx_sc: add i.MX system controller thermal support") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220517055121.18092-1-linmq006@gmail.com Signed-off-by: Daniel Lezcano Signed-off-by: Sasha Levin --- drivers/thermal/imx_sc_thermal.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/thermal/imx_sc_thermal.c b/drivers/thermal/imx_sc_thermal.c index 8d76dbfde6a9..331a241eb0ef 100644 --- a/drivers/thermal/imx_sc_thermal.c +++ b/drivers/thermal/imx_sc_thermal.c @@ -94,8 +94,8 @@ static int imx_sc_thermal_probe(struct platform_device *pdev) sensor = devm_kzalloc(&pdev->dev, sizeof(*sensor), GFP_KERNEL); if (!sensor) { of_node_put(child); - of_node_put(sensor_np); - return -ENOMEM; + ret = -ENOMEM; + goto put_node; } ret = thermal_zone_of_get_sensor_id(child, @@ -124,7 +124,9 @@ static int imx_sc_thermal_probe(struct platform_device *pdev) dev_warn(&pdev->dev, "failed to add hwmon sysfs attributes\n"); } +put_node: of_node_put(sensor_np); + of_node_put(np); return ret; } From 7a5e6a48980e292e440ed10fe2b8ad9e3596d07b Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Sat, 14 May 2022 17:10:53 +0800 Subject: [PATCH 0555/1842] ASoC: wm2000: fix missing clk_disable_unprepare() on error in wm2000_anc_transition() [ Upstream commit be2af740e2a9c7134f2d8ab4f104006e110b13de ] Fix the missing clk_disable_unprepare() before return from wm2000_anc_transition() in the error handling case. 
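
The rule: every successful clk_prepare_enable() must be balanced by clk_disable_unprepare() on all exit paths, including the error ones. A simplified sketch of the shape, not the wm2000 code; do_steps() is a stand-in for the real transition table walk:

  #include <linux/clk.h>

  static int do_steps(void)
  {
          return 0;       /* placeholder for the per-mode transition steps */
  }

  static int example_transition(struct clk *mclk)
  {
          int ret;

          ret = clk_prepare_enable(mclk);
          if (ret)
                  return ret;

          ret = do_steps();
          /* Disable the clock whether the steps succeeded or failed. */
          clk_disable_unprepare(mclk);

          return ret;
  }
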
Fixes: 514cfd6dd725 ("ASoC: wm2000: Integrate with clock API") Signed-off-by: Yang Yingliang Acked-by: Charles Keepax Link: https://lore.kernel.org/r/20220514091053.686416-1-yangyingliang@huawei.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/wm2000.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sound/soc/codecs/wm2000.c b/sound/soc/codecs/wm2000.c index 72e165cc6443..97ece3114b3d 100644 --- a/sound/soc/codecs/wm2000.c +++ b/sound/soc/codecs/wm2000.c @@ -536,7 +536,7 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000, { struct i2c_client *i2c = wm2000->i2c; int i, j; - int ret; + int ret = 0; if (wm2000->anc_mode == mode) return 0; @@ -566,13 +566,13 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000, ret = anc_transitions[i].step[j](i2c, anc_transitions[i].analogue); if (ret != 0) - return ret; + break; } if (anc_transitions[i].dest == ANC_OFF) clk_disable_unprepare(wm2000->mclk); - return 0; + return ret; } static int wm2000_anc_set_mode(struct wm2000_priv *wm2000) From d015f6f694ec9f6029ace9026149fb33443aafc2 Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Wed, 18 May 2022 19:57:33 +0800 Subject: [PATCH 0556/1842] NFC: hci: fix sleep in atomic context bugs in nfc_hci_hcp_message_tx [ Upstream commit b413b0cb008646e9f24ce5253cb3cf7ee217aff6 ] There are sleep in atomic context bugs when the request to secure element of st21nfca is timeout. The root cause is that kzalloc and alloc_skb with GFP_KERNEL parameter and mutex_lock are called in st21nfca_se_wt_timeout which is a timer handler. The call tree shows the execution paths that could lead to bugs: (Interrupt context) st21nfca_se_wt_timeout nfc_hci_send_event nfc_hci_hcp_message_tx kzalloc(..., GFP_KERNEL) //may sleep alloc_skb(..., GFP_KERNEL) //may sleep mutex_lock() //may sleep This patch moves the operations that may sleep into a work item. The work item will run in another kernel thread which is in process context to execute the bottom half of the interrupt. So it could prevent atomic context from sleeping. 
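
The pattern used here: a timer callback runs in atomic (softirq) context, so anything that may sleep, such as GFP_KERNEL allocations, mutex_lock() or synchronous I/O, has to be pushed out to a work item that runs in process context. A minimal sketch of the shape with made-up names:

  #include <linux/slab.h>
  #include <linux/timer.h>
  #include <linux/workqueue.h>

  struct example_dev {
          struct timer_list wt_timer;
          struct work_struct timeout_work;
  };

  static void example_timeout_work(struct work_struct *work)
  {
          struct example_dev *dev =
                  container_of(work, struct example_dev, timeout_work);
          void *buf;

          /* Process context: sleeping allocations and mutexes are fine. */
          buf = kzalloc(64, GFP_KERNEL);
          /* ... send the recovery command, take locks, etc. ... */
          kfree(buf);
  }

  static void example_timer_fn(struct timer_list *t)
  {
          struct example_dev *dev = from_timer(dev, t, wt_timer);

          /* Atomic context: just punt the heavy lifting to the work item. */
          schedule_work(&dev->timeout_work);
  }

As in the patch, the work item needs INIT_WORK() at setup time and cancel_work_sync() on the teardown and completion paths so it cannot outlive the device state it touches.
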
Fixes: 2130fb97fecf ("NFC: st21nfca: Adding support for secure element") Signed-off-by: Duoming Zhou Reviewed-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220518115733.62111-1-duoming@zju.edu.cn Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/nfc/st21nfca/se.c | 17 ++++++++++++++--- drivers/nfc/st21nfca/st21nfca.h | 1 + 2 files changed, 15 insertions(+), 3 deletions(-) diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c index 0841e0e370a0..0194e80193d9 100644 --- a/drivers/nfc/st21nfca/se.c +++ b/drivers/nfc/st21nfca/se.c @@ -241,7 +241,7 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx, } EXPORT_SYMBOL(st21nfca_hci_se_io); -static void st21nfca_se_wt_timeout(struct timer_list *t) +static void st21nfca_se_wt_work(struct work_struct *work) { /* * No answer from the secure element @@ -254,8 +254,9 @@ static void st21nfca_se_wt_timeout(struct timer_list *t) */ /* hardware reset managed through VCC_UICC_OUT power supply */ u8 param = 0x01; - struct st21nfca_hci_info *info = from_timer(info, t, - se_info.bwi_timer); + struct st21nfca_hci_info *info = container_of(work, + struct st21nfca_hci_info, + se_info.timeout_work); pr_debug("\n"); @@ -273,6 +274,13 @@ static void st21nfca_se_wt_timeout(struct timer_list *t) info->se_info.cb(info->se_info.cb_context, NULL, 0, -ETIME); } +static void st21nfca_se_wt_timeout(struct timer_list *t) +{ + struct st21nfca_hci_info *info = from_timer(info, t, se_info.bwi_timer); + + schedule_work(&info->se_info.timeout_work); +} + static void st21nfca_se_activation_timeout(struct timer_list *t) { struct st21nfca_hci_info *info = from_timer(info, t, @@ -364,6 +372,7 @@ int st21nfca_apdu_reader_event_received(struct nfc_hci_dev *hdev, switch (event) { case ST21NFCA_EVT_TRANSMIT_DATA: del_timer_sync(&info->se_info.bwi_timer); + cancel_work_sync(&info->se_info.timeout_work); info->se_info.bwi_active = false; r = nfc_hci_send_event(hdev, ST21NFCA_DEVICE_MGNT_GATE, ST21NFCA_EVT_SE_END_OF_APDU_TRANSFER, NULL, 0); @@ -393,6 +402,7 @@ void st21nfca_se_init(struct nfc_hci_dev *hdev) struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); init_completion(&info->se_info.req_completion); + INIT_WORK(&info->se_info.timeout_work, st21nfca_se_wt_work); /* initialize timers */ timer_setup(&info->se_info.bwi_timer, st21nfca_se_wt_timeout, 0); info->se_info.bwi_active = false; @@ -420,6 +430,7 @@ void st21nfca_se_deinit(struct nfc_hci_dev *hdev) if (info->se_info.se_active) del_timer_sync(&info->se_info.se_active_timer); + cancel_work_sync(&info->se_info.timeout_work); info->se_info.bwi_active = false; info->se_info.se_active = false; } diff --git a/drivers/nfc/st21nfca/st21nfca.h b/drivers/nfc/st21nfca/st21nfca.h index 5e0de0fef1d4..0e4a93d11efb 100644 --- a/drivers/nfc/st21nfca/st21nfca.h +++ b/drivers/nfc/st21nfca/st21nfca.h @@ -141,6 +141,7 @@ struct st21nfca_se_info { se_io_cb_t cb; void *cb_context; + struct work_struct timeout_work; }; struct st21nfca_hci_info { From 7386f69041594f186045e01d5f2ca2f675a30916 Mon Sep 17 00:00:00 2001 From: Alexey Khoroshilov Date: Fri, 20 May 2022 01:31:26 +0300 Subject: [PATCH 0557/1842] ASoC: max98090: Move check for invalid values before casting in max98090_put_enab_tlv() [ Upstream commit f7a344468105ef8c54086dfdc800e6f5a8417d3e ] Validation of signed input should be done before casting to unsigned int. Found by Linux Verification Center (linuxtesting.org) with SVACE. 
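
The general C pitfall being fixed: ucontrol->value.integer.value[] is signed, and assigning it to an unsigned variable before the range check turns the "sel < 0" test into dead code. Keep the value signed until it has been validated. A minimal illustrative sketch:

  #include <linux/errno.h>

  static int example_put(long requested, int max)
  {
          int sel_unchecked = requested;  /* keep it signed for validation */
          unsigned int sel;

          if (sel_unchecked < 0 || sel_unchecked > max)
                  return -EINVAL;

          sel = sel_unchecked;            /* now known to be in range */
          /* ... apply sel ... */
          return 0;
  }
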
Signed-off-by: Alexey Khoroshilov Suggested-by: Mark Brown Fixes: 2fbe467bcbfc ("ASoC: max98090: Reject invalid values in custom control put()") Link: https://lore.kernel.org/r/1652999486-29653-1-git-send-email-khoroshilov@ispras.ru Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/max98090.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c index 5b6405392f08..0c73979cad4a 100644 --- a/sound/soc/codecs/max98090.c +++ b/sound/soc/codecs/max98090.c @@ -393,7 +393,8 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol, struct soc_mixer_control *mc = (struct soc_mixer_control *)kcontrol->private_value; unsigned int mask = (1 << fls(mc->max)) - 1; - unsigned int sel = ucontrol->value.integer.value[0]; + int sel_unchecked = ucontrol->value.integer.value[0]; + unsigned int sel; unsigned int val = snd_soc_component_read(component, mc->reg); unsigned int *select; @@ -413,8 +414,9 @@ static int max98090_put_enab_tlv(struct snd_kcontrol *kcontrol, val = (val >> mc->shift) & mask; - if (sel < 0 || sel > mc->max) + if (sel_unchecked < 0 || sel_unchecked > mc->max) return -EINVAL; + sel = sel_unchecked; *select = sel; From 5c2b34d072c4de6701134d49a41dfceb7115969f Mon Sep 17 00:00:00 2001 From: "Gustavo A. R. Silva" Date: Wed, 6 Oct 2021 13:09:44 -0500 Subject: [PATCH 0558/1842] net: stmmac: selftests: Use kcalloc() instead of kzalloc() [ Upstream commit 36371876e000012ae4440fcf3097c2f0ed0f83e7 ] Use 2-factor multiplication argument form kcalloc() instead of kzalloc(). Link: https://github.com/KSPP/linux/issues/162 Signed-off-by: Gustavo A. R. Silva Link: https://lore.kernel.org/r/20211006180944.GA913477@embeddedor Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c index 0462dcc93e53..e649a3e6a529 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c @@ -1104,13 +1104,13 @@ static int stmmac_test_rxp(struct stmmac_priv *priv) goto cleanup_sel; } - actions = kzalloc(nk * sizeof(*actions), GFP_KERNEL); + actions = kcalloc(nk, sizeof(*actions), GFP_KERNEL); if (!actions) { ret = -ENOMEM; goto cleanup_exts; } - act = kzalloc(nk * sizeof(*act), GFP_KERNEL); + act = kcalloc(nk, sizeof(*act), GFP_KERNEL); if (!act) { ret = -ENOMEM; goto cleanup_actions; From deb16df5254d028fb3da6eb83e2210612f11adab Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Wed, 18 May 2022 17:43:05 -0700 Subject: [PATCH 0559/1842] net: stmmac: fix out-of-bounds access in a selftest MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit fe5c5fc145edcf98a759b895f52b646730eeb7be ] GCC 12 points out that struct tc_action is smaller than struct tcf_action: drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c: In function ‘stmmac_test_rxp’: drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c:1132:21: warning: array subscript ‘struct tcf_gact[0]’ is partly outside array bounds of ‘unsigned char[272]’ [-Warray-bounds] 1132 | gact->tcf_action = TC_ACT_SHOT; | ^~ Fixes: ccfc639a94f2 ("net: stmmac: selftests: Add a selftest for Flexible RX Parser") Link: https://lore.kernel.org/r/20220519004305.2109708-1-kuba@kernel.org Signed-off-by: Jakub Kicinski 
Signed-off-by: Sasha Levin --- .../net/ethernet/stmicro/stmmac/stmmac_selftests.c | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c index e649a3e6a529..dd5c4ef92ef3 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c @@ -1084,8 +1084,9 @@ static int stmmac_test_rxp(struct stmmac_priv *priv) unsigned char addr[ETH_ALEN] = {0xde, 0xad, 0xbe, 0xef, 0x00, 0x00}; struct tc_cls_u32_offload cls_u32 = { }; struct stmmac_packet_attrs attr = { }; - struct tc_action **actions, *act; + struct tc_action **actions; struct tc_u32_sel *sel; + struct tcf_gact *gact; struct tcf_exts *exts; int ret, i, nk = 1; @@ -1110,8 +1111,8 @@ static int stmmac_test_rxp(struct stmmac_priv *priv) goto cleanup_exts; } - act = kcalloc(nk, sizeof(*act), GFP_KERNEL); - if (!act) { + gact = kcalloc(nk, sizeof(*gact), GFP_KERNEL); + if (!gact) { ret = -ENOMEM; goto cleanup_actions; } @@ -1126,9 +1127,7 @@ static int stmmac_test_rxp(struct stmmac_priv *priv) exts->nr_actions = nk; exts->actions = actions; for (i = 0; i < nk; i++) { - struct tcf_gact *gact = to_gact(&act[i]); - - actions[i] = &act[i]; + actions[i] = (struct tc_action *)&gact[i]; gact->tcf_action = TC_ACT_SHOT; } @@ -1152,7 +1151,7 @@ static int stmmac_test_rxp(struct stmmac_priv *priv) stmmac_tc_setup_cls_u32(priv, priv, &cls_u32); cleanup_act: - kfree(act); + kfree(gact); cleanup_actions: kfree(actions); cleanup_exts: From 541224201e1d4c2baf0d298a46e4a7cae04e03c3 Mon Sep 17 00:00:00 2001 From: Yongzhi Liu Date: Thu, 19 May 2022 05:09:48 -0700 Subject: [PATCH 0560/1842] hv_netvsc: Fix potential dereference of NULL pointer [ Upstream commit eb4c0788964730d12e8dd520bd8f5217ca48321c ] The return value of netvsc_devinfo_get() needs to be checked to avoid use of NULL pointer in case of an allocation failure. Fixes: 0efeea5fb153 ("hv_netvsc: Add the support of hibernation") Signed-off-by: Yongzhi Liu Reviewed-by: Haiyang Zhang Link: https://lore.kernel.org/r/1652962188-129281-1-git-send-email-lyz_cs@pku.edu.cn Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/hyperv/netvsc_drv.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index e3676386d0ee..18484370da0d 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -2629,7 +2629,10 @@ static int netvsc_suspend(struct hv_device *dev) /* Save the current config info */ ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev); - + if (!ndev_ctx->saved_netvsc_dev_info) { + ret = -ENOMEM; + goto out; + } ret = netvsc_detach(net, nvdev); out: rtnl_unlock(); From 5b4826657d36c218e9f08e8d3223b0edce3de88f Mon Sep 17 00:00:00 2001 From: David Howells Date: Sat, 21 May 2022 09:03:04 +0100 Subject: [PATCH 0561/1842] rxrpc: Fix listen() setting the bar too high for the prealloc rings [ Upstream commit 88e22159750b0d55793302eeed8ee603f5c1a95c ] AF_RXRPC's listen() handler lets you set the backlog up to 32 (if you bump up the sysctl), but whilst the preallocation circular buffers have 32 slots in them, one of them has to be a dead slot because we're using CIRC_CNT(). 
This means that listen(rxrpc_sock, 32) will cause an oops when the socket is closed because rxrpc_service_prealloc_one() allocated one too many calls and rxrpc_discard_prealloc() won't then be able to get rid of them because it'll think the ring is empty. rxrpc_release_calls_on_socket() then tries to abort them, but oopses because call->peer isn't yet set. Fix this by setting the maximum backlog to RXRPC_BACKLOG_MAX - 1 to match the ring capacity. BUG: kernel NULL pointer dereference, address: 0000000000000086 ... RIP: 0010:rxrpc_send_abort_packet+0x73/0x240 [rxrpc] Call Trace: ? __wake_up_common_lock+0x7a/0x90 ? rxrpc_notify_socket+0x8e/0x140 [rxrpc] ? rxrpc_abort_call+0x4c/0x60 [rxrpc] rxrpc_release_calls_on_socket+0x107/0x1a0 [rxrpc] rxrpc_release+0xc9/0x1c0 [rxrpc] __sock_release+0x37/0xa0 sock_close+0x11/0x20 __fput+0x89/0x240 task_work_run+0x59/0x90 do_exit+0x319/0xaa0 Fixes: 00e907127e6f ("rxrpc: Preallocate peers, conns and calls for incoming service requests") Reported-by: Marc Dionne Signed-off-by: David Howells cc: linux-afs@lists.infradead.org Link: https://lists.infradead.org/pipermail/linux-afs/2022-March/005079.html Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/rxrpc/sysctl.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c index 540351d6a5f4..555e0910786b 100644 --- a/net/rxrpc/sysctl.c +++ b/net/rxrpc/sysctl.c @@ -12,7 +12,7 @@ static struct ctl_table_header *rxrpc_sysctl_reg_table; static const unsigned int four = 4; -static const unsigned int thirtytwo = 32; +static const unsigned int max_backlog = RXRPC_BACKLOG_MAX - 1; static const unsigned int n_65535 = 65535; static const unsigned int n_max_acks = RXRPC_RXTX_BUFF_SIZE - 1; static const unsigned long one_jiffy = 1; @@ -89,7 +89,7 @@ static struct ctl_table rxrpc_sysctl_table[] = { .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = (void *)&four, - .extra2 = (void *)&thirtytwo, + .extra2 = (void *)&max_backlog, }, { .procname = "rx_window_size", From 4f1c34ee6057f6c30e0e010f90be8bf439a3bcb9 Mon Sep 17 00:00:00 2001 From: David Howells Date: Sat, 21 May 2022 09:03:11 +0100 Subject: [PATCH 0562/1842] rxrpc: Don't try to resend the request if we're receiving the reply [ Upstream commit 114af61f88fbe34d641b13922d098ffec4c1be1b ] rxrpc has a timer to trigger resending of unacked data packets in a call. This is not cancelled when a client call switches to the receive phase on the basis that most calls don't last long enough for it to ever expire. However, if it *does* expire after we've started to receive the reply, we shouldn't then go into trying to retransmit or pinging the server to find out if an ack got lost. Fix this by skipping the resend code if we're into receiving the reply to a client call. Fixes: 17926a79320a ("[AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both") Signed-off-by: David Howells cc: linux-afs@lists.infradead.org Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/rxrpc/call_event.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c index e426f6831aab..f8ecad2b730e 100644 --- a/net/rxrpc/call_event.c +++ b/net/rxrpc/call_event.c @@ -406,7 +406,8 @@ void rxrpc_process_call(struct work_struct *work) goto recheck_state; } - if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events)) { + if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events) && + call->state != RXRPC_CALL_CLIENT_RECV_REPLY) { rxrpc_resend(call, now); goto recheck_state; } From 573de88fc1077c141d11d047c9a8e9ad20c2533f Mon Sep 17 00:00:00 2001 From: David Howells Date: Sat, 21 May 2022 09:03:18 +0100 Subject: [PATCH 0563/1842] rxrpc: Fix overlapping ACK accounting [ Upstream commit 8940ba3cfe4841928777fd45eaa92051522c7f0c ] Fix accidental overlapping of Rx-phase ACK accounting with Tx-phase ACK accounting through variables shared between the two. call->acks_* members refer to ACKs received in the Tx phase and call->ackr_* members to ACKs sent/to be sent during the Rx phase. Fixes: 1a2391c30c0b ("rxrpc: Fix detection of out of order acks") Signed-off-by: David Howells cc: Jeffrey Altman cc: Marc Dionne cc: linux-afs@lists.infradead.org Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/rxrpc/ar-internal.h | 7 ++++--- net/rxrpc/input.c | 16 ++++++++-------- 2 files changed, 12 insertions(+), 11 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 3bad9f5f9102..2fc93467780d 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -659,10 +659,9 @@ struct rxrpc_call { spinlock_t input_lock; /* Lock for packet input to this call */ - /* receive-phase ACK management */ + /* Receive-phase ACK management (ACKs we send). */ u8 ackr_reason; /* reason to ACK */ rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */ - rxrpc_serial_t ackr_first_seq; /* first sequence number received */ rxrpc_seq_t ackr_prev_seq; /* previous sequence number received */ rxrpc_seq_t ackr_consumed; /* Highest packet shown consumed */ rxrpc_seq_t ackr_seen; /* Highest packet shown seen */ @@ -675,8 +674,10 @@ struct rxrpc_call { #define RXRPC_CALL_RTT_AVAIL_MASK 0xf #define RXRPC_CALL_RTT_PEND_SHIFT 8 - /* transmission-phase ACK management */ + /* Transmission-phase ACK management (ACKs we've received). 
*/ ktime_t acks_latest_ts; /* Timestamp of latest ACK received */ + rxrpc_seq_t acks_first_seq; /* first sequence number received */ + rxrpc_seq_t acks_prev_seq; /* previous sequence number received */ rxrpc_seq_t acks_lowest_nak; /* Lowest NACK in the buffer (or ==tx_hard_ack) */ rxrpc_seq_t acks_lost_top; /* tx_top at the time lost-ack ping sent */ rxrpc_serial_t acks_lost_ping; /* Serial number of probe ACK */ diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index dc201363f2c4..f11673cda217 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -812,7 +812,7 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call, u8 *acks, static bool rxrpc_is_ack_valid(struct rxrpc_call *call, rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt) { - rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq); + rxrpc_seq_t base = READ_ONCE(call->acks_first_seq); if (after(first_pkt, base)) return true; /* The window advanced */ @@ -820,7 +820,7 @@ static bool rxrpc_is_ack_valid(struct rxrpc_call *call, if (before(first_pkt, base)) return false; /* firstPacket regressed */ - if (after_eq(prev_pkt, call->ackr_prev_seq)) + if (after_eq(prev_pkt, call->acks_prev_seq)) return true; /* previousPacket hasn't regressed. */ /* Some rx implementations put a serial number in previousPacket. */ @@ -906,8 +906,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) /* Discard any out-of-order or duplicate ACKs (outside lock). */ if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, - first_soft_ack, call->ackr_first_seq, - prev_pkt, call->ackr_prev_seq); + first_soft_ack, call->acks_first_seq, + prev_pkt, call->acks_prev_seq); return; } @@ -922,14 +922,14 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) /* Discard any out-of-order or duplicate ACKs (inside lock). */ if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, - first_soft_ack, call->ackr_first_seq, - prev_pkt, call->ackr_prev_seq); + first_soft_ack, call->acks_first_seq, + prev_pkt, call->acks_prev_seq); goto out; } call->acks_latest_ts = skb->tstamp; - call->ackr_first_seq = first_soft_ack; - call->ackr_prev_seq = prev_pkt; + call->acks_first_seq = first_soft_ack; + call->acks_prev_seq = prev_pkt; /* Parse rwind and mtu sizes if provided. */ if (buf.info.rxMTU) From 3eef677a25c72a070998cf2f283e6beaa9dbc7e7 Mon Sep 17 00:00:00 2001 From: David Howells Date: Sat, 21 May 2022 09:03:24 +0100 Subject: [PATCH 0564/1842] rxrpc: Don't let ack.previousPacket regress [ Upstream commit 81524b6312535897707f2942695da1d359a5e56b ] The previousPacket field in the rx ACK packet should never go backwards - it's now the highest DATA sequence number received, not the last on received (it used to be used for out of sequence detection). Fixes: 248f219cb8bc ("rxrpc: Rewrite the data and ack handling code") Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/rxrpc/ar-internal.h | 4 ++-- net/rxrpc/input.c | 4 +++- net/rxrpc/output.c | 2 +- 3 files changed, 6 insertions(+), 4 deletions(-) diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index 2fc93467780d..ed88552c237c 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -662,7 +662,7 @@ struct rxrpc_call { /* Receive-phase ACK management (ACKs we send). 
*/ u8 ackr_reason; /* reason to ACK */ rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */ - rxrpc_seq_t ackr_prev_seq; /* previous sequence number received */ + rxrpc_seq_t ackr_highest_seq; /* Higest sequence number received */ rxrpc_seq_t ackr_consumed; /* Highest packet shown consumed */ rxrpc_seq_t ackr_seen; /* Highest packet shown seen */ @@ -677,7 +677,7 @@ struct rxrpc_call { /* Transmission-phase ACK management (ACKs we've received). */ ktime_t acks_latest_ts; /* Timestamp of latest ACK received */ rxrpc_seq_t acks_first_seq; /* first sequence number received */ - rxrpc_seq_t acks_prev_seq; /* previous sequence number received */ + rxrpc_seq_t acks_prev_seq; /* Highest previousPacket received */ rxrpc_seq_t acks_lowest_nak; /* Lowest NACK in the buffer (or ==tx_hard_ack) */ rxrpc_seq_t acks_lost_top; /* tx_top at the time lost-ack ping sent */ rxrpc_serial_t acks_lost_ping; /* Serial number of probe ACK */ diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index f11673cda217..2e61545ad8ca 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -453,7 +453,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) !rxrpc_receiving_reply(call)) goto unlock; - call->ackr_prev_seq = seq0; hard_ack = READ_ONCE(call->rx_hard_ack); nr_subpackets = sp->nr_subpackets; @@ -534,6 +533,9 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) ack_serial = serial; } + if (after(seq0, call->ackr_highest_seq)) + call->ackr_highest_seq = seq0; + /* Queue the packet. We use a couple of memory barriers here as need * to make sure that rx_top is perceived to be set after the buffer * pointer and that the buffer pointer is set after the annotation and diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index a45c83f22236..46aae9b7006f 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -89,7 +89,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, pkt->ack.bufferSpace = htons(8); pkt->ack.maxSkew = htons(0); pkt->ack.firstPacket = htonl(hard_ack + 1); - pkt->ack.previousPacket = htonl(call->ackr_prev_seq); + pkt->ack.previousPacket = htonl(call->ackr_highest_seq); pkt->ack.serial = htonl(serial); pkt->ack.reason = reason; pkt->ack.nAcks = top - hard_ack; From 4790963ef433e2e248c9b5b149fdaa59aff359e5 Mon Sep 17 00:00:00 2001 From: David Howells Date: Sat, 21 May 2022 09:03:31 +0100 Subject: [PATCH 0565/1842] rxrpc: Fix decision on when to generate an IDLE ACK [ Upstream commit 9a3dedcf18096e8f7f22b8777d78c4acfdea1651 ] Fix the decision on when to generate an IDLE ACK by keeping a count of the number of packets we've received, but not yet soft-ACK'd, and the number of packets we've processed, but not yet hard-ACK'd, rather than trying to keep track of which DATA sequence numbers correspond to those points. We then generate an ACK when either counter exceeds 2. The counters are both cleared when we transcribe the information into any sort of ACK packet for transmission. IDLE and DELAY ACKs are skipped if both counters are 0 (ie. no change). Fixes: 805b21b929e2 ("rxrpc: Send an ACK after every few DATA packets we receive") Signed-off-by: David Howells cc: Marc Dionne cc: linux-afs@lists.infradead.org Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- include/trace/events/rxrpc.h | 2 +- net/rxrpc/ar-internal.h | 4 ++-- net/rxrpc/input.c | 11 +++++++++-- net/rxrpc/output.c | 18 +++++++++++------- net/rxrpc/recvmsg.c | 8 +++----- 5 files changed, 26 insertions(+), 17 deletions(-) diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index 4a3ab0ed6e06..1c714336b863 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -1509,7 +1509,7 @@ TRACE_EVENT(rxrpc_call_reset, __entry->call_serial = call->rx_serial; __entry->conn_serial = call->conn->hi_serial; __entry->tx_seq = call->tx_hard_ack; - __entry->rx_seq = call->ackr_seen; + __entry->rx_seq = call->rx_hard_ack; ), TP_printk("c=%08x %08x:%08x r=%08x/%08x tx=%08x rx=%08x", diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h index ed88552c237c..ccb65412b670 100644 --- a/net/rxrpc/ar-internal.h +++ b/net/rxrpc/ar-internal.h @@ -663,8 +663,8 @@ struct rxrpc_call { u8 ackr_reason; /* reason to ACK */ rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */ rxrpc_seq_t ackr_highest_seq; /* Higest sequence number received */ - rxrpc_seq_t ackr_consumed; /* Highest packet shown consumed */ - rxrpc_seq_t ackr_seen; /* Highest packet shown seen */ + atomic_t ackr_nr_unacked; /* Number of unacked packets */ + atomic_t ackr_nr_consumed; /* Number of packets needing hard ACK */ /* RTT management */ rxrpc_serial_t rtt_serial[4]; /* Serial number of DATA or PING sent */ diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c index 2e61545ad8ca..1145cb14d86f 100644 --- a/net/rxrpc/input.c +++ b/net/rxrpc/input.c @@ -412,8 +412,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) { struct rxrpc_skb_priv *sp = rxrpc_skb(skb); enum rxrpc_call_state state; - unsigned int j, nr_subpackets; - rxrpc_serial_t serial = sp->hdr.serial, ack_serial = 0; + unsigned int j, nr_subpackets, nr_unacked = 0; + rxrpc_serial_t serial = sp->hdr.serial, ack_serial = serial; rxrpc_seq_t seq0 = sp->hdr.seq, hard_ack; bool immediate_ack = false, jumbo_bad = false; u8 ack = 0; @@ -569,6 +569,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) sp = NULL; } + nr_unacked++; + if (last) { set_bit(RXRPC_CALL_RX_LAST, &call->flags); if (!ack) { @@ -588,9 +590,14 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) } call->rx_expect_next = seq + 1; } + if (!ack) + ack_serial = serial; } ack: + if (atomic_add_return(nr_unacked, &call->ackr_nr_unacked) > 2 && !ack) + ack = RXRPC_ACK_IDLE; + if (ack) rxrpc_propose_ACK(call, ack, ack_serial, immediate_ack, true, diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 46aae9b7006f..9683617db704 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -74,11 +74,18 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, u8 reason) { rxrpc_serial_t serial; + unsigned int tmp; rxrpc_seq_t hard_ack, top, seq; int ix; u32 mtu, jmax; u8 *ackp = pkt->acks; + tmp = atomic_xchg(&call->ackr_nr_unacked, 0); + tmp |= atomic_xchg(&call->ackr_nr_consumed, 0); + if (!tmp && (reason == RXRPC_ACK_DELAY || + reason == RXRPC_ACK_IDLE)) + return 0; + /* Barrier against rxrpc_input_data(). 
*/ serial = call->ackr_serial; hard_ack = READ_ONCE(call->rx_hard_ack); @@ -223,6 +230,10 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping, n = rxrpc_fill_out_ack(conn, call, pkt, &hard_ack, &top, reason); spin_unlock_bh(&call->lock); + if (n == 0) { + kfree(pkt); + return 0; + } iov[0].iov_base = pkt; iov[0].iov_len = sizeof(pkt->whdr) + sizeof(pkt->ack) + n; @@ -259,13 +270,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping, ntohl(pkt->ack.serial), false, true, rxrpc_propose_ack_retry_tx); - } else { - spin_lock_bh(&call->lock); - if (after(hard_ack, call->ackr_consumed)) - call->ackr_consumed = hard_ack; - if (after(top, call->ackr_seen)) - call->ackr_seen = top; - spin_unlock_bh(&call->lock); } rxrpc_set_keepalive(call); diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 2c842851d72e..787826773937 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -260,11 +260,9 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) rxrpc_end_rx_phase(call, serial); } else { /* Check to see if there's an ACK that needs sending. */ - if (after_eq(hard_ack, call->ackr_consumed + 2) || - after_eq(top, call->ackr_seen + 2) || - (hard_ack == top && after(hard_ack, call->ackr_consumed))) - rxrpc_propose_ACK(call, RXRPC_ACK_DELAY, serial, - true, true, + if (atomic_inc_return(&call->ackr_nr_consumed) > 2) + rxrpc_propose_ACK(call, RXRPC_ACK_IDLE, serial, + true, false, rxrpc_propose_ack_rotate_rx); if (call->ackr_reason && call->ackr_reason != RXRPC_ACK_DELAY) rxrpc_send_ack_packet(call, false, NULL); From dc7753d60097f8fd2c75739b6e47d8140d1bb203 Mon Sep 17 00:00:00 2001 From: "Gustavo A. R. Silva" Date: Tue, 7 Dec 2021 22:03:11 -0600 Subject: [PATCH 0566/1842] net: huawei: hinic: Use devm_kcalloc() instead of devm_kzalloc() [ Upstream commit 9d922f5df53844228b9f7c62f2593f4f06c0b69b ] Use 2-factor multiplication argument form devm_kcalloc() instead of devm_kzalloc(). Link: https://github.com/KSPP/linux/issues/162 Signed-off-by: Gustavo A. R. 
Silva Reviewed-by: Kees Cook Link: https://lore.kernel.org/r/20211208040311.GA169838@embeddedor Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- .../ethernet/huawei/hinic/hinic_hw_api_cmd.c | 5 ++-- .../net/ethernet/huawei/hinic/hinic_hw_cmdq.c | 10 ++++---- .../net/ethernet/huawei/hinic/hinic_hw_dev.c | 5 ++-- .../net/ethernet/huawei/hinic/hinic_hw_eqs.c | 9 ++++---- .../net/ethernet/huawei/hinic/hinic_hw_wq.c | 23 +++++++++---------- .../net/ethernet/huawei/hinic/hinic_main.c | 10 ++++---- drivers/net/ethernet/huawei/hinic/hinic_tx.c | 9 ++++---- 7 files changed, 31 insertions(+), 40 deletions(-) diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c index 4e4029d5c8e1..9553d280ec1b 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c @@ -818,7 +818,6 @@ static int api_chain_init(struct hinic_api_cmd_chain *chain, { struct hinic_hwif *hwif = attr->hwif; struct pci_dev *pdev = hwif->pdev; - size_t cell_ctxt_size; chain->hwif = hwif; chain->chain_type = attr->chain_type; @@ -830,8 +829,8 @@ static int api_chain_init(struct hinic_api_cmd_chain *chain, sema_init(&chain->sem, 1); - cell_ctxt_size = chain->num_cells * sizeof(*chain->cell_ctxt); - chain->cell_ctxt = devm_kzalloc(&pdev->dev, cell_ctxt_size, GFP_KERNEL); + chain->cell_ctxt = devm_kcalloc(&pdev->dev, chain->num_cells, + sizeof(*chain->cell_ctxt), GFP_KERNEL); if (!chain->cell_ctxt) return -ENOMEM; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c index 5a6bbee819cd..21b8235952d3 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c @@ -796,11 +796,10 @@ static int init_cmdqs_ctxt(struct hinic_hwdev *hwdev, struct hinic_cmdq_ctxt *cmdq_ctxts; struct pci_dev *pdev = hwif->pdev; struct hinic_pfhwdev *pfhwdev; - size_t cmdq_ctxts_size; int err; - cmdq_ctxts_size = HINIC_MAX_CMDQ_TYPES * sizeof(*cmdq_ctxts); - cmdq_ctxts = devm_kzalloc(&pdev->dev, cmdq_ctxts_size, GFP_KERNEL); + cmdq_ctxts = devm_kcalloc(&pdev->dev, HINIC_MAX_CMDQ_TYPES, + sizeof(*cmdq_ctxts), GFP_KERNEL); if (!cmdq_ctxts) return -ENOMEM; @@ -884,7 +883,6 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif, struct hinic_func_to_io *func_to_io = cmdqs_to_func_to_io(cmdqs); struct pci_dev *pdev = hwif->pdev; struct hinic_hwdev *hwdev; - size_t saved_wqs_size; u16 max_wqe_size; int err; @@ -895,8 +893,8 @@ int hinic_init_cmdqs(struct hinic_cmdqs *cmdqs, struct hinic_hwif *hwif, if (!cmdqs->cmdq_buf_pool) return -ENOMEM; - saved_wqs_size = HINIC_MAX_CMDQ_TYPES * sizeof(struct hinic_wq); - cmdqs->saved_wqs = devm_kzalloc(&pdev->dev, saved_wqs_size, GFP_KERNEL); + cmdqs->saved_wqs = devm_kcalloc(&pdev->dev, HINIC_MAX_CMDQ_TYPES, + sizeof(*cmdqs->saved_wqs), GFP_KERNEL); if (!cmdqs->saved_wqs) { err = -ENOMEM; goto err_saved_wqs; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c index 0c74f6674634..799b85c88eff 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c @@ -162,7 +162,6 @@ static int init_msix(struct hinic_hwdev *hwdev) struct hinic_hwif *hwif = hwdev->hwif; struct pci_dev *pdev = hwif->pdev; int nr_irqs, num_aeqs, num_ceqs; - size_t msix_entries_size; int i, err; num_aeqs = HINIC_HWIF_NUM_AEQS(hwif); @@ -171,8 +170,8 @@ static int 
init_msix(struct hinic_hwdev *hwdev) if (nr_irqs > HINIC_HWIF_NUM_IRQS(hwif)) nr_irqs = HINIC_HWIF_NUM_IRQS(hwif); - msix_entries_size = nr_irqs * sizeof(*hwdev->msix_entries); - hwdev->msix_entries = devm_kzalloc(&pdev->dev, msix_entries_size, + hwdev->msix_entries = devm_kcalloc(&pdev->dev, nr_irqs, + sizeof(*hwdev->msix_entries), GFP_KERNEL); if (!hwdev->msix_entries) return -ENOMEM; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c index 19942fef99d9..7396158df64f 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c @@ -631,16 +631,15 @@ static int alloc_eq_pages(struct hinic_eq *eq) struct hinic_hwif *hwif = eq->hwif; struct pci_dev *pdev = hwif->pdev; u32 init_val, addr, val; - size_t addr_size; int err, pg; - addr_size = eq->num_pages * sizeof(*eq->dma_addr); - eq->dma_addr = devm_kzalloc(&pdev->dev, addr_size, GFP_KERNEL); + eq->dma_addr = devm_kcalloc(&pdev->dev, eq->num_pages, + sizeof(*eq->dma_addr), GFP_KERNEL); if (!eq->dma_addr) return -ENOMEM; - addr_size = eq->num_pages * sizeof(*eq->virt_addr); - eq->virt_addr = devm_kzalloc(&pdev->dev, addr_size, GFP_KERNEL); + eq->virt_addr = devm_kcalloc(&pdev->dev, eq->num_pages, + sizeof(*eq->virt_addr), GFP_KERNEL); if (!eq->virt_addr) { err = -ENOMEM; goto err_virt_addr_alloc; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c index f04ac00e3e70..1932e07e97e0 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c @@ -192,20 +192,20 @@ static int alloc_page_arrays(struct hinic_wqs *wqs) { struct hinic_hwif *hwif = wqs->hwif; struct pci_dev *pdev = hwif->pdev; - size_t size; - size = wqs->num_pages * sizeof(*wqs->page_paddr); - wqs->page_paddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); + wqs->page_paddr = devm_kcalloc(&pdev->dev, wqs->num_pages, + sizeof(*wqs->page_paddr), GFP_KERNEL); if (!wqs->page_paddr) return -ENOMEM; - size = wqs->num_pages * sizeof(*wqs->page_vaddr); - wqs->page_vaddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); + wqs->page_vaddr = devm_kcalloc(&pdev->dev, wqs->num_pages, + sizeof(*wqs->page_vaddr), GFP_KERNEL); if (!wqs->page_vaddr) goto err_page_vaddr; - size = wqs->num_pages * sizeof(*wqs->shadow_page_vaddr); - wqs->shadow_page_vaddr = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); + wqs->shadow_page_vaddr = devm_kcalloc(&pdev->dev, wqs->num_pages, + sizeof(*wqs->shadow_page_vaddr), + GFP_KERNEL); if (!wqs->shadow_page_vaddr) goto err_page_shadow_vaddr; @@ -378,15 +378,14 @@ static int alloc_wqes_shadow(struct hinic_wq *wq) { struct hinic_hwif *hwif = wq->hwif; struct pci_dev *pdev = hwif->pdev; - size_t size; - size = wq->num_q_pages * wq->max_wqe_size; - wq->shadow_wqe = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); + wq->shadow_wqe = devm_kcalloc(&pdev->dev, wq->num_q_pages, + wq->max_wqe_size, GFP_KERNEL); if (!wq->shadow_wqe) return -ENOMEM; - size = wq->num_q_pages * sizeof(wq->prod_idx); - wq->shadow_idx = devm_kzalloc(&pdev->dev, size, GFP_KERNEL); + wq->shadow_idx = devm_kcalloc(&pdev->dev, wq->num_q_pages, + sizeof(wq->prod_idx), GFP_KERNEL); if (!wq->shadow_idx) goto err_shadow_idx; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c index 350225bbe0be..ace949fe6233 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_main.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c @@ -144,13 +144,12 @@ 
static int create_txqs(struct hinic_dev *nic_dev) { int err, i, j, num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev); struct net_device *netdev = nic_dev->netdev; - size_t txq_size; if (nic_dev->txqs) return -EINVAL; - txq_size = num_txqs * sizeof(*nic_dev->txqs); - nic_dev->txqs = devm_kzalloc(&netdev->dev, txq_size, GFP_KERNEL); + nic_dev->txqs = devm_kcalloc(&netdev->dev, num_txqs, + sizeof(*nic_dev->txqs), GFP_KERNEL); if (!nic_dev->txqs) return -ENOMEM; @@ -242,13 +241,12 @@ static int create_rxqs(struct hinic_dev *nic_dev) { int err, i, j, num_rxqs = hinic_hwdev_num_qps(nic_dev->hwdev); struct net_device *netdev = nic_dev->netdev; - size_t rxq_size; if (nic_dev->rxqs) return -EINVAL; - rxq_size = num_rxqs * sizeof(*nic_dev->rxqs); - nic_dev->rxqs = devm_kzalloc(&netdev->dev, rxq_size, GFP_KERNEL); + nic_dev->rxqs = devm_kcalloc(&netdev->dev, num_rxqs, + sizeof(*nic_dev->rxqs), GFP_KERNEL); if (!nic_dev->rxqs) return -ENOMEM; diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c index 8da7d46363b2..3828b09bfea3 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c @@ -861,7 +861,6 @@ int hinic_init_txq(struct hinic_txq *txq, struct hinic_sq *sq, struct hinic_dev *nic_dev = netdev_priv(netdev); struct hinic_hwdev *hwdev = nic_dev->hwdev; int err, irqname_len; - size_t sges_size; txq->netdev = netdev; txq->sq = sq; @@ -870,13 +869,13 @@ int hinic_init_txq(struct hinic_txq *txq, struct hinic_sq *sq, txq->max_sges = HINIC_MAX_SQ_BUFDESCS; - sges_size = txq->max_sges * sizeof(*txq->sges); - txq->sges = devm_kzalloc(&netdev->dev, sges_size, GFP_KERNEL); + txq->sges = devm_kcalloc(&netdev->dev, txq->max_sges, + sizeof(*txq->sges), GFP_KERNEL); if (!txq->sges) return -ENOMEM; - sges_size = txq->max_sges * sizeof(*txq->free_sges); - txq->free_sges = devm_kzalloc(&netdev->dev, sges_size, GFP_KERNEL); + txq->free_sges = devm_kcalloc(&netdev->dev, txq->max_sges, + sizeof(*txq->free_sges), GFP_KERNEL); if (!txq->free_sges) { err = -ENOMEM; goto err_alloc_free_sges; From 8096e2d7c0f912751e68e653f8a017d4c20ba590 Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Sat, 21 May 2022 08:33:01 +0200 Subject: [PATCH 0567/1842] hinic: Avoid some over memory allocation [ Upstream commit 15d221d0c345b76947911a3ac91897ffe2f1cc4e ] 'prod_idx' (atomic_t) is larger than 'shadow_idx' (u16), so some memory is over-allocated. Fixes: b15a9f37be2b ("net-next/hinic: Add wq") Signed-off-by: Christophe JAILLET Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c index 1932e07e97e0..f930cd6a75f7 100644 --- a/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c +++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c @@ -385,7 +385,7 @@ static int alloc_wqes_shadow(struct hinic_wq *wq) return -ENOMEM; wq->shadow_idx = devm_kcalloc(&pdev->dev, wq->num_q_pages, - sizeof(wq->prod_idx), GFP_KERNEL); + sizeof(*wq->shadow_idx), GFP_KERNEL); if (!wq->shadow_idx) goto err_shadow_idx; From 0c5f04da02b4c6051f9f83801b38faf0ccfd51d7 Mon Sep 17 00:00:00 2001 From: liuyacan Date: Mon, 23 May 2022 12:57:07 +0800 Subject: [PATCH 0568/1842] net/smc: postpone sk_refcnt increment in connect() [ Upstream commit 75c1edf23b95a9c66923d9269d8e86e4dbde151f ] Same trigger condition as commit 86434744. 
When setsockopt runs in parallel to a connect(), and switch the socket into fallback mode. Then the sk_refcnt is incremented in smc_connect(), but its state stay in SMC_INIT (NOT SMC_ACTIVE). This cause the corresponding sk_refcnt decrement in __smc_release() will not be performed. Fixes: 86434744fedf ("net/smc: add fallback check to connect()") Signed-off-by: liuyacan Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/smc/af_smc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c index 35db3260e8d5..5d7710dd9514 100644 --- a/net/smc/af_smc.c +++ b/net/smc/af_smc.c @@ -1118,9 +1118,9 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr, if (rc && rc != -EINPROGRESS) goto out; - sock_hold(&smc->sk); /* sock put in passive closing */ if (smc->use_fallback) goto out; + sock_hold(&smc->sk); /* sock put in passive closing */ if (flags & O_NONBLOCK) { if (queue_work(smc_hs_wq, &smc->connect_work)) smc->connect_nonblock = 1; From 3960629bb584c91507fab00343ec233b9339752d Mon Sep 17 00:00:00 2001 From: Shawn Lin Date: Tue, 15 Mar 2022 17:27:06 +0800 Subject: [PATCH 0569/1842] arm64: dts: rockchip: Move drive-impedance-ohm to emmc phy on rk3399 [ Upstream commit 4246d0bab2a8685e3d4aec2cb0ef8c526689ce96 ] drive-impedance-ohm is introduced for emmc phy instead of pcie phy. Fixes: fb8b7460c995 ("arm64: dts: rockchip: Define drive-impedance-ohm for RK3399's emmc-phy.") Signed-off-by: Shawn Lin Link: https://lore.kernel.org/r/1647336426-154797-1-git-send-email-shawn.lin@rock-chips.com Signed-off-by: Heiko Stuebner Signed-off-by: Sasha Levin --- arch/arm64/boot/dts/rockchip/rk3399.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi index 52ba4d07e771..c5f3d4f8f4d2 100644 --- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi +++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi @@ -1471,6 +1471,7 @@ emmc_phy: phy@f780 { reg = <0xf780 0x24>; clocks = <&sdhci>; clock-names = "emmcclk"; + drive-impedance-ohm = <50>; #phy-cells = <0>; status = "disabled"; }; @@ -1481,7 +1482,6 @@ pcie_phy: pcie-phy { clock-names = "refclk"; #phy-cells = <1>; resets = <&cru SRST_PCIEPHY>; - drive-impedance-ohm = <50>; reset-names = "phy"; status = "disabled"; }; From ea4f1c6bb966104efbd634af789891e1b83f192a Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Sun, 20 Mar 2022 08:10:30 +0100 Subject: [PATCH 0570/1842] memory: samsung: exynos5422-dmc: Avoid some over memory allocation [ Upstream commit 56653827f0d7bc7c2d8bac0e119fd1521fa9990a ] 'dmc->counter' is a 'struct devfreq_event_dev **', so there is some over memory allocation. 'counters_size' should be computed with 'sizeof(struct devfreq_event_dev *)'. Use 'sizeof(*dmc->counter)' instead to fix it. While at it, use devm_kcalloc() instead of devm_kzalloc()+open coded multiplication. 
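
The mistake corrected here is a classic one: sizing an array of pointers with sizeof(struct foo) instead of sizeof(struct foo *). Using sizeof(*ptr) ties the element size to the variable being assigned, and devm_kcalloc() adds overflow-checked multiplication on top. An illustrative sketch, not the exynos5422-dmc code:

  #include <linux/device.h>
  #include <linux/slab.h>

  struct devfreq_event_dev;

  struct example_dmc {
          struct device *dev;
          int num_counters;
          struct devfreq_event_dev **counter;     /* array of pointers */
  };

  static int example_alloc_counters(struct example_dmc *dmc)
  {
          /*
           * sizeof(*dmc->counter) is the size of one pointer slot.
           * Writing sizeof(struct devfreq_event_dev) here would allocate
           * the full structure size per slot, the over-allocation fixed
           * above.
           */
          dmc->counter = devm_kcalloc(dmc->dev, dmc->num_counters,
                                      sizeof(*dmc->counter), GFP_KERNEL);
          if (!dmc->counter)
                  return -ENOMEM;

          return 0;
  }
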
Fixes: 6e7674c3c6df ("memory: Add DMC driver for Exynos5422") Signed-off-by: Christophe JAILLET Link: https://lore.kernel.org/r/69d7e69346986e2fdb994d4382954c932f9f0993.1647760213.git.christophe.jaillet@wanadoo.fr Signed-off-by: Krzysztof Kozlowski Signed-off-by: Sasha Levin --- drivers/memory/samsung/exynos5422-dmc.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c index 3d230f07eaf2..049a1356f7dd 100644 --- a/drivers/memory/samsung/exynos5422-dmc.c +++ b/drivers/memory/samsung/exynos5422-dmc.c @@ -1327,7 +1327,6 @@ static int exynos5_dmc_init_clks(struct exynos5_dmc *dmc) */ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc) { - int counters_size; int ret, i; dmc->num_counters = devfreq_event_get_edev_count(dmc->dev, @@ -1337,8 +1336,8 @@ static int exynos5_performance_counters_init(struct exynos5_dmc *dmc) return dmc->num_counters; } - counters_size = sizeof(struct devfreq_event_dev) * dmc->num_counters; - dmc->counter = devm_kzalloc(dmc->dev, counters_size, GFP_KERNEL); + dmc->counter = devm_kcalloc(dmc->dev, dmc->num_counters, + sizeof(*dmc->counter), GFP_KERNEL); if (!dmc->counter) return -ENOMEM; From 83653417988cfe9e226cda0c21a029bd7e0ea8d8 Mon Sep 17 00:00:00 2001 From: Andre Przywara Date: Thu, 17 Mar 2022 16:23:40 +0000 Subject: [PATCH 0571/1842] ARM: dts: suniv: F1C100: fix watchdog compatible [ Upstream commit 01a850ee61cbf0ab77dcbf26bb133fec2dd640d6 ] The F1C100 series of SoCs actually have their watchdog IP being compatible with the newer Allwinner generation, not the older one. The currently described sun4i-a10-wdt actually does not work, neither the watchdog functionality (just never fires), nor the reset part (reboot hangs). Replace the compatible string with the one used by the newer generation. Verified to work with both the watchdog and reboot functionality on a LicheePi Nano. Also add the missing interrupt line and clock source, to make it binding compliant. Fixes: 4ba16d17efdd ("ARM: dts: suniv: add initial DTSI file for F1C100s") Signed-off-by: Andre Przywara Acked-by: Guenter Roeck Signed-off-by: Jernej Skrabec Link: https://lore.kernel.org/r/20220317162349.739636-4-andre.przywara@arm.com Signed-off-by: Sasha Levin --- arch/arm/boot/dts/suniv-f1c100s.dtsi | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/suniv-f1c100s.dtsi b/arch/arm/boot/dts/suniv-f1c100s.dtsi index 6100d3b75f61..def830101448 100644 --- a/arch/arm/boot/dts/suniv-f1c100s.dtsi +++ b/arch/arm/boot/dts/suniv-f1c100s.dtsi @@ -104,8 +104,10 @@ timer@1c20c00 { wdt: watchdog@1c20ca0 { compatible = "allwinner,suniv-f1c100s-wdt", - "allwinner,sun4i-a10-wdt"; + "allwinner,sun6i-a31-wdt"; reg = <0x01c20ca0 0x20>; + interrupts = <16>; + clocks = <&osc32k>; }; uart0: serial@1c25000 { From 7cbe94d296c0be9bd3630825b509874667d28a51 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 8 Mar 2022 07:19:42 +0000 Subject: [PATCH 0572/1842] soc: qcom: smp2p: Fix missing of_node_put() in smp2p_parse_ipc [ Upstream commit 8fd3f18ea31a398ecce4a6d3804433658678b0a3 ] The device_node pointer is returned by of_parse_phandle() with refcount incremented. We should use of_node_put() on it when done. 
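This fix and the smsm one that follows apply the same reference-counting rule: of_parse_phandle() hands back the node with its refcount raised, so the caller is responsible for of_node_put() once the node is no longer needed. A minimal sketch of the pattern with a hypothetical helper (demo_get_syscon_regmap() is not the smp2p code):

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>

/* Resolve the "syscon" phandle of @np into a regmap. */
static struct regmap *demo_get_syscon_regmap(struct device_node *np)
{
        struct device_node *syscon;
        struct regmap *map;

        syscon = of_parse_phandle(np, "syscon", 0);     /* takes a reference */
        if (!syscon)
                return ERR_PTR(-ENODEV);

        map = syscon_node_to_regmap(syscon);
        of_node_put(syscon);            /* done with the node, drop the reference */

        return map;                     /* regmap or ERR_PTR(); the caller checks */
}

The one-line additions below simply insert the missing of_node_put() right after the node has been consumed.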
Fixes: 50e99641413e ("soc: qcom: smp2p: Qualcomm Shared Memory Point to Point") Signed-off-by: Miaoqian Lin Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220308071942.22942-1-linmq006@gmail.com Signed-off-by: Sasha Levin --- drivers/soc/qcom/smp2p.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c index a9709aae54ab..fb76c8bc3c64 100644 --- a/drivers/soc/qcom/smp2p.c +++ b/drivers/soc/qcom/smp2p.c @@ -420,6 +420,7 @@ static int smp2p_parse_ipc(struct qcom_smp2p *smp2p) } smp2p->ipc_regmap = syscon_node_to_regmap(syscon); + of_node_put(syscon); if (IS_ERR(smp2p->ipc_regmap)) return PTR_ERR(smp2p->ipc_regmap); From a409d0b1f92906e5df0f2ff02f6fe8c319f24869 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 8 Mar 2022 07:36:48 +0000 Subject: [PATCH 0573/1842] soc: qcom: smsm: Fix missing of_node_put() in smsm_parse_ipc [ Upstream commit aad66a3c78da668f4506356c2fdb70b7a19ecc76 ] The device_node pointer is returned by of_parse_phandle() with refcount incremented. We should use of_node_put() on it when done. Fixes: c97c4090ff72 ("soc: qcom: smsm: Add driver for Qualcomm SMSM") Signed-off-by: Miaoqian Lin Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220308073648.24634-1-linmq006@gmail.com Signed-off-by: Sasha Levin --- drivers/soc/qcom/smsm.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c index c428d0f78816..6564f15c5319 100644 --- a/drivers/soc/qcom/smsm.c +++ b/drivers/soc/qcom/smsm.c @@ -359,6 +359,7 @@ static int smsm_parse_ipc(struct qcom_smsm *smsm, unsigned host_id) return 0; host->ipc_regmap = syscon_node_to_regmap(syscon); + of_node_put(syscon); if (IS_ERR(host->ipc_regmap)) return PTR_ERR(host->ipc_regmap); From 266f5cf6928a79a16cb8bc37f7237ea401247d2b Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Tue, 15 Mar 2022 09:58:29 +0300 Subject: [PATCH 0574/1842] PCI: cadence: Fix find_first_zero_bit() limit [ Upstream commit 0aa3a0937feeb91a0e4e438c3c063b749b194192 ] The ep->ob_region_map bitmap is a long and it has BITS_PER_LONG bits. Link: https://lore.kernel.org/r/20220315065829.GA13572@kili Fixes: 37dddf14f1ae ("PCI: cadence: Add EndPoint Controller driver for Cadence PCIe controller") Signed-off-by: Dan Carpenter Signed-off-by: Lorenzo Pieralisi Signed-off-by: Sasha Levin --- drivers/pci/controller/cadence/pcie-cadence-ep.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c index 1af14474abcf..4c5e6349d78c 100644 --- a/drivers/pci/controller/cadence/pcie-cadence-ep.c +++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c @@ -154,8 +154,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, phys_addr_t addr, struct cdns_pcie *pcie = &ep->pcie; u32 r; - r = find_first_zero_bit(&ep->ob_region_map, - sizeof(ep->ob_region_map) * BITS_PER_LONG); + r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG); if (r >= ep->max_regions - 1) { dev_err(&epc->dev, "no free outbound region\n"); return -EINVAL; From 92b7cab3076d0cfbf57c2d8270ff66e7d7c675fe Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Tue, 15 Mar 2022 09:59:44 +0300 Subject: [PATCH 0575/1842] PCI: rockchip: Fix find_first_zero_bit() limit [ Upstream commit 096950e230b8d83645c7cf408b9f399f58c08b96 ] The ep->ob_region_map bitmap is a long and it has BITS_PER_LONG bits. 
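This fix and the cadence one above correct the same limit passed to find_first_zero_bit(): the limit is a number of bits, and for a bitmap stored in a single unsigned long that is simply BITS_PER_LONG; multiplying by sizeof() counts bytes times bits. A short sketch with a hypothetical structure (demo_ep is not the driver's type):

#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/types.h>

struct demo_ep {
        unsigned long ob_region_map;    /* one bit per outbound region */
        u32 max_regions;
};

/* Claim a free outbound region index, or return -EINVAL if none is left. */
static int demo_claim_region(struct demo_ep *ep)
{
        unsigned long r;

        /*
         * The limit is a bit count. For a single unsigned long that is
         * BITS_PER_LONG; sizeof(ep->ob_region_map) * BITS_PER_LONG mixes
         * bytes with bits (512 on a 64-bit build) and lets the search
         * read past the bitmap.
         */
        r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
        if (r >= ep->max_regions)
                return -EINVAL;

        set_bit(r, &ep->ob_region_map);
        return r;
}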
Link: https://lore.kernel.org/r/20220315065944.GB13572@kili Fixes: cf590b078391 ("PCI: rockchip: Add EP driver for Rockchip PCIe controller") Signed-off-by: Dan Carpenter Signed-off-by: Lorenzo Pieralisi Signed-off-by: Sasha Levin --- drivers/pci/controller/pcie-rockchip-ep.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c index 7631dc3961c1..379cde59988c 100644 --- a/drivers/pci/controller/pcie-rockchip-ep.c +++ b/drivers/pci/controller/pcie-rockchip-ep.c @@ -264,8 +264,7 @@ static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, struct rockchip_pcie *pcie = &ep->rockchip; u32 r; - r = find_first_zero_bit(&ep->ob_region_map, - sizeof(ep->ob_region_map) * BITS_PER_LONG); + r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG); /* * Region 0 is reserved for configuration space and shouldn't * be used elsewhere per TRM, so leave it out. From 8096e2d7c0f912751e68e653f8a017d4c20ba590 Mon Sep 17 00:00:00 2001 From: Jiantao Zhang Date: Wed, 9 Mar 2022 20:01:04 +0800 Subject: [PATCH 0576/1842] PCI: dwc: Fix setting error return on MSI DMA mapping failure [ Upstream commit 88557685cd72cf0db686a4ebff3fad4365cb6071 ] When dma_mapping_error() reports an error because there is not enough memory, dw_pcie_host_init() still returns success, which misleads the callers. Link: https://lore.kernel.org/r/30170911-0e2f-98ce-9266-70465b9073e5@huawei.com Fixes: 07940c369a6b ("PCI: dwc: Fix MSI page leakage in suspend/resume") Signed-off-by: Jianrong Zhang Signed-off-by: Jiantao Zhang Signed-off-by: Lorenzo Pieralisi Reviewed-by: Rob Herring Signed-off-by: Sasha Levin --- drivers/pci/controller/dwc/pcie-designware-host.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c index 44c2a6572199..42d8116a4a00 100644 --- a/drivers/pci/controller/dwc/pcie-designware-host.c +++ b/drivers/pci/controller/dwc/pcie-designware-host.c @@ -392,7 +392,8 @@ int dw_pcie_host_init(struct pcie_port *pp) sizeof(pp->msi_msg), DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); - if (dma_mapping_error(pci->dev, pp->msi_data)) { + ret = dma_mapping_error(pci->dev, pp->msi_data); + if (ret) { dev_err(pci->dev, "Failed to map MSI data\n"); pp->msi_data = 0; goto err_free_msi; From ca7ce579a717bc04ff6a967e7161502e21adebb0 Mon Sep 17 00:00:00 2001 From: Thorsten Scherer Date: Wed, 6 Apr 2022 06:39:45 +0200 Subject: [PATCH 0577/1842] ARM: dts: ci4x10: Adapt to changes in imx6qdl.dtsi regarding fec clocks MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 3d397a1277853498e8b7b305f2610881357c033f ] Commit f3e7dae323ab ("ARM: dts: imx6qdl: add enet_out clk support") added another item to the list of clocks for the fec device. As imx6dl-eckelmann-ci4x10.dts only overwrites clocks, but not clock-names, this resulted in an inconsistency with clocks having one item more than clock-names. Also overwrite clock-names with the same value as in imx6qdl.dtsi. This is a no-op today, but prevents similar inconsistencies if the soc file is changed in a similar way in the future.
Signed-off-by: Thorsten Scherer Reviewed-by: Uwe Kleine-König Fixes: f3e7dae323ab ("ARM: dts: imx6qdl: add enet_out clk support") Signed-off-by: Shawn Guo Signed-off-by: Sasha Levin --- arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts b/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts index b4a9523e325b..864dc5018451 100644 --- a/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts +++ b/arch/arm/boot/dts/imx6dl-eckelmann-ci4x10.dts @@ -297,7 +297,11 @@ &fec { phy-mode = "rmii"; phy-reset-gpios = <&gpio1 18 GPIO_ACTIVE_LOW>; phy-handle = <&phy>; - clocks = <&clks IMX6QDL_CLK_ENET>, <&clks IMX6QDL_CLK_ENET>, <&rmii_clk>; + clocks = <&clks IMX6QDL_CLK_ENET>, + <&clks IMX6QDL_CLK_ENET>, + <&rmii_clk>, + <&clks IMX6QDL_CLK_ENET_REF>; + clock-names = "ipg", "ahb", "ptp", "enet_out"; status = "okay"; mdio { From 259c1fad9fb003a383755732d27d676d757c534f Mon Sep 17 00:00:00 2001 From: Bjorn Andersson Date: Fri, 8 Apr 2022 14:33:36 -0700 Subject: [PATCH 0578/1842] soc: qcom: llcc: Add MODULE_DEVICE_TABLE() [ Upstream commit 5334a3b12a7233b31788de60d61bfd890059d783 ] The llcc-qcom driver can be compiled as a module, but lacks MODULE_DEVICE_TABLE() and will therefore not be loaded automatically. Fix this. Fixes: a3134fb09e0b ("drivers: soc: Add LLCC driver") Signed-off-by: Bjorn Andersson Reviewed-by: Sai Prakash Ranjan Reviewed-by: Dmitry Baryshkov Link: https://lore.kernel.org/r/20220408213336.581661-3-bjorn.andersson@linaro.org Signed-off-by: Sasha Levin --- drivers/soc/qcom/llcc-qcom.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c index 70fbe70c6213..2e06f48d683d 100644 --- a/drivers/soc/qcom/llcc-qcom.c +++ b/drivers/soc/qcom/llcc-qcom.c @@ -496,6 +496,7 @@ static const struct of_device_id qcom_llcc_of_match[] = { { .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfg }, { } }; +MODULE_DEVICE_TABLE(of, qcom_llcc_of_match); static struct platform_driver qcom_llcc_driver = { .driver = { From fd7dca68a69befd3de17d59eb604a9908a13657b Mon Sep 17 00:00:00 2001 From: Sean Christopherson Date: Thu, 7 Apr 2022 00:23:14 +0000 Subject: [PATCH 0579/1842] KVM: nVMX: Leave most VM-Exit info fields unmodified on failed VM-Entry [ Upstream commit c3634d25fbee88e2368a8e0903ae0d0670eb9e71 ] Don't modify vmcs12 exit fields except EXIT_REASON and EXIT_QUALIFICATION when performing a nested VM-Exit due to failed VM-Entry. Per the SDM, only the two aformentioned fields are filled and "All other VM-exit information fields are unmodified". 
Fixes: 4704d0befb07 ("KVM: nVMX: Exiting from L2 to L1") Signed-off-by: Sean Christopherson Message-Id: <20220407002315.78092-3-seanjc@google.com> Signed-off-by: Paolo Bonzini Signed-off-by: Sasha Levin --- arch/x86/kvm/vmx/nested.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 0c2389d0fdaf..fe53cab1a8a6 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -4143,12 +4143,12 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, /* update exit information fields: */ vmcs12->vm_exit_reason = vm_exit_reason; vmcs12->exit_qualification = exit_qualification; - vmcs12->vm_exit_intr_info = exit_intr_info; - - vmcs12->idt_vectoring_info_field = 0; - vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN); - vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO); + /* + * On VM-Exit due to a failed VM-Entry, the VMCS isn't marked launched + * and only EXIT_REASON and EXIT_QUALIFICATION are updated, all other + * exit info fields are unmodified. + */ if (!(vmcs12->vm_exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY)) { vmcs12->launch_state = 1; @@ -4160,8 +4160,13 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, * Transfer the event that L0 or L1 may wanted to inject into * L2 to IDT_VECTORING_INFO_FIELD. */ + vmcs12->idt_vectoring_info_field = 0; vmcs12_save_pending_event(vcpu, vmcs12); + vmcs12->vm_exit_intr_info = exit_intr_info; + vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN); + vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO); + /* * According to spec, there's no need to store the guest's * MSRs if the exit is due to a VM-entry failure that occurs From e690350d3d9f248eb5fa7edb5371878473af1278 Mon Sep 17 00:00:00 2001 From: Sean Christopherson Date: Thu, 7 Apr 2022 00:23:15 +0000 Subject: [PATCH 0580/1842] KVM: nVMX: Clear IDT vectoring on nested VM-Exit for double/triple fault [ Upstream commit 9bd1f0efa859b61950d109b32ff8d529cc33a3ad ] Clear the IDT vectoring field in vmcs12 on next VM-Exit due to a double or triple fault. Per the SDM, a VM-Exit isn't considered to occur during event delivery if the exit is due to an intercepted double fault or a triple fault. Opportunistically move the default clearing (no event "pending") into the helper so that it's more obvious that KVM does indeed handle this case. Note, the double fault case is worded rather wierdly in the SDM: The original event results in a double-fault exception that causes the VM exit directly. Temporarily ignoring injected events, double faults can _only_ occur if an exception occurs while attempting to deliver a different exception, i.e. there's _always_ an original event. And for injected double fault, while there's no original event, injected events are never subject to interception. Presumably the SDM is calling out that a the vectoring info will be valid if a different exit occurs after a double fault, e.g. if a #PF occurs and is intercepted while vectoring #DF, then the vectoring info will show the double fault. In other words, the clause can simply be read as: The VM exit is caused by a double-fault exception. 
Fixes: 4704d0befb07 ("KVM: nVMX: Exiting from L2 to L1") Cc: Chenyi Qiang Signed-off-by: Sean Christopherson Message-Id: <20220407002315.78092-4-seanjc@google.com> Signed-off-by: Paolo Bonzini Signed-off-by: Sasha Levin --- arch/x86/kvm/vmx/nested.c | 32 ++++++++++++++++++++++++++++---- arch/x86/kvm/vmx/vmcs.h | 5 +++++ 2 files changed, 33 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index fe53cab1a8a6..90881d7b42ea 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3668,12 +3668,34 @@ vmcs12_guest_cr4(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12) } static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu, - struct vmcs12 *vmcs12) + struct vmcs12 *vmcs12, + u32 vm_exit_reason, u32 exit_intr_info) { u32 idt_vectoring; unsigned int nr; - if (vcpu->arch.exception.injected) { + /* + * Per the SDM, VM-Exits due to double and triple faults are never + * considered to occur during event delivery, even if the double/triple + * fault is the result of an escalating vectoring issue. + * + * Note, the SDM qualifies the double fault behavior with "The original + * event results in a double-fault exception". It's unclear why the + * qualification exists since exits due to double fault can occur only + * while vectoring a different exception (injected events are never + * subject to interception), i.e. there's _always_ an original event. + * + * The SDM also uses NMI as a confusing example for the "original event + * causes the VM exit directly" clause. NMI isn't special in any way, + * the same rule applies to all events that cause an exit directly. + * NMI is an odd choice for the example because NMIs can only occur on + * instruction boundaries, i.e. they _can't_ occur during vectoring. + */ + if ((u16)vm_exit_reason == EXIT_REASON_TRIPLE_FAULT || + ((u16)vm_exit_reason == EXIT_REASON_EXCEPTION_NMI && + is_double_fault(exit_intr_info))) { + vmcs12->idt_vectoring_info_field = 0; + } else if (vcpu->arch.exception.injected) { nr = vcpu->arch.exception.nr; idt_vectoring = nr | VECTORING_INFO_VALID_MASK; @@ -3706,6 +3728,8 @@ static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu, idt_vectoring |= INTR_TYPE_EXT_INTR; vmcs12->idt_vectoring_info_field = idt_vectoring; + } else { + vmcs12->idt_vectoring_info_field = 0; } } @@ -4160,8 +4184,8 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, * Transfer the event that L0 or L1 may wanted to inject into * L2 to IDT_VECTORING_INFO_FIELD. 
*/ - vmcs12->idt_vectoring_info_field = 0; - vmcs12_save_pending_event(vcpu, vmcs12); + vmcs12_save_pending_event(vcpu, vmcs12, + vm_exit_reason, exit_intr_info); vmcs12->vm_exit_intr_info = exit_intr_info; vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN); diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h index 571d9ad80a59..69c147df957f 100644 --- a/arch/x86/kvm/vmx/vmcs.h +++ b/arch/x86/kvm/vmx/vmcs.h @@ -102,6 +102,11 @@ static inline bool is_breakpoint(u32 intr_info) return is_exception_n(intr_info, BP_VECTOR); } +static inline bool is_double_fault(u32 intr_info) +{ + return is_exception_n(intr_info, DF_VECTOR); +} + static inline bool is_page_fault(u32 intr_info) { return is_exception_n(intr_info, PF_VECTOR); From acd2313bd99d94fba4696d49ced7ab7a091e1c54 Mon Sep 17 00:00:00 2001 From: Tzung-Bi Shih Date: Wed, 16 Feb 2022 16:03:02 +0800 Subject: [PATCH 0581/1842] platform/chrome: cros_ec: fix error handling in cros_ec_register() [ Upstream commit 2cd01bd6b117df07b1bc2852f08694fdd29e40ed ] Fix cros_ec_register() to unregister platform devices if blocking_notifier_chain_register() fails. Also use the single exit path to handle the platform device unregistration. Fixes: 42cd0ab476e2 ("platform/chrome: cros_ec: Query EC protocol version if EC transitions between RO/RW") Reviewed-by: Prashant Malani Signed-off-by: Tzung-Bi Shih Signed-off-by: Sasha Levin --- drivers/platform/chrome/cros_ec.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c index 3104680b7485..979f92194e81 100644 --- a/drivers/platform/chrome/cros_ec.c +++ b/drivers/platform/chrome/cros_ec.c @@ -175,6 +175,8 @@ int cros_ec_register(struct cros_ec_device *ec_dev) ec_dev->max_request = sizeof(struct ec_params_hello); ec_dev->max_response = sizeof(struct ec_response_get_protocol_info); ec_dev->max_passthru = 0; + ec_dev->ec = NULL; + ec_dev->pd = NULL; ec_dev->din = devm_kzalloc(dev, ec_dev->din_size, GFP_KERNEL); if (!ec_dev->din) @@ -231,18 +233,16 @@ int cros_ec_register(struct cros_ec_device *ec_dev) if (IS_ERR(ec_dev->pd)) { dev_err(ec_dev->dev, "Failed to create CrOS PD platform device\n"); - platform_device_unregister(ec_dev->ec); - return PTR_ERR(ec_dev->pd); + err = PTR_ERR(ec_dev->pd); + goto exit; } } if (IS_ENABLED(CONFIG_OF) && dev->of_node) { err = devm_of_platform_populate(dev); if (err) { - platform_device_unregister(ec_dev->pd); - platform_device_unregister(ec_dev->ec); dev_err(dev, "Failed to register sub-devices\n"); - return err; + goto exit; } } @@ -264,12 +264,16 @@ int cros_ec_register(struct cros_ec_device *ec_dev) err = blocking_notifier_chain_register(&ec_dev->event_notifier, &ec_dev->notifier_ready); if (err) - return err; + goto exit; } dev_info(dev, "Chrome EC device registered\n"); return 0; +exit: + platform_device_unregister(ec_dev->ec); + platform_device_unregister(ec_dev->pd); + return err; } EXPORT_SYMBOL(cros_ec_register); From b0bf87b1b3884c4c1f74be161135229b17f94eec Mon Sep 17 00:00:00 2001 From: Max Krummenacher Date: Mon, 11 Apr 2022 17:22:24 +0200 Subject: [PATCH 0582/1842] ARM: dts: imx6dl-colibri: Fix I2C pinmuxing [ Upstream commit 5f5c579a34a87117c20b411df583ae816c1ec84f ] Fix names of extra pingroup node and property for gpio bus recovery. Without the change i2c2 is not functional. 
Fixes: 56f0df6b6b58 ("ARM: dts: imx*(colibri|apalis): add missing recovery modes to i2c") Signed-off-by: Max Krummenacher Signed-off-by: Shawn Guo Signed-off-by: Sasha Levin --- arch/arm/boot/dts/imx6qdl-colibri.dtsi | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/arm/boot/dts/imx6qdl-colibri.dtsi b/arch/arm/boot/dts/imx6qdl-colibri.dtsi index 4e2a309c93fa..1e86b3814708 100644 --- a/arch/arm/boot/dts/imx6qdl-colibri.dtsi +++ b/arch/arm/boot/dts/imx6qdl-colibri.dtsi @@ -1,6 +1,6 @@ // SPDX-License-Identifier: GPL-2.0+ OR MIT /* - * Copyright 2014-2020 Toradex + * Copyright 2014-2022 Toradex * Copyright 2012 Freescale Semiconductor, Inc. * Copyright 2011 Linaro Ltd. */ @@ -132,7 +132,7 @@ &i2c2 { clock-frequency = <100000>; pinctrl-names = "default", "gpio"; pinctrl-0 = <&pinctrl_i2c2>; - pinctrl-0 = <&pinctrl_i2c2_gpio>; + pinctrl-1 = <&pinctrl_i2c2_gpio>; scl-gpios = <&gpio2 30 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>; sda-gpios = <&gpio3 16 (GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN)>; status = "okay"; @@ -488,7 +488,7 @@ MX6QDL_PAD_EIM_D16__I2C2_SDA 0x4001b8b1 >; }; - pinctrl_i2c2_gpio: i2c2grp { + pinctrl_i2c2_gpio: i2c2gpiogrp { fsl,pins = < MX6QDL_PAD_EIM_EB2__GPIO2_IO30 0x4001b8b1 MX6QDL_PAD_EIM_D16__GPIO3_IO16 0x4001b8b1 From 875a17c3adb4ae7cd84c41adeac628963f96fafc Mon Sep 17 00:00:00 2001 From: Guenter Roeck Date: Fri, 18 Mar 2022 09:54:22 -0700 Subject: [PATCH 0583/1842] platform/chrome: Re-introduce cros_ec_cmd_xfer and use it for ioctls [ Upstream commit 57b888ca2541785de2fcb90575b378921919b6c0 ] Commit 413dda8f2c6f ("platform/chrome: cros_ec_chardev: Use cros_ec_cmd_xfer_status helper") inadvertendly changed the userspace ABI. Previously, cros_ec ioctls would only report errors if the EC communication failed, and otherwise return success and the result of the EC communication. An EC command execution failure was reported in the EC response field. The above mentioned commit changed this behavior, and the ioctl itself would fail. This breaks userspace commands trying to analyze the EC command execution error since the actual EC command response is no longer reported to userspace. Fix the problem by re-introducing the cros_ec_cmd_xfer() helper, and use it to handle ioctl messages. Fixes: 413dda8f2c6f ("platform/chrome: cros_ec_chardev: Use cros_ec_cmd_xfer_status helper") Cc: Daisuke Nojiri Cc: Rob Barnes Cc: Rajat Jain Cc: Brian Norris Cc: Parth Malkan Reviewed-by: Daisuke Nojiri Reviewed-by: Brian Norris Signed-off-by: Guenter Roeck Signed-off-by: Tzung-Bi Shih Signed-off-by: Sasha Levin --- drivers/platform/chrome/cros_ec_chardev.c | 2 +- drivers/platform/chrome/cros_ec_proto.c | 50 +++++++++++++++++---- include/linux/platform_data/cros_ec_proto.h | 3 ++ 3 files changed, 45 insertions(+), 10 deletions(-) diff --git a/drivers/platform/chrome/cros_ec_chardev.c b/drivers/platform/chrome/cros_ec_chardev.c index e0bce869c49a..fd33de546aee 100644 --- a/drivers/platform/chrome/cros_ec_chardev.c +++ b/drivers/platform/chrome/cros_ec_chardev.c @@ -301,7 +301,7 @@ static long cros_ec_chardev_ioctl_xcmd(struct cros_ec_dev *ec, void __user *arg) } s_cmd->command += ec->cmd_offset; - ret = cros_ec_cmd_xfer_status(ec->ec_dev, s_cmd); + ret = cros_ec_cmd_xfer(ec->ec_dev, s_cmd); /* Only copy data to userland if data was received. 
*/ if (ret < 0) goto exit; diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c index 9f698a7aad12..e1fadf059e05 100644 --- a/drivers/platform/chrome/cros_ec_proto.c +++ b/drivers/platform/chrome/cros_ec_proto.c @@ -560,22 +560,28 @@ int cros_ec_query_all(struct cros_ec_device *ec_dev) EXPORT_SYMBOL(cros_ec_query_all); /** - * cros_ec_cmd_xfer_status() - Send a command to the ChromeOS EC. + * cros_ec_cmd_xfer() - Send a command to the ChromeOS EC. * @ec_dev: EC device. * @msg: Message to write. * - * Call this to send a command to the ChromeOS EC. This should be used instead of calling the EC's - * cmd_xfer() callback directly. It returns success status only if both the command was transmitted - * successfully and the EC replied with success status. + * Call this to send a command to the ChromeOS EC. This should be used instead + * of calling the EC's cmd_xfer() callback directly. This function does not + * convert EC command execution error codes to Linux error codes. Most + * in-kernel users will want to use cros_ec_cmd_xfer_status() instead since + * that function implements the conversion. * * Return: - * >=0 - The number of bytes transferred - * <0 - Linux error code + * >0 - EC command was executed successfully. The return value is the number + * of bytes returned by the EC (excluding the header). + * =0 - EC communication was successful. EC command execution results are + * reported in msg->result. The result will be EC_RES_SUCCESS if the + * command was executed successfully or report an EC command execution + * error. + * <0 - EC communication error. Return value is the Linux error code. */ -int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev, - struct cros_ec_command *msg) +int cros_ec_cmd_xfer(struct cros_ec_device *ec_dev, struct cros_ec_command *msg) { - int ret, mapped; + int ret; mutex_lock(&ec_dev->lock); if (ec_dev->proto_version == EC_PROTO_VERSION_UNKNOWN) { @@ -616,6 +622,32 @@ int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev, ret = send_command(ec_dev, msg); mutex_unlock(&ec_dev->lock); + return ret; +} +EXPORT_SYMBOL(cros_ec_cmd_xfer); + +/** + * cros_ec_cmd_xfer_status() - Send a command to the ChromeOS EC. + * @ec_dev: EC device. + * @msg: Message to write. + * + * Call this to send a command to the ChromeOS EC. This should be used instead of calling the EC's + * cmd_xfer() callback directly. It returns success status only if both the command was transmitted + * successfully and the EC replied with success status. + * + * Return: + * >=0 - The number of bytes transferred. 
+ * <0 - Linux error code + */ +int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev, + struct cros_ec_command *msg) +{ + int ret, mapped; + + ret = cros_ec_cmd_xfer(ec_dev, msg); + if (ret < 0) + return ret; + mapped = cros_ec_map_error(msg->result); if (mapped) { dev_dbg(ec_dev->dev, "Command result (err: %d [%d])\n", diff --git a/include/linux/platform_data/cros_ec_proto.h b/include/linux/platform_data/cros_ec_proto.h index 02599687770c..7f03e02c48cd 100644 --- a/include/linux/platform_data/cros_ec_proto.h +++ b/include/linux/platform_data/cros_ec_proto.h @@ -216,6 +216,9 @@ int cros_ec_prepare_tx(struct cros_ec_device *ec_dev, int cros_ec_check_result(struct cros_ec_device *ec_dev, struct cros_ec_command *msg); +int cros_ec_cmd_xfer(struct cros_ec_device *ec_dev, + struct cros_ec_command *msg); + int cros_ec_cmd_xfer_status(struct cros_ec_device *ec_dev, struct cros_ec_command *msg); From b439f7addd2bf10e0ff7528426b62ba86bf25524 Mon Sep 17 00:00:00 2001 From: Marc Kleine-Budde Date: Thu, 17 Mar 2022 21:29:07 +0100 Subject: [PATCH 0584/1842] can: xilinx_can: mark bit timing constants as const [ Upstream commit ae38fda02996d43d9fb09f16e81e0008704dd524 ] This patch marks the bit timing constants as const. Fixes: c223da689324 ("can: xilinx_can: Add support for CANFD FD frames") Link: https://lore.kernel.org/all/20220317203119.792552-1-mkl@pengutronix.de Cc: Appana Durga Kedareswara rao Cc: Naga Sureshkumar Relli Signed-off-by: Marc Kleine-Budde Signed-off-by: Sasha Levin --- drivers/net/can/xilinx_can.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c index 375998263af7..1c42417810fc 100644 --- a/drivers/net/can/xilinx_can.c +++ b/drivers/net/can/xilinx_can.c @@ -239,7 +239,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd = { }; /* AXI CANFD Data Bittiming constants as per AXI CANFD 1.0 specs */ -static struct can_bittiming_const xcan_data_bittiming_const_canfd = { +static const struct can_bittiming_const xcan_data_bittiming_const_canfd = { .name = DRIVER_NAME, .tseg1_min = 1, .tseg1_max = 16, @@ -265,7 +265,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = { }; /* AXI CANFD 2.0 Data Bittiming constants as per AXI CANFD 2.0 spec */ -static struct can_bittiming_const xcan_data_bittiming_const_canfd2 = { +static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = { .name = DRIVER_NAME, .tseg1_min = 1, .tseg1_max = 32, From 95000ae68025e7c928c1aef97f61b1fe573babb1 Mon Sep 17 00:00:00 2001 From: Marek Vasut Date: Fri, 25 Mar 2022 18:58:51 +0100 Subject: [PATCH 0585/1842] ARM: dts: stm32: Fix PHY post-reset delay on Avenger96 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit ef2d90708883f4025a801feb0ba8411a7a4387e1 ] Per KSZ9031RNX PHY datasheet FIGURE 7-5: POWER-UP/POWER-DOWN/RESET TIMING Note 2: After the de-assertion of reset, wait a minimum of 100 μs before starting programming on the MIIM (MDC/MDIO) interface. Add 1ms post-reset delay to guarantee this figure. 
Fixes: 010ca9fe500bf ("ARM: dts: stm32: Add missing ethernet PHY reset on AV96") Signed-off-by: Marek Vasut Cc: Alexandre Torgue Cc: Patrice Chotard Cc: Patrick Delaunay Cc: linux-stm32@st-md-mailman.stormreply.com To: linux-arm-kernel@lists.infradead.org Signed-off-by: Alexandre Torgue Signed-off-by: Sasha Levin --- arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi index 944d38b85eef..f3e0c790a4b1 100644 --- a/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi +++ b/arch/arm/boot/dts/stm32mp15xx-dhcor-avenger96.dtsi @@ -141,6 +141,7 @@ mdio0 { compatible = "snps,dwmac-mdio"; reset-gpios = <&gpioz 2 GPIO_ACTIVE_LOW>; reset-delay-us = <1000>; + reset-post-delay-us = <1000>; phy0: ethernet-phy@7 { reg = <7>; From daffdb08306ec146c4e902393bca9d3979f4dc54 Mon Sep 17 00:00:00 2001 From: Phil Elwell Date: Mon, 11 Apr 2022 22:01:38 +0200 Subject: [PATCH 0586/1842] ARM: dts: bcm2835-rpi-zero-w: Fix GPIO line name for Wifi/BT [ Upstream commit 2c663e5e5bbf2a5b85e0f76ccb69663f583c3e33 ] The GPIOs 30 to 39 are connected to the Cypress CYW43438 (Wifi/BT). So fix the GPIO line names accordingly. Fixes: 2c7c040c73e9 ("ARM: dts: bcm2835: Add Raspberry Pi Zero W") Signed-off-by: Phil Elwell Signed-off-by: Stefan Wahren Signed-off-by: Florian Fainelli Signed-off-by: Sasha Levin --- arch/arm/boot/dts/bcm2835-rpi-zero-w.dts | 22 ++++++++++++---------- 1 file changed, 12 insertions(+), 10 deletions(-) diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts index 33b2b77aa47d..00582eb2c12e 100644 --- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts +++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts @@ -74,16 +74,18 @@ &gpio { "GPIO27", "SDA0", "SCL0", - "NC", /* GPIO30 */ - "NC", /* GPIO31 */ - "NC", /* GPIO32 */ - "NC", /* GPIO33 */ - "NC", /* GPIO34 */ - "NC", /* GPIO35 */ - "NC", /* GPIO36 */ - "NC", /* GPIO37 */ - "NC", /* GPIO38 */ - "NC", /* GPIO39 */ + /* Used by BT module */ + "CTS0", + "RTS0", + "TXD0", + "RXD0", + /* Used by Wifi */ + "SD1_CLK", + "SD1_CMD", + "SD1_DATA0", + "SD1_DATA1", + "SD1_DATA2", + "SD1_DATA3", "CAM_GPIO1", /* GPIO40 */ "WL_ON", /* GPIO41 */ "NC", /* GPIO42 */ From bd7ffc171ca5d4cae30b87f76d41e217c37374fb Mon Sep 17 00:00:00 2001 From: Phil Elwell Date: Mon, 11 Apr 2022 22:01:39 +0200 Subject: [PATCH 0587/1842] ARM: dts: bcm2837-rpi-cm3-io3: Fix GPIO line names for SMPS I2C [ Upstream commit 9fd26fd02749ec964eb0d588a3bab9e09bf77927 ] The GPIOs 46 & 47 are already used for a I2C interface to a SMPS. So fix the GPIO line names accordingly. 
Fixes: a54fe8a6cf66 ("ARM: dts: add Raspberry Pi Compute Module 3 and IO board") Signed-off-by: Phil Elwell Signed-off-by: Stefan Wahren Signed-off-by: Florian Fainelli Signed-off-by: Sasha Levin --- arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts index 588d9411ceb6..3dfce4312dfc 100644 --- a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts +++ b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts @@ -63,8 +63,8 @@ &gpio { "GPIO43", "GPIO44", "GPIO45", - "GPIO46", - "GPIO47", + "SMPS_SCL", + "SMPS_SDA", /* Used by eMMC */ "SD_CLK_R", "SD_CMD_R", From 0a4ee6cdaa14af68226759aa74721b99f4ca3c66 Mon Sep 17 00:00:00 2001 From: Phil Elwell Date: Mon, 11 Apr 2022 22:01:40 +0200 Subject: [PATCH 0588/1842] ARM: dts: bcm2837-rpi-3-b-plus: Fix GPIO line name of power LED [ Upstream commit 57f718aa4b93392fb1a8c0a874ab882b9e18136a ] The red LED on the Raspberry Pi 3 B Plus is the power LED. So fix the GPIO line name accordingly. Fixes: 71c0cd2283f2 ("ARM: dts: bcm2837: Add Raspberry Pi 3 B+") Signed-off-by: Phil Elwell Signed-off-by: Stefan Wahren Signed-off-by: Florian Fainelli Signed-off-by: Sasha Levin --- arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts index 61010266ca9a..90472e76a313 100644 --- a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts +++ b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts @@ -45,7 +45,7 @@ expgpio: gpio { #gpio-cells = <2>; gpio-line-names = "BT_ON", "WL_ON", - "STATUS_LED_R", + "PWR_LED_R", "LAN_RUN", "", "CAM_GPIO0", From 22c3fea20a9418fa82286d6d58e8222c66725861 Mon Sep 17 00:00:00 2001 From: Stefan Wahren Date: Mon, 11 Apr 2022 22:01:41 +0200 Subject: [PATCH 0589/1842] ARM: dts: bcm2835-rpi-b: Fix GPIO line names [ Upstream commit 97bd8659c1c46c23e4daea7e040befca30939950 ] Recently this has been fixed in the vendor tree, so upstream this. Fixes: 731b26a6ac17 ("ARM: bcm2835: Add names for the Raspberry Pi GPIO lines") Signed-off-by: Phil Elwell Signed-off-by: Stefan Wahren Signed-off-by: Florian Fainelli Signed-off-by: Sasha Levin --- arch/arm/boot/dts/bcm2835-rpi-b.dts | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/arch/arm/boot/dts/bcm2835-rpi-b.dts b/arch/arm/boot/dts/bcm2835-rpi-b.dts index 1b63d6b19750..25d87212cefd 100644 --- a/arch/arm/boot/dts/bcm2835-rpi-b.dts +++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts @@ -53,18 +53,17 @@ &gpio { "GPIO18", "NC", /* GPIO19 */ "NC", /* GPIO20 */ - "GPIO21", + "CAM_GPIO0", "GPIO22", "GPIO23", "GPIO24", "GPIO25", "NC", /* GPIO26 */ - "CAM_GPIO0", - /* Binary number representing build/revision */ - "CONFIG0", - "CONFIG1", - "CONFIG2", - "CONFIG3", + "GPIO27", + "GPIO28", + "GPIO29", + "GPIO30", + "GPIO31", "NC", /* GPIO32 */ "NC", /* GPIO33 */ "NC", /* GPIO34 */ From ee89d8dee55ab4b3b8ad8b70866b2841ba334767 Mon Sep 17 00:00:00 2001 From: Hangyu Hua Date: Mon, 18 Apr 2022 16:57:58 +0800 Subject: [PATCH 0590/1842] misc: ocxl: fix possible double free in ocxl_file_register_afu [ Upstream commit 950cf957fe34d40d63dfa3bf3968210430b6491e ] info_release() will be called in device_unregister() when info->dev's reference count is 0. So there is no need to call ocxl_afu_put() and kfree() again. Fix this by adding free_minor() and return to err_unregister error path. 
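The underlying rule is general: once the final reference is dropped, the release callback owns the teardown, so freeing or putting again on the error path is a double free. A generic kref sketch of that rule, with hypothetical demo_* names rather than the ocxl types:

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct demo_info {
        struct kref ref;
        /* ... payload ... */
};

static void demo_release(struct kref *ref)
{
        struct demo_info *info = container_of(ref, struct demo_info, ref);

        kfree(info);            /* the release callback is the only place that frees */
}

static void demo_put(struct demo_info *info)
{
        kref_put(&info->ref, demo_release);
}

/*
 * Error-path rule: once demo_put() may have dropped the final reference,
 * a further kfree(info) or second put frees the object twice; that is the
 * class of bug the ocxl change above removes around device_unregister().
 */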
Fixes: 75ca758adbaf ("ocxl: Create a clear delineation between ocxl backend & frontend") Signed-off-by: Hangyu Hua Acked-by: Frederic Barrat Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220418085758.38145-1-hbh25y@gmail.com Signed-off-by: Sasha Levin --- drivers/misc/ocxl/file.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c index 4d1b44de1492..c742ab02ae18 100644 --- a/drivers/misc/ocxl/file.c +++ b/drivers/misc/ocxl/file.c @@ -558,7 +558,9 @@ int ocxl_file_register_afu(struct ocxl_afu *afu) err_unregister: ocxl_sysfs_unregister_afu(info); // safe to call even if register failed + free_minor(info); device_unregister(&info->dev); + return rc; err_put: ocxl_afu_put(afu); free_minor(info); From a26dfdf0a63b23c105f6f19abea16437056df990 Mon Sep 17 00:00:00 2001 From: Corentin Labbe Date: Wed, 13 Apr 2022 19:11:54 +0000 Subject: [PATCH 0591/1842] crypto: marvell/cesa - ECB does not IV [ Upstream commit 4ffa1763622ae5752961499588f3f8874315f974 ] The DES3 ECB has an IV size set but ECB does not need one. Fixes: 4ada483978237 ("crypto: marvell/cesa - add Triple-DES support") Signed-off-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin --- drivers/crypto/marvell/cesa/cipher.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/crypto/marvell/cesa/cipher.c b/drivers/crypto/marvell/cesa/cipher.c index b4a6ff9dd6d5..596a8c74e40a 100644 --- a/drivers/crypto/marvell/cesa/cipher.c +++ b/drivers/crypto/marvell/cesa/cipher.c @@ -614,7 +614,6 @@ struct skcipher_alg mv_cesa_ecb_des3_ede_alg = { .decrypt = mv_cesa_ecb_des3_ede_decrypt, .min_keysize = DES3_EDE_KEY_SIZE, .max_keysize = DES3_EDE_KEY_SIZE, - .ivsize = DES3_EDE_BLOCK_SIZE, .base = { .cra_name = "ecb(des3_ede)", .cra_driver_name = "mv-ecb-des3-ede", From cda45b715d7052c23c2eb1c270023f758b209b16 Mon Sep 17 00:00:00 2001 From: Stefan Wahren Date: Sat, 9 Apr 2022 11:51:28 +0200 Subject: [PATCH 0592/1842] gpiolib: of: Introduce hook for missing gpio-ranges [ Upstream commit 3550bba25d5587a701e6edf20e20984d2ee72c78 ] Since commit 2ab73c6d8323 ("gpio: Support GPIO controllers without pin-ranges") the device tree nodes of GPIO controller need the gpio-ranges property to handle gpio-hogs. Unfortunately it's impossible to guarantee that every new kernel is shipped with an updated device tree binary. In order to provide backward compatibility with those older DTB, we need a callback within of_gpiochip_add_pin_range() so the relevant platform driver can handle this case. 
Fixes: 2ab73c6d8323 ("gpio: Support GPIO controllers without pin-ranges") Signed-off-by: Stefan Wahren Reviewed-by: Florian Fainelli Tested-by: Florian Fainelli Acked-by: Bartosz Golaszewski Link: https://lore.kernel.org/r/20220409095129.45786-2-stefan.wahren@i2se.com Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin --- drivers/gpio/gpiolib-of.c | 5 +++++ include/linux/gpio/driver.h | 12 ++++++++++++ 2 files changed, 17 insertions(+) diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c index 921a99578ff0..01424af654db 100644 --- a/drivers/gpio/gpiolib-of.c +++ b/drivers/gpio/gpiolib-of.c @@ -933,6 +933,11 @@ static int of_gpiochip_add_pin_range(struct gpio_chip *chip) if (!np) return 0; + if (!of_property_read_bool(np, "gpio-ranges") && + chip->of_gpio_ranges_fallback) { + return chip->of_gpio_ranges_fallback(chip, np); + } + group_names = of_find_property(np, group_names_propname, NULL); for (;; index++) { diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h index b216899b4745..0552a9859a01 100644 --- a/include/linux/gpio/driver.h +++ b/include/linux/gpio/driver.h @@ -477,6 +477,18 @@ struct gpio_chip { */ int (*of_xlate)(struct gpio_chip *gc, const struct of_phandle_args *gpiospec, u32 *flags); + + /** + * @of_gpio_ranges_fallback: + * + * Optional hook for the case that no gpio-ranges property is defined + * within the device tree node "np" (usually DT before introduction + * of gpio-ranges). So this callback is helpful to provide the + * necessary backward compatibility for the pin ranges. + */ + int (*of_gpio_ranges_fallback)(struct gpio_chip *gc, + struct device_node *np); + #endif /* CONFIG_OF_GPIO */ }; From ceb61ab22dbd5635b450b002f7dce84ca55e53f3 Mon Sep 17 00:00:00 2001 From: Stefan Wahren Date: Sat, 9 Apr 2022 11:51:29 +0200 Subject: [PATCH 0593/1842] pinctrl: bcm2835: implement hook for missing gpio-ranges [ Upstream commit d2b67744fd99b06555b7e4d67302ede6c7c6a638 ] The commit c8013355ead6 ("ARM: dts: gpio-ranges property is now required") fixed the GPIO probing issues caused by "pinctrl: bcm2835: Change init order for gpio hogs". This changed only the kernel DTS files. Unfortunately it isn't guaranteed that these files are shipped to all users. So implement the necessary backward compatibility for BCM2835 and BCM2711 platform. 
Fixes: 266423e60ea1 ("pinctrl: bcm2835: Change init order for gpio hogs") Signed-off-by: Stefan Wahren Reviewed-by: Florian Fainelli Tested-by: Florian Fainelli Link: https://lore.kernel.org/r/20220409095129.45786-3-stefan.wahren@i2se.com Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin --- drivers/pinctrl/bcm/pinctrl-bcm2835.c | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c index 6768b2f03d68..39d2024dc2ee 100644 --- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c +++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c @@ -351,6 +351,22 @@ static int bcm2835_gpio_direction_output(struct gpio_chip *chip, return pinctrl_gpio_direction_output(chip->base + offset); } +static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc, + struct device_node *np) +{ + struct pinctrl_dev *pctldev = of_pinctrl_get(np); + + of_node_put(np); + + if (!pctldev) + return 0; + + gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0, + gc->ngpio); + + return 0; +} + static const struct gpio_chip bcm2835_gpio_chip = { .label = MODULE_NAME, .owner = THIS_MODULE, @@ -365,6 +381,7 @@ static const struct gpio_chip bcm2835_gpio_chip = { .base = -1, .ngpio = BCM2835_NUM_GPIOS, .can_sleep = false, + .of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback, }; static const struct gpio_chip bcm2711_gpio_chip = { @@ -381,6 +398,7 @@ static const struct gpio_chip bcm2711_gpio_chip = { .base = -1, .ngpio = BCM2711_NUM_GPIOS, .can_sleep = false, + .of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback, }; static void bcm2835_gpio_irq_handle_bank(struct bcm2835_pinctrl *pc, From 08b053d32b16318425b0b1897c531e018e593e81 Mon Sep 17 00:00:00 2001 From: Chuanhong Guo Date: Sat, 9 Apr 2022 17:13:47 +0800 Subject: [PATCH 0594/1842] arm: mediatek: select arch timer for mt7629 [ Upstream commit d66aea197d534e23d4989eb72fca9c0c114b97c9 ] This chip has an armv7 arch timer according to the dts. Select it in Kconfig to enforce the support for it. Otherwise the system time is just completely wrong if user forget to enable ARM_ARCH_TIMER in kernel config. Fixes: a43379dddf1b ("arm: mediatek: add MT7629 smp bring up code") Signed-off-by: Chuanhong Guo Link: https://lore.kernel.org/r/20220409091347.2473449-1-gch981213@gmail.com Signed-off-by: Matthias Brugger Signed-off-by: Sasha Levin --- arch/arm/mach-mediatek/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm/mach-mediatek/Kconfig b/arch/arm/mach-mediatek/Kconfig index 9e0f592d87d8..35a3430c7942 100644 --- a/arch/arm/mach-mediatek/Kconfig +++ b/arch/arm/mach-mediatek/Kconfig @@ -30,6 +30,7 @@ config MACH_MT7623 config MACH_MT7629 bool "MediaTek MT7629 SoCs support" default ARCH_MEDIATEK + select HAVE_ARM_ARCH_TIMER config MACH_MT8127 bool "MediaTek MT8127 SoCs support" From 82c6c8a66c2e6041255cc1a65ae390182d587bb0 Mon Sep 17 00:00:00 2001 From: Hari Bathini Date: Wed, 6 Apr 2022 15:08:37 +0530 Subject: [PATCH 0595/1842] powerpc/fadump: fix PT_LOAD segment for boot memory area [ Upstream commit 15eb77f873255cf9f4d703b63cfbd23c46579654 ] Boot memory area is setup as separate PT_LOAD segment in the vmcore as it is moved by f/w, on crash, to a destination address provided by the kernel. Having separate PT_LOAD segment helps in handling the different physical address and offset for boot memory area in the vmcore. 
Commit ced1bf52f477 ("powerpc/fadump: merge adjacent memory ranges to reduce PT_LOAD segements") inadvertly broke this pre-condition for cases where some of the first kernel memory is available adjacent to boot memory area. This scenario is rare but possible when memory for fadump could not be reserved adjacent to boot memory area owing to memory hole or such. Reading memory from a vmcore exported in such scenario provides incorrect data. Fix it by ensuring no other region is folded into boot memory area. Fixes: ced1bf52f477 ("powerpc/fadump: merge adjacent memory ranges to reduce PT_LOAD segements") Signed-off-by: Hari Bathini Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220406093839.206608-2-hbathini@linux.ibm.com Signed-off-by: Sasha Levin --- arch/powerpc/kernel/fadump.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c index c3bb800dc435..1a5ba26aab15 100644 --- a/arch/powerpc/kernel/fadump.c +++ b/arch/powerpc/kernel/fadump.c @@ -861,7 +861,6 @@ static int fadump_alloc_mem_ranges(struct fadump_mrange_info *mrange_info) sizeof(struct fadump_memory_range)); return 0; } - static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info, u64 base, u64 end) { @@ -880,7 +879,12 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info, start = mem_ranges[mrange_info->mem_range_cnt - 1].base; size = mem_ranges[mrange_info->mem_range_cnt - 1].size; - if ((start + size) == base) + /* + * Boot memory area needs separate PT_LOAD segment(s) as it + * is moved to a different location at the time of crash. + * So, fold only if the region is not boot memory area. + */ + if ((start + size) == base && start >= fw_dump.boot_mem_top) is_adjacent = true; } if (!is_adjacent) { From 17d9d7d26406d8c524b547c3cfd6c727f464cdf6 Mon Sep 17 00:00:00 2001 From: Lv Ruyi Date: Tue, 12 Apr 2022 08:53:05 +0000 Subject: [PATCH 0596/1842] mfd: ipaq-micro: Fix error check return value of platform_get_irq() [ Upstream commit 3b49ae380ce1a3054e0c505dd9a356b82a5b48e8 ] platform_get_irq() return negative value on failure, so null check of irq is incorrect. Fix it by comparing whether it is less than zero. Fixes: dcc21cc09e3c ("mfd: Add driver for Atmel Microcontroller on iPaq h3xxx") Reported-by: Zeal Robot Signed-off-by: Lv Ruyi Reviewed-by: Linus Walleij Signed-off-by: Lee Jones Link: https://lore.kernel.org/r/20220412085305.2533030-1-lv.ruyi@zte.com.cn Signed-off-by: Sasha Levin --- drivers/mfd/ipaq-micro.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/mfd/ipaq-micro.c b/drivers/mfd/ipaq-micro.c index e92eeeb67a98..4cd5ecc72211 100644 --- a/drivers/mfd/ipaq-micro.c +++ b/drivers/mfd/ipaq-micro.c @@ -403,7 +403,7 @@ static int __init micro_probe(struct platform_device *pdev) micro_reset_comm(micro); irq = platform_get_irq(pdev, 0); - if (!irq) + if (irq < 0) return -EINVAL; ret = devm_request_irq(&pdev->dev, irq, micro_serial_isr, IRQF_SHARED, "ipaq-micro", From 7a55a5159daef285d9761f8d640b9fab4ead0591 Mon Sep 17 00:00:00 2001 From: "Gustavo A. R. 
Silva" Date: Thu, 3 Mar 2022 17:55:21 -0600 Subject: [PATCH 0597/1842] scsi: fcoe: Fix Wstringop-overflow warnings in fcoe_wwn_from_mac() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 54db804d5d7d36709d1ce70bde3b9a6c61b290b6 ] Fix the following Wstringop-overflow warnings when building with GCC-11: drivers/scsi/fcoe/fcoe.c: In function ‘fcoe_netdev_config’: drivers/scsi/fcoe/fcoe.c:744:32: warning: ‘fcoe_wwn_from_mac’ accessing 32 bytes in a region of size 6 [-Wstringop-overflow=] 744 | wwnn = fcoe_wwn_from_mac(ctlr->ctl_src_addr, 1, 0); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/scsi/fcoe/fcoe.c:744:32: note: referencing argument 1 of type ‘unsigned char *’ In file included from drivers/scsi/fcoe/fcoe.c:36: ./include/scsi/libfcoe.h:252:5: note: in a call to function ‘fcoe_wwn_from_mac’ 252 | u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int); | ^~~~~~~~~~~~~~~~~ drivers/scsi/fcoe/fcoe.c:747:32: warning: ‘fcoe_wwn_from_mac’ accessing 32 bytes in a region of size 6 [-Wstringop-overflow=] 747 | wwpn = fcoe_wwn_from_mac(ctlr->ctl_src_addr, | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 748 | 2, 0); | ~~~~~ drivers/scsi/fcoe/fcoe.c:747:32: note: referencing argument 1 of type ‘unsigned char *’ In file included from drivers/scsi/fcoe/fcoe.c:36: ./include/scsi/libfcoe.h:252:5: note: in a call to function ‘fcoe_wwn_from_mac’ 252 | u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int); | ^~~~~~~~~~~~~~~~~ CC drivers/scsi/bnx2fc/bnx2fc_io.o In function ‘bnx2fc_net_config’, inlined from ‘bnx2fc_if_create’ at drivers/scsi/bnx2fc/bnx2fc_fcoe.c:1543:7: drivers/scsi/bnx2fc/bnx2fc_fcoe.c:833:32: warning: ‘fcoe_wwn_from_mac’ accessing 32 bytes in a region of size 6 [-Wstringop-overflow=] 833 | wwnn = fcoe_wwn_from_mac(ctlr->ctl_src_addr, | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 834 | 1, 0); | ~~~~~ drivers/scsi/bnx2fc/bnx2fc_fcoe.c: In function ‘bnx2fc_if_create’: drivers/scsi/bnx2fc/bnx2fc_fcoe.c:833:32: note: referencing argument 1 of type ‘unsigned char *’ In file included from drivers/scsi/bnx2fc/bnx2fc.h:53, from drivers/scsi/bnx2fc/bnx2fc_fcoe.c:17: ./include/scsi/libfcoe.h:252:5: note: in a call to function ‘fcoe_wwn_from_mac’ 252 | u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int); | ^~~~~~~~~~~~~~~~~ In function ‘bnx2fc_net_config’, inlined from ‘bnx2fc_if_create’ at drivers/scsi/bnx2fc/bnx2fc_fcoe.c:1543:7: drivers/scsi/bnx2fc/bnx2fc_fcoe.c:839:32: warning: ‘fcoe_wwn_from_mac’ accessing 32 bytes in a region of size 6 [-Wstringop-overflow=] 839 | wwpn = fcoe_wwn_from_mac(ctlr->ctl_src_addr, | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 840 | 2, 0); | ~~~~~ drivers/scsi/bnx2fc/bnx2fc_fcoe.c: In function ‘bnx2fc_if_create’: drivers/scsi/bnx2fc/bnx2fc_fcoe.c:839:32: note: referencing argument 1 of type ‘unsigned char *’ In file included from drivers/scsi/bnx2fc/bnx2fc.h:53, from drivers/scsi/bnx2fc/bnx2fc_fcoe.c:17: ./include/scsi/libfcoe.h:252:5: note: in a call to function ‘fcoe_wwn_from_mac’ 252 | u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int); | ^~~~~~~~~~~~~~~~~ drivers/scsi/qedf/qedf_main.c: In function ‘__qedf_probe’: drivers/scsi/qedf/qedf_main.c:3520:30: warning: ‘fcoe_wwn_from_mac’ accessing 32 bytes in a region of size 6 [-Wstringop-overflow=] 3520 | qedf->wwnn = fcoe_wwn_from_mac(qedf->mac, 1, 0); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/scsi/qedf/qedf_main.c:3520:30: note: referencing argument 1 
of type ‘unsigned char *’ In file included from drivers/scsi/qedf/qedf.h:9, from drivers/scsi/qedf/qedf_main.c:23: ./include/scsi/libfcoe.h:252:5: note: in a call to function ‘fcoe_wwn_from_mac’ 252 | u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int); | ^~~~~~~~~~~~~~~~~ drivers/scsi/qedf/qedf_main.c:3521:30: warning: ‘fcoe_wwn_from_mac’ accessing 32 bytes in a region of size 6 [-Wstringop-overflow=] 3521 | qedf->wwpn = fcoe_wwn_from_mac(qedf->mac, 2, 0); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/scsi/qedf/qedf_main.c:3521:30: note: referencing argument 1 of type ‘unsigned char *’ In file included from drivers/scsi/qedf/qedf.h:9, from drivers/scsi/qedf/qedf_main.c:23: ./include/scsi/libfcoe.h:252:5: note: in a call to function ‘fcoe_wwn_from_mac’ 252 | u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int); | ^~~~~~~~~~~~~~~~~ by changing the array size to the correct value of ETH_ALEN in the argument declaration. Also, fix a couple of checkpatch warnings: WARNING: function definition argument 'unsigned int' should also have an identifier name This helps with the ongoing efforts to globally enable -Wstringop-overflow. Link: https://github.com/KSPP/linux/issues/181 Fixes: 85b4aa4926a5 ("[SCSI] fcoe: Fibre Channel over Ethernet") Signed-off-by: Gustavo A. R. Silva Signed-off-by: Sasha Levin --- drivers/scsi/fcoe/fcoe_ctlr.c | 2 +- include/scsi/libfcoe.h | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c index 5ea426effa60..bbc5d6b9be73 100644 --- a/drivers/scsi/fcoe/fcoe_ctlr.c +++ b/drivers/scsi/fcoe/fcoe_ctlr.c @@ -1969,7 +1969,7 @@ EXPORT_SYMBOL(fcoe_ctlr_recv_flogi); * * Returns: u64 fc world wide name */ -u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], +u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN], unsigned int scheme, unsigned int port) { u64 wwn; diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h index fac8e89aed81..310e0dbffda9 100644 --- a/include/scsi/libfcoe.h +++ b/include/scsi/libfcoe.h @@ -249,7 +249,8 @@ int fcoe_ctlr_recv_flogi(struct fcoe_ctlr *, struct fc_lport *, struct fc_frame *); /* libfcoe funcs */ -u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int); +u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN], unsigned int scheme, + unsigned int port); int fcoe_libfc_config(struct fc_lport *, struct fcoe_ctlr *, const struct libfc_function_template *, int init_fcp); u32 fcoe_fc_crc(struct fc_frame *fp); From 1052f22e127d0c34c3387bb389424ba1c61491ff Mon Sep 17 00:00:00 2001 From: Cristian Marussi Date: Wed, 30 Mar 2022 16:05:32 +0100 Subject: [PATCH 0598/1842] firmware: arm_scmi: Fix list protocols enumeration in the base protocol [ Upstream commit 8009120e0354a67068e920eb10dce532391361d0 ] While enumerating protocols implemented by the SCMI platform using BASE_DISCOVER_LIST_PROTOCOLS, the number of returned protocols is currently validated in an improper way since the check employs a sum between unsigned integers that could overflow and cause the check itself to be silently bypassed if the returned value 'loop_num_ret' is big enough. Fix the validation avoiding the addition. 
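The overflow described here is worth spelling out: with unsigned arithmetic, "tot + loop > MAX" can wrap around and pass even when "loop" is absurdly large, while "loop > MAX - tot" cannot wrap as long as "tot <= MAX" already holds. A small sketch with hypothetical names (DEMO_MAX_ENTRIES, demo_chunk_fits()):

#include <linux/types.h>

#define DEMO_MAX_ENTRIES        512U

/*
 * May @chunk more entries be accepted when @total entries are already
 * stored?  The caller guarantees total <= DEMO_MAX_ENTRIES.
 */
static bool demo_chunk_fits(u32 total, u32 chunk)
{
        /*
         * Unsafe form: "total + chunk > DEMO_MAX_ENTRIES"; a huge @chunk
         * reported by the firmware wraps the u32 addition and the check
         * silently passes.
         *
         * Safe form: keep the arithmetic on the side that cannot wrap.
         */
        return chunk <= DEMO_MAX_ENTRIES - total;
}

The corrected check in the hunk below has the same shape, inverted so the loop breaks out when the chunk does not fit.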
Link: https://lore.kernel.org/r/20220330150551.2573938-4-cristian.marussi@arm.com Fixes: b6f20ff8bd94 ("firmware: arm_scmi: add common infrastructure and support for base protocol") Signed-off-by: Cristian Marussi Signed-off-by: Sudeep Holla Signed-off-by: Sasha Levin --- drivers/firmware/arm_scmi/base.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c index 017e5d8bd869..e51d28ba2833 100644 --- a/drivers/firmware/arm_scmi/base.c +++ b/drivers/firmware/arm_scmi/base.c @@ -188,7 +188,7 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle, break; loop_num_ret = le32_to_cpu(*num_ret); - if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) { + if (loop_num_ret > MAX_PROTOCOLS_IMP - tot_num_ret) { dev_err(dev, "No. of Protocol > MAX_PROTOCOLS_IMP"); break; } From 641649f31e20df630310f5c22f26c071acc676d4 Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Tue, 26 Apr 2022 13:23:05 -0700 Subject: [PATCH 0599/1842] nvdimm: Fix firmware activation deadlock scenarios [ Upstream commit e6829d1bd3c4b58296ee9e412f7ed4d6cb390192 ] Lockdep reports the following deadlock scenarios for CXL root device power-management, device_prepare(), operations, and device_shutdown() operations for 'nd_region' devices: Chain exists of: &nvdimm_region_key --> &nvdimm_bus->reconfig_mutex --> system_transition_mutex Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(system_transition_mutex); lock(&nvdimm_bus->reconfig_mutex); lock(system_transition_mutex); lock(&nvdimm_region_key); Chain exists of: &cxl_nvdimm_bridge_key --> acpi_scan_lock --> &cxl_root_key Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&cxl_root_key); lock(acpi_scan_lock); lock(&cxl_root_key); lock(&cxl_nvdimm_bridge_key); These stem from holding nvdimm_bus_lock() over hibernate_quiet_exec() which walks the entire system device topology taking device_lock() along the way. The nvdimm_bus_lock() is protecting against unregistration, multiple simultaneous ops callers, and preventing activate_show() from racing activate_store(). For the first 2, the lock is redundant. Unregistration already flushes all ops users, and sysfs already prevents multiple threads to be active in an ops handler at the same time. For the last userspace should already be waiting for its last activate_store() to complete, and does not need activate_show() to flush the write side, so this lock usage can be deleted in these attributes. 
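The lockdep chains quoted above are the usual ABBA ordering problem: one path ends up wanting two locks in one order while another path wants them in the opposite order, and each can block while holding the lock the other needs. A reduced, hypothetical skeleton with stand-in locks (not the actual nvdimm/PM call chains):

#include <linux/mutex.h>

static DEFINE_MUTEX(demo_bus_lock);             /* stand-in for the nvdimm bus lock */
static DEFINE_MUTEX(demo_transition_lock);      /* stand-in for system_transition_mutex */

static void demo_sysfs_side(void)               /* takes A then B */
{
        mutex_lock(&demo_bus_lock);
        mutex_lock(&demo_transition_lock);      /* waits if the other side holds it */
        mutex_unlock(&demo_transition_lock);
        mutex_unlock(&demo_bus_lock);
}

static void demo_pm_side(void)                  /* takes B then A: ABBA */
{
        mutex_lock(&demo_transition_lock);
        mutex_lock(&demo_bus_lock);             /* waits if the other side holds it */
        mutex_unlock(&demo_bus_lock);
        mutex_unlock(&demo_transition_lock);
}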
Fixes: 48001ea50d17 ("PM, libnvdimm: Add runtime firmware activation support") Reviewed-by: Ira Weiny Link: https://lore.kernel.org/r/165074883800.4116052.10737040861825806582.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams Signed-off-by: Sasha Levin --- drivers/nvdimm/core.c | 9 --------- 1 file changed, 9 deletions(-) diff --git a/drivers/nvdimm/core.c b/drivers/nvdimm/core.c index c21ba0602029..1c92c883afdd 100644 --- a/drivers/nvdimm/core.c +++ b/drivers/nvdimm/core.c @@ -400,9 +400,7 @@ static ssize_t capability_show(struct device *dev, if (!nd_desc->fw_ops) return -EOPNOTSUPP; - nvdimm_bus_lock(dev); cap = nd_desc->fw_ops->capability(nd_desc); - nvdimm_bus_unlock(dev); switch (cap) { case NVDIMM_FWA_CAP_QUIESCE: @@ -427,10 +425,8 @@ static ssize_t activate_show(struct device *dev, if (!nd_desc->fw_ops) return -EOPNOTSUPP; - nvdimm_bus_lock(dev); cap = nd_desc->fw_ops->capability(nd_desc); state = nd_desc->fw_ops->activate_state(nd_desc); - nvdimm_bus_unlock(dev); if (cap < NVDIMM_FWA_CAP_QUIESCE) return -EOPNOTSUPP; @@ -475,7 +471,6 @@ static ssize_t activate_store(struct device *dev, else return -EINVAL; - nvdimm_bus_lock(dev); state = nd_desc->fw_ops->activate_state(nd_desc); switch (state) { @@ -493,7 +488,6 @@ static ssize_t activate_store(struct device *dev, default: rc = -ENXIO; } - nvdimm_bus_unlock(dev); if (rc == 0) rc = len; @@ -516,10 +510,7 @@ static umode_t nvdimm_bus_firmware_visible(struct kobject *kobj, struct attribut if (!nd_desc->fw_ops) return 0; - nvdimm_bus_lock(dev); cap = nd_desc->fw_ops->capability(nd_desc); - nvdimm_bus_unlock(dev); - if (cap < NVDIMM_FWA_CAP_QUIESCE) return 0; From 8a8b40d00753c95869e8d0ab275d752756dec16f Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Thu, 28 Apr 2022 15:47:46 -0700 Subject: [PATCH 0600/1842] nvdimm: Allow overwrite in the presence of disabled dimms [ Upstream commit bb7bf697fed58eae9d3445944e457ab0de4da54f ] It is not clear why the original implementation of overwrite support required the dimm driver to be active before overwrite could proceed. In fact that can lead to cases where the kernel retains an invalid cached copy of the labels from before the overwrite. Unfortunately the kernel has not only allowed that case, but enforced it. Going forward, allow for overwrite to happen while the label area is offline, and follow-on with updates to 'ndctl sanitize-dimm --overwrite' to trigger the label area invalidation by default. 
Cc: Vishal Verma Cc: Dave Jiang Cc: Ira Weiny Cc: Jeff Moyer Reported-by: Krzysztof Kensicki Fixes: 7d988097c546 ("acpi/nfit, libnvdimm/security: Add security DSM overwrite support") Signed-off-by: Dan Williams Signed-off-by: Sasha Levin --- drivers/nvdimm/security.c | 5 ----- 1 file changed, 5 deletions(-) diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c index 4b80150e4afa..b5aa55c61461 100644 --- a/drivers/nvdimm/security.c +++ b/drivers/nvdimm/security.c @@ -379,11 +379,6 @@ static int security_overwrite(struct nvdimm *nvdimm, unsigned int keyid) || !nvdimm->sec.flags) return -EOPNOTSUPP; - if (dev->driver == NULL) { - dev_dbg(dev, "Unable to overwrite while DIMM active.\n"); - return -EINVAL; - } - rc = check_security_state(nvdimm); if (rc) return rc; From 84958f066dec844aebbfd97d450b75614c73c8ba Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Fri, 22 Apr 2022 12:53:38 +0200 Subject: [PATCH 0601/1842] pinctrl: mvebu: Fix irq_of_parse_and_map() return value [ Upstream commit 71bc7cf3be65bab441e03667cf215c557712976c ] The irq_of_parse_and_map() returns 0 on failure, not a negative ERRNO. Fixes: 2f227605394b ("pinctrl: armada-37xx: Add irqchip support") Signed-off-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220422105339.78810-1-krzysztof.kozlowski@linaro.org Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin --- drivers/pinctrl/mvebu/pinctrl-armada-37xx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c index 5cb018f98800..85a0052bb0e6 100644 --- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c +++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c @@ -781,7 +781,7 @@ static int armada_37xx_irqchip_register(struct platform_device *pdev, for (i = 0; i < nr_irq_parent; i++) { int irq = irq_of_parse_and_map(np, i); - if (irq < 0) + if (!irq) continue; girq->parents[i] = irq; } From d8a5bdc767f17281da648555cdbd286f98fd98ee Mon Sep 17 00:00:00 2001 From: Miaohe Lin Date: Thu, 28 Apr 2022 23:16:06 -0700 Subject: [PATCH 0602/1842] drivers/base/node.c: fix compaction sysfs file leak [ Upstream commit da63dc84befaa9e6079a0bc363ff0eaa975f9073 ] Compaction sysfs file is created via compaction_register_node in register_node. But we forgot to remove it in unregister_node. Thus compaction sysfs file is leaked. Using compaction_unregister_node to fix this issue. Link: https://lkml.kernel.org/r/20220401070905.43679-1-linmiaohe@huawei.com Fixes: ed4a6d7f0676 ("mm: compaction: add /sys trigger for per-node memory compaction") Signed-off-by: Miaohe Lin Cc: Greg Kroah-Hartman Cc: Rafael J. 
Wysocki Cc: Mel Gorman Cc: Minchan Kim Cc: KAMEZAWA Hiroyuki Cc: KOSAKI Motohiro Signed-off-by: Andrew Morton Signed-off-by: Sasha Levin --- drivers/base/node.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/base/node.c b/drivers/base/node.c index 21965de8538b..5f745c906c33 100644 --- a/drivers/base/node.c +++ b/drivers/base/node.c @@ -655,6 +655,7 @@ static int register_node(struct node *node, int num) */ void unregister_node(struct node *node) { + compaction_unregister_node(node); hugetlb_unregister_node(node); /* no-op, if memoryless node */ node_remove_accesses(node); node_remove_caches(node); From c1219429179d861aef8e723596bd9e4d928e543a Mon Sep 17 00:00:00 2001 From: Muchun Song Date: Thu, 28 Apr 2022 23:16:09 -0700 Subject: [PATCH 0603/1842] dax: fix cache flush on PMD-mapped pages [ Upstream commit e583b5c472bd23d450e06f148dc1f37be74f7666 ] The flush_cache_page() only remove a PAGE_SIZE sized range from the cache. However, it does not cover the full pages in a THP except a head page. Replace it with flush_cache_range() to fix this issue. This is just a documentation issue with the respect to properly documenting the expected usage of cache flushing before modifying the pmd. However, in practice this is not a problem due to the fact that DAX is not available on architectures with virtually indexed caches per: commit d92576f1167c ("dax: does not work correctly with virtual aliasing caches") Link: https://lkml.kernel.org/r/20220403053957.10770-3-songmuchun@bytedance.com Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean") Signed-off-by: Muchun Song Reviewed-by: Dan Williams Reviewed-by: Christoph Hellwig Cc: Alistair Popple Cc: Al Viro Cc: Hugh Dickins Cc: Jan Kara Cc: "Kirill A. Shutemov" Cc: Matthew Wilcox Cc: Ralph Campbell Cc: Ross Zwisler Cc: Xiongchun Duan Cc: Xiyu Yang Cc: Yang Shi Signed-off-by: Andrew Morton Signed-off-by: Sasha Levin --- fs/dax.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/dax.c b/fs/dax.c index d5d7b9393bca..3e7e9a57fd28 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -846,7 +846,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index, if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp)) goto unlock_pmd; - flush_cache_page(vma, address, pfn); + flush_cache_range(vma, address, + address + HPAGE_PMD_SIZE); pmd = pmdp_invalidate(vma, address, pmdp); pmd = pmd_wrprotect(pmd); pmd = pmd_mkclean(pmd); From 49a5b1735cd9c26da29dd8cad79202421e62faba Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Thu, 28 Apr 2022 23:16:19 -0700 Subject: [PATCH 0604/1842] drivers/base/memory: fix an unlikely reference counting issue in __add_memory_block() [ Upstream commit f47f758cff59c68015d6b9b9c077110df7c2c828 ] __add_memory_block() calls both put_device() and device_unregister() when storing the memory block into the xarray. This is incorrect because xarray doesn't take an additional reference and device_unregister() already calls put_device(). Triggering the issue looks really unlikely and its only effect should be to log a spurious warning about a ref counted issue. Link: https://lkml.kernel.org/r/d44c63d78affe844f020dc02ad6af29abc448fc4.1650611702.git.christophe.jaillet@wanadoo.fr Fixes: 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup") Signed-off-by: Christophe JAILLET Acked-by: Michal Hocko Reviewed-by: David Hildenbrand Cc: Greg Kroah-Hartman Cc: "Rafael J. 
Wysocki" Cc: Scott Cheloha Cc: Nathan Lynch Signed-off-by: Andrew Morton Signed-off-by: Sasha Levin --- drivers/base/memory.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/base/memory.c b/drivers/base/memory.c index de058d15b33e..49eb14271f28 100644 --- a/drivers/base/memory.c +++ b/drivers/base/memory.c @@ -560,10 +560,9 @@ int register_memory(struct memory_block *memory) } ret = xa_err(xa_store(&memory_blocks, memory->dev.id, memory, GFP_KERNEL)); - if (ret) { - put_device(&memory->dev); + if (ret) device_unregister(&memory->dev); - } + return ret; } From f7c290eac8f2bff477b6812864923662b4e1b249 Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Thu, 21 Jan 2021 17:08:19 -0800 Subject: [PATCH 0605/1842] powerpc/8xx: export 'cpm_setbrg' for modules [ Upstream commit 22f8e625ebabd7ed3185b82b44b4f12fc0402113 ] Fix missing export for a loadable module build: ERROR: modpost: "cpm_setbrg" [drivers/tty/serial/cpm_uart/cpm_uart.ko] undefined! Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc") Signed-off-by: Randy Dunlap Reported-by: kernel test robot [chleroy: Changed Fixes: tag] Signed-off-by: Christophe Leroy Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20210122010819.30986-1-rdunlap@infradead.org Signed-off-by: Sasha Levin --- arch/powerpc/platforms/8xx/cpm1.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/powerpc/platforms/8xx/cpm1.c b/arch/powerpc/platforms/8xx/cpm1.c index c58b6f1c40e3..3ef5e9fd3a9b 100644 --- a/arch/powerpc/platforms/8xx/cpm1.c +++ b/arch/powerpc/platforms/8xx/cpm1.c @@ -280,6 +280,7 @@ cpm_setbrg(uint brg, uint rate) out_be32(bp, (((BRG_UART_CLK_DIV16 / rate) - 1) << 1) | CPM_BRG_EN | CPM_BRG_DIV16); } +EXPORT_SYMBOL(cpm_setbrg); struct cpm_ioport16 { __be16 dir, par, odr_sor, dat, intr; From f991879762392c19661af5b722578089a12b305f Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Fri, 29 Apr 2022 16:26:36 +0800 Subject: [PATCH 0606/1842] pinctrl: renesas: core: Fix possible null-ptr-deref in sh_pfc_map_resources() [ Upstream commit 5376e3d904532e657fd7ca1a9b1ff3d351527b90 ] It will cause null-ptr-deref when using 'res', if platform_get_resource() returns NULL, so move using 'res' after devm_ioremap_resource() that will check it to avoid null-ptr-deref. And use devm_platform_get_and_ioremap_resource() to simplify code. Fixes: c7977ec4a336 ("pinctrl: sh-pfc: Convert to platform_get_*()") Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220429082637.1308182-1-yangyingliang@huawei.com Signed-off-by: Geert Uytterhoeven Signed-off-by: Sasha Levin --- drivers/pinctrl/renesas/core.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/pinctrl/renesas/core.c b/drivers/pinctrl/renesas/core.c index 258972672eda..54f1a7334027 100644 --- a/drivers/pinctrl/renesas/core.c +++ b/drivers/pinctrl/renesas/core.c @@ -71,12 +71,11 @@ static int sh_pfc_map_resources(struct sh_pfc *pfc, /* Fill them. 
*/ for (i = 0; i < num_windows; i++) { - res = platform_get_resource(pdev, IORESOURCE_MEM, i); - windows->phys = res->start; - windows->size = resource_size(res); - windows->virt = devm_ioremap_resource(pfc->dev, res); + windows->virt = devm_platform_get_and_ioremap_resource(pdev, i, &res); if (IS_ERR(windows->virt)) return -ENOMEM; + windows->phys = res->start; + windows->size = resource_size(res); windows++; } for (i = 0; i < num_irqs; i++) From de5bc923186c94da8a5cf25c8e559dc1c34d93f2 Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Mon, 2 May 2022 12:29:25 -0700 Subject: [PATCH 0607/1842] powerpc/idle: Fix return value of __setup() handler [ Upstream commit b793a01000122d2bd133ba451a76cc135b5e162c ] __setup() handlers should return 1 to obsolete_checksetup() in init/main.c to indicate that the boot option has been handled. A return of 0 causes the boot option/value to be listed as an Unknown kernel parameter and added to init's (limited) argument or environment strings. Also, error return codes don't mean anything to obsolete_checksetup() -- only non-zero (usually 1) or zero. So return 1 from powersave_off(). Fixes: 302eca184fb8 ("[POWERPC] cell: use ppc_md->power_save instead of cbe_idle_loop") Reported-by: Igor Zhbanov Signed-off-by: Randy Dunlap Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220502192925.19954-1-rdunlap@infradead.org Signed-off-by: Sasha Levin --- arch/powerpc/kernel/idle.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c index 1f835539fda4..f0271daa8f6a 100644 --- a/arch/powerpc/kernel/idle.c +++ b/arch/powerpc/kernel/idle.c @@ -37,7 +37,7 @@ static int __init powersave_off(char *arg) { ppc_md.power_save = NULL; cpuidle_disable = IDLE_POWERSAVE_OFF; - return 0; + return 1; } __setup("powersave=off", powersave_off); From 456105105e78a918111d109702f679a33a01451b Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Mon, 2 May 2022 12:29:41 -0700 Subject: [PATCH 0608/1842] powerpc/4xx/cpm: Fix return value of __setup() handler [ Upstream commit 5bb99fd4090fe1acfdb90a97993fcda7f8f5a3d6 ] __setup() handlers should return 1 to obsolete_checksetup() in init/main.c to indicate that the boot option has been handled. A return of 0 causes the boot option/value to be listed as an Unknown kernel parameter and added to init's (limited) argument or environment strings. Also, error return codes don't mean anything to obsolete_checksetup() -- only non-zero (usually 1) or zero. So return 1 from cpm_powersave_off(). 
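[ For reference, a minimal sketch of the convention these __setup() fixes follow; the option name and handler below are made up for illustration. ]

	static int __init example_powersave_off(char *arg)
	{
		/* consume the boot option here */
		return 1;	/* 1 == handled; 0 would list it as an unknown kernel parameter */
	}
	__setup("examplepowersave=off", example_powersave_off);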
Fixes: d164f6d4f910 ("powerpc/4xx: Add suspend and idle support") Reported-by: Igor Zhbanov Signed-off-by: Randy Dunlap Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220502192941.20955-1-rdunlap@infradead.org Signed-off-by: Sasha Levin --- arch/powerpc/platforms/4xx/cpm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/platforms/4xx/cpm.c b/arch/powerpc/platforms/4xx/cpm.c index ae8b812c9202..2481e78c0423 100644 --- a/arch/powerpc/platforms/4xx/cpm.c +++ b/arch/powerpc/platforms/4xx/cpm.c @@ -327,6 +327,6 @@ late_initcall(cpm_init); static int __init cpm_powersave_off(char *arg) { cpm.powersave_off = 1; - return 0; + return 1; } __setup("powersave=off", cpm_powersave_off); From b716e4168df9785f22e40dd331e81d48f903df0c Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Wed, 4 May 2022 18:08:29 +0100 Subject: [PATCH 0609/1842] ASoC: atmel-pdmic: Remove endianness flag on pdmic component [ Upstream commit 52857c3baa0e5ddeba7b2c84e56bb71c9674e048 ] The endianness flag should have been removed when the driver was ported across from having both a CODEC and CPU side component, to just having a CPU component and using the dummy for the CODEC. The endianness flag is used to indicate that the device is completely ambivalent to the endianness of the data, typically due to the endianness being lost over the hardware link (ie. the link defines bit ordering). It's usage didn't have any effect when the driver had both a CPU and CODEC component, since the union of those equals the CPU side settings, but now causes the driver to falsely report it supports big endian. Correct this by removing the flag. Fixes: f3c668074a04 ("ASoC: atmel-pdmic: remove codec component") Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220504170905.332415-3-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/atmel/atmel-pdmic.c | 1 - 1 file changed, 1 deletion(-) diff --git a/sound/soc/atmel/atmel-pdmic.c b/sound/soc/atmel/atmel-pdmic.c index 8e1d8230b180..049383e5405e 100644 --- a/sound/soc/atmel/atmel-pdmic.c +++ b/sound/soc/atmel/atmel-pdmic.c @@ -481,7 +481,6 @@ static const struct snd_soc_component_driver atmel_pdmic_cpu_dai_component = { .num_controls = ARRAY_SIZE(atmel_pdmic_snd_controls), .idle_bias_on = 1, .use_pmdown_time = 1, - .endianness = 1, }; /* ASoC sound card */ From ab0c26e441398ff146b8ac9b84848c2b26ab1b7e Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Wed, 4 May 2022 18:08:30 +0100 Subject: [PATCH 0610/1842] ASoC: atmel-classd: Remove endianness flag on class d component [ Upstream commit 0104d52a6a69b06b0e8167f7c1247e8c76aca070 ] The endianness flag should have been removed when the driver was ported across from having both a CODEC and CPU side component, to just having a CPU component and using the dummy for the CODEC. The endianness flag is used to indicate that the device is completely ambivalent to the endianness of the data, typically due to the endianness being lost over the hardware link (ie. the link defines bit ordering). It's usage didn't have any effect when the driver had both a CPU and CODEC component, since the union of those equals the CPU side settings, but now causes the driver to falsely report it supports big endian. Correct this by removing the flag. 
Fixes: 1dfdbe73ccf9 ("ASoC: atmel-classd: remove codec component") Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220504170905.332415-4-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/atmel/atmel-classd.c | 1 - 1 file changed, 1 deletion(-) diff --git a/sound/soc/atmel/atmel-classd.c b/sound/soc/atmel/atmel-classd.c index b1a28a9382fb..f91a0e728ed5 100644 --- a/sound/soc/atmel/atmel-classd.c +++ b/sound/soc/atmel/atmel-classd.c @@ -458,7 +458,6 @@ static const struct snd_soc_component_driver atmel_classd_cpu_dai_component = { .num_controls = ARRAY_SIZE(atmel_classd_snd_controls), .idle_bias_on = 1, .use_pmdown_time = 1, - .endianness = 1, }; /* ASoC sound card */ From f061ddfed9a70155c17d8e80737b9224b3b96ffd Mon Sep 17 00:00:00 2001 From: Alexey Dobriyan Date: Mon, 9 May 2022 18:29:19 -0700 Subject: [PATCH 0611/1842] proc: fix dentry/inode overinstantiating under /proc/${pid}/net [ Upstream commit 7055197705709c59b8ab77e6a5c7d46d61edd96e ] When a process exits, /proc/${pid}, and /proc/${pid}/net dentries are flushed. However some leaf dentries like /proc/${pid}/net/arp_cache aren't. That's because respective PDEs have proc_misc_d_revalidate() hook which returns 1 and leaves dentries/inodes in the LRU. Force revalidation/lookup on everything under /proc/${pid}/net by inheriting proc_net_dentry_ops. [akpm@linux-foundation.org: coding-style cleanups] Link: https://lkml.kernel.org/r/YjdVHgildbWO7diJ@localhost.localdomain Fixes: c6c75deda813 ("proc: fix lookup in /proc/net subdirectories after setns(2)") Signed-off-by: Alexey Dobriyan Reported-by: hui li Cc: Al Viro Signed-off-by: Andrew Morton Signed-off-by: Sasha Levin --- fs/proc/generic.c | 3 +++ fs/proc/proc_net.c | 3 +++ 2 files changed, 6 insertions(+) diff --git a/fs/proc/generic.c b/fs/proc/generic.c index 09e4d8a499a3..5898761698c2 100644 --- a/fs/proc/generic.c +++ b/fs/proc/generic.c @@ -453,6 +453,9 @@ static struct proc_dir_entry *__proc_create(struct proc_dir_entry **parent, proc_set_user(ent, (*parent)->uid, (*parent)->gid); ent->proc_dops = &proc_misc_dentry_ops; + /* Revalidate everything under /proc/${pid}/net */ + if ((*parent)->proc_dops == &proc_net_dentry_ops) + pde_force_lookup(ent); out: return ent; diff --git a/fs/proc/proc_net.c b/fs/proc/proc_net.c index 1aa9236bf1af..707477e27f83 100644 --- a/fs/proc/proc_net.c +++ b/fs/proc/proc_net.c @@ -362,6 +362,9 @@ static __net_init int proc_net_ns_init(struct net *net) proc_set_user(netd, uid, gid); + /* Seed dentry revalidation for /proc/${pid}/net */ + pde_force_lookup(netd); + err = -EEXIST; net_statd = proc_net_mkdir(net, "stat", netd); if (!net_statd) From 2a9d3b51185b3f04253e5474f3178aee7ff948fd Mon Sep 17 00:00:00 2001 From: Waiman Long Date: Mon, 9 May 2022 18:29:21 -0700 Subject: [PATCH 0612/1842] ipc/mqueue: use get_tree_nodev() in mqueue_get_tree() [ Upstream commit d60c4d01a98bc1942dba6e3adc02031f5519f94b ] When running the stress-ng clone benchmark with multiple testing threads, it was found that there were significant spinlock contention in sget_fc(). The contended spinlock was the sb_lock. 
It is under heavy contention because the following code in the critcal section of sget_fc(): hlist_for_each_entry(old, &fc->fs_type->fs_supers, s_instances) { if (test(old, fc)) goto share_extant_sb; } After testing with added instrumentation code, it was found that the benchmark could generate thousands of ipc namespaces with the corresponding number of entries in the mqueue's fs_supers list where the namespaces are the key for the search. This leads to excessive time in scanning the list for a match. Looking back at the mqueue calling sequence leading to sget_fc(): mq_init_ns() => mq_create_mount() => fc_mount() => vfs_get_tree() => mqueue_get_tree() => get_tree_keyed() => vfs_get_super() => sget_fc() Currently, mq_init_ns() is the only mqueue function that will indirectly call mqueue_get_tree() with a newly allocated ipc namespace as the key for searching. As a result, there will never be a match with the exising ipc namespaces stored in the mqueue's fs_supers list. So using get_tree_keyed() to do an existing ipc namespace search is just a waste of time. Instead, we could use get_tree_nodev() to eliminate the useless search. By doing so, we can greatly reduce the sb_lock hold time and avoid the spinlock contention problem in case a large number of ipc namespaces are present. Of course, if the code is modified in the future to allow mqueue_get_tree() to be called with an existing ipc namespace instead of a new one, we will have to use get_tree_keyed() in this case. The following stress-ng clone benchmark command was run on a 2-socket 48-core Intel system: ./stress-ng --clone 32 --verbose --oomable --metrics-brief -t 20 The "bogo ops/s" increased from 5948.45 before patch to 9137.06 after patch. This is an increase of 54% in performance. Link: https://lkml.kernel.org/r/20220121172315.19652-1-longman@redhat.com Fixes: 935c6912b198 ("ipc: Convert mqueue fs to fs_context") Signed-off-by: Waiman Long Cc: Al Viro Cc: David Howells Cc: Manfred Spraul Cc: Davidlohr Bueso Signed-off-by: Andrew Morton Signed-off-by: Sasha Levin --- ipc/mqueue.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/ipc/mqueue.c b/ipc/mqueue.c index 05d2176cc471..86969de17084 100644 --- a/ipc/mqueue.c +++ b/ipc/mqueue.c @@ -45,6 +45,7 @@ struct mqueue_fs_context { struct ipc_namespace *ipc_ns; + bool newns; /* Set if newly created ipc namespace */ }; #define MQUEUE_MAGIC 0x19800202 @@ -424,6 +425,14 @@ static int mqueue_get_tree(struct fs_context *fc) { struct mqueue_fs_context *ctx = fc->fs_private; + /* + * With a newly created ipc namespace, we don't need to do a search + * for an ipc namespace match, but we still need to set s_fs_info. + */ + if (ctx->newns) { + fc->s_fs_info = ctx->ipc_ns; + return get_tree_nodev(fc, mqueue_fill_super); + } return get_tree_keyed(fc, mqueue_fill_super, ctx->ipc_ns); } @@ -451,6 +460,10 @@ static int mqueue_init_fs_context(struct fs_context *fc) return 0; } +/* + * mq_init_ns() is currently the only caller of mq_create_mount(). + * So the ns parameter is always a newly created ipc namespace. 
+ */ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns) { struct mqueue_fs_context *ctx; @@ -462,6 +475,7 @@ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns) return ERR_CAST(fc); ctx = fc->fs_private; + ctx->newns = true; put_ipc_ns(ctx->ipc_ns); ctx->ipc_ns = get_ipc_ns(ns); put_user_ns(fc->user_ns); From a21d4dab776a606bde30d6ab330305b66c8d559b Mon Sep 17 00:00:00 2001 From: Francesco Dolcini Date: Mon, 4 Apr 2022 10:15:09 +0200 Subject: [PATCH 0613/1842] PCI: imx6: Fix PERST# start-up sequence [ Upstream commit a6809941c1f17f455db2cf4ca19c6d8c8746ec25 ] According to the PCIe standard the PERST# signal (reset-gpio in fsl,imx* compatible dts) should be kept asserted for at least 100 usec before the PCIe refclock is stable, should be kept asserted for at least 100 msec after the power rails are stable and the host should wait at least 100 msec after it is de-asserted before accessing the configuration space of any attached device. From PCIe CEM r2.0, sec 2.6.2 T-PVPERL: Power stable to PERST# inactive - 100 msec T-PERST-CLK: REFCLK stable before PERST# inactive - 100 usec. From PCIe r5.0, sec 6.6.1 With a Downstream Port that does not support Link speeds greater than 5.0 GT/s, software must wait a minimum of 100 ms before sending a Configuration Request to the device immediately below that Port. Failure to do so could prevent PCIe devices to be working correctly, and this was experienced with real devices. Move reset assert to imx6_pcie_assert_core_reset(), this way we ensure that PERST# is asserted before enabling any clock, move de-assert to the end of imx6_pcie_deassert_core_reset() after the clock is enabled and deemed stable and add a new delay of 100 msec just afterward. Link: https://lore.kernel.org/all/20220211152550.286821-1-francesco.dolcini@toradex.com Link: https://lore.kernel.org/r/20220404081509.94356-1-francesco.dolcini@toradex.com Fixes: bb38919ec56e ("PCI: imx6: Add support for i.MX6 PCIe controller") Signed-off-by: Francesco Dolcini Signed-off-by: Lorenzo Pieralisi Reviewed-by: Lucas Stach Acked-by: Richard Zhu Signed-off-by: Sasha Levin --- drivers/pci/controller/dwc/pci-imx6.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c index 5cf1ef12fb9b..ceb4815379cd 100644 --- a/drivers/pci/controller/dwc/pci-imx6.c +++ b/drivers/pci/controller/dwc/pci-imx6.c @@ -401,6 +401,11 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie) dev_err(dev, "failed to disable vpcie regulator: %d\n", ret); } + + /* Some boards don't have PCIe reset GPIO. */ + if (gpio_is_valid(imx6_pcie->reset_gpio)) + gpio_set_value_cansleep(imx6_pcie->reset_gpio, + imx6_pcie->gpio_active_high); } static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie) @@ -523,15 +528,6 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie) /* allow the clocks to stabilize */ usleep_range(200, 500); - /* Some boards don't have PCIe reset GPIO. */ - if (gpio_is_valid(imx6_pcie->reset_gpio)) { - gpio_set_value_cansleep(imx6_pcie->reset_gpio, - imx6_pcie->gpio_active_high); - msleep(100); - gpio_set_value_cansleep(imx6_pcie->reset_gpio, - !imx6_pcie->gpio_active_high); - } - switch (imx6_pcie->drvdata->variant) { case IMX8MQ: reset_control_deassert(imx6_pcie->pciephy_reset); @@ -574,6 +570,15 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie) break; } + /* Some boards don't have PCIe reset GPIO. 
*/ + if (gpio_is_valid(imx6_pcie->reset_gpio)) { + msleep(100); + gpio_set_value_cansleep(imx6_pcie->reset_gpio, + !imx6_pcie->gpio_active_high); + /* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */ + msleep(100); + } + return; err_ref_clk: From 9834b13e8b962caa28fbcf1f422dd82413da4ede Mon Sep 17 00:00:00 2001 From: Qi Zheng Date: Thu, 12 May 2022 20:38:37 -0700 Subject: [PATCH 0614/1842] tty: fix deadlock caused by calling printk() under tty_port->lock [ Upstream commit 6b9dbedbe3499fef862c4dff5217cf91f34e43b3 ] pty_write() invokes kmalloc() which may invoke a normal printk() to print failure message. This can cause a deadlock in the scenario reported by syz-bot below: CPU0 CPU1 CPU2 ---- ---- ---- lock(console_owner); lock(&port_lock_key); lock(&port->lock); lock(&port_lock_key); lock(&port->lock); lock(console_owner); As commit dbdda842fe96 ("printk: Add console owner and waiter logic to load balance console writes") said, such deadlock can be prevented by using printk_deferred() in kmalloc() (which is invoked in the section guarded by the port->lock). But there are too many printk() on the kmalloc() path, and kmalloc() can be called from anywhere, so changing printk() to printk_deferred() is too complicated and inelegant. Therefore, this patch chooses to specify __GFP_NOWARN to kmalloc(), so that printk() will not be called, and this deadlock problem can be avoided. Syzbot reported the following lockdep error: ====================================================== WARNING: possible circular locking dependency detected 5.4.143-00237-g08ccc19a-dirty #10 Not tainted ------------------------------------------------------ syz-executor.4/29420 is trying to acquire lock: ffffffff8aedb2a0 (console_owner){....}-{0:0}, at: console_trylock_spinning kernel/printk/printk.c:1752 [inline] ffffffff8aedb2a0 (console_owner){....}-{0:0}, at: vprintk_emit+0x2ca/0x470 kernel/printk/printk.c:2023 but task is already holding lock: ffff8880119c9158 (&port->lock){-.-.}-{2:2}, at: pty_write+0xf4/0x1f0 drivers/tty/pty.c:120 which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #2 (&port->lock){-.-.}-{2:2}: __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline] _raw_spin_lock_irqsave+0x35/0x50 kernel/locking/spinlock.c:159 tty_port_tty_get drivers/tty/tty_port.c:288 [inline] <-- lock(&port->lock); tty_port_default_wakeup+0x1d/0xb0 drivers/tty/tty_port.c:47 serial8250_tx_chars+0x530/0xa80 drivers/tty/serial/8250/8250_port.c:1767 serial8250_handle_irq.part.0+0x31f/0x3d0 drivers/tty/serial/8250/8250_port.c:1854 serial8250_handle_irq drivers/tty/serial/8250/8250_port.c:1827 [inline] <-- lock(&port_lock_key); serial8250_default_handle_irq+0xb2/0x220 drivers/tty/serial/8250/8250_port.c:1870 serial8250_interrupt+0xfd/0x200 drivers/tty/serial/8250/8250_core.c:126 __handle_irq_event_percpu+0x109/0xa50 kernel/irq/handle.c:156 [...] 
-> #1 (&port_lock_key){-.-.}-{2:2}: __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline] _raw_spin_lock_irqsave+0x35/0x50 kernel/locking/spinlock.c:159 serial8250_console_write+0x184/0xa40 drivers/tty/serial/8250/8250_port.c:3198 <-- lock(&port_lock_key); call_console_drivers kernel/printk/printk.c:1819 [inline] console_unlock+0x8cb/0xd00 kernel/printk/printk.c:2504 vprintk_emit+0x1b5/0x470 kernel/printk/printk.c:2024 <-- lock(console_owner); vprintk_func+0x8d/0x250 kernel/printk/printk_safe.c:394 printk+0xba/0xed kernel/printk/printk.c:2084 register_console+0x8b3/0xc10 kernel/printk/printk.c:2829 univ8250_console_init+0x3a/0x46 drivers/tty/serial/8250/8250_core.c:681 console_init+0x49d/0x6d3 kernel/printk/printk.c:2915 start_kernel+0x5e9/0x879 init/main.c:713 secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:241 -> #0 (console_owner){....}-{0:0}: [...] lock_acquire+0x127/0x340 kernel/locking/lockdep.c:4734 console_trylock_spinning kernel/printk/printk.c:1773 [inline] <-- lock(console_owner); vprintk_emit+0x307/0x470 kernel/printk/printk.c:2023 vprintk_func+0x8d/0x250 kernel/printk/printk_safe.c:394 printk+0xba/0xed kernel/printk/printk.c:2084 fail_dump lib/fault-inject.c:45 [inline] should_fail+0x67b/0x7c0 lib/fault-inject.c:144 __should_failslab+0x152/0x1c0 mm/failslab.c:33 should_failslab+0x5/0x10 mm/slab_common.c:1224 slab_pre_alloc_hook mm/slab.h:468 [inline] slab_alloc_node mm/slub.c:2723 [inline] slab_alloc mm/slub.c:2807 [inline] __kmalloc+0x72/0x300 mm/slub.c:3871 kmalloc include/linux/slab.h:582 [inline] tty_buffer_alloc+0x23f/0x2a0 drivers/tty/tty_buffer.c:175 __tty_buffer_request_room+0x156/0x2a0 drivers/tty/tty_buffer.c:273 tty_insert_flip_string_fixed_flag+0x93/0x250 drivers/tty/tty_buffer.c:318 tty_insert_flip_string include/linux/tty_flip.h:37 [inline] pty_write+0x126/0x1f0 drivers/tty/pty.c:122 <-- lock(&port->lock); n_tty_write+0xa7a/0xfc0 drivers/tty/n_tty.c:2356 do_tty_write drivers/tty/tty_io.c:961 [inline] tty_write+0x512/0x930 drivers/tty/tty_io.c:1045 __vfs_write+0x76/0x100 fs/read_write.c:494 [...] other info that might help us debug this: Chain exists of: console_owner --> &port_lock_key --> &port->lock Link: https://lkml.kernel.org/r/20220511061951.1114-2-zhengqi.arch@bytedance.com Link: https://lkml.kernel.org/r/20220510113809.80626-2-zhengqi.arch@bytedance.com Fixes: b6da31b2c07c ("tty: Fix data race in tty_insert_flip_string_fixed_flag") Signed-off-by: Qi Zheng Acked-by: Jiri Slaby Acked-by: Greg Kroah-Hartman Cc: Akinobu Mita Cc: Vlastimil Babka Cc: Steven Rostedt (Google) Signed-off-by: Andrew Morton Signed-off-by: Sasha Levin --- drivers/tty/tty_buffer.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c index 0fc473321d3e..6c4a50addadd 100644 --- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -172,7 +172,8 @@ static struct tty_buffer *tty_buffer_alloc(struct tty_port *port, size_t size) have queued and recycle that ? 
*/ if (atomic_read(&port->buf.mem_used) > port->buf.mem_limit) return NULL; - p = kmalloc(sizeof(struct tty_buffer) + 2 * size, GFP_ATOMIC); + p = kmalloc(sizeof(struct tty_buffer) + 2 * size, + GFP_ATOMIC | __GFP_NOWARN); if (p == NULL) return NULL; From 5bea8f700a698e7eb81af912de6f4bd36a791c36 Mon Sep 17 00:00:00 2001 From: Corentin Labbe Date: Mon, 2 May 2022 20:19:14 +0000 Subject: [PATCH 0615/1842] crypto: sun8i-ss - rework handling of IV [ Upstream commit 359e893e8af456be2fefabe851716237df289cbf ] sun8i-ss fail handling IVs when doing decryption of multiple SGs in-place. It should backup the last block of each SG source for using it later as IVs. In the same time remove allocation on requests path for storing all IVs. Fixes: f08fcced6d00 ("crypto: allwinner - Add sun8i-ss cryptographic offloader") Signed-off-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin --- .../allwinner/sun8i-ss/sun8i-ss-cipher.c | 115 ++++++++++++------ .../crypto/allwinner/sun8i-ss/sun8i-ss-core.c | 30 +++-- drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h | 14 ++- 3 files changed, 107 insertions(+), 52 deletions(-) diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c index f783748462f9..7b3be3dc2210 100644 --- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c +++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-cipher.c @@ -93,6 +93,68 @@ static int sun8i_ss_cipher_fallback(struct skcipher_request *areq) return err; } +static int sun8i_ss_setup_ivs(struct skcipher_request *areq) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq); + struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm); + struct sun8i_ss_dev *ss = op->ss; + struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq); + struct scatterlist *sg = areq->src; + unsigned int todo, offset; + unsigned int len = areq->cryptlen; + unsigned int ivsize = crypto_skcipher_ivsize(tfm); + struct sun8i_ss_flow *sf = &ss->flows[rctx->flow]; + int i = 0; + u32 a; + int err; + + rctx->ivlen = ivsize; + if (rctx->op_dir & SS_DECRYPTION) { + offset = areq->cryptlen - ivsize; + scatterwalk_map_and_copy(sf->biv, areq->src, offset, + ivsize, 0); + } + + /* we need to copy all IVs from source in case DMA is bi-directionnal */ + while (sg && len) { + if (sg_dma_len(sg) == 0) { + sg = sg_next(sg); + continue; + } + if (i == 0) + memcpy(sf->iv[0], areq->iv, ivsize); + a = dma_map_single(ss->dev, sf->iv[i], ivsize, DMA_TO_DEVICE); + if (dma_mapping_error(ss->dev, a)) { + memzero_explicit(sf->iv[i], ivsize); + dev_err(ss->dev, "Cannot DMA MAP IV\n"); + err = -EFAULT; + goto dma_iv_error; + } + rctx->p_iv[i] = a; + /* we need to setup all others IVs only in the decrypt way */ + if (rctx->op_dir & SS_ENCRYPTION) + return 0; + todo = min(len, sg_dma_len(sg)); + len -= todo; + i++; + if (i < MAX_SG) { + offset = sg->length - ivsize; + scatterwalk_map_and_copy(sf->iv[i], sg, offset, ivsize, 0); + } + rctx->niv = i; + sg = sg_next(sg); + } + + return 0; +dma_iv_error: + i--; + while (i >= 0) { + dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE); + memzero_explicit(sf->iv[i], ivsize); + } + return err; +} + static int sun8i_ss_cipher(struct skcipher_request *areq) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(areq); @@ -101,9 +163,9 @@ static int sun8i_ss_cipher(struct skcipher_request *areq) struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(areq); struct skcipher_alg *alg = crypto_skcipher_alg(tfm); struct sun8i_ss_alg_template *algt; + struct 
sun8i_ss_flow *sf = &ss->flows[rctx->flow]; struct scatterlist *sg; unsigned int todo, len, offset, ivsize; - void *backup_iv = NULL; int nr_sgs = 0; int nr_sgd = 0; int err = 0; @@ -134,30 +196,9 @@ static int sun8i_ss_cipher(struct skcipher_request *areq) ivsize = crypto_skcipher_ivsize(tfm); if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) { - rctx->ivlen = ivsize; - rctx->biv = kzalloc(ivsize, GFP_KERNEL | GFP_DMA); - if (!rctx->biv) { - err = -ENOMEM; + err = sun8i_ss_setup_ivs(areq); + if (err) goto theend_key; - } - if (rctx->op_dir & SS_DECRYPTION) { - backup_iv = kzalloc(ivsize, GFP_KERNEL); - if (!backup_iv) { - err = -ENOMEM; - goto theend_key; - } - offset = areq->cryptlen - ivsize; - scatterwalk_map_and_copy(backup_iv, areq->src, offset, - ivsize, 0); - } - memcpy(rctx->biv, areq->iv, ivsize); - rctx->p_iv = dma_map_single(ss->dev, rctx->biv, rctx->ivlen, - DMA_TO_DEVICE); - if (dma_mapping_error(ss->dev, rctx->p_iv)) { - dev_err(ss->dev, "Cannot DMA MAP IV\n"); - err = -ENOMEM; - goto theend_iv; - } } if (areq->src == areq->dst) { nr_sgs = dma_map_sg(ss->dev, areq->src, sg_nents(areq->src), @@ -240,21 +281,19 @@ static int sun8i_ss_cipher(struct skcipher_request *areq) } theend_iv: - if (rctx->p_iv) - dma_unmap_single(ss->dev, rctx->p_iv, rctx->ivlen, - DMA_TO_DEVICE); - if (areq->iv && ivsize > 0) { - if (rctx->biv) { - offset = areq->cryptlen - ivsize; - if (rctx->op_dir & SS_DECRYPTION) { - memcpy(areq->iv, backup_iv, ivsize); - kfree_sensitive(backup_iv); - } else { - scatterwalk_map_and_copy(areq->iv, areq->dst, offset, - ivsize, 0); - } - kfree(rctx->biv); + for (i = 0; i < rctx->niv; i++) { + dma_unmap_single(ss->dev, rctx->p_iv[i], ivsize, DMA_TO_DEVICE); + memzero_explicit(sf->iv[i], ivsize); + } + + offset = areq->cryptlen - ivsize; + if (rctx->op_dir & SS_DECRYPTION) { + memcpy(areq->iv, sf->biv, ivsize); + memzero_explicit(sf->biv, ivsize); + } else { + scatterwalk_map_and_copy(areq->iv, areq->dst, offset, + ivsize, 0); } } diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c index 319fe3279a71..657530578643 100644 --- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c +++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c @@ -66,6 +66,7 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx const char *name) { int flow = rctx->flow; + unsigned int ivlen = rctx->ivlen; u32 v = SS_START; int i; @@ -104,15 +105,14 @@ int sun8i_ss_run_task(struct sun8i_ss_dev *ss, struct sun8i_cipher_req_ctx *rctx mutex_lock(&ss->mlock); writel(rctx->p_key, ss->base + SS_KEY_ADR_REG); - if (i == 0) { - if (rctx->p_iv) - writel(rctx->p_iv, ss->base + SS_IV_ADR_REG); - } else { - if (rctx->biv) { - if (rctx->op_dir == SS_ENCRYPTION) - writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - rctx->ivlen, ss->base + SS_IV_ADR_REG); + if (ivlen) { + if (rctx->op_dir == SS_ENCRYPTION) { + if (i == 0) + writel(rctx->p_iv[0], ss->base + SS_IV_ADR_REG); else - writel(rctx->t_src[i - 1].addr + rctx->t_src[i - 1].len * 4 - rctx->ivlen, ss->base + SS_IV_ADR_REG); + writel(rctx->t_dst[i - 1].addr + rctx->t_dst[i - 1].len * 4 - ivlen, ss->base + SS_IV_ADR_REG); + } else { + writel(rctx->p_iv[i], ss->base + SS_IV_ADR_REG); } } @@ -464,7 +464,7 @@ static void sun8i_ss_free_flows(struct sun8i_ss_dev *ss, int i) */ static int allocate_flows(struct sun8i_ss_dev *ss) { - int i, err; + int i, j, err; ss->flows = devm_kcalloc(ss->dev, MAXFLOW, sizeof(struct sun8i_ss_flow), GFP_KERNEL); @@ -474,6 +474,18 @@ 
static int allocate_flows(struct sun8i_ss_dev *ss) for (i = 0; i < MAXFLOW; i++) { init_completion(&ss->flows[i].complete); + ss->flows[i].biv = devm_kmalloc(ss->dev, AES_BLOCK_SIZE, + GFP_KERNEL | GFP_DMA); + if (!ss->flows[i].biv) + goto error_engine; + + for (j = 0; j < MAX_SG; j++) { + ss->flows[i].iv[j] = devm_kmalloc(ss->dev, AES_BLOCK_SIZE, + GFP_KERNEL | GFP_DMA); + if (!ss->flows[i].iv[j]) + goto error_engine; + } + ss->flows[i].engine = crypto_engine_alloc_init(ss->dev, true); if (!ss->flows[i].engine) { dev_err(ss->dev, "Cannot allocate engine\n"); diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h index 1a66457f4a20..49147195ecf6 100644 --- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h +++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss.h @@ -120,11 +120,15 @@ struct sginfo { * @complete: completion for the current task on this flow * @status: set to 1 by interrupt if task is done * @stat_req: number of request done by this flow + * @iv: list of IV to use for each step + * @biv: buffer which contain the backuped IV */ struct sun8i_ss_flow { struct crypto_engine *engine; struct completion complete; int status; + u8 *iv[MAX_SG]; + u8 *biv; #ifdef CONFIG_CRYPTO_DEV_SUN8I_SS_DEBUG unsigned long stat_req; #endif @@ -163,28 +167,28 @@ struct sun8i_ss_dev { * @t_src: list of mapped SGs with their size * @t_dst: list of mapped SGs with their size * @p_key: DMA address of the key - * @p_iv: DMA address of the IV + * @p_iv: DMA address of the IVs + * @niv: Number of IVs DMA mapped * @method: current algorithm for this request * @op_mode: op_mode for this request * @op_dir: direction (encrypt vs decrypt) for this request * @flow: the flow to use for this request - * @ivlen: size of biv + * @ivlen: size of IVs * @keylen: keylen for this request - * @biv: buffer which contain the IV * @fallback_req: request struct for invoking the fallback skcipher TFM */ struct sun8i_cipher_req_ctx { struct sginfo t_src[MAX_SG]; struct sginfo t_dst[MAX_SG]; u32 p_key; - u32 p_iv; + u32 p_iv[MAX_SG]; + int niv; u32 method; u32 op_mode; u32 op_dir; int flow; unsigned int ivlen; unsigned int keylen; - void *biv; struct skcipher_request fallback_req; // keep at the end }; From 40c41a7bfd594b5de14ac41af8bce62183f3fbe3 Mon Sep 17 00:00:00 2001 From: Corentin Labbe Date: Mon, 2 May 2022 20:19:15 +0000 Subject: [PATCH 0616/1842] crypto: sun8i-ss - handle zero sized sg [ Upstream commit c149e4763d28bb4c0e5daae8a59f2c74e889f407 ] sun8i-ss does not handle well the possible zero sized sg. 
Fixes: d9b45418a917 ("crypto: sun8i-ss - support hash algorithms") Signed-off-by: Corentin Labbe Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin --- drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c index c9edecd43ef9..55d652cd468b 100644 --- a/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c +++ b/drivers/crypto/allwinner/sun8i-ss/sun8i-ss-hash.c @@ -379,13 +379,21 @@ int sun8i_ss_hash_run(struct crypto_engine *engine, void *breq) } len = areq->nbytes; - for_each_sg(areq->src, sg, nr_sgs, i) { + sg = areq->src; + i = 0; + while (len > 0 && sg) { + if (sg_dma_len(sg) == 0) { + sg = sg_next(sg); + continue; + } rctx->t_src[i].addr = sg_dma_address(sg); todo = min(len, sg_dma_len(sg)); rctx->t_src[i].len = todo / 4; len -= todo; rctx->t_dst[i].addr = addr_res; rctx->t_dst[i].len = digestsize / 4; + sg = sg_next(sg); + i++; } if (len > 0) { dev_err(ss->dev, "remaining len %d\n", len); From 76badb0a4d941cfdf9b68b6ebef45da7b1fd6770 Mon Sep 17 00:00:00 2001 From: Sebastian Andrzej Siewior Date: Wed, 4 May 2022 17:07:36 +0200 Subject: [PATCH 0617/1842] crypto: cryptd - Protect per-CPU resource by disabling BH. [ Upstream commit 91e8bcd7b4da182e09ea19a2c73167345fe14c98 ] The access to cryptd_queue::cpu_queue is synchronized by disabling preemption in cryptd_enqueue_request() and disabling BH in cryptd_queue_worker(). This implies that access is allowed from BH. If cryptd_enqueue_request() is invoked from preemptible context _and_ soft interrupt then this can lead to list corruption since cryptd_enqueue_request() is not protected against access from soft interrupt. Replace get_cpu() in cryptd_enqueue_request() with local_bh_disable() to ensure BH is always disabled. Remove preempt_disable() from cryptd_queue_worker() since it is not needed because local_bh_disable() ensures synchronisation. Fixes: 254eff771441 ("crypto: cryptd - Per-CPU thread implementation...") Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin --- crypto/cryptd.c | 23 +++++++++++------------ 1 file changed, 11 insertions(+), 12 deletions(-) diff --git a/crypto/cryptd.c b/crypto/cryptd.c index a1bea0f4baa8..668095eca0fa 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -39,6 +39,10 @@ struct cryptd_cpu_queue { }; struct cryptd_queue { + /* + * Protected by disabling BH to allow enqueueing from softinterrupt and + * dequeuing from kworker (cryptd_queue_worker()). 
+ */ struct cryptd_cpu_queue __percpu *cpu_queue; }; @@ -125,28 +129,28 @@ static void cryptd_fini_queue(struct cryptd_queue *queue) static int cryptd_enqueue_request(struct cryptd_queue *queue, struct crypto_async_request *request) { - int cpu, err; + int err; struct cryptd_cpu_queue *cpu_queue; refcount_t *refcnt; - cpu = get_cpu(); + local_bh_disable(); cpu_queue = this_cpu_ptr(queue->cpu_queue); err = crypto_enqueue_request(&cpu_queue->queue, request); refcnt = crypto_tfm_ctx(request->tfm); if (err == -ENOSPC) - goto out_put_cpu; + goto out; - queue_work_on(cpu, cryptd_wq, &cpu_queue->work); + queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work); if (!refcount_read(refcnt)) - goto out_put_cpu; + goto out; refcount_inc(refcnt); -out_put_cpu: - put_cpu(); +out: + local_bh_enable(); return err; } @@ -162,15 +166,10 @@ static void cryptd_queue_worker(struct work_struct *work) cpu_queue = container_of(work, struct cryptd_cpu_queue, work); /* * Only handle one request at a time to avoid hogging crypto workqueue. - * preempt_disable/enable is used to prevent being preempted by - * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent - * cryptd_enqueue_request() being accessed from software interrupts. */ local_bh_disable(); - preempt_disable(); backlog = crypto_get_backlog(&cpu_queue->queue); req = crypto_dequeue_request(&cpu_queue->queue); - preempt_enable(); local_bh_enable(); if (!req) From 6e07ccc7d56130f760d23f67a70c45366c07debc Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 16 May 2022 14:55:55 -0700 Subject: [PATCH 0618/1842] Input: sparcspkr - fix refcount leak in bbc_beep_probe [ Upstream commit c8994b30d71d64d5dcc9bc0edbfdf367171aa96f ] of_find_node_by_path() calls of_find_node_opts_by_path(), which returns a node pointer with refcount incremented, we should use of_node_put() on it when done. Add missing of_node_put() to avoid refcount leak. Fixes: 9c1a5077fdca ("input: Rewrite sparcspkr device probing.") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220516081018.42728-1-linmq006@gmail.com Signed-off-by: Dmitry Torokhov Signed-off-by: Sasha Levin --- drivers/input/misc/sparcspkr.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/input/misc/sparcspkr.c b/drivers/input/misc/sparcspkr.c index fe43e5557ed7..cdcb7737c46a 100644 --- a/drivers/input/misc/sparcspkr.c +++ b/drivers/input/misc/sparcspkr.c @@ -205,6 +205,7 @@ static int bbc_beep_probe(struct platform_device *op) info = &state->u.bbc; info->clock_freq = of_getintprop_default(dp, "clock-frequency", 0); + of_node_put(dp); if (!info->clock_freq) goto out_free; From 40d428b528c59dda1ba6377dbf387e2ff1c7f8ea Mon Sep 17 00:00:00 2001 From: Kuppuswamy Sathyanarayanan Date: Mon, 18 Apr 2022 15:02:37 +0000 Subject: [PATCH 0619/1842] PCI/AER: Clear MULTI_ERR_COR/UNCOR_RCV bits [ Upstream commit 203926da2bff8e172200a2f11c758987af112d4a ] When a Root Port or Root Complex Event Collector receives an error Message e.g., ERR_COR, it sets PCI_ERR_ROOT_COR_RCV in the Root Error Status register and logs the Requester ID in the Error Source Identification register. If it receives a second ERR_COR Message before software clears PCI_ERR_ROOT_COR_RCV, hardware sets PCI_ERR_ROOT_MULTI_COR_RCV and the Requester ID is lost. 
In the following scenario, PCI_ERR_ROOT_MULTI_COR_RCV was never cleared: - hardware receives ERR_COR message - hardware sets PCI_ERR_ROOT_COR_RCV - aer_irq() entered - aer_irq(): status = pci_read_config_dword(PCI_ERR_ROOT_STATUS) - aer_irq(): now status == PCI_ERR_ROOT_COR_RCV - hardware receives second ERR_COR message - hardware sets PCI_ERR_ROOT_MULTI_COR_RCV - aer_irq(): pci_write_config_dword(PCI_ERR_ROOT_STATUS, status) - PCI_ERR_ROOT_COR_RCV is cleared; PCI_ERR_ROOT_MULTI_COR_RCV is set - aer_irq() entered again - aer_irq(): status = pci_read_config_dword(PCI_ERR_ROOT_STATUS) - aer_irq(): now status == PCI_ERR_ROOT_MULTI_COR_RCV - aer_irq() exits because PCI_ERR_ROOT_COR_RCV not set - PCI_ERR_ROOT_MULTI_COR_RCV is still set The same problem occurred with ERR_NONFATAL/ERR_FATAL Messages and PCI_ERR_ROOT_UNCOR_RCV and PCI_ERR_ROOT_MULTI_UNCOR_RCV. Fix the problem by queueing an AER event and clearing the Root Error Status bits when any of these bits are set: PCI_ERR_ROOT_COR_RCV PCI_ERR_ROOT_UNCOR_RCV PCI_ERR_ROOT_MULTI_COR_RCV PCI_ERR_ROOT_MULTI_UNCOR_RCV See the bugzilla link for details from Eric about how to reproduce this problem. [bhelgaas: commit log, move repro details to bugzilla] Fixes: e167bfcaa4cd ("PCI: aerdrv: remove magical ROOT_ERR_STATUS_MASKS") Link: https://bugzilla.kernel.org/show_bug.cgi?id=215992 Link: https://lore.kernel.org/r/20220418150237.1021519-1-sathyanarayanan.kuppuswamy@linux.intel.com Reported-by: Eric Badger Signed-off-by: Kuppuswamy Sathyanarayanan Signed-off-by: Bjorn Helgaas Reviewed-by: Ashok Raj Signed-off-by: Sasha Levin --- drivers/pci/pcie/aer.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c index 65dff5f3457a..c40546eeecb3 100644 --- a/drivers/pci/pcie/aer.c +++ b/drivers/pci/pcie/aer.c @@ -101,6 +101,11 @@ struct aer_stats { #define ERR_COR_ID(d) (d & 0xffff) #define ERR_UNCOR_ID(d) (d >> 16) +#define AER_ERR_STATUS_MASK (PCI_ERR_ROOT_UNCOR_RCV | \ + PCI_ERR_ROOT_COR_RCV | \ + PCI_ERR_ROOT_MULTI_COR_RCV | \ + PCI_ERR_ROOT_MULTI_UNCOR_RCV) + static int pcie_aer_disable; static pci_ers_result_t aer_root_reset(struct pci_dev *dev); @@ -1187,7 +1192,7 @@ static irqreturn_t aer_irq(int irq, void *context) struct aer_err_source e_src = {}; pci_read_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, &e_src.status); - if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV))) + if (!(e_src.status & AER_ERR_STATUS_MASK)) return IRQ_NONE; pci_read_config_dword(rp, aer + PCI_ERR_ROOT_ERR_SRC, &e_src.id); From 9b28515641895c49c7445eaed8c19b9f068e71e9 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Sat, 14 May 2022 16:42:41 +0800 Subject: [PATCH 0620/1842] hwrng: omap3-rom - fix using wrong clk_disable() in omap_rom_rng_runtime_resume() [ Upstream commit e4e62bbc6aba49a5edb3156ec65f6698ff37d228 ] 'ddata->clk' is enabled by clk_prepare_enable(), it should be disabled by clk_disable_unprepare(). 
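[ For reference, a sketch of the balanced pairing the fix establishes; error handling trimmed. ]

	ret = clk_prepare_enable(ddata->clk);	/* prepares and enables in one call */
	if (ret)
		return ret;

	/* ... use the clock ... */

	clk_disable_unprepare(ddata->clk);	/* undoes both steps, matching the call above */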
Fixes: 8d9d4bdc495f ("hwrng: omap3-rom - Use runtime PM instead of custom functions") Signed-off-by: Yang Yingliang Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin --- drivers/char/hw_random/omap3-rom-rng.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/hw_random/omap3-rom-rng.c b/drivers/char/hw_random/omap3-rom-rng.c index e0d77fa048fb..f06e4f95114f 100644 --- a/drivers/char/hw_random/omap3-rom-rng.c +++ b/drivers/char/hw_random/omap3-rom-rng.c @@ -92,7 +92,7 @@ static int __maybe_unused omap_rom_rng_runtime_resume(struct device *dev) r = ddata->rom_rng_call(0, 0, RNG_GEN_PRNG_HW_INIT); if (r != 0) { - clk_disable(ddata->clk); + clk_disable_unprepare(ddata->clk); dev_err(dev, "HW init failed: %d\n", r); return -EIO; From 7620a280dadea252f6ab033862f6b780ce1aab76 Mon Sep 17 00:00:00 2001 From: Michael Ellerman Date: Thu, 7 Apr 2022 00:58:01 +1000 Subject: [PATCH 0621/1842] powerpc/64: Only WARN if __pa()/__va() called with bad addresses [ Upstream commit c4bce84d0bd3f396f702d69be2e92bbd8af97583 ] We added checks to __pa() / __va() to ensure they're only called with appropriate addresses. But using BUG_ON() is too strong, it means virt_addr_valid() will BUG when DEBUG_VIRTUAL is enabled. Instead switch them to warnings, arm64 does the same. Fixes: 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses") Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220406145802.538416-5-mpe@ellerman.id.au Signed-off-by: Sasha Levin --- arch/powerpc/include/asm/page.h | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h index f2c5c26869f1..03ae544eb6cc 100644 --- a/arch/powerpc/include/asm/page.h +++ b/arch/powerpc/include/asm/page.h @@ -216,6 +216,9 @@ static inline bool pfn_valid(unsigned long pfn) #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET) #else #ifdef CONFIG_PPC64 + +#define VIRTUAL_WARN_ON(x) WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x)) + /* * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit. @@ -223,13 +226,13 @@ static inline bool pfn_valid(unsigned long pfn) */ #define __va(x) \ ({ \ - VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET); \ + VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET); \ (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET); \ }) #define __pa(x) \ ({ \ - VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); \ + VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET); \ (unsigned long)(x) & 0x0fffffffffffffffUL; \ }) From cca915d69127a0d4faa69b8d03359d76ff3b418a Mon Sep 17 00:00:00 2001 From: Kajol Jain Date: Fri, 6 May 2022 11:40:15 +0530 Subject: [PATCH 0622/1842] powerpc/perf: Fix the threshold compare group constraint for power9 [ Upstream commit ab0cc6bbf0c812731c703ec757fcc3fc3a457a34 ] Thresh compare bits for a event is used to program thresh compare field in Monitor Mode Control Register A (MMCRA: 9-18 bits for power9). When scheduling events as a group, all events in that group should match value in threshold bits (like thresh compare, thresh control, thresh select). Otherwise event open for the sibling events should fail. But in the current code, incase thresh compare bits are not valid, we are not failing in group_constraint function which can result in invalid group schduling. Fix the issue by returning -1 incase event is threshold and threshold compare value is not valid. 
Thresh control bits in the event code is used to program thresh_ctl field in Monitor Mode Control Register A (MMCRA: 48-55). In below example, the scheduling of group events PM_MRK_INST_CMPL (873534401e0) and PM_THRESH_MET (8734340101ec) is expected to fail as both event request different thresh control bits and invalid thresh compare value. Result before the patch changes: [command]# perf stat -e "{r8735340401e0,r8734340101ec}" sleep 1 Performance counter stats for 'sleep 1': 11,048 r8735340401e0 1,967 r8734340101ec 1.001354036 seconds time elapsed 0.001421000 seconds user 0.000000000 seconds sys Result after the patch changes: [command]# perf stat -e "{r8735340401e0,r8734340101ec}" sleep 1 Error: The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (r8735340401e0). /bin/dmesg | grep -i perf may provide additional information. Fixes: 78a16d9fc1206 ("powerpc/perf: Avoid FAB_*_MATCH checks for power9") Signed-off-by: Kajol Jain Reviewed-by: Athira Rajeev Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220506061015.43916-2-kjain@linux.ibm.com Signed-off-by: Sasha Levin --- arch/powerpc/perf/isa207-common.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c index 58448f0e4721..52990becbdfc 100644 --- a/arch/powerpc/perf/isa207-common.c +++ b/arch/powerpc/perf/isa207-common.c @@ -363,7 +363,8 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp) if (event_is_threshold(event) && is_thresh_cmp_valid(event)) { mask |= CNST_THRESH_MASK; value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT); - } + } else if (event_is_threshold(event)) + return -1; } else { /* * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC, From b8ef79697b62dced79fc2ecaeee103c77170b34e Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Sun, 10 Apr 2022 09:10:35 -0700 Subject: [PATCH 0623/1842] macintosh: via-pmu and via-cuda need RTC_LIB [ Upstream commit 9a9c5ff5fff87eb1a43db0d899473554e408fd7b ] Fix build when RTC_LIB is not set/enabled. Eliminates these build errors: m68k-linux-ld: drivers/macintosh/via-pmu.o: in function `pmu_set_rtc_time': drivers/macintosh/via-pmu.c:1769: undefined reference to `rtc_tm_to_time64' m68k-linux-ld: drivers/macintosh/via-cuda.o: in function `cuda_set_rtc_time': drivers/macintosh/via-cuda.c:797: undefined reference to `rtc_tm_to_time64' Fixes: 0792a2c8e0bb ("macintosh: Use common code to access RTC") Reported-by: kernel test robot Suggested-by: Christophe Leroy Signed-off-by: Randy Dunlap Acked-by: Arnd Bergmann Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220410161035.592-1-rdunlap@infradead.org Signed-off-by: Sasha Levin --- drivers/macintosh/Kconfig | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig index 3942db15a2b8..539a2ed4e13d 100644 --- a/drivers/macintosh/Kconfig +++ b/drivers/macintosh/Kconfig @@ -44,6 +44,7 @@ config ADB_IOP config ADB_CUDA bool "Support for Cuda/Egret based Macs and PowerMacs" depends on (ADB || PPC_PMAC) && !PPC_PMAC64 + select RTC_LIB help This provides support for Cuda/Egret based Macintosh and Power Macintosh systems. 
This includes most m68k based Macs, @@ -57,6 +58,7 @@ config ADB_CUDA config ADB_PMU bool "Support for PMU based PowerMacs and PowerBooks" depends on PPC_PMAC || MAC + select RTC_LIB help On PowerBooks, iBooks, and recent iMacs and Power Macintoshes, the PMU is an embedded microprocessor whose primary function is to From 46fd994763cf6884b88a2da712af918f3ed54d7b Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 12 May 2022 16:37:18 +0400 Subject: [PATCH 0624/1842] powerpc/fsl_rio: Fix refcount leak in fsl_rio_setup [ Upstream commit fcee96924ba1596ca80a6770b2567ca546f9a482 ] of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. Fixes: abc3aeae3aaa ("fsl-rio: Add two ports and rapidio message units support") Signed-off-by: Miaoqian Lin Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220512123724.62931-1-linmq006@gmail.com Signed-off-by: Sasha Levin --- arch/powerpc/sysdev/fsl_rio.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/powerpc/sysdev/fsl_rio.c b/arch/powerpc/sysdev/fsl_rio.c index 07c164f7f8cf..3f9f78621cf3 100644 --- a/arch/powerpc/sysdev/fsl_rio.c +++ b/arch/powerpc/sysdev/fsl_rio.c @@ -505,8 +505,10 @@ int fsl_rio_setup(struct platform_device *dev) if (rc) { dev_err(&dev->dev, "Can't get %pOF property 'reg'\n", rmu_node); + of_node_put(rmu_node); goto err_rmu; } + of_node_put(rmu_node); rmu_regs_win = ioremap(rmu_regs.start, resource_size(&rmu_regs)); if (!rmu_regs_win) { dev_err(&dev->dev, "Unable to map rmu register window\n"); From a1d4941d9a24999f680799f9bbde7f57351ca637 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Tue, 26 Apr 2022 11:08:57 +0800 Subject: [PATCH 0625/1842] mfd: davinci_voicecodec: Fix possible null-ptr-deref davinci_vc_probe() [ Upstream commit 311242c7703df0da14c206260b7e855f69cb0264 ] It will cause null-ptr-deref when using 'res', if platform_get_resource() returns NULL, so move using 'res' after devm_ioremap_resource() that will check it to avoid null-ptr-deref. And use devm_platform_get_and_ioremap_resource() to simplify code. 
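As an illustration of the pattern adopted here, below is a minimal, hypothetical probe() sketch (not the davinci_voicecodec code itself): the combined helper fetches and maps the MEM resource, so 'res' is only dereferenced after the mapping has been validated.

static int example_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;
	dma_addr_t fifo_base;

	/*
	 * Gets MEM resource 0 and ioremaps it in one step; returns an
	 * ERR_PTR() on any failure, including a missing resource, so
	 * 'res' is never dereferenced before it has been checked.
	 */
	base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
	if (IS_ERR(base))
		return PTR_ERR(base);

	/* Only now is it safe to look at the resource start address. */
	fifo_base = (dma_addr_t)res->start;
	dev_dbg(&pdev->dev, "fifo at %pad\n", &fifo_base);

	return 0;
}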
Fixes: b5e29aa880be ("mfd: davinci_voicecodec: Remove pointless #include")
Signed-off-by: Yang Yingliang
Signed-off-by: Lee Jones
Link: https://lore.kernel.org/r/20220426030857.3539336-1-yangyingliang@huawei.com
Signed-off-by: Sasha Levin
---
 drivers/mfd/davinci_voicecodec.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/mfd/davinci_voicecodec.c b/drivers/mfd/davinci_voicecodec.c
index e5c8bc998eb4..965820481f1e 100644
--- a/drivers/mfd/davinci_voicecodec.c
+++ b/drivers/mfd/davinci_voicecodec.c
@@ -46,14 +46,12 @@ static int __init davinci_vc_probe(struct platform_device *pdev)
 }
 clk_enable(davinci_vc->clk);
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-
- fifo_base = (dma_addr_t)res->start;
- davinci_vc->base = devm_ioremap_resource(&pdev->dev, res);
+ davinci_vc->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
 if (IS_ERR(davinci_vc->base)) {
 ret = PTR_ERR(davinci_vc->base);
 goto fail;
 }
+ fifo_base = (dma_addr_t)res->start;
 davinci_vc->regmap = devm_regmap_init_mmio(&pdev->dev, davinci_vc->base,

From bb2220e0672b7433a9a42618599cd261b2629240 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Bj=C3=B6rn=20Ard=C3=B6?=
Date: Thu, 31 Mar 2022 09:01:15 +0200
Subject: [PATCH 0626/1842] mailbox: forward the hrtimer if not queued and under a lock
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

[ Upstream commit bca1a1004615efe141fd78f360ecc48c60bc4ad5 ]

This reverts commit c7dacf5b0f32957b24ef29df1207dc2cd8307743,
"mailbox: avoid timer start from callback"

The previous commit is reverted since it led to a race that caused the
hrtimer to not be started at all. The check for hrtimer_active() in
msg_submit() will return true if the callback function txdone_hrtimer()
is currently running. This function could return HRTIMER_NORESTART, in
which case the timer will not be restarted, and msg_submit() will not
start the timer either. This leads to a message actually being submitted
while no timer is started to check for its completion.

The original fix that added the hrtimer_active() check was added to
avoid a warning with hrtimer_forward(). Looking in the kernel, another
solution to avoid this warning is to check hrtimer_is_queued() before
calling hrtimer_forward_now() instead. This, however, requires a lock so
that the timer is not started by msg_submit() in between this check and
the hrtimer_forward() call.
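For illustration, a condensed sketch of the locking scheme described above (structure and field names are simplified stand-ins, not the exact mailbox code): the submit path starts the timer under a spinlock, and the timer callback only forwards it under the same lock, and only when it is not already queued.

struct example_ctrl {
	struct hrtimer poll_hrt;
	spinlock_t poll_hrt_lock;
	unsigned int txpoll_period_ms;
	bool tx_pending;
};

static enum hrtimer_restart example_txdone_hrtimer(struct hrtimer *hrt)
{
	struct example_ctrl *ctrl = container_of(hrt, struct example_ctrl, poll_hrt);
	unsigned long flags;

	if (!ctrl->tx_pending)
		return HRTIMER_NORESTART;

	/*
	 * Forward only while holding the lock and only if the timer is not
	 * already queued, so a concurrent hrtimer_start() from the submit
	 * path can neither be lost nor trip the hrtimer_forward() warning.
	 */
	spin_lock_irqsave(&ctrl->poll_hrt_lock, flags);
	if (!hrtimer_is_queued(hrt))
		hrtimer_forward_now(hrt, ms_to_ktime(ctrl->txpoll_period_ms));
	spin_unlock_irqrestore(&ctrl->poll_hrt_lock, flags);

	return HRTIMER_RESTART;
}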
Fixes: c7dacf5b0f32 ("mailbox: avoid timer start from callback") Signed-off-by: Björn Ardö Signed-off-by: Jassi Brar Signed-off-by: Sasha Levin --- drivers/mailbox/mailbox.c | 19 +++++++++++++------ include/linux/mailbox_controller.h | 1 + 2 files changed, 14 insertions(+), 6 deletions(-) diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c index 3e7d4b20ab34..4229b9b5da98 100644 --- a/drivers/mailbox/mailbox.c +++ b/drivers/mailbox/mailbox.c @@ -82,11 +82,11 @@ static void msg_submit(struct mbox_chan *chan) exit: spin_unlock_irqrestore(&chan->lock, flags); - /* kick start the timer immediately to avoid delays */ if (!err && (chan->txdone_method & TXDONE_BY_POLL)) { - /* but only if not already active */ - if (!hrtimer_active(&chan->mbox->poll_hrt)) - hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL); + /* kick start the timer immediately to avoid delays */ + spin_lock_irqsave(&chan->mbox->poll_hrt_lock, flags); + hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL); + spin_unlock_irqrestore(&chan->mbox->poll_hrt_lock, flags); } } @@ -120,20 +120,26 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer) container_of(hrtimer, struct mbox_controller, poll_hrt); bool txdone, resched = false; int i; + unsigned long flags; for (i = 0; i < mbox->num_chans; i++) { struct mbox_chan *chan = &mbox->chans[i]; if (chan->active_req && chan->cl) { - resched = true; txdone = chan->mbox->ops->last_tx_done(chan); if (txdone) tx_tick(chan, 0); + else + resched = true; } } if (resched) { - hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period)); + spin_lock_irqsave(&mbox->poll_hrt_lock, flags); + if (!hrtimer_is_queued(hrtimer)) + hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period)); + spin_unlock_irqrestore(&mbox->poll_hrt_lock, flags); + return HRTIMER_RESTART; } return HRTIMER_NORESTART; @@ -500,6 +506,7 @@ int mbox_controller_register(struct mbox_controller *mbox) hrtimer_init(&mbox->poll_hrt, CLOCK_MONOTONIC, HRTIMER_MODE_REL); mbox->poll_hrt.function = txdone_hrtimer; + spin_lock_init(&mbox->poll_hrt_lock); } for (i = 0; i < mbox->num_chans; i++) { diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h index 36d6ce673503..6fee33cb52f5 100644 --- a/include/linux/mailbox_controller.h +++ b/include/linux/mailbox_controller.h @@ -83,6 +83,7 @@ struct mbox_controller { const struct of_phandle_args *sp); /* Internal to API */ struct hrtimer poll_hrt; + spinlock_t poll_hrt_lock; struct list_head node; }; From fc0750e659db7b315bf6348902cc8ca3cdd4b8d8 Mon Sep 17 00:00:00 2001 From: Douglas Miller Date: Fri, 20 May 2022 14:37:01 -0400 Subject: [PATCH 0627/1842] RDMA/hfi1: Prevent use of lock before it is initialized [ Upstream commit 05c03dfd09c069c4ffd783b47b2da5dcc9421f2c ] If there is a failure during probe of hfi1 before the sdma_map_lock is initialized, the call to hfi1_free_devdata() will attempt to use a lock that has not been initialized. If the locking correctness validator is on then an INFO message and stack trace resembling the following may be seen: INFO: trying to register non-static key. The code is fine but needs lockdep annotation, or maybe you didn't initialize this object before use? turning off the locking correctness validator. 
Call Trace: register_lock_class+0x11b/0x880 __lock_acquire+0xf3/0x7930 lock_acquire+0xff/0x2d0 _raw_spin_lock_irq+0x46/0x60 sdma_clean+0x42a/0x660 [hfi1] hfi1_free_devdata+0x3a7/0x420 [hfi1] init_one+0x867/0x11a0 [hfi1] pci_device_probe+0x40e/0x8d0 The use of sdma_map_lock in sdma_clean() is for freeing the sdma_map memory, and sdma_map is not allocated/initialized until after sdma_map_lock has been initialized. This code only needs to be run if sdma_map is not NULL, and so checking for that condition will avoid trying to use the lock before it is initialized. Fixes: 473291b3ea0e ("IB/hfi1: Fix for early release of sdma context") Fixes: 7724105686e7 ("IB/hfi1: add driver files") Link: https://lore.kernel.org/r/20220520183701.48973.72434.stgit@awfm-01.cornelisnetworks.com Reported-by: Zheyu Ma Signed-off-by: Douglas Miller Signed-off-by: Dennis Dalessandro Signed-off-by: Jason Gunthorpe Signed-off-by: Sasha Levin --- drivers/infiniband/hw/hfi1/sdma.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c index 0b73dc7847aa..a044bee257f9 100644 --- a/drivers/infiniband/hw/hfi1/sdma.c +++ b/drivers/infiniband/hw/hfi1/sdma.c @@ -1330,11 +1330,13 @@ void sdma_clean(struct hfi1_devdata *dd, size_t num_engines) kvfree(sde->tx_ring); sde->tx_ring = NULL; } - spin_lock_irq(&dd->sde_map_lock); - sdma_map_free(rcu_access_pointer(dd->sdma_map)); - RCU_INIT_POINTER(dd->sdma_map, NULL); - spin_unlock_irq(&dd->sde_map_lock); - synchronize_rcu(); + if (rcu_access_pointer(dd->sdma_map)) { + spin_lock_irq(&dd->sde_map_lock); + sdma_map_free(rcu_access_pointer(dd->sdma_map)); + RCU_INIT_POINTER(dd->sdma_map, NULL); + spin_unlock_irq(&dd->sde_map_lock); + synchronize_rcu(); + } kfree(dd->per_sdma); dd->per_sdma = NULL; From baf86afed74548adf3e8623ce8592edc8caa5cd3 Mon Sep 17 00:00:00 2001 From: Dmitry Torokhov Date: Wed, 25 May 2022 09:51:08 -0700 Subject: [PATCH 0628/1842] Input: stmfts - do not leave device disabled in stmfts_input_open [ Upstream commit 5f76955ab1e43e5795a9631b22ca4f918a0ae986 ] The commit 26623eea0da3 attempted to deal with potential leak of runtime PM counter when opening the touchscreen device, however it ended up erroneously dropping the counter in the case of successfully enabling the device. Let's address this by using pm_runtime_resume_and_get() and then executing pm_runtime_put_sync() only when we fail to send "sense on" command to the device. 
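The resulting pattern can be sketched generically as follows (example_hw_enable() is a hypothetical stand-in, not an stmfts function): the get helper already drops the usage count on failure, so the caller only releases the reference on its own later errors and keeps it on success.

static int example_hw_enable(struct device *dev) { return 0; }	/* stand-in */

static int example_input_open(struct device *dev)
{
	int err;

	/* On failure this helper has already dropped the usage count. */
	err = pm_runtime_resume_and_get(dev);
	if (err)
		return err;

	err = example_hw_enable(dev);	/* hypothetical "sense on" step */
	if (err) {
		/* We hold a reference here, so give it back on error. */
		pm_runtime_put_sync(dev);
		return err;
	}

	/* Success: keep the runtime PM reference until close(). */
	return 0;
}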
Fixes: 26623eea0da3 ("Input: stmfts - fix reference leak in stmfts_input_open")
Reported-by: Pavel Machek
Signed-off-by: Dmitry Torokhov
Signed-off-by: Sasha Levin
---
 drivers/input/touchscreen/stmfts.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
index 64b690a72d10..a05a7a66b4ed 100644
--- a/drivers/input/touchscreen/stmfts.c
+++ b/drivers/input/touchscreen/stmfts.c
@@ -337,13 +337,15 @@ static int stmfts_input_open(struct input_dev *dev)
 struct stmfts_data *sdata = input_get_drvdata(dev);
 int err;
- err = pm_runtime_get_sync(&sdata->client->dev);
- if (err < 0)
- goto out;
+ err = pm_runtime_resume_and_get(&sdata->client->dev);
+ if (err)
+ return err;
 err = i2c_smbus_write_byte(sdata->client, STMFTS_MS_MT_SENSE_ON);
- if (err)
- goto out;
+ if (err) {
+ pm_runtime_put_sync(&sdata->client->dev);
+ return err;
+ }
 mutex_lock(&sdata->mutex);
 sdata->running = true;
@@ -366,9 +368,7 @@ static int stmfts_input_open(struct input_dev *dev)
 "failed to enable touchkey\n");
 }
-out:
- pm_runtime_put_noidle(&sdata->client->dev);
- return err;
+ return 0;
 }

 static void stmfts_input_close(struct input_dev *dev)

From 0e0faa14316b96f0556748f00e5aae3ce6c1f5f4 Mon Sep 17 00:00:00 2001
From: Dan Carpenter
Date: Wed, 6 Apr 2022 09:40:14 +0300
Subject: [PATCH 0629/1842] OPP: call of_node_put() on error path in _bandwidth_supported()

[ Upstream commit 907ed123b9d096c73e9361f6cd4097f0691497f2 ]

This code does not call of_node_put(opp_np) if
of_get_next_available_child() returns NULL. But it should.

Fixes: 45679f9b508f ("opp: Don't parse icc paths unnecessarily")
Signed-off-by: Dan Carpenter
Signed-off-by: Viresh Kumar
Signed-off-by: Sasha Levin
---
 drivers/opp/of.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/opp/of.c b/drivers/opp/of.c
index 5de46aa99d24..3d7adc0de128 100644
--- a/drivers/opp/of.c
+++ b/drivers/opp/of.c
@@ -346,11 +346,11 @@ static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table)
 /* Checking only first OPP is sufficient */
 np = of_get_next_available_child(opp_np, NULL);
+ of_node_put(opp_np);
 if (!np) {
 dev_err(dev, "OPP table empty\n");
 return -EINVAL;
 }
- of_node_put(opp_np);
 prop = of_find_property(np, "opp-peak-kBps", NULL);
 of_node_put(np);

From 51d584704d18e60fa473823654f35611c777b291 Mon Sep 17 00:00:00 2001
From: Jakob Koschel
Date: Fri, 1 Apr 2022 00:34:14 +0200
Subject: [PATCH 0630/1842] f2fs: fix dereference of stale list iterator after loop body

[ Upstream commit 2aaf51dd39afb6d01d13f1e6fe20b684733b37d5 ]

The list iterator variable will be a bogus pointer if no break was hit.
Dereferencing it (cur->page in this case) could load an
out-of-bounds/undefined value, making it unsafe to use in the comparison
that determines whether the specific element was found.

Since 'cur->page' *can* be out-of-bounds, it cannot be ruled out that,
by chance (or by the intention of an attacker), it matches the value of
'page' even though the correct element was not found.

This is fixed by using a separate list iterator variable for the loop
and only setting the original variable if a suitable element was found.
Determining whether the element was found is then simply a matter of
checking if the variable is set.
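As a generic sketch of the idiom (illustrative types, not the f2fs structures): the loop uses its own iterator, and only a separate result pointer escapes the loop, so code after the loop never touches a potentially bogus iterator value.

struct example_item {
	struct list_head list;
	struct page *page;
};

static struct example_item *example_find(struct list_head *head,
					 struct page *page)
{
	struct example_item *found = NULL;
	struct example_item *tmp;

	list_for_each_entry(tmp, head, list) {
		if (tmp->page == page) {
			found = tmp;
			break;
		}
	}

	/*
	 * 'found' is either NULL or a pointer to a real list element;
	 * 'tmp' is never used after the loop.
	 */
	return found;
}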
Fixes: 8c242db9b8c0 ("f2fs: fix stale ATOMIC_WRITTEN_PAGE private pointer") Signed-off-by: Jakob Koschel Reviewed-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Sasha Levin --- fs/f2fs/segment.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 49f5cb532738..736fb57423a6 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -356,16 +356,19 @@ void f2fs_drop_inmem_page(struct inode *inode, struct page *page) struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct list_head *head = &fi->inmem_pages; struct inmem_pages *cur = NULL; + struct inmem_pages *tmp; f2fs_bug_on(sbi, !IS_ATOMIC_WRITTEN_PAGE(page)); mutex_lock(&fi->inmem_lock); - list_for_each_entry(cur, head, list) { - if (cur->page == page) + list_for_each_entry(tmp, head, list) { + if (tmp->page == page) { + cur = tmp; break; + } } - f2fs_bug_on(sbi, list_empty(head) || cur->page != page); + f2fs_bug_on(sbi, !cur); list_del(&cur->list); mutex_unlock(&fi->inmem_lock); From da748d263a6400fc3eff20ea0a738e82c78f4576 Mon Sep 17 00:00:00 2001 From: Yong Wu Date: Tue, 3 May 2022 15:13:56 +0800 Subject: [PATCH 0631/1842] iommu/mediatek: Add list_del in mtk_iommu_remove [ Upstream commit ee55f75e4bcade81d253163641b63bef3e76cac4 ] Lack the list_del in the mtk_iommu_remove, and remove bus_set_iommu(*, NULL) since there may be several iommu HWs. we can not bus_set_iommu null when one iommu driver unbind. This could be a fix for mt2712 which support 2 M4U HW and list them. Fixes: 7c3a2ec02806 ("iommu/mediatek: Merge 2 M4U HWs into one iommu domain") Signed-off-by: Yong Wu Reviewed-by: AngeloGioacchino Del Regno Reviewed-by: Matthias Brugger Link: https://lore.kernel.org/r/20220503071427.2285-6-yong.wu@mediatek.com Signed-off-by: Joerg Roedel Signed-off-by: Sasha Levin --- drivers/iommu/mtk_iommu.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c index 19387d2bc4b4..051815c9d2bb 100644 --- a/drivers/iommu/mtk_iommu.c +++ b/drivers/iommu/mtk_iommu.c @@ -768,8 +768,7 @@ static int mtk_iommu_remove(struct platform_device *pdev) iommu_device_sysfs_remove(&data->iommu); iommu_device_unregister(&data->iommu); - if (iommu_present(&platform_bus_type)) - bus_set_iommu(&platform_bus_type, NULL); + list_del(&data->list); clk_disable_unprepare(data->bclk); devm_free_irq(&pdev->dev, data->irq, data); From fb02d6b5432d4de707bcfb47b9579da21e6bc17b Mon Sep 17 00:00:00 2001 From: Michael Walle Date: Thu, 7 Apr 2022 17:08:28 +0200 Subject: [PATCH 0632/1842] i2c: at91: use dma safe buffers [ Upstream commit 03fbb903c8bf7e53e101e8d9a7b261264317c411 ] The supplied buffer might be on the stack and we get the following error message: [ 3.312058] at91_i2c e0070600.i2c: rejecting DMA map of vmalloc memory Use i2c_{get,put}_dma_safe_msg_buf() to get a DMA-able memory region if necessary. 
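A minimal sketch of the helper pair in a transfer path (simplified; example_do_transfer() is hypothetical, and the 1-byte threshold mirrors the patch below): a bounce buffer is used only when the message buffer itself is not DMA-safe, and it is released, with read data copied back on success, once the transfer is done.

static int example_do_transfer(struct i2c_msg *msg, u8 *buf) { return 0; }	/* stand-in */

static int example_xfer_one(struct i2c_msg *msg)
{
	u8 *dma_buf;
	int ret;

	/*
	 * Returns msg->buf when it is already DMA-safe, otherwise a freshly
	 * allocated bounce buffer; NULL if the message is shorter than the
	 * threshold or the allocation fails.
	 */
	dma_buf = i2c_get_dma_safe_msg_buf(msg, 1);
	if (!dma_buf)
		return -ENOMEM;

	ret = example_do_transfer(msg, dma_buf);

	/*
	 * Copies the data back into msg->buf for successful reads and frees
	 * the bounce buffer if one was allocated.
	 */
	i2c_put_dma_safe_msg_buf(dma_buf, msg, !ret);

	return ret;
}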
Fixes: 60937b2cdbf9 ("i2c: at91: add dma support") Signed-off-by: Michael Walle Reviewed-by: Codrin Ciubotariu Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-at91-master.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/i2c/busses/i2c-at91-master.c b/drivers/i2c/busses/i2c-at91-master.c index 66864f9cf7ac..974225faaf96 100644 --- a/drivers/i2c/busses/i2c-at91-master.c +++ b/drivers/i2c/busses/i2c-at91-master.c @@ -657,6 +657,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num) unsigned int_addr_flag = 0; struct i2c_msg *m_start = msg; bool is_read; + u8 *dma_buf; dev_dbg(&adap->dev, "at91_xfer: processing %d messages:\n", num); @@ -704,7 +705,17 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num) dev->msg = m_start; dev->recv_len_abort = false; + if (dev->use_dma) { + dma_buf = i2c_get_dma_safe_msg_buf(m_start, 1); + if (!dma_buf) { + ret = -ENOMEM; + goto out; + } + dev->buf = dma_buf; + } + ret = at91_do_twi_transfer(dev); + i2c_put_dma_safe_msg_buf(dma_buf, m_start, !ret); ret = (ret < 0) ? ret : num; out: From c7b0ec974457b609aa35f11f8e2125c8a7b9ee05 Mon Sep 17 00:00:00 2001 From: Qinglang Miao Date: Sat, 31 Oct 2020 09:18:54 +0800 Subject: [PATCH 0633/1842] cpufreq: mediatek: add missing platform_driver_unregister() on error in mtk_cpufreq_driver_init [ Upstream commit 2f05c19d9ef4f5a42634f83bdb0db596ffc0dd30 ] Add the missing platform_driver_unregister() before return from mtk_cpufreq_driver_init in the error handling case when failed to register mtk-cpufreq platform device Signed-off-by: Qinglang Miao Signed-off-by: Viresh Kumar Signed-off-by: Sasha Levin --- drivers/cpufreq/mediatek-cpufreq.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c index a310372dc53e..f2e5ba3c539b 100644 --- a/drivers/cpufreq/mediatek-cpufreq.c +++ b/drivers/cpufreq/mediatek-cpufreq.c @@ -573,6 +573,7 @@ static int __init mtk_cpufreq_driver_init(void) pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0); if (IS_ERR(pdev)) { pr_err("failed to register mtk-cpufreq platform device\n"); + platform_driver_unregister(&mtk_cpufreq_platdrv); return PTR_ERR(pdev); } From 9d91400fff46b221806befc843654e5f5f38a82b Mon Sep 17 00:00:00 2001 From: Jia-Wei Chang Date: Fri, 8 Apr 2022 12:58:55 +0800 Subject: [PATCH 0634/1842] cpufreq: mediatek: Use module_init and add module_exit [ Upstream commit b7070187c81cb90549d7561c0e750d7c7eb751f4 ] - Use module_init instead of device_initcall. - Add a function for module_exit to unregister driver. 
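In sketch form, this gives the driver the following shape (a stripped-down, hypothetical platform driver, not the MediaTek code): module_exit() undoes exactly what module_init() registered, so the module can be unloaded cleanly.

static struct platform_driver example_platdrv = {
	.driver = {
		.name = "example-cpufreq",
	},
};

static int __init example_driver_init(void)
{
	return platform_driver_register(&example_platdrv);
}
module_init(example_driver_init);

static void __exit example_driver_exit(void)
{
	/* Undo what module_init did so load/unload cycles stay balanced. */
	platform_driver_unregister(&example_platdrv);
}
module_exit(example_driver_exit);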
Signed-off-by: Jia-Wei Chang Reviewed-by: AngeloGioacchino Del Regno Signed-off-by: Viresh Kumar Signed-off-by: Sasha Levin --- drivers/cpufreq/mediatek-cpufreq.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c index f2e5ba3c539b..07ba238a0e0e 100644 --- a/drivers/cpufreq/mediatek-cpufreq.c +++ b/drivers/cpufreq/mediatek-cpufreq.c @@ -579,7 +579,13 @@ static int __init mtk_cpufreq_driver_init(void) return 0; } -device_initcall(mtk_cpufreq_driver_init); +module_init(mtk_cpufreq_driver_init) + +static void __exit mtk_cpufreq_driver_exit(void) +{ + platform_driver_unregister(&mtk_cpufreq_platdrv); +} +module_exit(mtk_cpufreq_driver_exit) MODULE_DESCRIPTION("MediaTek CPUFreq driver"); MODULE_AUTHOR("Pi-Cheng Chen "); From ec5ded7acb38c0084ce1e060517a725e3c12c6b8 Mon Sep 17 00:00:00 2001 From: Rex-BC Chen Date: Thu, 5 May 2022 19:52:18 +0800 Subject: [PATCH 0635/1842] cpufreq: mediatek: Unregister platform device on exit [ Upstream commit f126fbadce92b92c3a7be41e4abc1fbae93ae2ef ] We register the platform device when driver inits. However, we do not unregister it when driver exits. To resolve this, we declare the platform data to be a global static variable and rename it to be "cpufreq_pdev". With this global variable, we can do platform_device_unregister() when driver exits. Fixes: 501c574f4e3a ("cpufreq: mediatek: Add support of cpufreq to MT2701/MT7623 SoC") Signed-off-by: Rex-BC Chen [ Viresh: Commit log and Subject ] Signed-off-by: Viresh Kumar Signed-off-by: Sasha Levin --- drivers/cpufreq/mediatek-cpufreq.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c index 07ba238a0e0e..82f6592bbadb 100644 --- a/drivers/cpufreq/mediatek-cpufreq.c +++ b/drivers/cpufreq/mediatek-cpufreq.c @@ -44,6 +44,8 @@ struct mtk_cpu_dvfs_info { bool need_voltage_tracking; }; +static struct platform_device *cpufreq_pdev; + static LIST_HEAD(dvfs_info_list); static struct mtk_cpu_dvfs_info *mtk_cpu_dvfs_info_lookup(int cpu) @@ -546,7 +548,6 @@ static int __init mtk_cpufreq_driver_init(void) { struct device_node *np; const struct of_device_id *match; - struct platform_device *pdev; int err; np = of_find_node_by_path("/"); @@ -570,11 +571,11 @@ static int __init mtk_cpufreq_driver_init(void) * and the device registration codes are put here to handle defer * probing. */ - pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0); - if (IS_ERR(pdev)) { + cpufreq_pdev = platform_device_register_simple("mtk-cpufreq", -1, NULL, 0); + if (IS_ERR(cpufreq_pdev)) { pr_err("failed to register mtk-cpufreq platform device\n"); platform_driver_unregister(&mtk_cpufreq_platdrv); - return PTR_ERR(pdev); + return PTR_ERR(cpufreq_pdev); } return 0; @@ -583,6 +584,7 @@ module_init(mtk_cpufreq_driver_init) static void __exit mtk_cpufreq_driver_exit(void) { + platform_device_unregister(cpufreq_pdev); platform_driver_unregister(&mtk_cpufreq_platdrv); } module_exit(mtk_cpufreq_driver_exit) From 8e49773a7596b1327a29e4841f30ed267c39a59c Mon Sep 17 00:00:00 2001 From: Guenter Roeck Date: Wed, 11 May 2022 07:56:59 -0700 Subject: [PATCH 0636/1842] MIPS: Loongson: Use hwmon_device_register_with_groups() to register hwmon [ Upstream commit abae018a03821be2b65c01ebe2bef06fd7d85a4c ] Calling hwmon_device_register_with_info() with NULL dev and/or chip information parameters is an ABI abuse and not a real conversion to the new API. 
Also, the code creates sysfs attributes _after_ creating the hwmon device, which is racy and unsupported to start with. On top of that, the removal code tries to remove the name attribute which is owned by the hwmon core. Use hwmon_device_register_with_groups() to register the hwmon device instead. In the future, the hwmon subsystem will reject calls to hwmon_device_register_with_info with NULL dev or chip/info parameters. Without this patch, the hwmon device will fail to register. Fixes: f59dc5119192 ("MIPS: Loongson: Fix boot warning about hwmon_device_register()") Cc: Zhi Li Signed-off-by: Guenter Roeck Signed-off-by: Thomas Bogendoerfer Signed-off-by: Sasha Levin --- drivers/platform/mips/cpu_hwmon.c | 127 ++++++++++-------------------- 1 file changed, 41 insertions(+), 86 deletions(-) diff --git a/drivers/platform/mips/cpu_hwmon.c b/drivers/platform/mips/cpu_hwmon.c index 386389ffec41..d8c5f9195f85 100644 --- a/drivers/platform/mips/cpu_hwmon.c +++ b/drivers/platform/mips/cpu_hwmon.c @@ -55,55 +55,6 @@ int loongson3_cpu_temp(int cpu) static int nr_packages; static struct device *cpu_hwmon_dev; -static SENSOR_DEVICE_ATTR(name, 0444, NULL, NULL, 0); - -static struct attribute *cpu_hwmon_attributes[] = { - &sensor_dev_attr_name.dev_attr.attr, - NULL -}; - -/* Hwmon device attribute group */ -static struct attribute_group cpu_hwmon_attribute_group = { - .attrs = cpu_hwmon_attributes, -}; - -static ssize_t get_cpu_temp(struct device *dev, - struct device_attribute *attr, char *buf); -static ssize_t cpu_temp_label(struct device *dev, - struct device_attribute *attr, char *buf); - -static SENSOR_DEVICE_ATTR(temp1_input, 0444, get_cpu_temp, NULL, 1); -static SENSOR_DEVICE_ATTR(temp1_label, 0444, cpu_temp_label, NULL, 1); -static SENSOR_DEVICE_ATTR(temp2_input, 0444, get_cpu_temp, NULL, 2); -static SENSOR_DEVICE_ATTR(temp2_label, 0444, cpu_temp_label, NULL, 2); -static SENSOR_DEVICE_ATTR(temp3_input, 0444, get_cpu_temp, NULL, 3); -static SENSOR_DEVICE_ATTR(temp3_label, 0444, cpu_temp_label, NULL, 3); -static SENSOR_DEVICE_ATTR(temp4_input, 0444, get_cpu_temp, NULL, 4); -static SENSOR_DEVICE_ATTR(temp4_label, 0444, cpu_temp_label, NULL, 4); - -static const struct attribute *hwmon_cputemp[4][3] = { - { - &sensor_dev_attr_temp1_input.dev_attr.attr, - &sensor_dev_attr_temp1_label.dev_attr.attr, - NULL - }, - { - &sensor_dev_attr_temp2_input.dev_attr.attr, - &sensor_dev_attr_temp2_label.dev_attr.attr, - NULL - }, - { - &sensor_dev_attr_temp3_input.dev_attr.attr, - &sensor_dev_attr_temp3_label.dev_attr.attr, - NULL - }, - { - &sensor_dev_attr_temp4_input.dev_attr.attr, - &sensor_dev_attr_temp4_label.dev_attr.attr, - NULL - } -}; - static ssize_t cpu_temp_label(struct device *dev, struct device_attribute *attr, char *buf) { @@ -121,23 +72,46 @@ static ssize_t get_cpu_temp(struct device *dev, return sprintf(buf, "%d\n", value); } -static int create_sysfs_cputemp_files(struct kobject *kobj) +static SENSOR_DEVICE_ATTR(temp1_input, 0444, get_cpu_temp, NULL, 1); +static SENSOR_DEVICE_ATTR(temp1_label, 0444, cpu_temp_label, NULL, 1); +static SENSOR_DEVICE_ATTR(temp2_input, 0444, get_cpu_temp, NULL, 2); +static SENSOR_DEVICE_ATTR(temp2_label, 0444, cpu_temp_label, NULL, 2); +static SENSOR_DEVICE_ATTR(temp3_input, 0444, get_cpu_temp, NULL, 3); +static SENSOR_DEVICE_ATTR(temp3_label, 0444, cpu_temp_label, NULL, 3); +static SENSOR_DEVICE_ATTR(temp4_input, 0444, get_cpu_temp, NULL, 4); +static SENSOR_DEVICE_ATTR(temp4_label, 0444, cpu_temp_label, NULL, 4); + +static struct attribute *cpu_hwmon_attributes[] = { 
+ &sensor_dev_attr_temp1_input.dev_attr.attr, + &sensor_dev_attr_temp1_label.dev_attr.attr, + &sensor_dev_attr_temp2_input.dev_attr.attr, + &sensor_dev_attr_temp2_label.dev_attr.attr, + &sensor_dev_attr_temp3_input.dev_attr.attr, + &sensor_dev_attr_temp3_label.dev_attr.attr, + &sensor_dev_attr_temp4_input.dev_attr.attr, + &sensor_dev_attr_temp4_label.dev_attr.attr, + NULL +}; + +static umode_t cpu_hwmon_is_visible(struct kobject *kobj, + struct attribute *attr, int i) { - int i, ret = 0; + int id = i / 2; - for (i = 0; i < nr_packages; i++) - ret = sysfs_create_files(kobj, hwmon_cputemp[i]); - - return ret; + if (id < nr_packages) + return attr->mode; + return 0; } -static void remove_sysfs_cputemp_files(struct kobject *kobj) -{ - int i; +static struct attribute_group cpu_hwmon_group = { + .attrs = cpu_hwmon_attributes, + .is_visible = cpu_hwmon_is_visible, +}; - for (i = 0; i < nr_packages; i++) - sysfs_remove_files(kobj, hwmon_cputemp[i]); -} +static const struct attribute_group *cpu_hwmon_groups[] = { + &cpu_hwmon_group, + NULL +}; #define CPU_THERMAL_THRESHOLD 90000 static struct delayed_work thermal_work; @@ -159,50 +133,31 @@ static void do_thermal_timer(struct work_struct *work) static int __init loongson_hwmon_init(void) { - int ret; - pr_info("Loongson Hwmon Enter...\n"); if (cpu_has_csr()) csr_temp_enable = csr_readl(LOONGSON_CSR_FEATURES) & LOONGSON_CSRF_TEMP; - cpu_hwmon_dev = hwmon_device_register_with_info(NULL, "cpu_hwmon", NULL, NULL, NULL); - if (IS_ERR(cpu_hwmon_dev)) { - ret = PTR_ERR(cpu_hwmon_dev); - pr_err("hwmon_device_register fail!\n"); - goto fail_hwmon_device_register; - } - nr_packages = loongson_sysconf.nr_cpus / loongson_sysconf.cores_per_package; - ret = create_sysfs_cputemp_files(&cpu_hwmon_dev->kobj); - if (ret) { - pr_err("fail to create cpu temperature interface!\n"); - goto fail_create_sysfs_cputemp_files; + cpu_hwmon_dev = hwmon_device_register_with_groups(NULL, "cpu_hwmon", + NULL, cpu_hwmon_groups); + if (IS_ERR(cpu_hwmon_dev)) { + pr_err("hwmon_device_register fail!\n"); + return PTR_ERR(cpu_hwmon_dev); } INIT_DEFERRABLE_WORK(&thermal_work, do_thermal_timer); schedule_delayed_work(&thermal_work, msecs_to_jiffies(20000)); - return ret; - -fail_create_sysfs_cputemp_files: - sysfs_remove_group(&cpu_hwmon_dev->kobj, - &cpu_hwmon_attribute_group); - hwmon_device_unregister(cpu_hwmon_dev); - -fail_hwmon_device_register: - return ret; + return 0; } static void __exit loongson_hwmon_exit(void) { cancel_delayed_work_sync(&thermal_work); - remove_sysfs_cputemp_files(&cpu_hwmon_dev->kobj); - sysfs_remove_group(&cpu_hwmon_dev->kobj, - &cpu_hwmon_attribute_group); hwmon_device_unregister(cpu_hwmon_dev); } From f57696bc63418642f0f262ad8baded7d1192296f Mon Sep 17 00:00:00 2001 From: Nathan Chancellor Date: Thu, 5 May 2022 08:27:38 -0700 Subject: [PATCH 0637/1842] i2c: at91: Initialize dma_buf in at91_twi_xfer() [ Upstream commit 6977262c2eee111645668fe9e235ef2f5694abf7 ] Clang warns: drivers/i2c/busses/i2c-at91-master.c:707:6: warning: variable 'dma_buf' is used uninitialized whenever 'if' condition is false [-Wsometimes-uninitialized] if (dev->use_dma) { ^~~~~~~~~~~~ drivers/i2c/busses/i2c-at91-master.c:717:27: note: uninitialized use occurs here i2c_put_dma_safe_msg_buf(dma_buf, m_start, !ret); ^~~~~~~ Initialize dma_buf to NULL, as i2c_put_dma_safe_msg_buf() is a no-op when the first argument is NULL, which will work for the !dev->use_dma case. 
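A short hedged sketch of the resulting rule (names are illustrative, not the at91 code): every path that reaches the cleanup helper must see a well-defined pointer, and NULL is the defined no-op case.

static int example_start_transfer(struct i2c_msg *msg, u8 *buf) { return 0; }	/* stand-in */

static int example_xfer(struct i2c_msg *msg, bool use_dma)
{
	u8 *dma_buf = NULL;	/* stays NULL on the PIO path */
	int ret;

	if (use_dma) {
		dma_buf = i2c_get_dma_safe_msg_buf(msg, 1);
		if (!dma_buf)
			return -ENOMEM;
	}

	ret = example_start_transfer(msg, dma_buf);

	/*
	 * With a NULL first argument this is a no-op, so the PIO path needs
	 * no extra conditional around the cleanup.
	 */
	i2c_put_dma_safe_msg_buf(dma_buf, msg, !ret);

	return ret;
}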
Fixes: 03fbb903c8bf ("i2c: at91: use dma safe buffers") Link: https://github.com/ClangBuiltLinux/linux/issues/1629 Signed-off-by: Nathan Chancellor Reviewed-by: Michael Walle Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-at91-master.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/i2c/busses/i2c-at91-master.c b/drivers/i2c/busses/i2c-at91-master.c index 974225faaf96..7960fa4b8c5b 100644 --- a/drivers/i2c/busses/i2c-at91-master.c +++ b/drivers/i2c/busses/i2c-at91-master.c @@ -657,7 +657,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num) unsigned int_addr_flag = 0; struct i2c_msg *m_start = msg; bool is_read; - u8 *dma_buf; + u8 *dma_buf = NULL; dev_dbg(&adap->dev, "at91_xfer: processing %d messages:\n", num); From 6073af78156b8c3fc1198f8bcc190b7ac3ac0143 Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Thu, 21 Apr 2022 08:13:38 +0200 Subject: [PATCH 0638/1842] dmaengine: idxd: Fix the error handling path in idxd_cdev_register() [ Upstream commit aab08c1aac01097815fbcf10fce7021d2396a31f ] If a call to alloc_chrdev_region() fails, the already allocated resources are leaking. Add the needed error handling path to fix the leak. Fixes: 42d279f9137a ("dmaengine: idxd: add char driver to expose submission portal to userland") Signed-off-by: Christophe JAILLET Acked-by: Dave Jiang Link: https://lore.kernel.org/r/1b5033dcc87b5f2a953c413f0306e883e6114542.1650521591.git.christophe.jaillet@wanadoo.fr Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin --- drivers/dma/idxd/cdev.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c index 4da88578ed64..ae65eb90afab 100644 --- a/drivers/dma/idxd/cdev.c +++ b/drivers/dma/idxd/cdev.c @@ -266,10 +266,16 @@ int idxd_cdev_register(void) rc = alloc_chrdev_region(&ictx[i].devt, 0, MINORMASK, ictx[i].name); if (rc) - return rc; + goto err_free_chrdev_region; } return 0; + +err_free_chrdev_region: + for (i--; i >= 0; i--) + unregister_chrdev_region(ictx[i].devt, MINORMASK); + + return rc; } void idxd_cdev_remove(void) From 418b9fa4349a0479e6c3a9407a3ca208abd87bec Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Sat, 14 May 2022 10:27:00 -0400 Subject: [PATCH 0639/1842] NFS: Do not report EINTR/ERESTARTSYS as mapping errors [ Upstream commit cea9ba7239dcc84175041174304c6cdeae3226e5 ] If the attempt to flush data was interrupted due to a local signal, then just requeue the writes back for I/O. 
Fixes: 6fbda89b257f ("NFS: Replace custom error reporting mechanism with generic one") Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- fs/nfs/write.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/nfs/write.c b/fs/nfs/write.c index 5d07799513a6..b08323ed0c25 100644 --- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -1411,7 +1411,7 @@ static void nfs_async_write_error(struct list_head *head, int error) while (!list_empty(head)) { req = nfs_list_entry(head->next); nfs_list_remove_request(req); - if (nfs_error_is_fatal(error)) + if (nfs_error_is_fatal_on_server(error)) nfs_write_error(req, error); else nfs_redirty_request(req); From c5a0e59bbe0546b2b28ade53c47bf2efb643fb33 Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Sat, 14 May 2022 10:27:01 -0400 Subject: [PATCH 0640/1842] NFS: fsync() should report filesystem errors over EINTR/ERESTARTSYS [ Upstream commit 9641d9bc9b75f11f70646f5c6ee9f5f519a1012e ] If the commit to disk is interrupted, we should still first check for filesystem errors so that we can report them in preference to the error due to the signal. Fixes: 2197e9b06c22 ("NFS: Fix up fsync() when the server rebooted") Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- fs/nfs/file.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/fs/nfs/file.c b/fs/nfs/file.c index 7b47f9b063f1..887faff3a73e 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -208,15 +208,16 @@ static int nfs_file_fsync_commit(struct file *file, int datasync) { struct inode *inode = file_inode(file); - int ret; + int ret, ret2; dprintk("NFS: fsync file(%pD2) datasync %d\n", file, datasync); nfs_inc_stats(inode, NFSIOS_VFSFSYNC); ret = nfs_commit_inode(inode, FLUSH_SYNC); - if (ret < 0) - return ret; - return file_check_and_advance_wb_err(file); + ret2 = file_check_and_advance_wb_err(file); + if (ret2 < 0) + return ret2; + return ret; } int From 040242365c9e2b2aadf4f40cb4083f42c90ee433 Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Sat, 14 May 2022 10:27:03 -0400 Subject: [PATCH 0641/1842] NFS: Do not report flush errors in nfs_write_end() [ Upstream commit d95b26650e86175e4a97698d89bc1626cd1df0c6 ] If we do flush cached writebacks in nfs_write_end() due to the imminent expiration of an RPCSEC_GSS session, then we should defer reporting any resulting errors until the calls to file_check_and_advance_wb_err() in nfs_file_write() and nfs_file_fsync(). 
Fixes: 6fbda89b257f ("NFS: Replace custom error reporting mechanism with generic one") Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- fs/nfs/file.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/fs/nfs/file.c b/fs/nfs/file.c index 887faff3a73e..ad856b7b9a46 100644 --- a/fs/nfs/file.c +++ b/fs/nfs/file.c @@ -390,11 +390,8 @@ static int nfs_write_end(struct file *file, struct address_space *mapping, return status; NFS_I(mapping->host)->write_io += copied; - if (nfs_ctx_key_to_expire(ctx, mapping->host)) { - status = nfs_wb_all(mapping->host); - if (status < 0) - return status; - } + if (nfs_ctx_key_to_expire(ctx, mapping->host)) + nfs_wb_all(mapping->host); return copied; } From 83839a333fbf47cb8a25957902f2400356cde7ab Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Sat, 14 May 2022 10:27:04 -0400 Subject: [PATCH 0642/1842] NFS: Don't report errors from nfs_pageio_complete() more than once [ Upstream commit c5e483b77cc2edb318da152abe07e33006b975fd ] Since errors from nfs_pageio_complete() are already being reported through nfs_async_write_error(), we should not be returning them to the callers of do_writepages() as well. They will end up being reported through the generic mechanism instead. Fixes: 6fbda89b257f ("NFS: Replace custom error reporting mechanism with generic one") Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- fs/nfs/write.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/fs/nfs/write.c b/fs/nfs/write.c index b08323ed0c25..dc08a0c02f09 100644 --- a/fs/nfs/write.c +++ b/fs/nfs/write.c @@ -675,11 +675,7 @@ static int nfs_writepage_locked(struct page *page, err = nfs_do_writepage(page, wbc, &pgio); pgio.pg_error = 0; nfs_pageio_complete(&pgio); - if (err < 0) - return err; - if (nfs_error_is_fatal(pgio.pg_error)) - return pgio.pg_error; - return 0; + return err; } int nfs_writepage(struct page *page, struct writeback_control *wbc) @@ -730,9 +726,6 @@ int nfs_writepages(struct address_space *mapping, struct writeback_control *wbc) if (err < 0) goto out_err; - err = pgio.pg_error; - if (nfs_error_is_fatal(err)) - goto out_err; return 0; out_err: return err; From 96fdbb1c8563ab4382937e1142dda5114360e286 Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Sat, 14 May 2022 10:08:11 -0400 Subject: [PATCH 0643/1842] NFSv4/pNFS: Do not fail I/O when we fail to allocate the pNFS layout [ Upstream commit 3764a17e31d579cf9b4bd0a69894b577e8d75702 ] Commit 587f03deb69b caused pnfs_update_layout() to stop returning ENOMEM when the memory allocation fails, and hence causes it to fall back to trying to do I/O through the MDS. There is no guarantee that this will fare any better. If we're failing the pNFS layout allocation, then we should just redirty the page and retry later. 
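A small, generic sketch of the convention relied on here (not the pnfs code itself): returning ERR_PTR(-ENOMEM) instead of a bare NULL lets the caller tell "allocation failed, requeue and retry" apart from other outcomes.

struct example_layout {
	int id;
};

static struct example_layout *example_find_or_alloc(gfp_t gfp)
{
	struct example_layout *l = kzalloc(sizeof(*l), gfp);

	if (!l)
		return ERR_PTR(-ENOMEM);	/* caller can redirty and retry */

	return l;
}

static int example_caller(void)
{
	struct example_layout *l = example_find_or_alloc(GFP_NOFS);

	if (IS_ERR(l))
		return PTR_ERR(l);	/* -ENOMEM: retry later instead of falling back */

	kfree(l);
	return 0;
}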
Reported-by: Olga Kornievskaia Fixes: 587f03deb69b ("pnfs: refactor send_layoutget") Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- fs/nfs/pnfs.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c index b3b9eff5d572..8c0803d98008 100644 --- a/fs/nfs/pnfs.c +++ b/fs/nfs/pnfs.c @@ -2006,6 +2006,7 @@ pnfs_update_layout(struct inode *ino, lo = pnfs_find_alloc_layout(ino, ctx, gfp_flags); if (lo == NULL) { spin_unlock(&ino->i_lock); + lseg = ERR_PTR(-ENOMEM); trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, PNFS_UPDATE_LAYOUT_NOMEM); goto out; @@ -2134,6 +2135,7 @@ pnfs_update_layout(struct inode *ino, lgp = pnfs_alloc_init_layoutget_args(ino, ctx, &stateid, &arg, gfp_flags); if (!lgp) { + lseg = ERR_PTR(-ENOMEM); trace_pnfs_update_layout(ino, pos, count, iomode, lo, NULL, PNFS_UPDATE_LAYOUT_NOMEM); nfs_layoutget_end(lo); From c1c4405222b6fc98c16e8c2aa679c14e41d81465 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 12 May 2022 15:59:08 +0400 Subject: [PATCH 0644/1842] video: fbdev: clcdfb: Fix refcount leak in clcdfb_of_vram_setup [ Upstream commit b23789a59fa6f00e98a319291819f91fbba0deb8 ] of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. Fixes: d10715be03bd ("video: ARM CLCD: Add DT support") Signed-off-by: Miaoqian Lin Signed-off-by: Helge Deller Signed-off-by: Sasha Levin --- drivers/video/fbdev/amba-clcd.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c index 33595cc4778e..79efefd224f4 100644 --- a/drivers/video/fbdev/amba-clcd.c +++ b/drivers/video/fbdev/amba-clcd.c @@ -771,12 +771,15 @@ static int clcdfb_of_vram_setup(struct clcd_fb *fb) return -ENODEV; fb->fb.screen_base = of_iomap(memory, 0); - if (!fb->fb.screen_base) + if (!fb->fb.screen_base) { + of_node_put(memory); return -ENOMEM; + } fb->fb.fix.smem_start = of_translate_address(memory, of_get_address(memory, 0, &size, NULL)); fb->fb.fix.smem_len = size; + of_node_put(memory); return 0; } From 0f87bd8b5fbf4d681b55bd8d320e04ec4f85be22 Mon Sep 17 00:00:00 2001 From: Amelie Delaunay Date: Wed, 4 May 2022 17:53:20 +0200 Subject: [PATCH 0645/1842] dmaengine: stm32-mdma: remove GISR1 register [ Upstream commit 9d6a2d92e450926c483e45eaf426080a19219f4e ] GISR1 was described in a not up-to-date documentation when the stm32-mdma driver has been developed. This register has not been added in reference manual of STM32 SoC with MDMA, which have only 32 MDMA channels. So remove it from stm32-mdma driver. 
Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver") Signed-off-by: Amelie Delaunay Link: https://lore.kernel.org/r/20220504155322.121431-2-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin --- drivers/dma/stm32-mdma.c | 21 +++++---------------- 1 file changed, 5 insertions(+), 16 deletions(-) diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c index fe36738f2dd7..cd394624085a 100644 --- a/drivers/dma/stm32-mdma.c +++ b/drivers/dma/stm32-mdma.c @@ -40,7 +40,6 @@ STM32_MDMA_SHIFT(mask)) #define STM32_MDMA_GISR0 0x0000 /* MDMA Int Status Reg 1 */ -#define STM32_MDMA_GISR1 0x0004 /* MDMA Int Status Reg 2 */ /* MDMA Channel x interrupt/status register */ #define STM32_MDMA_CISR(x) (0x40 + 0x40 * (x)) /* x = 0..62 */ @@ -196,7 +195,7 @@ #define STM32_MDMA_MAX_BUF_LEN 128 #define STM32_MDMA_MAX_BLOCK_LEN 65536 -#define STM32_MDMA_MAX_CHANNELS 63 +#define STM32_MDMA_MAX_CHANNELS 32 #define STM32_MDMA_MAX_REQUESTS 256 #define STM32_MDMA_MAX_BURST 128 #define STM32_MDMA_VERY_HIGH_PRIORITY 0x11 @@ -1350,21 +1349,11 @@ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid) /* Find out which channel generates the interrupt */ status = readl_relaxed(dmadev->base + STM32_MDMA_GISR0); - if (status) { - id = __ffs(status); - } else { - status = readl_relaxed(dmadev->base + STM32_MDMA_GISR1); - if (!status) { - dev_dbg(mdma2dev(dmadev), "spurious it\n"); - return IRQ_NONE; - } - id = __ffs(status); - /* - * As GISR0 provides status for channel id from 0 to 31, - * so GISR1 provides status for channel id from 32 to 62 - */ - id += 32; + if (!status) { + dev_dbg(mdma2dev(dmadev), "spurious it\n"); + return IRQ_NONE; } + id = __ffs(status); chan = &dmadev->chan[id]; if (!chan) { From 13d8d11dfaf925343a9a830e008f1ab4a18bb3c7 Mon Sep 17 00:00:00 2001 From: Amelie Delaunay Date: Fri, 20 Nov 2020 15:33:20 +0100 Subject: [PATCH 0646/1842] dmaengine: stm32-mdma: rework interrupt handler [ Upstream commit 1d3dd68749b9f4a4da272f39608d03b4bae0b69f ] To avoid multiple entries in MDMA interrupt handler for each flag&interrupt enable, manage all flags set at once. 
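The reworked handler follows a common single-pass shape, sketched here with placeholder register offsets, bit names and helpers (they are not the STM32 MDMA definitions): read the status once, mask it with the enabled interrupts, then acknowledge and handle every pending cause before returning.

#define EXAMPLE_STATUS	0x00	/* placeholder register offsets */
#define EXAMPLE_INT_EN	0x04
#define EXAMPLE_ERR	BIT(0)	/* placeholder status bits */
#define EXAMPLE_DONE	BIT(1)

struct example_chan {
	void __iomem *base;
	struct device *dev;
};

static void example_handle_error(struct example_chan *chan) { }	/* stand-in */
static void example_handle_done(struct example_chan *chan) { }		/* stand-in */

static irqreturn_t example_irq_handler(int irq, void *devid)
{
	struct example_chan *chan = devid;
	u32 status, ien;

	status = readl_relaxed(chan->base + EXAMPLE_STATUS);
	ien = readl_relaxed(chan->base + EXAMPLE_INT_EN);
	status &= ien;
	if (!status)
		return IRQ_NONE;	/* spurious */

	/* Service every pending flag in one pass instead of taking one
	 * interrupt per flag.
	 */
	if (status & EXAMPLE_ERR) {
		example_handle_error(chan);
		status &= ~EXAMPLE_ERR;
	}
	if (status & EXAMPLE_DONE) {
		example_handle_done(chan);
		status &= ~EXAMPLE_DONE;
	}
	if (status)
		dev_warn(chan->dev, "unhandled irq status 0x%08x\n", status);

	return IRQ_HANDLED;
}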
Signed-off-by: Amelie Delaunay Link: https://lore.kernel.org/r/20201120143320.30367-5-amelie.delaunay@st.com Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin --- drivers/dma/stm32-mdma.c | 64 +++++++++++++++++++++------------------- 1 file changed, 34 insertions(+), 30 deletions(-) diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c index cd394624085a..4ec6f5b69f56 100644 --- a/drivers/dma/stm32-mdma.c +++ b/drivers/dma/stm32-mdma.c @@ -1345,7 +1345,7 @@ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid) { struct stm32_mdma_device *dmadev = devid; struct stm32_mdma_chan *chan = devid; - u32 reg, id, ien, status, flag; + u32 reg, id, ccr, ien, status; /* Find out which channel generates the interrupt */ status = readl_relaxed(dmadev->base + STM32_MDMA_GISR0); @@ -1357,67 +1357,71 @@ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid) chan = &dmadev->chan[id]; if (!chan) { - dev_dbg(mdma2dev(dmadev), "MDMA channel not initialized\n"); - goto exit; + dev_warn(mdma2dev(dmadev), "MDMA channel not initialized\n"); + return IRQ_NONE; } /* Handle interrupt for the channel */ spin_lock(&chan->vchan.lock); - status = stm32_mdma_read(dmadev, STM32_MDMA_CISR(chan->id)); - ien = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id)); - ien &= STM32_MDMA_CCR_IRQ_MASK; - ien >>= 1; + status = stm32_mdma_read(dmadev, STM32_MDMA_CISR(id)); + /* Mask Channel ReQuest Active bit which can be set in case of MEM2MEM */ + status &= ~STM32_MDMA_CISR_CRQA; + ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(id)); + ien = (ccr & STM32_MDMA_CCR_IRQ_MASK) >> 1; if (!(status & ien)) { spin_unlock(&chan->vchan.lock); - dev_dbg(chan2dev(chan), - "spurious it (status=0x%04x, ien=0x%04x)\n", - status, ien); + dev_warn(chan2dev(chan), + "spurious it (status=0x%04x, ien=0x%04x)\n", + status, ien); return IRQ_NONE; } - flag = __ffs(status & ien); - reg = STM32_MDMA_CIFCR(chan->id); + reg = STM32_MDMA_CIFCR(id); - switch (1 << flag) { - case STM32_MDMA_CISR_TEIF: - id = chan->id; - status = readl_relaxed(dmadev->base + STM32_MDMA_CESR(id)); - dev_err(chan2dev(chan), "Transfer Err: stat=0x%08x\n", status); + if (status & STM32_MDMA_CISR_TEIF) { + dev_err(chan2dev(chan), "Transfer Err: stat=0x%08x\n", + readl_relaxed(dmadev->base + STM32_MDMA_CESR(id))); stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CTEIF); - break; + status &= ~STM32_MDMA_CISR_TEIF; + } - case STM32_MDMA_CISR_CTCIF: + if (status & STM32_MDMA_CISR_CTCIF) { stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CCTCIF); + status &= ~STM32_MDMA_CISR_CTCIF; stm32_mdma_xfer_end(chan); - break; + } - case STM32_MDMA_CISR_BRTIF: + if (status & STM32_MDMA_CISR_BRTIF) { stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CBRTIF); - break; + status &= ~STM32_MDMA_CISR_BRTIF; + } - case STM32_MDMA_CISR_BTIF: + if (status & STM32_MDMA_CISR_BTIF) { stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CBTIF); + status &= ~STM32_MDMA_CISR_BTIF; chan->curr_hwdesc++; if (chan->desc && chan->desc->cyclic) { if (chan->curr_hwdesc == chan->desc->count) chan->curr_hwdesc = 0; vchan_cyclic_callback(&chan->desc->vdesc); } - break; + } - case STM32_MDMA_CISR_TCIF: + if (status & STM32_MDMA_CISR_TCIF) { stm32_mdma_set_bits(dmadev, reg, STM32_MDMA_CIFCR_CLTCIF); - break; + status &= ~STM32_MDMA_CISR_TCIF; + } - default: - dev_err(chan2dev(chan), "it %d unhandled (status=0x%04x)\n", - 1 << flag, status); + if (status) { + stm32_mdma_set_bits(dmadev, reg, status); + dev_err(chan2dev(chan), "DMA error: status=0x%08x\n", status); + if (!(ccr & STM32_MDMA_CCR_EN)) 
+ dev_err(chan2dev(chan), "chan disabled by HW\n"); } spin_unlock(&chan->vchan.lock); -exit: return IRQ_HANDLED; } From 3cfb546439878e158f11851c601fde6b4b65b12b Mon Sep 17 00:00:00 2001 From: Amelie Delaunay Date: Wed, 4 May 2022 17:53:21 +0200 Subject: [PATCH 0647/1842] dmaengine: stm32-mdma: fix chan initialization in stm32_mdma_irq_handler() [ Upstream commit da3b8ddb464bd49b6248d00ca888ad751c9e44fd ] The parameter to pass back to the handler function when irq has been requested is a struct stm32_mdma_device pointer, not a struct stm32_mdma_chan pointer. Even if chan is reinit later in the function, remove this wrong initialization. Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver") Signed-off-by: Amelie Delaunay Link: https://lore.kernel.org/r/20220504155322.121431-3-amelie.delaunay@foss.st.com Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin --- drivers/dma/stm32-mdma.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c index 4ec6f5b69f56..9d54746c422c 100644 --- a/drivers/dma/stm32-mdma.c +++ b/drivers/dma/stm32-mdma.c @@ -1344,7 +1344,7 @@ static void stm32_mdma_xfer_end(struct stm32_mdma_chan *chan) static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid) { struct stm32_mdma_device *dmadev = devid; - struct stm32_mdma_chan *chan = devid; + struct stm32_mdma_chan *chan; u32 reg, id, ccr, ien, status; /* Find out which channel generates the interrupt */ From de6f6b5400be79457eb7a17fe1a0f499299f16b8 Mon Sep 17 00:00:00 2001 From: Joerg Roedel Date: Fri, 20 May 2022 12:22:14 +0200 Subject: [PATCH 0648/1842] iommu/amd: Increase timeout waiting for GA log enablement MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 42bb5aa043382f09bef2cc33b8431be867c70f8e ] On some systems it can take a long time for the hardware to enable the GA log of the AMD IOMMU. The current wait time is only 0.1ms, but testing showed that it can take up to 14ms for the GA log to enter running state after it has been enabled. Sometimes the long delay happens when booting the system, sometimes only on resume. Adjust the timeout accordingly to not print a warning when hardware takes a longer than usual. There has already been an attempt to fix this with commit 9b45a7738eec ("iommu/amd: Fix loop timeout issue in iommu_ga_log_enable()") But that commit was based on some wrong math and did not fix the issue in all cases. Cc: "D. Ziegfeld" Cc: Jörg-Volker Peetz Fixes: 8bda0cfbdc1a ("iommu/amd: Detect and initialize guest vAPIC log") Signed-off-by: Joerg Roedel Link: https://lore.kernel.org/r/20220520102214.12563-1-joro@8bytes.org Signed-off-by: Sasha Levin --- drivers/iommu/amd/init.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c index 6eaefc9e7b3d..e988f6f198c5 100644 --- a/drivers/iommu/amd/init.c +++ b/drivers/iommu/amd/init.c @@ -84,7 +84,7 @@ #define ACPI_DEVFLAG_LINT1 0x80 #define ACPI_DEVFLAG_ATSDIS 0x10000000 -#define LOOP_TIMEOUT 100000 +#define LOOP_TIMEOUT 2000000 /* * ACPI table definitions * From 06cb0f056ba1414bc13a36884913acfadb1ddc84 Mon Sep 17 00:00:00 2001 From: Tali Perry Date: Tue, 17 May 2022 18:11:36 +0800 Subject: [PATCH 0649/1842] i2c: npcm: Fix timeout calculation [ Upstream commit 288b204492fddf28889cea6dc95a23976632c7a0 ] Use adap.timeout for timeout calculation instead of hard-coded value of 35ms. 
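In sketch form, the calculation pairs a length-and-frequency estimate with the adapter-level floor (the constants follow the reasoning in the patch below and are illustrative here):

static unsigned long example_xfer_timeout(struct i2c_adapter *adap,
					  u32 bus_freq_hz,
					  u16 nwrite, u16 nread)
{
	u32 timeout_usec;

	/*
	 * Roughly 9 SCL ticks per byte (8 data bits + ack), doubled as a
	 * safety margin, plus a couple of bytes of addressing overhead.
	 */
	timeout_usec = (2 * 9 * USEC_PER_SEC / bus_freq_hz) *
		       (2 + nread + nwrite);

	/*
	 * Never go below the adapter timeout configured at probe time
	 * (for example msecs_to_jiffies(35)).
	 */
	return max_t(unsigned long, adap->timeout,
		     usecs_to_jiffies(timeout_usec));
}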
Fixes: 56a1485b102e ("i2c: npcm7xx: Add Nuvoton NPCM I2C controller driver") Signed-off-by: Tali Perry Signed-off-by: Tyrone Ting Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-npcm7xx.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c index 2ad166355ec9..92fd88a3f415 100644 --- a/drivers/i2c/busses/i2c-npcm7xx.c +++ b/drivers/i2c/busses/i2c-npcm7xx.c @@ -2047,7 +2047,7 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, u16 nwrite, nread; u8 *write_data, *read_data; u8 slave_addr; - int timeout; + unsigned long timeout; int ret = 0; bool read_block = false; bool read_PEC = false; @@ -2099,13 +2099,13 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, * 9: bits per transaction (including the ack/nack) */ timeout_usec = (2 * 9 * USEC_PER_SEC / bus->bus_freq) * (2 + nread + nwrite); - timeout = max(msecs_to_jiffies(35), usecs_to_jiffies(timeout_usec)); + timeout = max_t(unsigned long, bus->adap.timeout, usecs_to_jiffies(timeout_usec)); if (nwrite >= 32 * 1024 || nread >= 32 * 1024) { dev_err(bus->dev, "i2c%d buffer too big\n", bus->num); return -EINVAL; } - time_left = jiffies + msecs_to_jiffies(DEFAULT_STALL_COUNT) + 1; + time_left = jiffies + timeout + 1; do { /* * we must clear slave address immediately when the bus is not @@ -2269,7 +2269,7 @@ static int npcm_i2c_probe_bus(struct platform_device *pdev) adap = &bus->adap; adap->owner = THIS_MODULE; adap->retries = 3; - adap->timeout = HZ; + adap->timeout = msecs_to_jiffies(35); adap->algo = &npcm_i2c_algo; adap->quirks = &npcm_i2c_quirks; adap->algo_data = bus; From 5c0dfca6b9ccfa5d86c031645f7fecb322db11eb Mon Sep 17 00:00:00 2001 From: Tyrone Ting Date: Tue, 17 May 2022 18:11:38 +0800 Subject: [PATCH 0650/1842] i2c: npcm: Correct register access width MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit ea9f8426d17620214ee345ffb77ee6cc196ff14f ] The SMBnCTL3 register is 8-bit wide and the 32-bit access was always incorrect, but simply didn't cause a visible error on the 32-bit machine. On the 64-bit machine, the kernel message reports that ESR value is 0x96000021. Checking Arm Architecture Reference Manual Armv8 suggests that it's the alignment fault. SMBnCTL3's address is 0xE. 
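The general rule can be sketched as follows (the offset and bit position are placeholders, not the NPCM definitions): match the accessor width to the register width, since a 32-bit read of a byte-wide register at an odd offset can fault on strict-alignment CPUs and also touches neighbouring registers.

#define EXAMPLE_CTL3	0x0E		/* 8-bit register at an odd offset */
#define EXAMPLE_SCL_LVL	BIT(4)		/* placeholder bit */

static int example_get_scl(void __iomem *base)
{
	/*
	 * ioread8() matches the register width; ioread32() here would be an
	 * unaligned, too-wide access.
	 */
	return !!(ioread8(base + EXAMPLE_CTL3) & EXAMPLE_SCL_LVL);
}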
Fixes: 56a1485b102e ("i2c: npcm7xx: Add Nuvoton NPCM I2C controller driver") Signed-off-by: Tyrone Ting Reviewed-by: Jonathan Neuschäfer Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-npcm7xx.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c index 92fd88a3f415..cdea7f440a9e 100644 --- a/drivers/i2c/busses/i2c-npcm7xx.c +++ b/drivers/i2c/busses/i2c-npcm7xx.c @@ -359,14 +359,14 @@ static int npcm_i2c_get_SCL(struct i2c_adapter *_adap) { struct npcm_i2c *bus = container_of(_adap, struct npcm_i2c, adap); - return !!(I2CCTL3_SCL_LVL & ioread32(bus->reg + NPCM_I2CCTL3)); + return !!(I2CCTL3_SCL_LVL & ioread8(bus->reg + NPCM_I2CCTL3)); } static int npcm_i2c_get_SDA(struct i2c_adapter *_adap) { struct npcm_i2c *bus = container_of(_adap, struct npcm_i2c, adap); - return !!(I2CCTL3_SDA_LVL & ioread32(bus->reg + NPCM_I2CCTL3)); + return !!(I2CCTL3_SDA_LVL & ioread8(bus->reg + NPCM_I2CCTL3)); } static inline u16 npcm_i2c_get_index(struct npcm_i2c *bus) From ebd4f37ac1e6440b3b2a81c751aacf1ed67a97a3 Mon Sep 17 00:00:00 2001 From: Tali Perry Date: Tue, 17 May 2022 18:11:39 +0800 Subject: [PATCH 0651/1842] i2c: npcm: Handle spurious interrupts [ Upstream commit e5222d408de2a88e6b206c38217b48d092184553 ] On some platforms in rare cases (1 to 100,000 transactions), the i2c gets a spurious interrupt which means that we enter an interrupt but in the interrupt handler we don't find any status bit that points to the reason we got this interrupt. This may be a case of a rare HW issue or signal integrity issue that is still under investigation. In order to overcome this we are doing the following: 1. Disable incoming interrupts in master mode only when slave mode is not enabled. 2. Clear end of busy (EOB) after every interrupt. 3. Clear other status bits (just in case since we found them cleared) 4. Return correct status during the interrupt that will finish the transaction. On next xmit transaction if the bus is still busy the master will issue a recovery process before issuing the new transaction. 
Fixes: 56a1485b102e ("i2c: npcm7xx: Add Nuvoton NPCM I2C controller driver") Signed-off-by: Tali Perry Signed-off-by: Tyrone Ting Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-npcm7xx.c | 91 ++++++++++++++++++++++---------- 1 file changed, 62 insertions(+), 29 deletions(-) diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c index cdea7f440a9e..20a2f903b7f6 100644 --- a/drivers/i2c/busses/i2c-npcm7xx.c +++ b/drivers/i2c/busses/i2c-npcm7xx.c @@ -563,6 +563,15 @@ static inline void npcm_i2c_nack(struct npcm_i2c *bus) iowrite8(val, bus->reg + NPCM_I2CCTL1); } +static inline void npcm_i2c_clear_master_status(struct npcm_i2c *bus) +{ + u8 val; + + /* Clear NEGACK, STASTR and BER bits */ + val = NPCM_I2CST_BER | NPCM_I2CST_NEGACK | NPCM_I2CST_STASTR; + iowrite8(val, bus->reg + NPCM_I2CST); +} + #if IS_ENABLED(CONFIG_I2C_SLAVE) static void npcm_i2c_slave_int_enable(struct npcm_i2c *bus, bool enable) { @@ -642,8 +651,8 @@ static void npcm_i2c_reset(struct npcm_i2c *bus) iowrite8(NPCM_I2CCST_BB, bus->reg + NPCM_I2CCST); iowrite8(0xFF, bus->reg + NPCM_I2CST); - /* Clear EOB bit */ - iowrite8(NPCM_I2CCST3_EO_BUSY, bus->reg + NPCM_I2CCST3); + /* Clear and disable EOB */ + npcm_i2c_eob_int(bus, false); /* Clear all fifo bits: */ iowrite8(NPCM_I2CFIF_CTS_CLR_FIFO, bus->reg + NPCM_I2CFIF_CTS); @@ -655,6 +664,9 @@ static void npcm_i2c_reset(struct npcm_i2c *bus) } #endif + /* clear status bits for spurious interrupts */ + npcm_i2c_clear_master_status(bus); + bus->state = I2C_IDLE; } @@ -815,15 +827,6 @@ static void npcm_i2c_read_fifo(struct npcm_i2c *bus, u8 bytes_in_fifo) } } -static inline void npcm_i2c_clear_master_status(struct npcm_i2c *bus) -{ - u8 val; - - /* Clear NEGACK, STASTR and BER bits */ - val = NPCM_I2CST_BER | NPCM_I2CST_NEGACK | NPCM_I2CST_STASTR; - iowrite8(val, bus->reg + NPCM_I2CST); -} - static void npcm_i2c_master_abort(struct npcm_i2c *bus) { /* Only current master is allowed to issue a stop condition */ @@ -1231,7 +1234,16 @@ static irqreturn_t npcm_i2c_int_slave_handler(struct npcm_i2c *bus) ret = IRQ_HANDLED; } /* SDAST */ - return ret; + /* + * if irq is not one of the above, make sure EOB is disabled and all + * status bits are cleared. + */ + if (ret == IRQ_NONE) { + npcm_i2c_eob_int(bus, false); + npcm_i2c_clear_master_status(bus); + } + + return IRQ_HANDLED; } static int npcm_i2c_reg_slave(struct i2c_client *client) @@ -1467,6 +1479,9 @@ static void npcm_i2c_irq_handle_nack(struct npcm_i2c *bus) npcm_i2c_eob_int(bus, false); npcm_i2c_master_stop(bus); + /* Clear SDA Status bit (by reading dummy byte) */ + npcm_i2c_rd_byte(bus); + /* * The bus is released from stall only after the SW clears * NEGACK bit. Then a Stop condition is sent. 
@@ -1474,6 +1489,8 @@ static void npcm_i2c_irq_handle_nack(struct npcm_i2c *bus) npcm_i2c_clear_master_status(bus); readx_poll_timeout_atomic(ioread8, bus->reg + NPCM_I2CCST, val, !(val & NPCM_I2CCST_BUSY), 10, 200); + /* verify no status bits are still set after bus is released */ + npcm_i2c_clear_master_status(bus); } bus->state = I2C_IDLE; @@ -1672,10 +1689,10 @@ static int npcm_i2c_recovery_tgclk(struct i2c_adapter *_adap) int iter = 27; if ((npcm_i2c_get_SDA(_adap) == 1) && (npcm_i2c_get_SCL(_adap) == 1)) { - dev_dbg(bus->dev, "bus%d recovery skipped, bus not stuck", - bus->num); + dev_dbg(bus->dev, "bus%d-0x%x recovery skipped, bus not stuck", + bus->num, bus->dest_addr); npcm_i2c_reset(bus); - return status; + return 0; } npcm_i2c_int_enable(bus, false); @@ -1909,6 +1926,7 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode, bus_freq_hz < I2C_FREQ_MIN_HZ || bus_freq_hz > I2C_FREQ_MAX_HZ) return -EINVAL; + npcm_i2c_int_enable(bus, false); npcm_i2c_disable(bus); /* Configure FIFO mode : */ @@ -1937,10 +1955,17 @@ static int npcm_i2c_init_module(struct npcm_i2c *bus, enum i2c_mode mode, val = (val | NPCM_I2CCTL1_NMINTE) & ~NPCM_I2CCTL1_RWS; iowrite8(val, bus->reg + NPCM_I2CCTL1); - npcm_i2c_int_enable(bus, true); - npcm_i2c_reset(bus); + /* check HW is OK: SDA and SCL should be high at this point. */ + if ((npcm_i2c_get_SDA(&bus->adap) == 0) || (npcm_i2c_get_SCL(&bus->adap) == 0)) { + dev_err(bus->dev, "I2C%d init fail: lines are low\n", bus->num); + dev_err(bus->dev, "SDA=%d SCL=%d\n", npcm_i2c_get_SDA(&bus->adap), + npcm_i2c_get_SCL(&bus->adap)); + return -ENXIO; + } + + npcm_i2c_int_enable(bus, true); return 0; } @@ -1988,10 +2013,14 @@ static irqreturn_t npcm_i2c_bus_irq(int irq, void *dev_id) #if IS_ENABLED(CONFIG_I2C_SLAVE) if (bus->slave) { bus->master_or_slave = I2C_SLAVE; - return npcm_i2c_int_slave_handler(bus); + if (npcm_i2c_int_slave_handler(bus)) + return IRQ_HANDLED; } #endif - return IRQ_NONE; + /* clear status bits for spurious interrupts */ + npcm_i2c_clear_master_status(bus); + + return IRQ_HANDLED; } static bool npcm_i2c_master_start_xmit(struct npcm_i2c *bus, @@ -2048,7 +2077,6 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, u8 *write_data, *read_data; u8 slave_addr; unsigned long timeout; - int ret = 0; bool read_block = false; bool read_PEC = false; u8 bus_busy; @@ -2138,12 +2166,12 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, bus->read_block_use = read_block; reinit_completion(&bus->cmd_complete); - if (!npcm_i2c_master_start_xmit(bus, slave_addr, nwrite, nread, - write_data, read_data, read_PEC, - read_block)) - ret = -EBUSY; - if (ret != -EBUSY) { + npcm_i2c_int_enable(bus, true); + + if (npcm_i2c_master_start_xmit(bus, slave_addr, nwrite, nread, + write_data, read_data, read_PEC, + read_block)) { time_left = wait_for_completion_timeout(&bus->cmd_complete, timeout); @@ -2157,26 +2185,31 @@ static int npcm_i2c_master_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, } } } - ret = bus->cmd_err; /* if there was BER, check if need to recover the bus: */ if (bus->cmd_err == -EAGAIN) - ret = i2c_recover_bus(adap); + bus->cmd_err = i2c_recover_bus(adap); /* * After any type of error, check if LAST bit is still set, * due to a HW issue. * It cannot be cleared without resetting the module. 
*/ - if (bus->cmd_err && - (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL))) + else if (bus->cmd_err && + (NPCM_I2CRXF_CTL_LAST_PEC & ioread8(bus->reg + NPCM_I2CRXF_CTL))) npcm_i2c_reset(bus); + /* after any xfer, successful or not, stall and EOB must be disabled */ + npcm_i2c_stall_after_start(bus, false); + npcm_i2c_eob_int(bus, false); + #if IS_ENABLED(CONFIG_I2C_SLAVE) /* reenable slave if it was enabled */ if (bus->slave) iowrite8((bus->slave->addr & 0x7F) | NPCM_I2CADDR_SAEN, bus->reg + NPCM_I2CADDR1); +#else + npcm_i2c_int_enable(bus, false); #endif return bus->cmd_err; } From c9542f5f901ba425e7eff0361b81d49dea0d5259 Mon Sep 17 00:00:00 2001 From: Kuninori Morimoto Date: Fri, 20 May 2022 11:54:21 +0200 Subject: [PATCH 0652/1842] i2c: rcar: fix PM ref counts in probe error paths [ Upstream commit 3fe2ec59db1a7569e18594b9c0cf1f4f1afd498e ] We have to take care of ID_P_PM_BLOCKED when bailing out during probe. Fixes: 7ee24eb508d6 ("i2c: rcar: disable PM in multi-master mode") Signed-off-by: Kuninori Morimoto Signed-off-by: Wolfram Sang Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-rcar.c | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c index 8722ca23f889..6a7a7a074a97 100644 --- a/drivers/i2c/busses/i2c-rcar.c +++ b/drivers/i2c/busses/i2c-rcar.c @@ -999,8 +999,10 @@ static int rcar_i2c_probe(struct platform_device *pdev) pm_runtime_enable(dev); pm_runtime_get_sync(dev); ret = rcar_i2c_clock_calculate(priv); - if (ret < 0) - goto out_pm_put; + if (ret < 0) { + pm_runtime_put(dev); + goto out_pm_disable; + } rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */ @@ -1029,19 +1031,19 @@ static int rcar_i2c_probe(struct platform_device *pdev) ret = platform_get_irq(pdev, 0); if (ret < 0) - goto out_pm_disable; + goto out_pm_put; priv->irq = ret; ret = devm_request_irq(dev, priv->irq, irqhandler, irqflags, dev_name(dev), priv); if (ret < 0) { dev_err(dev, "cannot get irq %d\n", priv->irq); - goto out_pm_disable; + goto out_pm_put; } platform_set_drvdata(pdev, priv); ret = i2c_add_numbered_adapter(adap); if (ret < 0) - goto out_pm_disable; + goto out_pm_put; if (priv->flags & ID_P_HOST_NOTIFY) { priv->host_notify_client = i2c_new_slave_host_notify_device(adap); @@ -1058,7 +1060,8 @@ static int rcar_i2c_probe(struct platform_device *pdev) out_del_device: i2c_del_adapter(&priv->adap); out_pm_put: - pm_runtime_put(dev); + if (priv->flags & ID_P_PM_BLOCKED) + pm_runtime_put(dev); out_pm_disable: pm_runtime_disable(dev); return ret; From c8c2802407aa62ca0177802b0978edc689dabf3e Mon Sep 17 00:00:00 2001 From: Leo Yan Date: Thu, 26 May 2022 22:54:00 +0800 Subject: [PATCH 0653/1842] perf c2c: Use stdio interface if slang is not supported [ Upstream commit c4040212bc97d16040712a410335f93bc94d2262 ] If the slang lib is not installed on the system, perf c2c tool disables TUI mode and roll back to use stdio mode; but the flag 'c2c.use_stdio' is missed to set true and thus it wrongly applies UI quirks in the function ui_quirks(). This commit forces to use stdio interface if slang is not supported, and it can avoid to apply the UI quirks and show the correct metric header. 
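As a rough illustration of the compile-time fallback this relies on (not part of the patch itself), a minimal userspace C sketch follows. HAVE_SLANG_SUPPORT and the option struct only loosely mirror the perf sources; the real change is the short diff further below.

/* Illustrative sketch only: force the stdio flag when the TUI library
 * is not built in, before any UI-quirk decisions are made.
 */
#include <stdbool.h>
#include <stdio.h>

struct report_opts {
	bool use_stdio;
	bool stats_only;
};

static void resolve_ui_mode(struct report_opts *opts)
{
#ifndef HAVE_SLANG_SUPPORT
	/* No TUI available: fall back to stdio unconditionally. */
	opts->use_stdio = true;
#endif
	if (opts->stats_only)		/* --stats already implies --stdio */
		opts->use_stdio = true;
}

int main(void)
{
	struct report_opts opts = { .use_stdio = false, .stats_only = false };

	resolve_ui_mode(&opts);
	printf("use_stdio=%d\n", opts.use_stdio);
	return 0;
}
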
Before: ================================================= Shared Cache Line Distribution Pareto ================================================= ------------------------------------------------------------------------------- 0 0 0 99 0 0 0 0xaaaac17d6000 ------------------------------------------------------------------------------- 0.00% 0.00% 6.06% 0.00% 0.00% 0.00% 0x20 N/A 0 0xaaaac17c25ac 0 0 43 375 18469 2 [.] 0x00000000000025ac memstress memstress[25ac] 0 0.00% 0.00% 93.94% 0.00% 0.00% 0.00% 0x29 N/A 0 0xaaaac17c3e88 0 0 173 180 135 2 [.] 0x0000000000003e88 memstress memstress[3e88] 0 After: ================================================= Shared Cache Line Distribution Pareto ================================================= ------------------------------------------------------------------------------- 0 0 0 99 0 0 0 0xaaaac17d6000 ------------------------------------------------------------------------------- 0.00% 0.00% 6.06% 0.00% 0.00% 0.00% 0x20 N/A 0 0xaaaac17c25ac 0 0 43 375 18469 2 [.] 0x00000000000025ac memstress memstress[25ac] 0 0.00% 0.00% 93.94% 0.00% 0.00% 0.00% 0x29 N/A 0 0xaaaac17c3e88 0 0 173 180 135 2 [.] 0x0000000000003e88 memstress memstress[3e88] 0 Fixes: 5a1a99cd2e4e1557 ("perf c2c report: Add main TUI browser") Reported-by: Joe Mario Signed-off-by: Leo Yan Cc: Alexander Shishkin Cc: Jiri Olsa Cc: Mark Rutland Cc: Namhyung Kim Cc: Peter Zijlstra Link: http://lore.kernel.org/lkml/20220526145400.611249-1-leo.yan@linaro.org Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin --- tools/perf/builtin-c2c.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c index d5bea5d3cd51..7f7111d4b3ad 100644 --- a/tools/perf/builtin-c2c.c +++ b/tools/perf/builtin-c2c.c @@ -2694,9 +2694,7 @@ static int perf_c2c__report(int argc, const char **argv) "the input file to process"), OPT_INCR('N', "node-info", &c2c.node_info, "show extra node info in report (repeat for more info)"), -#ifdef HAVE_SLANG_SUPPORT OPT_BOOLEAN(0, "stdio", &c2c.use_stdio, "Use the stdio interface"), -#endif OPT_BOOLEAN(0, "stats", &c2c.stats_only, "Display only statistic tables (implies --stdio)"), OPT_BOOLEAN(0, "full-symbols", &c2c.symbol_full, @@ -2725,6 +2723,10 @@ static int perf_c2c__report(int argc, const char **argv) if (argc) usage_with_options(report_c2c_usage, options); +#ifndef HAVE_SLANG_SUPPORT + c2c.use_stdio = true; +#endif + if (c2c.stats_only) c2c.use_stdio = true; From d8b6aaeb9a91ae5e4d01ffa3cac5f7f52b9451f0 Mon Sep 17 00:00:00 2001 From: Zhengjun Xing Date: Wed, 25 May 2022 22:04:10 +0800 Subject: [PATCH 0654/1842] perf jevents: Fix event syntax error caused by ExtSel MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit f4df0dbbe62ee8e4405a57b27ccd54393971c773 ] In the origin code, when "ExtSel" is 1, the eventcode will change to "eventcode |= 1 << 21”. For event “UNC_Q_RxL_CREDITS_CONSUMED_VN0.DRS", its "ExtSel" is "1", its eventcode will change from 0x1E to 0x20001E, but in fact the eventcode should <=0x1FF, so this will cause the parse fail: # perf stat -e "UNC_Q_RxL_CREDITS_CONSUMED_VN0.DRS" -a sleep 0.1 event syntax error: '.._RxL_CREDITS_CONSUMED_VN0.DRS' \___ value too big for format, maximum is 511 On the perf kernel side, the kernel assumes the valid bits are continuous. It will adjust the 0x100 (bit 8 for perf tool) to bit 21 in HW. 
DEFINE_UNCORE_FORMAT_ATTR(event_ext, event, "config:0-7,21"); So the perf tool follows the kernel side and just set bit8 other than bit21. Fixes: fedb2b518239cbc0 ("perf jevents: Add support for parsing uncore json files") Reviewed-by: Kan Liang Signed-off-by: Xing Zhengjun Acked-by: Ian Rogers Cc: Adrian Hunter Cc: Alexander Shishkin Cc: Andi Kleen Cc: Ingo Molnar Cc: Jiri Olsa Cc: Peter Zijlstra Link: https://lore.kernel.org/r/20220525140410.1706851-1-zhengjun.xing@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin --- tools/perf/pmu-events/jevents.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c index c679a79aef51..1f20f587e053 100644 --- a/tools/perf/pmu-events/jevents.c +++ b/tools/perf/pmu-events/jevents.c @@ -579,7 +579,7 @@ static int json_events(const char *fn, } else if (json_streq(map, field, "ExtSel")) { char *code = NULL; addfield(map, &code, "", "", val); - eventcode |= strtoul(code, NULL, 0) << 21; + eventcode |= strtoul(code, NULL, 0) << 8; free(code); } else if (json_streq(map, field, "EventName")) { addfield(map, &je.name, "", "", val); From 2766ddaf45b69252bb8fe526b5b6e56904a9ae7a Mon Sep 17 00:00:00 2001 From: Chao Yu Date: Wed, 27 Apr 2022 01:06:02 +0800 Subject: [PATCH 0655/1842] f2fs: fix to avoid f2fs_bug_on() in dec_valid_node_count() commit 4d17e6fe9293d57081ffdc11e1cf313e25e8fd9e upstream. As Yanming reported in bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215897 I have encountered a bug in F2FS file system in kernel v5.17. The kernel should enable CONFIG_KASAN=y and CONFIG_KASAN_INLINE=y. You can reproduce the bug by running the following commands: The kernel message is shown below: kernel BUG at fs/f2fs/f2fs.h:2511! Call Trace: f2fs_remove_inode_page+0x2a2/0x830 f2fs_evict_inode+0x9b7/0x1510 evict+0x282/0x4e0 do_unlinkat+0x33a/0x540 __x64_sys_unlinkat+0x8e/0xd0 do_syscall_64+0x3b/0x90 entry_SYSCALL_64_after_hwframe+0x44/0xae The root cause is: .total_valid_block_count or .total_valid_node_count could fuzzed to zero, then once dec_valid_node_count() was called, it will cause BUG_ON(), this patch fixes to print warning info and set SBI_NEED_FSCK into CP instead of panic. 
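As a hedged aside (not part of the patch), the shape of that change can be modelled in a few lines of plain C: on an impossible counter state, warn and mark the filesystem for fsck instead of asserting. The struct and flag below are simplified stand-ins for the real f2fs_sb_info and SBI_NEED_FSCK.

#include <stdbool.h>
#include <stdio.h>

struct toy_sb_info {
	unsigned int total_valid_block_count;
	unsigned int total_valid_node_count;
	bool need_fsck;			/* stand-in for SBI_NEED_FSCK */
};

static void dec_valid_node_count(struct toy_sb_info *sbi)
{
	if (!sbi->total_valid_block_count || !sbi->total_valid_node_count) {
		fprintf(stderr,
			"inconsistent counts: blocks=%u nodes=%u, flag fsck\n",
			sbi->total_valid_block_count,
			sbi->total_valid_node_count);
		sbi->need_fsck = true;	/* previously: BUG_ON() */
		return;
	}
	sbi->total_valid_block_count--;
	sbi->total_valid_node_count--;
}

int main(void)
{
	struct toy_sb_info sbi = { .total_valid_block_count = 0,
				   .total_valid_node_count = 5 };

	dec_valid_node_count(&sbi);	/* warns instead of crashing */
	printf("need_fsck=%d\n", sbi.need_fsck);
	return 0;
}
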
Cc: stable@vger.kernel.org Reported-by: Ming Yan Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/f2fs.h | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 6c4bf22a3e83..658faef70355 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -2284,11 +2284,17 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi, { spin_lock(&sbi->stat_lock); - f2fs_bug_on(sbi, !sbi->total_valid_block_count); - f2fs_bug_on(sbi, !sbi->total_valid_node_count); + if (unlikely(!sbi->total_valid_block_count || + !sbi->total_valid_node_count)) { + f2fs_warn(sbi, "dec_valid_node_count: inconsistent block counts, total_valid_block:%u, total_valid_node:%u", + sbi->total_valid_block_count, + sbi->total_valid_node_count); + set_sbi_flag(sbi, SBI_NEED_FSCK); + } else { + sbi->total_valid_block_count--; + sbi->total_valid_node_count--; + } - sbi->total_valid_node_count--; - sbi->total_valid_block_count--; if (sbi->reserved_blocks && sbi->current_reserved_blocks < sbi->reserved_blocks) sbi->current_reserved_blocks++; From a34d7b49894b0533222188a52e2958750f830efd Mon Sep 17 00:00:00 2001 From: Chao Yu Date: Wed, 27 Apr 2022 17:51:40 +0800 Subject: [PATCH 0656/1842] f2fs: fix to do sanity check on block address in f2fs_do_zero_range() commit 25f8236213a91efdf708b9d77e9e51b6fc3e141c upstream. As Yanming reported in bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215894 I have encountered a bug in F2FS file system in kernel v5.17. I have uploaded the system call sequence as case.c, and a fuzzed image can be found in google net disk The kernel should enable CONFIG_KASAN=y and CONFIG_KASAN_INLINE=y. You can reproduce the bug by running the following commands: kernel BUG at fs/f2fs/segment.c:2291! Call Trace: f2fs_invalidate_blocks+0x193/0x2d0 f2fs_fallocate+0x2593/0x4a70 vfs_fallocate+0x2a5/0xac0 ksys_fallocate+0x35/0x70 __x64_sys_fallocate+0x8e/0xf0 do_syscall_64+0x3b/0x90 entry_SYSCALL_64_after_hwframe+0x44/0xae The root cause is, after image was fuzzed, block mapping info in inode will be inconsistent with SIT table, so in f2fs_fallocate(), it will cause panic when updating SIT with invalid blkaddr. Let's fix the issue by adding sanity check on block address before updating SIT table with it. Cc: stable@vger.kernel.org Reported-by: Ming Yan Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/file.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index 792f9059d897..06f200234d00 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -1413,11 +1413,19 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start, ret = -ENOSPC; break; } - if (dn->data_blkaddr != NEW_ADDR) { - f2fs_invalidate_blocks(sbi, dn->data_blkaddr); - dn->data_blkaddr = NEW_ADDR; - f2fs_set_data_blkaddr(dn); + + if (dn->data_blkaddr == NEW_ADDR) + continue; + + if (!f2fs_is_valid_blkaddr(sbi, dn->data_blkaddr, + DATA_GENERIC_ENHANCE)) { + ret = -EFSCORRUPTED; + break; } + + f2fs_invalidate_blocks(sbi, dn->data_blkaddr); + dn->data_blkaddr = NEW_ADDR; + f2fs_set_data_blkaddr(dn); } f2fs_update_extent_cache_range(dn, start, 0, index - start); From ccd58045beb997544b94558a9156be4742628491 Mon Sep 17 00:00:00 2001 From: Chao Yu Date: Sat, 30 Apr 2022 21:19:24 +0800 Subject: [PATCH 0657/1842] f2fs: fix to clear dirty inode in f2fs_evict_inode() commit f2db71053dc0409fae785096ad19cce4c8a95af7 upstream. 
As Yanming reported in bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215904 The kernel message is shown below: kernel BUG at fs/f2fs/inode.c:825! Call Trace: evict+0x282/0x4e0 __dentry_kill+0x2b2/0x4d0 shrink_dentry_list+0x17c/0x4f0 shrink_dcache_parent+0x143/0x1e0 do_one_tree+0x9/0x30 shrink_dcache_for_umount+0x51/0x120 generic_shutdown_super+0x5c/0x3a0 kill_block_super+0x90/0xd0 kill_f2fs_super+0x225/0x310 deactivate_locked_super+0x78/0xc0 cleanup_mnt+0x2b7/0x480 task_work_run+0xc8/0x150 exit_to_user_mode_prepare+0x14a/0x150 syscall_exit_to_user_mode+0x1d/0x40 do_syscall_64+0x48/0x90 The root cause is: inode node and dnode node share the same nid, so during f2fs_evict_inode(), dnode node truncation will invalidate its NAT entry, so when truncating inode node, it fails due to invalid NAT entry, result in inode is still marked as dirty, fix this issue by clearing dirty for inode and setting SBI_NEED_FSCK flag in filesystem. output from dump.f2fs: [print_node_info: 354] Node ID [0xf:15] is inode i_nid[0] [0x f : 15] Cc: stable@vger.kernel.org Reported-by: Ming Yan Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/inode.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 98483f50e5e9..6e788506c921 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -757,8 +757,22 @@ void f2fs_evict_inode(struct inode *inode) f2fs_lock_op(sbi); err = f2fs_remove_inode_page(inode); f2fs_unlock_op(sbi); - if (err == -ENOENT) + if (err == -ENOENT) { err = 0; + + /* + * in fuzzed image, another node may has the same + * block address as inode's, if it was truncated + * previously, truncation of inode node will fail. + */ + if (is_inode_flag_set(inode, FI_DIRTY_INODE)) { + f2fs_warn(F2FS_I_SB(inode), + "f2fs_evict_inode: inconsistent node id, ino:%lu", + inode->i_ino); + f2fs_inode_synced(inode); + set_sbi_flag(sbi, SBI_NEED_FSCK); + } + } } /* give more chances, if ENOMEM case */ From 2e790aa37858f7edf696361d0b6b3d85dd1b2dd2 Mon Sep 17 00:00:00 2001 From: Chao Yu Date: Wed, 4 May 2022 14:09:22 +0800 Subject: [PATCH 0658/1842] f2fs: fix deadloop in foreground GC commit cfd66bb715fd11fde3338d0660cffa1396adc27d upstream. As Yanming reported in bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215914 The root cause is: in a very small sized image, it's very easy to exceed threshold of foreground GC, if we calculate free space and dirty data based on section granularity, in corner case, has_not_enough_free_secs() will always return true, result in deadloop in f2fs_gc(). So this patch refactors has_not_enough_free_secs() as below to fix this issue: 1. calculate needed space based on block granularity, and separate all blocks to two parts, section part, and block part, comparing section part to free section, and comparing block part to free space in openned log. 2. account F2FS_DIRTY_NODES, F2FS_DIRTY_IMETA and F2FS_DIRTY_DENTS as node block consumer; 3. 
account F2FS_DIRTY_DENTS as data block consumer; Cc: stable@vger.kernel.org Reported-by: Ming Yan Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/segment.h | 32 ++++++++++++++++++++------------ 1 file changed, 20 insertions(+), 12 deletions(-) diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h index beef833a6960..88c0799b99f2 100644 --- a/fs/f2fs/segment.h +++ b/fs/f2fs/segment.h @@ -573,11 +573,10 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi) return GET_SEC_FROM_SEG(sbi, reserved_segments(sbi)); } -static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi) +static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi, + unsigned int node_blocks, unsigned int dent_blocks) { - unsigned int node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) + - get_pages(sbi, F2FS_DIRTY_DENTS); - unsigned int dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS); + unsigned int segno, left_blocks; int i; @@ -603,19 +602,28 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi) static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi, int freed, int needed) { - int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES); - int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS); - int imeta_secs = get_blocktype_secs(sbi, F2FS_DIRTY_IMETA); + unsigned int total_node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) + + get_pages(sbi, F2FS_DIRTY_DENTS) + + get_pages(sbi, F2FS_DIRTY_IMETA); + unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS); + unsigned int node_secs = total_node_blocks / BLKS_PER_SEC(sbi); + unsigned int dent_secs = total_dent_blocks / BLKS_PER_SEC(sbi); + unsigned int node_blocks = total_node_blocks % BLKS_PER_SEC(sbi); + unsigned int dent_blocks = total_dent_blocks % BLKS_PER_SEC(sbi); + unsigned int free, need_lower, need_upper; if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING))) return false; - if (free_sections(sbi) + freed == reserved_sections(sbi) + needed && - has_curseg_enough_space(sbi)) + free = free_sections(sbi) + freed; + need_lower = node_secs + dent_secs + reserved_sections(sbi) + needed; + need_upper = need_lower + (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0); + + if (free > need_upper) return false; - return (free_sections(sbi) + freed) <= - (node_secs + 2 * dent_secs + imeta_secs + - reserved_sections(sbi) + needed); + else if (free <= need_lower) + return true; + return !has_curseg_enough_space(sbi, node_blocks, dent_blocks); } static inline bool f2fs_is_checkpoint_ready(struct f2fs_sb_info *sbi) From 196f72e089b7972ad5e4589332dc6f13f8c1589d Mon Sep 17 00:00:00 2001 From: Jaegeuk Kim Date: Thu, 5 May 2022 17:40:25 -0700 Subject: [PATCH 0659/1842] f2fs: don't need inode lock for system hidden quota commit 6213f5d4d23c50d393a31dc8e351e63a1fd10dbe upstream. Let's avoid false-alarmed lockdep warning. 
[ 58.914674] [T1501146] -> #2 (&sb->s_type->i_mutex_key#20){+.+.}-{3:3}: [ 58.915975] [T1501146] system_server: down_write+0x7c/0xe0 [ 58.916738] [T1501146] system_server: f2fs_quota_sync+0x60/0x1a8 [ 58.917563] [T1501146] system_server: block_operations+0x16c/0x43c [ 58.918410] [T1501146] system_server: f2fs_write_checkpoint+0x114/0x318 [ 58.919312] [T1501146] system_server: f2fs_issue_checkpoint+0x178/0x21c [ 58.920214] [T1501146] system_server: f2fs_sync_fs+0x48/0x6c [ 58.920999] [T1501146] system_server: f2fs_do_sync_file+0x334/0x738 [ 58.921862] [T1501146] system_server: f2fs_sync_file+0x30/0x48 [ 58.922667] [T1501146] system_server: __arm64_sys_fsync+0x84/0xf8 [ 58.923506] [T1501146] system_server: el0_svc_common.llvm.12821150825140585682+0xd8/0x20c [ 58.924604] [T1501146] system_server: do_el0_svc+0x28/0xa0 [ 58.925366] [T1501146] system_server: el0_svc+0x24/0x38 [ 58.926094] [T1501146] system_server: el0_sync_handler+0x88/0xec [ 58.926920] [T1501146] system_server: el0_sync+0x1b4/0x1c0 [ 58.927681] [T1501146] -> #1 (&sbi->cp_global_sem){+.+.}-{3:3}: [ 58.928889] [T1501146] system_server: down_write+0x7c/0xe0 [ 58.929650] [T1501146] system_server: f2fs_write_checkpoint+0xbc/0x318 [ 58.930541] [T1501146] system_server: f2fs_issue_checkpoint+0x178/0x21c [ 58.931443] [T1501146] system_server: f2fs_sync_fs+0x48/0x6c [ 58.932226] [T1501146] system_server: sync_filesystem+0xac/0x130 [ 58.933053] [T1501146] system_server: generic_shutdown_super+0x38/0x150 [ 58.933958] [T1501146] system_server: kill_block_super+0x24/0x58 [ 58.934791] [T1501146] system_server: kill_f2fs_super+0xcc/0x124 [ 58.935618] [T1501146] system_server: deactivate_locked_super+0x90/0x120 [ 58.936529] [T1501146] system_server: deactivate_super+0x74/0xac [ 58.937356] [T1501146] system_server: cleanup_mnt+0x128/0x168 [ 58.938150] [T1501146] system_server: __cleanup_mnt+0x18/0x28 [ 58.938944] [T1501146] system_server: task_work_run+0xb8/0x14c [ 58.939749] [T1501146] system_server: do_notify_resume+0x114/0x1e8 [ 58.940595] [T1501146] system_server: work_pending+0xc/0x5f0 [ 58.941375] [T1501146] -> #0 (&sbi->gc_lock){+.+.}-{3:3}: [ 58.942519] [T1501146] system_server: __lock_acquire+0x1270/0x2868 [ 58.943366] [T1501146] system_server: lock_acquire+0x114/0x294 [ 58.944169] [T1501146] system_server: down_write+0x7c/0xe0 [ 58.944930] [T1501146] system_server: f2fs_issue_checkpoint+0x13c/0x21c [ 58.945831] [T1501146] system_server: f2fs_sync_fs+0x48/0x6c [ 58.946614] [T1501146] system_server: f2fs_do_sync_file+0x334/0x738 [ 58.947472] [T1501146] system_server: f2fs_ioc_commit_atomic_write+0xc8/0x14c [ 58.948439] [T1501146] system_server: __f2fs_ioctl+0x674/0x154c [ 58.949253] [T1501146] system_server: f2fs_ioctl+0x54/0x88 [ 58.950018] [T1501146] system_server: __arm64_sys_ioctl+0xa8/0x110 [ 58.950865] [T1501146] system_server: el0_svc_common.llvm.12821150825140585682+0xd8/0x20c [ 58.951965] [T1501146] system_server: do_el0_svc+0x28/0xa0 [ 58.952727] [T1501146] system_server: el0_svc+0x24/0x38 [ 58.953454] [T1501146] system_server: el0_sync_handler+0x88/0xec [ 58.954279] [T1501146] system_server: el0_sync+0x1b4/0x1c0 Cc: stable@vger.kernel.org Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/super.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index 78ee14f6e939..ccfb6c5a8fbc 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -2292,7 +2292,8 @@ int f2fs_quota_sync(struct super_block *sb, int type) if (!sb_has_quota_active(sb, cnt)) continue; - 
inode_lock(dqopt->files[cnt]); + if (!f2fs_sb_has_quota_ino(sbi)) + inode_lock(dqopt->files[cnt]); /* * do_quotactl @@ -2311,7 +2312,8 @@ int f2fs_quota_sync(struct super_block *sb, int type) up_read(&sbi->quota_sem); f2fs_unlock_op(sbi); - inode_unlock(dqopt->files[cnt]); + if (!f2fs_sb_has_quota_ino(sbi)) + inode_unlock(dqopt->files[cnt]); if (ret) break; From ef221b738b26d8c9f7e7967f4586db2dd3bd5288 Mon Sep 17 00:00:00 2001 From: Chao Yu Date: Fri, 6 May 2022 09:33:06 +0800 Subject: [PATCH 0660/1842] f2fs: fix to do sanity check on total_data_blocks commit 6b8beca0edd32075a769bfe4178ca00c0dcd22a9 upstream. As Yanming reported in bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215916 The kernel message is shown below: kernel BUG at fs/f2fs/segment.c:2560! Call Trace: allocate_segment_by_default+0x228/0x440 f2fs_allocate_data_block+0x13d1/0x31f0 do_write_page+0x18d/0x710 f2fs_outplace_write_data+0x151/0x250 f2fs_do_write_data_page+0xef9/0x1980 move_data_page+0x6af/0xbc0 do_garbage_collect+0x312f/0x46f0 f2fs_gc+0x6b0/0x3bc0 f2fs_balance_fs+0x921/0x2260 f2fs_write_single_data_page+0x16be/0x2370 f2fs_write_cache_pages+0x428/0xd00 f2fs_write_data_pages+0x96e/0xd50 do_writepages+0x168/0x550 __writeback_single_inode+0x9f/0x870 writeback_sb_inodes+0x47d/0xb20 __writeback_inodes_wb+0xb2/0x200 wb_writeback+0x4bd/0x660 wb_workfn+0x5f3/0xab0 process_one_work+0x79f/0x13e0 worker_thread+0x89/0xf60 kthread+0x26a/0x300 ret_from_fork+0x22/0x30 RIP: 0010:new_curseg+0xe8d/0x15f0 The root cause is: ckpt.valid_block_count is inconsistent with SIT table, stat info indicates filesystem has free blocks, but SIT table indicates filesystem has no free segment. So that during garbage colloection, it triggers panic when LFS allocator fails to find free segment. This patch tries to fix this issue by checking consistency in between ckpt.valid_block_count and block accounted from SIT. Cc: stable@vger.kernel.org Reported-by: Ming Yan Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/f2fs.h | 4 ++-- fs/f2fs/segment.c | 33 ++++++++++++++++++++++----------- fs/f2fs/segment.h | 1 + 3 files changed, 25 insertions(+), 13 deletions(-) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 658faef70355..273795ba8fc0 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -1021,8 +1021,8 @@ enum count_type { */ #define PAGE_TYPE_OF_BIO(type) ((type) > META ? 
META : (type)) enum page_type { - DATA, - NODE, + DATA = 0, + NODE = 1, /* should not change this */ META, NR_PAGE_TYPE, META_FLUSH, diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 736fb57423a6..20091f4cf84d 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -4423,7 +4423,7 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) unsigned int i, start, end; unsigned int readed, start_blk = 0; int err = 0; - block_t total_node_blocks = 0; + block_t sit_valid_blocks[2] = {0, 0}; do { readed = f2fs_ra_meta_pages(sbi, start_blk, BIO_MAX_PAGES, @@ -4448,8 +4448,8 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) if (err) return err; seg_info_from_raw_sit(se, &sit); - if (IS_NODESEG(se->type)) - total_node_blocks += se->valid_blocks; + + sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks; /* build discard map only one time */ if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) { @@ -4487,15 +4487,15 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) sit = sit_in_journal(journal, i); old_valid_blocks = se->valid_blocks; - if (IS_NODESEG(se->type)) - total_node_blocks -= old_valid_blocks; + + sit_valid_blocks[SE_PAGETYPE(se)] -= old_valid_blocks; err = check_block_count(sbi, start, &sit); if (err) break; seg_info_from_raw_sit(se, &sit); - if (IS_NODESEG(se->type)) - total_node_blocks += se->valid_blocks; + + sit_valid_blocks[SE_PAGETYPE(se)] += se->valid_blocks; if (is_set_ckpt_flags(sbi, CP_TRIMMED_FLAG)) { memset(se->discard_map, 0xff, SIT_VBLOCK_MAP_SIZE); @@ -4515,13 +4515,24 @@ static int build_sit_entries(struct f2fs_sb_info *sbi) } up_read(&curseg->journal_rwsem); - if (!err && total_node_blocks != valid_node_count(sbi)) { + if (err) + return err; + + if (sit_valid_blocks[NODE] != valid_node_count(sbi)) { f2fs_err(sbi, "SIT is corrupted node# %u vs %u", - total_node_blocks, valid_node_count(sbi)); - err = -EFSCORRUPTED; + sit_valid_blocks[NODE], valid_node_count(sbi)); + return -EFSCORRUPTED; } - return err; + if (sit_valid_blocks[DATA] + sit_valid_blocks[NODE] > + valid_user_blocks(sbi)) { + f2fs_err(sbi, "SIT is corrupted data# %u %u vs %u", + sit_valid_blocks[DATA], sit_valid_blocks[NODE], + valid_user_blocks(sbi)); + return -EFSCORRUPTED; + } + + return 0; } static void init_free_segmap(struct f2fs_sb_info *sbi) diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h index 88c0799b99f2..eafd89f0c77e 100644 --- a/fs/f2fs/segment.h +++ b/fs/f2fs/segment.h @@ -24,6 +24,7 @@ #define IS_DATASEG(t) ((t) <= CURSEG_COLD_DATA) #define IS_NODESEG(t) ((t) >= CURSEG_HOT_NODE && (t) <= CURSEG_COLD_NODE) +#define SE_PAGETYPE(se) ((IS_NODESEG((se)->type) ? NODE : DATA)) static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi, unsigned short seg_type) From 2221a2d41018da2de6957093afa17bb57eda93a0 Mon Sep 17 00:00:00 2001 From: Chao Yu Date: Tue, 17 May 2022 11:37:23 +0800 Subject: [PATCH 0661/1842] f2fs: fix fallocate to use file_modified to update permissions consistently commit 958ed92922028ec67f504dcdc72bfdfd0f43936a upstream. This patch tries to fix permission consistency issue as all other mainline filesystems. Since the initial introduction of (posix) fallocate back at the turn of the century, it has been possible to use this syscall to change the user-visible contents of files. This can happen by extending the file size during a preallocation, or through any of the newer modes (punch, zero, collapse, insert range). 
Because the call can be used to change file contents, we should treat it like we do any other modification to a file -- update the mtime, and drop set[ug]id privileges/capabilities. The VFS function file_modified() does all this for us if pass it a locked inode, so let's make fallocate drop permissions correctly. Cc: stable@kernel.org Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/file.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index 06f200234d00..defa068b4c7c 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -1744,6 +1744,10 @@ static long f2fs_fallocate(struct file *file, int mode, inode_lock(inode); + ret = file_modified(file); + if (ret) + goto out; + if (mode & FALLOC_FL_PUNCH_HOLE) { if (offset >= inode->i_size) goto out; From efdefbe8b7564602ab446474788225a1f2a323b5 Mon Sep 17 00:00:00 2001 From: Chao Yu Date: Wed, 18 May 2022 20:28:41 +0800 Subject: [PATCH 0662/1842] f2fs: fix to do sanity check for inline inode commit 677a82b44ebf263d4f9a0cfbd576a6ade797a07b upstream. Yanming reported a kernel bug in Bugzilla kernel [1], which can be reproduced. The bug message is: The kernel message is shown below: kernel BUG at fs/inode.c:611! Call Trace: evict+0x282/0x4e0 __dentry_kill+0x2b2/0x4d0 dput+0x2dd/0x720 do_renameat2+0x596/0x970 __x64_sys_rename+0x78/0x90 do_syscall_64+0x3b/0x90 [1] https://bugzilla.kernel.org/show_bug.cgi?id=215895 The bug is due to fuzzed inode has both inline_data and encrypted flags. During f2fs_evict_inode(), as the inode was deleted by rename(), it will cause inline data conversion due to conflicting flags. The page cache will be polluted and the panic will be triggered in clear_inode(). Try fixing the bug by doing more sanity checks for inline data inode in sanity_check_inode(). 
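To make the rule concrete (an illustrative model only, not the kernel function), the added check can be read as: an on-disk inode that claims inline data while also being a regular file with encryption, verity or compression enabled is inconsistent and must be rejected. The flag values below are invented for the sketch.

#include <stdbool.h>
#include <stdio.h>

#define FL_INLINE_DATA	0x01
#define FL_ENCRYPT	0x02
#define FL_VERITY	0x04
#define FL_COMPRESS	0x08

/* Returns true when the inline-data claim is consistent. */
static bool inline_data_is_sane(bool is_regular, unsigned int flags)
{
	if (!(flags & FL_INLINE_DATA))
		return true;			/* nothing to check */

	/* inline data never coexists with post-read features */
	if (is_regular && (flags & (FL_ENCRYPT | FL_VERITY | FL_COMPRESS)))
		return false;

	return true;
}

int main(void)
{
	printf("%d\n", inline_data_is_sane(true, FL_INLINE_DATA));			/* 1 */
	printf("%d\n", inline_data_is_sane(true, FL_INLINE_DATA | FL_ENCRYPT));	/* 0 */
	return 0;
}
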
Cc: stable@vger.kernel.org Reported-by: Ming Yan Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/f2fs.h | 1 + fs/f2fs/inline.c | 29 ++++++++++++++++++++++++----- fs/f2fs/inode.c | 3 +-- 3 files changed, 26 insertions(+), 7 deletions(-) diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 273795ba8fc0..1066725c3c5d 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -3735,6 +3735,7 @@ extern struct kmem_cache *f2fs_inode_entry_slab; * inline.c */ bool f2fs_may_inline_data(struct inode *inode); +bool f2fs_sanity_check_inline_data(struct inode *inode); bool f2fs_may_inline_dentry(struct inode *inode); void f2fs_do_read_inline_data(struct page *page, struct page *ipage); void f2fs_truncate_inline_inode(struct inode *inode, diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c index 1d7dafdaffe3..f97c23ec93ce 100644 --- a/fs/f2fs/inline.c +++ b/fs/f2fs/inline.c @@ -14,21 +14,40 @@ #include "node.h" #include -bool f2fs_may_inline_data(struct inode *inode) +static bool support_inline_data(struct inode *inode) { if (f2fs_is_atomic_file(inode)) return false; - if (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode)) return false; - if (i_size_read(inode) > MAX_INLINE_DATA(inode)) return false; + return true; +} - if (f2fs_post_read_required(inode)) +bool f2fs_may_inline_data(struct inode *inode) +{ + if (!support_inline_data(inode)) return false; - return true; + return !f2fs_post_read_required(inode); +} + +bool f2fs_sanity_check_inline_data(struct inode *inode) +{ + if (!f2fs_has_inline_data(inode)) + return false; + + if (!support_inline_data(inode)) + return true; + + /* + * used by sanity_check_inode(), when disk layout fields has not + * been synchronized to inmem fields. + */ + return (S_ISREG(inode->i_mode) && + (file_is_encrypt(inode) || file_is_verity(inode) || + (F2FS_I(inode)->i_flags & F2FS_COMPR_FL))); } bool f2fs_may_inline_dentry(struct inode *inode) diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index 6e788506c921..87752550f78c 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -272,8 +272,7 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) } } - if (f2fs_has_inline_data(inode) && - (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))) { + if (f2fs_sanity_check_inline_data(inode)) { set_sbi_flag(sbi, SBI_NEED_FSCK); f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_data, run fsck to fix", __func__, inode->i_ino, inode->i_mode); From 6118bbdf69f4718b02d26bbcf2e497eb66004331 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Wed, 1 Jun 2022 09:19:36 +0200 Subject: [PATCH 0663/1842] wifi: mac80211: fix use-after-free in chanctx code commit 2965c4cdf7ad9ce0796fac5e57debb9519ea721e upstream. In ieee80211_vif_use_reserved_context(), when we have an old context and the new context's replace_state is set to IEEE80211_CHANCTX_REPLACE_NONE, we free the old context in ieee80211_vif_use_reserved_reassign(). Therefore, we cannot check the old_ctx anymore, so we should set it to NULL after this point. However, since the new_ctx replace state is clearly not IEEE80211_CHANCTX_REPLACES_OTHER, we're not going to do anything else in this function and can just return to avoid accessing the freed old_ctx. 
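A compressed sketch of that control-flow change (illustrative only, with toy types rather than the mac80211 structures): once the helper that may free the old context has run, its result is returned directly so nothing can dereference the stale pointer afterwards.

#include <stdio.h>
#include <stdlib.h>

struct toy_ctx { int id; };

/* May free *octx as a side effect, like the reassign path. */
static int use_reserved_reassign(struct toy_ctx **octx)
{
	free(*octx);
	*octx = NULL;
	return 0;
}

static int use_reserved_assign(void)
{
	return 0;
}

static int use_reserved_context(struct toy_ctx *old_ctx)
{
	/* Fixed pattern: return immediately; old_ctx is never touched again. */
	if (old_ctx)
		return use_reserved_reassign(&old_ctx);

	return use_reserved_assign();
}

int main(void)
{
	struct toy_ctx *old = malloc(sizeof(*old));

	if (!old)
		return 1;
	printf("ret=%d\n", use_reserved_context(old));
	return 0;
}
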
Cc: stable@vger.kernel.org Fixes: 5bcae31d9cb1 ("mac80211: implement multi-vif in-place reservations") Signed-off-by: Johannes Berg Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220601091926.df419d91b165.I17a9b3894ff0b8323ce2afdb153b101124c821e5@changeid Signed-off-by: Greg Kroah-Hartman --- net/mac80211/chan.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c index 8f48aff74c7b..5639a71257e4 100644 --- a/net/mac80211/chan.c +++ b/net/mac80211/chan.c @@ -1652,12 +1652,9 @@ int ieee80211_vif_use_reserved_context(struct ieee80211_sub_if_data *sdata) if (new_ctx->replace_state == IEEE80211_CHANCTX_REPLACE_NONE) { if (old_ctx) - err = ieee80211_vif_use_reserved_reassign(sdata); - else - err = ieee80211_vif_use_reserved_assign(sdata); + return ieee80211_vif_use_reserved_reassign(sdata); - if (err) - return err; + return ieee80211_vif_use_reserved_assign(sdata); } /* From c1ad58de1300be1081de3023ec225f40f16ebc73 Mon Sep 17 00:00:00 2001 From: Emmanuel Grumbach Date: Tue, 17 May 2022 12:05:09 +0300 Subject: [PATCH 0664/1842] iwlwifi: mvm: fix assert 1F04 upon reconfig commit 9d096e3d3061dbf4ee10e2b59fc2c06e05bdb997 upstream. When we reconfig we must not send the MAC_POWER command that relates to a MAC that was not yet added to the firmware. Ignore those in the iterator. Cc: stable@vger.kernel.org Signed-off-by: Emmanuel Grumbach Signed-off-by: Gregory Greenman Link: https://lore.kernel.org/r/20220517120044.ed2ffc8ce732.If786e19512d0da4334a6382ea6148703422c7d7b@changeid Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman --- drivers/net/wireless/intel/iwlwifi/mvm/power.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c index c146303ec73b..66a968de3bc2 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c @@ -621,6 +621,9 @@ static void iwl_mvm_power_get_vifs_iterator(void *_data, u8 *mac, struct iwl_power_vifs *power_iterator = _data; bool active = mvmvif->phy_ctxt && mvmvif->phy_ctxt->id < NUM_PHY_CTX; + if (!mvmvif->uploaded) + return; + switch (ieee80211_vif_type_p2p(vif)) { case NL80211_IFTYPE_P2P_DEVICE: break; From 9a9dc60da79a280269d3e8794725f049c367d686 Mon Sep 17 00:00:00 2001 From: Zhihao Cheng Date: Tue, 10 May 2022 21:38:05 +0800 Subject: [PATCH 0665/1842] =?UTF-8?q?fs-writeback:=20writeback=5Fsb=5Finod?= =?UTF-8?q?es=EF=BC=9ARecalculate=20'wrote'=20according=20skipped=20pages?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 68f4c6eba70df70a720188bce95c85570ddfcc87 upstream. Commit 505a666ee3fc ("writeback: plug writeback in wb_writeback() and writeback_inodes_wb()") has us holding a plug during wb_writeback, which may cause a potential ABBA dead lock: wb_writeback fat_file_fsync blk_start_plug(&plug) for (;;) { iter i-1: some reqs have been added into plug->mq_list // LOCK A iter i: progress = __writeback_inodes_wb(wb, work) . writeback_sb_inodes // fat's bdev . __writeback_single_inode . . generic_writepages . . __block_write_full_page . . . . __generic_file_fsync . . . . sync_inode_metadata . . . . writeback_single_inode . . . . __writeback_single_inode . . . . fat_write_inode . . . . __fat_write_inode . . . . sync_dirty_buffer // fat's bdev . . . . lock_buffer(bh) // LOCK B . . . . submit_bh . . . . blk_mq_get_tag // LOCK A . . . trylock_buffer(bh) // LOCK B . . . 
redirty_page_for_writepage . . . wbc->pages_skipped++ . . --wbc->nr_to_write . wrote += write_chunk - wbc.nr_to_write // wrote > 0 . requeue_inode . redirty_tail_locked if (progress) // progress > 0 continue; iter i+1: queue_io // similar process with iter i, infinite for-loop ! } blk_finish_plug(&plug) // flush plug won't be called Above process triggers a hungtask like: [ 399.044861] INFO: task bb:2607 blocked for more than 30 seconds. [ 399.046824] Not tainted 5.18.0-rc1-00005-gefae4d9eb6a2-dirty [ 399.051539] task:bb state:D stack: 0 pid: 2607 ppid: 2426 flags:0x00004000 [ 399.051556] Call Trace: [ 399.051570] __schedule+0x480/0x1050 [ 399.051592] schedule+0x92/0x1a0 [ 399.051602] io_schedule+0x22/0x50 [ 399.051613] blk_mq_get_tag+0x1d3/0x3c0 [ 399.051640] __blk_mq_alloc_requests+0x21d/0x3f0 [ 399.051657] blk_mq_submit_bio+0x68d/0xca0 [ 399.051674] __submit_bio+0x1b5/0x2d0 [ 399.051708] submit_bio_noacct+0x34e/0x720 [ 399.051718] submit_bio+0x3b/0x150 [ 399.051725] submit_bh_wbc+0x161/0x230 [ 399.051734] __sync_dirty_buffer+0xd1/0x420 [ 399.051744] sync_dirty_buffer+0x17/0x20 [ 399.051750] __fat_write_inode+0x289/0x310 [ 399.051766] fat_write_inode+0x2a/0xa0 [ 399.051783] __writeback_single_inode+0x53c/0x6f0 [ 399.051795] writeback_single_inode+0x145/0x200 [ 399.051803] sync_inode_metadata+0x45/0x70 [ 399.051856] __generic_file_fsync+0xa3/0x150 [ 399.051880] fat_file_fsync+0x1d/0x80 [ 399.051895] vfs_fsync_range+0x40/0xb0 [ 399.051929] __x64_sys_fsync+0x18/0x30 In my test, 'need_resched()' (which is imported by 590dca3a71 "fs-writeback: unplug before cond_resched in writeback_sb_inodes") in function 'writeback_sb_inodes()' seldom comes true, unless cond_resched() is deleted from write_cache_pages(). Fix it by correcting wrote number according number of skipped pages in writeback_sb_inodes(). Goto Link to find a reproducer. Link: https://bugzilla.kernel.org/show_bug.cgi?id=215837 Cc: stable@vger.kernel.org # v4.3 Signed-off-by: Zhihao Cheng Reviewed-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220510133805.1988292-1-chengzhihao1@huawei.com Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- fs/fs-writeback.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index a0869194ab73..46c15dd2405c 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -1650,11 +1650,12 @@ static long writeback_sb_inodes(struct super_block *sb, }; unsigned long start_time = jiffies; long write_chunk; - long wrote = 0; /* count both pages and inodes */ + long total_wrote = 0; /* count both pages and inodes */ while (!list_empty(&wb->b_io)) { struct inode *inode = wb_inode(wb->b_io.prev); struct bdi_writeback *tmp_wb; + long wrote; if (inode->i_sb != sb) { if (work->sb) { @@ -1730,7 +1731,9 @@ static long writeback_sb_inodes(struct super_block *sb, wbc_detach_inode(&wbc); work->nr_pages -= write_chunk - wbc.nr_to_write; - wrote += write_chunk - wbc.nr_to_write; + wrote = write_chunk - wbc.nr_to_write - wbc.pages_skipped; + wrote = wrote < 0 ? 
0 : wrote; + total_wrote += wrote; if (need_resched()) { /* @@ -1752,7 +1755,7 @@ static long writeback_sb_inodes(struct super_block *sb, tmp_wb = inode_to_wb_and_lock_list(inode); spin_lock(&inode->i_lock); if (!(inode->i_state & I_DIRTY_ALL)) - wrote++; + total_wrote++; requeue_inode(inode, tmp_wb, &wbc); inode_sync_complete(inode); spin_unlock(&inode->i_lock); @@ -1766,14 +1769,14 @@ static long writeback_sb_inodes(struct super_block *sb, * bail out to wb_writeback() often enough to check * background threshold and other termination conditions. */ - if (wrote) { + if (total_wrote) { if (time_is_before_jiffies(start_time + HZ / 10UL)) break; if (work->nr_pages <= 0) break; } } - return wrote; + return total_wrote; } static long __writeback_inodes_wb(struct bdi_writeback *wb, From c072cab98bac11f6ef9db640fb51834d9552e2e6 Mon Sep 17 00:00:00 2001 From: Aditya Garg Date: Fri, 15 Apr 2022 17:02:46 +0000 Subject: [PATCH 0666/1842] efi: Do not import certificates from UEFI Secure Boot for T2 Macs commit 155ca952c7ca19aa32ecfb7373a32bbc2e1ec6eb upstream. On Apple T2 Macs, when Linux attempts to read the db and dbx efi variables at early boot to load UEFI Secure Boot certificates, a page fault occurs in Apple firmware code and EFI runtime services are disabled with the following logs: [Firmware Bug]: Page fault caused by firmware at PA: 0xffffb1edc0068000 WARNING: CPU: 3 PID: 104 at arch/x86/platform/efi/quirks.c:735 efi_crash_gracefully_on_page_fault+0x50/0xf0 (Removed some logs from here) Call Trace: page_fault_oops+0x4f/0x2c0 ? search_bpf_extables+0x6b/0x80 ? search_module_extables+0x50/0x80 ? search_exception_tables+0x5b/0x60 kernelmode_fixup_or_oops+0x9e/0x110 __bad_area_nosemaphore+0x155/0x190 bad_area_nosemaphore+0x16/0x20 do_kern_addr_fault+0x8c/0xa0 exc_page_fault+0xd8/0x180 asm_exc_page_fault+0x1e/0x30 (Removed some logs from here) ? __efi_call+0x28/0x30 ? switch_mm+0x20/0x30 ? efi_call_rts+0x19a/0x8e0 ? process_one_work+0x222/0x3f0 ? worker_thread+0x4a/0x3d0 ? kthread+0x17a/0x1a0 ? process_one_work+0x3f0/0x3f0 ? set_kthread_struct+0x40/0x40 ? ret_from_fork+0x22/0x30 ---[ end trace 1f82023595a5927f ]--- efi: Froze efi_rts_wq and disabled EFI Runtime Services integrity: Couldn't get size: 0x8000000000000015 integrity: MODSIGN: Couldn't get UEFI db list efi: EFI Runtime Services are disabled! integrity: Couldn't get size: 0x8000000000000015 integrity: Couldn't get UEFI dbx list integrity: Couldn't get size: 0x8000000000000015 integrity: Couldn't get mokx list integrity: Couldn't get size: 0x80000000 So we avoid reading these UEFI variables and thus prevent the crash. 
Cc: stable@vger.kernel.org Signed-off-by: Aditya Garg Reviewed-by: Mimi Zohar Signed-off-by: Mimi Zohar Signed-off-by: Greg Kroah-Hartman --- .../platform_certs/keyring_handler.h | 8 +++++ security/integrity/platform_certs/load_uefi.c | 33 +++++++++++++++++++ 2 files changed, 41 insertions(+) diff --git a/security/integrity/platform_certs/keyring_handler.h b/security/integrity/platform_certs/keyring_handler.h index 2462bfa08fe3..cd06bd6072be 100644 --- a/security/integrity/platform_certs/keyring_handler.h +++ b/security/integrity/platform_certs/keyring_handler.h @@ -30,3 +30,11 @@ efi_element_handler_t get_handler_for_db(const efi_guid_t *sig_type); efi_element_handler_t get_handler_for_dbx(const efi_guid_t *sig_type); #endif + +#ifndef UEFI_QUIRK_SKIP_CERT +#define UEFI_QUIRK_SKIP_CERT(vendor, product) \ + .matches = { \ + DMI_MATCH(DMI_BOARD_VENDOR, vendor), \ + DMI_MATCH(DMI_PRODUCT_NAME, product), \ + }, +#endif diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c index f290f78c3f30..555d2dfc0ff7 100644 --- a/security/integrity/platform_certs/load_uefi.c +++ b/security/integrity/platform_certs/load_uefi.c @@ -3,6 +3,7 @@ #include #include #include +#include #include #include #include @@ -11,6 +12,31 @@ #include "../integrity.h" #include "keyring_handler.h" +/* + * On T2 Macs reading the db and dbx efi variables to load UEFI Secure Boot + * certificates causes occurrence of a page fault in Apple's firmware and + * a crash disabling EFI runtime services. The following quirk skips reading + * these variables. + */ +static const struct dmi_system_id uefi_skip_cert[] = { + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,1") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,2") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,3") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,4") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,1") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,2") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,3") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,4") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,1") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,2") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir9,1") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacMini8,1") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacPro7,1") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,1") }, + { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,2") }, + { } +}; + /* * Look to see if a UEFI variable called MokIgnoreDB exists and return true if * it does. @@ -137,6 +163,13 @@ static int __init load_uefi_certs(void) unsigned long dbsize = 0, dbxsize = 0, mokxsize = 0; efi_status_t status; int rc = 0; + const struct dmi_system_id *dmi_id; + + dmi_id = dmi_first_match(uefi_skip_cert); + if (dmi_id) { + pr_err("Reading UEFI Secure Boot Certs is not supported on T2 Macs.\n"); + return false; + } if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE)) return false; From 4dfc12f8c94c8052e975060f595938f75e8b7165 Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Fri, 1 Apr 2022 12:27:44 +0200 Subject: [PATCH 0667/1842] bfq: Split shared queues on move between cgroups commit 3bc5e683c67d94bd839a1da2e796c15847b51b69 upstream. When bfqq is shared by multiple processes it can happen that one of the processes gets moved to a different cgroup (or just starts submitting IO for different cgroup). 
In case that happens we need to split the merged bfqq as otherwise we will have IO for multiple cgroups in one bfqq and we will just account IO time to wrong entities etc. Similarly if the bfqq is scheduled to merge with another bfqq but the merge didn't happen yet, cancel the merge as it need not be valid anymore. CC: stable@vger.kernel.org Fixes: e21b7a0b9887 ("block, bfq: add full hierarchical scheduling and cgroups support") Tested-by: "yukuai (C)" Signed-off-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220401102752.8599-3-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/bfq-cgroup.c | 36 +++++++++++++++++++++++++++++++++--- block/bfq-iosched.c | 2 +- block/bfq-iosched.h | 1 + 3 files changed, 35 insertions(+), 4 deletions(-) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index c2fdd6fcdaee..01516b80019e 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -725,9 +725,39 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd, } if (sync_bfqq) { - entity = &sync_bfqq->entity; - if (entity->sched_data != &bfqg->sched_data) - bfq_bfqq_move(bfqd, sync_bfqq, bfqg); + if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) { + /* We are the only user of this bfqq, just move it */ + if (sync_bfqq->entity.sched_data != &bfqg->sched_data) + bfq_bfqq_move(bfqd, sync_bfqq, bfqg); + } else { + struct bfq_queue *bfqq; + + /* + * The queue was merged to a different queue. Check + * that the merge chain still belongs to the same + * cgroup. + */ + for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq) + if (bfqq->entity.sched_data != + &bfqg->sched_data) + break; + if (bfqq) { + /* + * Some queue changed cgroup so the merge is + * not valid anymore. We cannot easily just + * cancel the merge (by clearing new_bfqq) as + * there may be other processes using this + * queue and holding refs to all queues below + * sync_bfqq->new_bfqq. Similarly if the merge + * already happened, we need to detach from + * bfqq now so that we cannot merge bio to a + * request from the old cgroup. + */ + bfq_put_cooperator(sync_bfqq); + bfq_release_process_ref(bfqd, sync_bfqq); + bic_set_bfqq(bic, NULL, 1); + } + } } return bfqg; diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index de2cd4bd602f..b945696ba019 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -4917,7 +4917,7 @@ void bfq_put_queue(struct bfq_queue *bfqq) bfqg_and_blkg_put(bfqg); } -static void bfq_put_cooperator(struct bfq_queue *bfqq) +void bfq_put_cooperator(struct bfq_queue *bfqq) { struct bfq_queue *__bfqq, *next; diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h index 703895224562..fb51e5ce9400 100644 --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -954,6 +954,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd, void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq, bool compensate, enum bfqq_expiration reason); void bfq_put_queue(struct bfq_queue *bfqq); +void bfq_put_cooperator(struct bfq_queue *bfqq); void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg); void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq); void bfq_schedule_dispatch(struct bfq_data *bfqd); From b06691af08b41dfd81052a3362514d9827b44bb1 Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Fri, 1 Apr 2022 12:27:45 +0200 Subject: [PATCH 0668/1842] bfq: Update cgroup information before merging bio commit ea591cd4eb270393810e7be01feb8fde6a34fbbe upstream. 
When the process is migrated to a different cgroup (or in case of writeback just starts submitting bios associated with a different cgroup) bfq_merge_bio() can operate with stale cgroup information in bic. Thus the bio can be merged to a request from a different cgroup or it can result in merging of bfqqs for different cgroups or bfqqs of already dead cgroups and causing possible use-after-free issues. Fix the problem by updating cgroup information in bfq_merge_bio(). CC: stable@vger.kernel.org Fixes: e21b7a0b9887 ("block, bfq: add full hierarchical scheduling and cgroups support") Tested-by: "yukuai (C)" Signed-off-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220401102752.8599-4-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/bfq-iosched.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index b945696ba019..bab5122062f5 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -2227,10 +2227,17 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio, spin_lock_irq(&bfqd->lock); - if (bic) + if (bic) { + /* + * Make sure cgroup info is uptodate for current process before + * considering the merge. + */ + bfq_bic_update_cgroup(bic, bio); + bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf)); - else + } else { bfqd->bio_bfqq = NULL; + } bfqd->bio_bic = bic; ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free); From 70a7dea84639bcd029130e00e01792eb9207fb38 Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Fri, 1 Apr 2022 12:27:48 +0200 Subject: [PATCH 0669/1842] bfq: Track whether bfq_group is still online commit 09f871868080c33992cd6a9b72a5ca49582578fa upstream. Track whether bfq_group is still online. We cannot rely on blkcg_gq->online because that gets cleared only after all policies are offlined and we need something that gets updated already under bfqd->lock when we are cleaning up our bfq_group to be able to guarantee that when we see online bfq_group, it will stay online while we are holding bfqd->lock lock. CC: stable@vger.kernel.org Tested-by: "yukuai (C)" Signed-off-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220401102752.8599-7-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/bfq-cgroup.c | 3 ++- block/bfq-iosched.h | 2 ++ 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 01516b80019e..0077f44ad63f 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -553,6 +553,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd) */ bfqg->bfqd = bfqd; bfqg->active_entities = 0; + bfqg->online = true; bfqg->rq_pos_tree = RB_ROOT; } @@ -599,7 +600,6 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, struct bfq_entity *entity; bfqg = bfq_lookup_bfqg(bfqd, blkcg); - if (unlikely(!bfqg)) return NULL; @@ -961,6 +961,7 @@ static void bfq_pd_offline(struct blkg_policy_data *pd) put_async_queues: bfq_put_async_queues(bfqd, bfqg); + bfqg->online = false; spin_unlock_irqrestore(&bfqd->lock, flags); /* diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h index fb51e5ce9400..b116ed48d27d 100644 --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -901,6 +901,8 @@ struct bfq_group { /* reference counter (see comments in bfq_bic_update_cgroup) */ int ref; + /* Is bfq_group still online? 
*/ + bool online; struct bfq_entity entity; struct bfq_sched_data sched_data; From dd887f83ea54aea5b780a84527e23ab95f777fed Mon Sep 17 00:00:00 2001 From: Ye Bin Date: Thu, 14 Apr 2022 10:52:23 +0800 Subject: [PATCH 0670/1842] ext4: fix use-after-free in ext4_rename_dir_prepare commit 0be698ecbe4471fcad80e81ec6a05001421041b3 upstream. We got issue as follows: EXT4-fs (loop0): mounted filesystem without journal. Opts: ,errors=continue ext4_get_first_dir_block: bh->b_data=0xffff88810bee6000 len=34478 ext4_get_first_dir_block: *parent_de=0xffff88810beee6ae bh->b_data=0xffff88810bee6000 ext4_rename_dir_prepare: [1] parent_de=0xffff88810beee6ae ================================================================== BUG: KASAN: use-after-free in ext4_rename_dir_prepare+0x152/0x220 Read of size 4 at addr ffff88810beee6ae by task rep/1895 CPU: 13 PID: 1895 Comm: rep Not tainted 5.10.0+ #241 Call Trace: dump_stack+0xbe/0xf9 print_address_description.constprop.0+0x1e/0x220 kasan_report.cold+0x37/0x7f ext4_rename_dir_prepare+0x152/0x220 ext4_rename+0xf44/0x1ad0 ext4_rename2+0x11c/0x170 vfs_rename+0xa84/0x1440 do_renameat2+0x683/0x8f0 __x64_sys_renameat+0x53/0x60 do_syscall_64+0x33/0x40 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7f45a6fc41c9 RSP: 002b:00007ffc5a470218 EFLAGS: 00000246 ORIG_RAX: 0000000000000108 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f45a6fc41c9 RDX: 0000000000000005 RSI: 0000000020000180 RDI: 0000000000000005 RBP: 00007ffc5a470240 R08: 00007ffc5a470160 R09: 0000000020000080 R10: 00000000200001c0 R11: 0000000000000246 R12: 0000000000400bb0 R13: 00007ffc5a470320 R14: 0000000000000000 R15: 0000000000000000 The buggy address belongs to the page: page:00000000440015ce refcount:0 mapcount:0 mapping:0000000000000000 index:0x1 pfn:0x10beee flags: 0x200000000000000() raw: 0200000000000000 ffffea00043ff4c8 ffffea0004325608 0000000000000000 raw: 0000000000000001 0000000000000000 00000000ffffffff 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff88810beee580: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ffff88810beee600: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff >ffff88810beee680: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ^ ffff88810beee700: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ffff88810beee780: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ================================================================== Disabling lock debugging due to kernel taint ext4_rename_dir_prepare: [2] parent_de->inode=3537895424 ext4_rename_dir_prepare: [3] dir=0xffff888124170140 ext4_rename_dir_prepare: [4] ino=2 ext4_rename_dir_prepare: ent->dir->i_ino=2 parent=-757071872 Reason is first directory entry which 'rec_len' is 34478, then will get illegal parent entry. Now, we do not check directory entry after read directory block in 'ext4_get_first_dir_block'. To solve this issue, check directory entry in 'ext4_get_first_dir_block'. [ Trigger an ext4_error() instead of just warning if the directory is missing a '.' or '..' entry. Also make sure we return an error code if the file system is corrupted. 
-TYT ] Signed-off-by: Ye Bin Reviewed-by: Jan Kara Link: https://lore.kernel.org/r/20220414025223.4113128-1-yebin10@huawei.com Signed-off-by: Theodore Ts'o Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman --- fs/ext4/namei.c | 30 +++++++++++++++++++++++++++--- 1 file changed, 27 insertions(+), 3 deletions(-) diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index 47ea35e98ffe..81a7c0f3b40c 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -3500,6 +3500,9 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle, struct buffer_head *bh; if (!ext4_has_inline_data(inode)) { + struct ext4_dir_entry_2 *de; + unsigned int offset; + /* The first directory block must not be a hole, so * treat it as DIRENT_HTREE */ @@ -3508,9 +3511,30 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle, *retval = PTR_ERR(bh); return NULL; } - *parent_de = ext4_next_entry( - (struct ext4_dir_entry_2 *)bh->b_data, - inode->i_sb->s_blocksize); + + de = (struct ext4_dir_entry_2 *) bh->b_data; + if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, + bh->b_size, 0) || + le32_to_cpu(de->inode) != inode->i_ino || + strcmp(".", de->name)) { + EXT4_ERROR_INODE(inode, "directory missing '.'"); + brelse(bh); + *retval = -EFSCORRUPTED; + return NULL; + } + offset = ext4_rec_len_from_disk(de->rec_len, + inode->i_sb->s_blocksize); + de = ext4_next_entry(de, inode->i_sb->s_blocksize); + if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, + bh->b_size, offset) || + le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) { + EXT4_ERROR_INODE(inode, "directory missing '..'"); + brelse(bh); + *retval = -EFSCORRUPTED; + return NULL; + } + *parent_de = de; + return bh; } From adf490083ca52ebfb0b2fe64ff1ead00c0452dd7 Mon Sep 17 00:00:00 2001 From: Ye Bin Date: Sat, 26 Mar 2022 14:53:51 +0800 Subject: [PATCH 0671/1842] ext4: fix warning in ext4_handle_inode_extension commit f4534c9fc94d22383f187b9409abb3f9df2e3db3 upstream. 
We got issue as follows: EXT4-fs error (device loop0) in ext4_reserve_inode_write:5741: Out of memory EXT4-fs error (device loop0): ext4_setattr:5462: inode #13: comm syz-executor.0: mark_inode_dirty error EXT4-fs error (device loop0) in ext4_setattr:5519: Out of memory EXT4-fs error (device loop0): ext4_ind_map_blocks:595: inode #13: comm syz-executor.0: Can't allocate blocks for non-extent mapped inodes with bigalloc ------------[ cut here ]------------ WARNING: CPU: 1 PID: 4361 at fs/ext4/file.c:301 ext4_file_write_iter+0x11c9/0x1220 Modules linked in: CPU: 1 PID: 4361 Comm: syz-executor.0 Not tainted 5.10.0+ #1 RIP: 0010:ext4_file_write_iter+0x11c9/0x1220 RSP: 0018:ffff924d80b27c00 EFLAGS: 00010282 RAX: ffffffff815a3379 RBX: 0000000000000000 RCX: 000000003b000000 RDX: ffff924d81601000 RSI: 00000000000009cc RDI: 00000000000009cd RBP: 000000000000000d R08: ffffffffbc5a2c6b R09: 0000902e0e52a96f R10: ffff902e2b7c1b40 R11: ffff902e2b7c1b40 R12: 000000000000000a R13: 0000000000000001 R14: ffff902e0e52aa10 R15: ffffffffffffff8b FS: 00007f81a7f65700(0000) GS:ffff902e3bc80000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffffffffff600400 CR3: 000000012db88001 CR4: 00000000003706e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: do_iter_readv_writev+0x2e5/0x360 do_iter_write+0x112/0x4c0 do_pwritev+0x1e5/0x390 __x64_sys_pwritev2+0x7e/0xa0 do_syscall_64+0x37/0x50 entry_SYSCALL_64_after_hwframe+0x44/0xa9 Above issue may happen as follows: Assume inode.i_size=4096 EXT4_I(inode)->i_disksize=4096 step 1: set inode->i_isize = 8192 ext4_setattr if (attr->ia_size != inode->i_size) EXT4_I(inode)->i_disksize = attr->ia_size; rc = ext4_mark_inode_dirty ext4_reserve_inode_write ext4_get_inode_loc __ext4_get_inode_loc sb_getblk --> return -ENOMEM ... if (!error) ->will not update i_size i_size_write(inode, attr->ia_size); Now: inode.i_size=4096 EXT4_I(inode)->i_disksize=8192 step 2: Direct write 4096 bytes ext4_file_write_iter ext4_dio_write_iter iomap_dio_rw ->return error if (extend) ext4_handle_inode_extension WARN_ON_ONCE(i_size_read(inode) < EXT4_I(inode)->i_disksize); ->Then trigger warning. To solve above issue, if mark inode dirty failed in ext4_setattr just set 'EXT4_I(inode)->i_disksize' with old value. 
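For illustration, a minimal sketch of the save-and-restore pattern the fix applies; the identifiers follow the diff below, but the control flow is simplified (the real code folds the ext4_mark_inode_dirty() result into the existing 'error' handling):

	down_write(&EXT4_I(inode)->i_data_sem);
	old_disksize = EXT4_I(inode)->i_disksize;	/* remember the old on-disk size */
	EXT4_I(inode)->i_disksize = attr->ia_size;	/* speculatively extend */
	rc = ext4_mark_inode_dirty(handle, inode);
	if (!rc)
		i_size_write(inode, attr->ia_size);	/* commit the new size */
	else
		/* roll back so i_disksize never exceeds i_size */
		EXT4_I(inode)->i_disksize = old_disksize;
	up_write(&EXT4_I(inode)->i_data_sem);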
Signed-off-by: Ye Bin Link: https://lore.kernel.org/r/20220326065351.761952-1-yebin10@huawei.com Signed-off-by: Theodore Ts'o Cc: stable@kernel.org Signed-off-by: Greg Kroah-Hartman --- fs/ext4/inode.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 31ab73c4b07e..72e3f55f1e07 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -5444,6 +5444,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr) if (attr->ia_valid & ATTR_SIZE) { handle_t *handle; loff_t oldsize = inode->i_size; + loff_t old_disksize; int shrink = (attr->ia_size < inode->i_size); if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) { @@ -5517,6 +5518,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr) inode->i_sb->s_blocksize_bits); down_write(&EXT4_I(inode)->i_data_sem); + old_disksize = EXT4_I(inode)->i_disksize; EXT4_I(inode)->i_disksize = attr->ia_size; rc = ext4_mark_inode_dirty(handle, inode); if (!error) @@ -5528,6 +5530,8 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr) */ if (!error) i_size_write(inode, attr->ia_size); + else + EXT4_I(inode)->i_disksize = old_disksize; up_write(&EXT4_I(inode)->i_data_sem); ext4_journal_stop(handle); if (error) From 1b061af037646c9cdb0afd8a8d2f1e1c06285866 Mon Sep 17 00:00:00 2001 From: Ye Bin Date: Mon, 16 May 2022 20:26:34 +0800 Subject: [PATCH 0672/1842] ext4: fix bug_on in ext4_writepages commit ef09ed5d37b84d18562b30cf7253e57062d0db05 upstream. we got issue as follows: EXT4-fs error (device loop0): ext4_mb_generate_buddy:1141: group 0, block bitmap and bg descriptor inconsistent: 25 vs 31513 free cls ------------[ cut here ]------------ kernel BUG at fs/ext4/inode.c:2708! invalid opcode: 0000 [#1] PREEMPT SMP KASAN PTI CPU: 2 PID: 2147 Comm: rep Not tainted 5.18.0-rc2-next-20220413+ #155 RIP: 0010:ext4_writepages+0x1977/0x1c10 RSP: 0018:ffff88811d3e7880 EFLAGS: 00010246 RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffff88811c098000 RDX: 0000000000000000 RSI: ffff88811c098000 RDI: 0000000000000002 RBP: ffff888128140f50 R08: ffffffffb1ff6387 R09: 0000000000000000 R10: 0000000000000007 R11: ffffed10250281ea R12: 0000000000000001 R13: 00000000000000a4 R14: ffff88811d3e7bb8 R15: ffff888128141028 FS: 00007f443aed9740(0000) GS:ffff8883aef00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020007200 CR3: 000000011c2a4000 CR4: 00000000000006e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: do_writepages+0x130/0x3a0 filemap_fdatawrite_wbc+0x83/0xa0 filemap_flush+0xab/0xe0 ext4_alloc_da_blocks+0x51/0x120 __ext4_ioctl+0x1534/0x3210 __x64_sys_ioctl+0x12c/0x170 do_syscall_64+0x3b/0x90 It may happen as follows: 1. write inline_data inode vfs_write new_sync_write ext4_file_write_iter ext4_buffered_write_iter generic_perform_write ext4_da_write_begin ext4_da_write_inline_data_begin -> If inline data size too small will allocate block to write, then mapping will has dirty page ext4_da_convert_inline_data_to_extent ->clear EXT4_STATE_MAY_INLINE_DATA 2. fallocate do_vfs_ioctl ioctl_preallocate vfs_fallocate ext4_fallocate ext4_convert_inline_data ext4_convert_inline_data_nolock ext4_map_blocks -> fail will goto restore data ext4_restore_inline_data ext4_create_inline_data ext4_write_inline_data ext4_set_inode_state -> set inode EXT4_STATE_MAY_INLINE_DATA 3. 
writepages __ext4_ioctl ext4_alloc_da_blocks filemap_flush filemap_fdatawrite_wbc do_writepages ext4_writepages if (ext4_has_inline_data(inode)) BUG_ON(ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) The root cause of this issue is that we do not destroy the inline data until ext4_writepages() is called under delayed allocation mode, but by then the inode may already have been converted from inline data to extents. To solve this issue, we call filemap_flush() first. Cc: stable@kernel.org Signed-off-by: Ye Bin Reviewed-by: Jan Kara Link: https://lore.kernel.org/r/20220516122634.1690462-1-yebin10@huawei.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman --- fs/ext4/inline.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c index c9a8c7d24f89..fbad4180514c 100644 --- a/fs/ext4/inline.c +++ b/fs/ext4/inline.c @@ -1974,6 +1974,18 @@ int ext4_convert_inline_data(struct inode *inode) if (!ext4_has_inline_data(inode)) { ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA); return 0; + } else if (!ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) { + /* + * Inode has inline data but EXT4_STATE_MAY_INLINE_DATA is + * cleared. This means we are in the middle of moving of + * inline data to delay allocated block. Just force writeout + * here to finish conversion. + */ + error = filemap_flush(inode->i_mapping); + if (error) + return error; + if (!ext4_has_inline_data(inode)) + return 0; } needed_blocks = ext4_writepage_trans_blocks(inode); From cc5b09cb6dacd4b32640537929ab4ee8fb2b9e04 Mon Sep 17 00:00:00 2001 From: Theodore Ts'o Date: Tue, 17 May 2022 13:27:55 -0400 Subject: [PATCH 0673/1842] ext4: filter out EXT4_FC_REPLAY from on-disk superblock field s_state commit c878bea3c9d724ddfa05a813f30de3d25a0ba83f upstream. The EXT4_FC_REPLAY bit in sbi->s_mount_state is used to indicate that we are in the middle of replaying the fast commit journal. This was actually a mistake, since sbi->s_mount_state is initialized from es->s_state. Arguably s_mount_state is misleadingly named, but the name is historical --- s_mount_state and s_state date back to ext2. What should have been used is the ext4_{set,clear,test}_mount_flag() inline functions, which set EXT4_MF_* bits in sbi->s_mount_flags. The problem with using EXT4_FC_REPLAY is that a maliciously corrupted superblock could result in EXT4_FC_REPLAY getting set in s_mount_state. This bypasses some sanity checks, and this can trigger a BUG() in ext4_es_cache_extent(). As an easy-to-backport fix, filter out the EXT4_FC_REPLAY bit for now. We should eventually transition away from EXT4_FC_REPLAY to something like EXT4_MF_REPLAY.
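For illustration only, the two flag words being contrasted above are roughly the following; EXT4_MF_REPLAY is a hypothetical name taken from the previous sentence and does not exist in the tree:

	/* Today: s_mount_state mirrors the on-disk s_state word, so a crafted
	 * superblock can influence it; the backportable fix below simply masks
	 * the kernel-internal bit out when loading it:
	 */
	sbi->s_mount_state = le16_to_cpu(es->s_state) & ~EXT4_FC_REPLAY;

	/* Eventual direction (sketch): track replay purely in the in-memory
	 * sbi->s_mount_flags word, e.g. ext4_set_mount_flag(sb, EXT4_MF_REPLAY)
	 * when replay starts and ext4_clear_mount_flag(sb, EXT4_MF_REPLAY)
	 * when it finishes.
	 */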
Cc: stable@kernel.org Signed-off-by: Theodore Ts'o Link: https://lore.kernel.org/r/20220420192312.1655305-1-phind.uet@gmail.com Link: https://lore.kernel.org/r/20220517174028.942119-1-tytso@mit.edu Reported-by: syzbot+c7358a3cd05ee786eb31@syzkaller.appspotmail.com Signed-off-by: Greg Kroah-Hartman --- fs/ext4/super.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 35d990adefc6..050108091f8a 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -4538,7 +4538,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) sbi->s_inodes_per_block; sbi->s_desc_per_block = blocksize / EXT4_DESC_SIZE(sb); sbi->s_sbh = bh; - sbi->s_mount_state = le16_to_cpu(es->s_state); + sbi->s_mount_state = le16_to_cpu(es->s_state) & ~EXT4_FC_REPLAY; sbi->s_addr_per_block_bits = ilog2(EXT4_ADDR_PER_BLOCK(sb)); sbi->s_desc_per_block_bits = ilog2(EXT4_DESC_PER_BLOCK(sb)); @@ -5998,7 +5998,8 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data) if (err) goto restore_opts; } - sbi->s_mount_state = le16_to_cpu(es->s_state); + sbi->s_mount_state = (le16_to_cpu(es->s_state) & + ~EXT4_FC_REPLAY); err = ext4_setup_super(sb, es, 0); if (err) From 4fd58b5cf118d2d9038a0b8c9cc0e43096297686 Mon Sep 17 00:00:00 2001 From: Baokun Li Date: Wed, 18 May 2022 20:08:16 +0800 Subject: [PATCH 0674/1842] ext4: fix bug_on in __es_tree_search commit d36f6ed761b53933b0b4126486c10d3da7751e7f upstream. Hulk Robot reported a BUG_ON: ================================================================== kernel BUG at fs/ext4/extents_status.c:199! [...] RIP: 0010:ext4_es_end fs/ext4/extents_status.c:199 [inline] RIP: 0010:__es_tree_search+0x1e0/0x260 fs/ext4/extents_status.c:217 [...] Call Trace: ext4_es_cache_extent+0x109/0x340 fs/ext4/extents_status.c:766 ext4_cache_extents+0x239/0x2e0 fs/ext4/extents.c:561 ext4_find_extent+0x6b7/0xa20 fs/ext4/extents.c:964 ext4_ext_map_blocks+0x16b/0x4b70 fs/ext4/extents.c:4384 ext4_map_blocks+0xe26/0x19f0 fs/ext4/inode.c:567 ext4_getblk+0x320/0x4c0 fs/ext4/inode.c:980 ext4_bread+0x2d/0x170 fs/ext4/inode.c:1031 ext4_quota_read+0x248/0x320 fs/ext4/super.c:6257 v2_read_header+0x78/0x110 fs/quota/quota_v2.c:63 v2_check_quota_file+0x76/0x230 fs/quota/quota_v2.c:82 vfs_load_quota_inode+0x5d1/0x1530 fs/quota/dquot.c:2368 dquot_enable+0x28a/0x330 fs/quota/dquot.c:2490 ext4_quota_enable fs/ext4/super.c:6137 [inline] ext4_enable_quotas+0x5d7/0x960 fs/ext4/super.c:6163 ext4_fill_super+0xa7c9/0xdc00 fs/ext4/super.c:4754 mount_bdev+0x2e9/0x3b0 fs/super.c:1158 mount_fs+0x4b/0x1e4 fs/super.c:1261 [...] ================================================================== Above issue may happen as follows: ------------------------------------- ext4_fill_super ext4_enable_quotas ext4_quota_enable ext4_iget __ext4_iget ext4_ext_check_inode ext4_ext_check __ext4_ext_check ext4_valid_extent_entries Check for overlapping extents does't take effect dquot_enable vfs_load_quota_inode v2_check_quota_file v2_read_header ext4_quota_read ext4_bread ext4_getblk ext4_map_blocks ext4_ext_map_blocks ext4_find_extent ext4_cache_extents ext4_es_cache_extent ext4_es_cache_extent __es_tree_search ext4_es_end BUG_ON(es->es_lblk + es->es_len < es->es_lblk) The error ext4 extents is as follows: 0af3 0300 0400 0000 00000000 extent_header 00000000 0100 0000 12000000 extent1 00000000 0100 0000 18000000 extent2 02000000 0400 0000 14000000 extent3 In the ext4_valid_extent_entries function, if prev is 0, no error is returned even if lblock<=prev. 
This was intended to skip the check on the first extent, but in the error image above, prev=0+1-1=0 when checking the second extent, so even though lblock<=prev, the function does not return an error. As a result, bug_ON occurs in __es_tree_search and the system panics. To solve this problem, we only need to check that: 1. The lblock of the first extent is not less than 0. 2. The lblock of the next extent is not less than the next block of the previous extent. The same applies to extent_idx. Cc: stable@kernel.org Fixes: 5946d089379a ("ext4: check for overlapping extents in ext4_valid_extent_entries()") Reported-by: Hulk Robot Signed-off-by: Baokun Li Reviewed-by: Jan Kara Link: https://lore.kernel.org/r/20220518120816.1541863-1-libaokun1@huawei.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman --- fs/ext4/extents.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 80b876ab6b1f..6641b74ad462 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -371,7 +371,7 @@ static int ext4_valid_extent_entries(struct inode *inode, { unsigned short entries; ext4_lblk_t lblock = 0; - ext4_lblk_t prev = 0; + ext4_lblk_t cur = 0; if (eh->eh_entries == 0) return 1; @@ -395,11 +395,11 @@ static int ext4_valid_extent_entries(struct inode *inode, /* Check for overlapping extents */ lblock = le32_to_cpu(ext->ee_block); - if ((lblock <= prev) && prev) { + if (lblock < cur) { *pblk = ext4_ext_pblock(ext); return 0; } - prev = lblock + ext4_ext_get_actual_len(ext) - 1; + cur = lblock + ext4_ext_get_actual_len(ext); ext++; entries--; } @@ -419,13 +419,13 @@ static int ext4_valid_extent_entries(struct inode *inode, /* Check for overlapping index extents */ lblock = le32_to_cpu(ext_idx->ei_block); - if ((lblock <= prev) && prev) { + if (lblock < cur) { *pblk = ext4_idx_pblock(ext_idx); return 0; } ext_idx++; entries--; - prev = lblock; + cur = lblock + 1; } } return 1; From da2f05919238c7bdc6e28c79539f55c8355408bb Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Wed, 18 May 2022 11:33:28 +0200 Subject: [PATCH 0675/1842] ext4: verify dir block before splitting it commit 46c116b920ebec58031f0a78c5ea9599b0d2a371 upstream. Before splitting a directory block verify its directory entries are sane so that the splitting code does not access memory it should not. 
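As a self-contained illustration of the kind of check dx_make_map() gains in the diff below (struct rec and walk_records() are made-up stand-ins, not ext4 code): walk the variable-length records of a block and reject any record whose rec_len would step outside the buffer or stall the walk.

	struct rec { unsigned short rec_len; };		/* stand-in for ext4_dir_entry_2 */

	static int walk_records(const char *buf, unsigned int buflen)
	{
		unsigned int off = 0;

		while (off + sizeof(struct rec) <= buflen) {
			const struct rec *r = (const struct rec *)(buf + off);

			if (r->rec_len < sizeof(struct rec) ||
			    r->rec_len > buflen - off)
				return -1;	/* corrupt: record runs past the block */
			off += r->rec_len;	/* rec_len >= 2, so we always advance */
		}
		return 0;
	}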
Cc: stable@vger.kernel.org Signed-off-by: Jan Kara Link: https://lore.kernel.org/r/20220518093332.13986-1-jack@suse.cz Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman --- fs/ext4/namei.c | 32 +++++++++++++++++++++----------- 1 file changed, 21 insertions(+), 11 deletions(-) diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index 81a7c0f3b40c..4025e2868e5b 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -280,9 +280,9 @@ static struct dx_frame *dx_probe(struct ext4_filename *fname, struct dx_hash_info *hinfo, struct dx_frame *frame); static void dx_release(struct dx_frame *frames); -static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de, - unsigned blocksize, struct dx_hash_info *hinfo, - struct dx_map_entry map[]); +static int dx_make_map(struct inode *dir, struct buffer_head *bh, + struct dx_hash_info *hinfo, + struct dx_map_entry *map_tail); static void dx_sort_map(struct dx_map_entry *map, unsigned count); static struct ext4_dir_entry_2 *dx_move_dirents(char *from, char *to, struct dx_map_entry *offsets, int count, unsigned blocksize); @@ -1208,15 +1208,23 @@ static inline int search_dirblock(struct buffer_head *bh, * Create map of hash values, offsets, and sizes, stored at end of block. * Returns number of entries mapped. */ -static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de, - unsigned blocksize, struct dx_hash_info *hinfo, +static int dx_make_map(struct inode *dir, struct buffer_head *bh, + struct dx_hash_info *hinfo, struct dx_map_entry *map_tail) { int count = 0; - char *base = (char *) de; + struct ext4_dir_entry_2 *de = (struct ext4_dir_entry_2 *)bh->b_data; + unsigned int buflen = bh->b_size; + char *base = bh->b_data; struct dx_hash_info h = *hinfo; - while ((char *) de < base + blocksize) { + if (ext4_has_metadata_csum(dir->i_sb)) + buflen -= sizeof(struct ext4_dir_entry_tail); + + while ((char *) de < base + buflen) { + if (ext4_check_dir_entry(dir, NULL, de, bh, base, buflen, + ((char *)de) - base)) + return -EFSCORRUPTED; if (de->name_len && de->inode) { ext4fs_dirhash(dir, de->name, de->name_len, &h); map_tail--; @@ -1226,8 +1234,7 @@ static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de, count++; cond_resched(); } - /* XXX: do we need to check rec_len == 0 case? -Chris */ - de = ext4_next_entry(de, blocksize); + de = ext4_next_entry(de, dir->i_sb->s_blocksize); } return count; } @@ -1853,8 +1860,11 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir, /* create map in the end of data2 block */ map = (struct dx_map_entry *) (data2 + blocksize); - count = dx_make_map(dir, (struct ext4_dir_entry_2 *) data1, - blocksize, hinfo, map); + count = dx_make_map(dir, *bh, hinfo, map); + if (count < 0) { + err = count; + goto journal_error; + } map -= count; dx_sort_map(map, count); /* Ensure that neither split block is over half full */ From ff4cafa51762da3824881a9000ca421d4b78b138 Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Wed, 18 May 2022 11:33:29 +0200 Subject: [PATCH 0676/1842] ext4: avoid cycles in directory h-tree commit 3ba733f879c2a88910744647e41edeefbc0d92b2 upstream. A maliciously corrupted filesystem can contain cycles in the h-tree stored inside a directory. That can easily lead to the kernel corrupting tree nodes that were already verified under its hands while doing a node split and consequently accessing unallocated memory. Fix the problem by verifying traversed block numbers are unique. 
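A self-contained sketch of the cycle check added in the diff below (already_visited() and HTREE_MAX_LEVELS are illustrative stand-ins for the real code and EXT4_HTREE_LEVEL): remember every block number traversed on the path from the root and refuse to descend into one that repeats.

	#include <stdbool.h>

	#define HTREE_MAX_LEVELS 3	/* the h-tree depth is bounded */

	/* blocks[] holds the block number visited at each level so far. */
	static bool already_visited(const unsigned int blocks[HTREE_MAX_LEVELS],
				    int level, unsigned int blk)
	{
		for (int i = 0; i <= level; i++)
			if (blocks[i] == blk)
				return true;	/* cycle: corrupted tree */
		return false;
	}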
Cc: stable@vger.kernel.org Signed-off-by: Jan Kara Link: https://lore.kernel.org/r/20220518093332.13986-2-jack@suse.cz Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman --- fs/ext4/namei.c | 22 +++++++++++++++++++--- 1 file changed, 19 insertions(+), 3 deletions(-) diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index 4025e2868e5b..feae39f1db37 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -756,12 +756,14 @@ static struct dx_frame * dx_probe(struct ext4_filename *fname, struct inode *dir, struct dx_hash_info *hinfo, struct dx_frame *frame_in) { - unsigned count, indirect; + unsigned count, indirect, level, i; struct dx_entry *at, *entries, *p, *q, *m; struct dx_root *root; struct dx_frame *frame = frame_in; struct dx_frame *ret_err = ERR_PTR(ERR_BAD_DX_DIR); u32 hash; + ext4_lblk_t block; + ext4_lblk_t blocks[EXT4_HTREE_LEVEL]; memset(frame_in, 0, EXT4_HTREE_LEVEL * sizeof(frame_in[0])); frame->bh = ext4_read_dirblock(dir, 0, INDEX); @@ -817,6 +819,8 @@ dx_probe(struct ext4_filename *fname, struct inode *dir, } dxtrace(printk("Look up %x", hash)); + level = 0; + blocks[0] = 0; while (1) { count = dx_get_count(entries); if (!count || count > dx_get_limit(entries)) { @@ -858,15 +862,27 @@ dx_probe(struct ext4_filename *fname, struct inode *dir, dx_get_block(at))); frame->entries = entries; frame->at = at; - if (!indirect--) + + block = dx_get_block(at); + for (i = 0; i <= level; i++) { + if (blocks[i] == block) { + ext4_warning_inode(dir, + "dx entry: tree cycle block %u points back to block %u", + blocks[level], block); + goto fail; + } + } + if (++level > indirect) return frame; + blocks[level] = block; frame++; - frame->bh = ext4_read_dirblock(dir, dx_get_block(at), INDEX); + frame->bh = ext4_read_dirblock(dir, block, INDEX); if (IS_ERR(frame->bh)) { ret_err = (struct dx_frame *) frame->bh; frame->bh = NULL; goto fail; } + entries = ((struct dx_node *) frame->bh->b_data)->entries; if (dx_get_limit(entries) != dx_node_limit(dir)) { From a2b9edc3f8946463028aea2ed30a7cee65e0e8a9 Mon Sep 17 00:00:00 2001 From: Sakari Ailus Date: Wed, 6 Apr 2022 16:12:08 +0300 Subject: [PATCH 0677/1842] ACPI: property: Release subnode properties with data nodes commit 3bd561e1572ee02a50cd1a5be339abf1a5b78d56 upstream. struct acpi_device_properties describes one source of properties present on either struct acpi_device or struct acpi_data_node. When properties are parsed, both are populated but when released, only those properties that are associated with the device node are freed. Fix this by also releasing memory of the data node properties. Fixes: 5f5e4890d57a ("ACPI / property: Allow multiple property compatible _DSD entries") Cc: 4.20+ # 4.20+ Signed-off-by: Sakari Ailus Reviewed-by: Andy Shevchenko Signed-off-by: Rafael J. 
Wysocki Signed-off-by: Greg Kroah-Hartman --- drivers/acpi/property.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c index bd1634008838..619d34d73dcf 100644 --- a/drivers/acpi/property.c +++ b/drivers/acpi/property.c @@ -433,6 +433,16 @@ void acpi_init_properties(struct acpi_device *adev) acpi_extract_apple_properties(adev); } +static void acpi_free_device_properties(struct list_head *list) +{ + struct acpi_device_properties *props, *tmp; + + list_for_each_entry_safe(props, tmp, list, list) { + list_del(&props->list); + kfree(props); + } +} + static void acpi_destroy_nondev_subnodes(struct list_head *list) { struct acpi_data_node *dn, *next; @@ -445,22 +455,18 @@ static void acpi_destroy_nondev_subnodes(struct list_head *list) wait_for_completion(&dn->kobj_done); list_del(&dn->sibling); ACPI_FREE((void *)dn->data.pointer); + acpi_free_device_properties(&dn->data.properties); kfree(dn); } } void acpi_free_properties(struct acpi_device *adev) { - struct acpi_device_properties *props, *tmp; - acpi_destroy_nondev_subnodes(&adev->data.subnodes); ACPI_FREE((void *)adev->data.pointer); adev->data.of_compatible = NULL; adev->data.pointer = NULL; - list_for_each_entry_safe(props, tmp, &adev->data.properties, list) { - list_del(&props->list); - kfree(props); - } + acpi_free_device_properties(&adev->data.properties); } /** From 058cb6d86b9789377216c936506b346aaa1eb581 Mon Sep 17 00:00:00 2001 From: Keita Suzuki Date: Mon, 25 Apr 2022 06:37:38 +0000 Subject: [PATCH 0678/1842] tracing: Fix potential double free in create_var_ref() commit 99696a2592bca641eb88cc9a80c90e591afebd0f upstream. In create_var_ref(), init_var_ref() is called to initialize the fields of variable ref_field, which is allocated in the previous function call to create_hist_field(). Function init_var_ref() allocates the corresponding fields such as ref_field->system, but frees these fields when the function encounters an error. The caller later calls destroy_hist_field() to conduct error handling, which frees the fields and the variable itself. This results in double free of the fields which are already freed in the previous function. Fix this by storing NULL to the corresponding fields when they are freed in init_var_ref(). Link: https://lkml.kernel.org/r/20220425063739.3859998-1-keitasuzuki.park@sslab.ics.keio.ac.jp Fixes: 067fe038e70f ("tracing: Add variable reference handling to hist triggers") CC: stable@vger.kernel.org Reviewed-by: Masami Hiramatsu Reviewed-by: Tom Zanussi Signed-off-by: Keita Suzuki Signed-off-by: Steven Rostedt (Google) Signed-off-by: Greg Kroah-Hartman --- kernel/trace/trace_events_hist.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c index eb7200699cf6..3ed1723b68d5 100644 --- a/kernel/trace/trace_events_hist.c +++ b/kernel/trace/trace_events_hist.c @@ -1789,8 +1789,11 @@ static int init_var_ref(struct hist_field *ref_field, return err; free: kfree(ref_field->system); + ref_field->system = NULL; kfree(ref_field->event_name); + ref_field->event_name = NULL; kfree(ref_field->name); + ref_field->name = NULL; goto out; } From 2b4c6ad38228fa4e166fbb6b949fb336199814a3 Mon Sep 17 00:00:00 2001 From: Bjorn Helgaas Date: Thu, 26 May 2022 16:52:23 -0500 Subject: [PATCH 0679/1842] PCI/PM: Fix bridge_d3_blacklist[] Elo i2 overwrite of Gigabyte X299 commit 12068bb346db5776d0ec9bb4cd073f8427a1ac92 upstream. 
92597f97a40b ("PCI/PM: Avoid putting Elo i2 PCIe Ports in D3cold") omitted braces around the new Elo i2 entry, so it overwrote the existing Gigabyte X299 entry. Add the appropriate braces. Found by: $ make W=1 drivers/pci/pci.o CC drivers/pci/pci.o drivers/pci/pci.c:2974:12: error: initialized field overwritten [-Werror=override-init] 2974 | .ident = "Elo i2", | ^~~~~~~~ Link: https://lore.kernel.org/r/20220526221258.GA409855@bhelgaas Fixes: 92597f97a40b ("PCI/PM: Avoid putting Elo i2 PCIe Ports in D3cold") Signed-off-by: Bjorn Helgaas Cc: stable@vger.kernel.org # v5.15+ Signed-off-by: Greg Kroah-Hartman --- drivers/pci/pci.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 6ebbe06f0b08..116273454648 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -2829,6 +2829,8 @@ static const struct dmi_system_id bridge_d3_blacklist[] = { DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."), DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"), }, + }, + { /* * Downstream device is not accessible after putting a root port * into D3cold and back into D0 on Elo i2. From c0e129dafce2e2cbf6e4a5131979eb99131e7284 Mon Sep 17 00:00:00 2001 From: Johan Hovold Date: Fri, 1 Apr 2022 15:38:53 +0200 Subject: [PATCH 0680/1842] PCI: qcom: Fix runtime PM imbalance on probe errors commit 87d83b96c8d6c6c2d2096bd0bdba73bcf42b8ef0 upstream. Drop the leftover pm_runtime_disable() calls from the late probe error paths that would, for example, prevent runtime PM from being reenabled after a probe deferral. Link: https://lore.kernel.org/r/20220401133854.10421-2-johan+linaro@kernel.org Fixes: 6e5da6f7d824 ("PCI: qcom: Fix error handling in runtime PM support") Signed-off-by: Johan Hovold Signed-off-by: Lorenzo Pieralisi Signed-off-by: Bjorn Helgaas Reviewed-by: Manivannan Sadhasivam Acked-by: Stanimir Varbanov Cc: stable@vger.kernel.org # 4.20 Cc: Bjorn Andersson Signed-off-by: Greg Kroah-Hartman --- drivers/pci/controller/dwc/pcie-qcom.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c index 557554f53ce9..e71187b790e2 100644 --- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -1443,17 +1443,14 @@ static int qcom_pcie_probe(struct platform_device *pdev) } ret = phy_init(pcie->phy); - if (ret) { - pm_runtime_disable(&pdev->dev); + if (ret) goto err_pm_runtime_put; - } platform_set_drvdata(pdev, pcie); ret = dw_pcie_host_init(pp); if (ret) { dev_err(dev, "cannot initialize host\n"); - pm_runtime_disable(&pdev->dev); goto err_pm_runtime_put; } From 99fd821f567e169d9e7254d00cbb305d1be4842c Mon Sep 17 00:00:00 2001 From: Johan Hovold Date: Fri, 1 Apr 2022 15:38:54 +0200 Subject: [PATCH 0681/1842] PCI: qcom: Fix unbalanced PHY init on probe errors commit 83013631f0f9961416abd812e228c8efbc2f6069 upstream. Undo the PHY initialisation (e.g. balance runtime PM) if host initialisation fails during probe. 
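For reference, a sketch of how the probe error path looks once this fix and the previous one are both applied (abridged from the two diffs; platform_set_drvdata() and the surrounding code are omitted): everything initialised before the failing step is undone in reverse order.

	ret = phy_init(pcie->phy);
	if (ret)
		goto err_pm_runtime_put;

	ret = dw_pcie_host_init(pp);
	if (ret) {
		dev_err(dev, "cannot initialize host\n");
		goto err_phy_exit;		/* undo phy_init() first */
	}

	return 0;

err_phy_exit:
	phy_exit(pcie->phy);
err_pm_runtime_put:
	pm_runtime_put(dev);
	pm_runtime_disable(dev);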
Link: https://lore.kernel.org/r/20220401133854.10421-3-johan+linaro@kernel.org Fixes: 82a823833f4e ("PCI: qcom: Add Qualcomm PCIe controller driver") Signed-off-by: Johan Hovold Signed-off-by: Lorenzo Pieralisi Signed-off-by: Bjorn Helgaas Reviewed-by: Manivannan Sadhasivam Acked-by: Stanimir Varbanov Cc: stable@vger.kernel.org # 4.5 Signed-off-by: Greg Kroah-Hartman --- drivers/pci/controller/dwc/pcie-qcom.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c index e71187b790e2..9c7cdc4e12f2 100644 --- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -1451,11 +1451,13 @@ static int qcom_pcie_probe(struct platform_device *pdev) ret = dw_pcie_host_init(pp); if (ret) { dev_err(dev, "cannot initialize host\n"); - goto err_pm_runtime_put; + goto err_phy_exit; } return 0; +err_phy_exit: + phy_exit(pcie->phy); err_pm_runtime_put: pm_runtime_put(dev); pm_runtime_disable(dev); From 7994d890123a6cad033f2842ff0177a9bda1cb23 Mon Sep 17 00:00:00 2001 From: Rei Yamamoto Date: Fri, 13 May 2022 16:48:57 -0700 Subject: [PATCH 0682/1842] mm, compaction: fast_find_migrateblock() should return pfn in the target zone commit bbe832b9db2e1ad21522f8f0bf02775fff8a0e0e upstream. At present, pages not in the target zone are added to cc->migratepages list in isolate_migratepages_block(). As a result, pages may migrate between nodes unintentionally. This would be a serious problem for older kernels without commit a984226f457f849e ("mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec"), because it can corrupt the lru list by handling pages in list without holding proper lru_lock. Avoid returning a pfn outside the target zone in the case that it is not aligned with a pageblock boundary. Otherwise isolate_migratepages_block() will handle pages not in the target zone. Link: https://lkml.kernel.org/r/20220511044300.4069-1-yamamoto.rei@jp.fujitsu.com Fixes: 70b44595eafe ("mm, compaction: use free lists to quickly locate a migration source") Signed-off-by: Rei Yamamoto Reviewed-by: Miaohe Lin Acked-by: Mel Gorman Reviewed-by: Oscar Salvador Cc: Don Dutile Cc: Wonhyuk Yang Cc: Rei Yamamoto Cc: Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- mm/compaction.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mm/compaction.c b/mm/compaction.c index dba424447473..8dfbe86bd74f 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -1747,6 +1747,8 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc) update_fast_start_pfn(cc, free_pfn); pfn = pageblock_start_pfn(free_pfn); + if (pfn < cc->zone->zone_start_pfn) + pfn = cc->zone->zone_start_pfn; cc->fast_search_fail = 0; found_block = true; set_pageblock_skip(freepage); From 74114d26e9dbe647ebb264ef5e1dcda2fbd6efd5 Mon Sep 17 00:00:00 2001 From: Nico Boehr Date: Tue, 24 May 2022 15:43:20 +0200 Subject: [PATCH 0683/1842] s390/perf: obtain sie_block from the right address commit c9bfb460c3e4da2462e16b0f0b200990b36b1dd2 upstream. Since commit 1179f170b6f0 ("s390: fix fpu restore in entry.S"), the sie_block pointer is located at empty1[1], but in sie_block() it was taken from empty1[0]. This leads to a random pointer being dereferenced, possibly causing system crash. This problem can be observed when running a simple guest with an endless loop and recording the cpu-clock event: sudo perf kvm --guestvmlinux= --guest top -e cpu-clock With this fix, the correct guest address is shown. 
Fixes: 1179f170b6f0 ("s390: fix fpu restore in entry.S") Cc: stable@vger.kernel.org Acked-by: Christian Borntraeger Acked-by: Claudio Imbrenda Reviewed-by: Heiko Carstens Signed-off-by: Nico Boehr Signed-off-by: Heiko Carstens Signed-off-by: Greg Kroah-Hartman --- arch/s390/kernel/perf_event.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/s390/kernel/perf_event.c b/arch/s390/kernel/perf_event.c index 1e75cc983546..b922dc0c8130 100644 --- a/arch/s390/kernel/perf_event.c +++ b/arch/s390/kernel/perf_event.c @@ -51,7 +51,7 @@ static struct kvm_s390_sie_block *sie_block(struct pt_regs *regs) if (!stack) return NULL; - return (struct kvm_s390_sie_block *) stack->empty1[0]; + return (struct kvm_s390_sie_block *)stack->empty1[1]; } static bool is_in_guest(struct pt_regs *regs) From 899bc4429174861122f0c236588700a4710c1fec Mon Sep 17 00:00:00 2001 From: Alexander Aring Date: Mon, 4 Apr 2022 16:06:30 -0400 Subject: [PATCH 0684/1842] dlm: fix plock invalid read commit 42252d0d2aa9b94d168241710a761588b3959019 upstream. This patch fixes an invalid read showed by KASAN. A unlock will allocate a "struct plock_op" and a followed send_op() will append it to a global send_list data structure. In some cases a followed dev_read() moves it to recv_list and dev_write() will cast it to "struct plock_xop" and access fields which are only available in those structures. At this point an invalid read happens by accessing those fields. To fix this issue the "callback" field is moved to "struct plock_op" to indicate that a cast to "plock_xop" is allowed and does the additional "plock_xop" handling if set. Example of the KASAN output which showed the invalid read: [ 2064.296453] ================================================================== [ 2064.304852] BUG: KASAN: slab-out-of-bounds in dev_write+0x52b/0x5a0 [dlm] [ 2064.306491] Read of size 8 at addr ffff88800ef227d8 by task dlm_controld/7484 [ 2064.308168] [ 2064.308575] CPU: 0 PID: 7484 Comm: dlm_controld Kdump: loaded Not tainted 5.14.0+ #9 [ 2064.310292] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 [ 2064.311618] Call Trace: [ 2064.312218] dump_stack_lvl+0x56/0x7b [ 2064.313150] print_address_description.constprop.8+0x21/0x150 [ 2064.314578] ? dev_write+0x52b/0x5a0 [dlm] [ 2064.315610] ? dev_write+0x52b/0x5a0 [dlm] [ 2064.316595] kasan_report.cold.14+0x7f/0x11b [ 2064.317674] ? dev_write+0x52b/0x5a0 [dlm] [ 2064.318687] dev_write+0x52b/0x5a0 [dlm] [ 2064.319629] ? dev_read+0x4a0/0x4a0 [dlm] [ 2064.320713] ? bpf_lsm_kernfs_init_security+0x10/0x10 [ 2064.321926] vfs_write+0x17e/0x930 [ 2064.322769] ? __fget_light+0x1aa/0x220 [ 2064.323753] ksys_write+0xf1/0x1c0 [ 2064.324548] ? 
__ia32_sys_read+0xb0/0xb0 [ 2064.325464] do_syscall_64+0x3a/0x80 [ 2064.326387] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 2064.327606] RIP: 0033:0x7f807e4ba96f [ 2064.328470] Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 39 87 f8 ff 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 7c 87 f8 ff 48 [ 2064.332902] RSP: 002b:00007ffd50cfe6e0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001 [ 2064.334658] RAX: ffffffffffffffda RBX: 000055cc3886eb30 RCX: 00007f807e4ba96f [ 2064.336275] RDX: 0000000000000040 RSI: 00007ffd50cfe7e0 RDI: 0000000000000010 [ 2064.337980] RBP: 00007ffd50cfe7e0 R08: 0000000000000000 R09: 0000000000000001 [ 2064.339560] R10: 000055cc3886eb30 R11: 0000000000000293 R12: 000055cc3886eb80 [ 2064.341237] R13: 000055cc3886eb00 R14: 000055cc3886f590 R15: 0000000000000001 [ 2064.342857] [ 2064.343226] Allocated by task 12438: [ 2064.344057] kasan_save_stack+0x1c/0x40 [ 2064.345079] __kasan_kmalloc+0x84/0xa0 [ 2064.345933] kmem_cache_alloc_trace+0x13b/0x220 [ 2064.346953] dlm_posix_unlock+0xec/0x720 [dlm] [ 2064.348811] do_lock_file_wait.part.32+0xca/0x1d0 [ 2064.351070] fcntl_setlk+0x281/0xbc0 [ 2064.352879] do_fcntl+0x5e4/0xfe0 [ 2064.354657] __x64_sys_fcntl+0x11f/0x170 [ 2064.356550] do_syscall_64+0x3a/0x80 [ 2064.358259] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 2064.360745] [ 2064.361511] Last potentially related work creation: [ 2064.363957] kasan_save_stack+0x1c/0x40 [ 2064.365811] __kasan_record_aux_stack+0xaf/0xc0 [ 2064.368100] call_rcu+0x11b/0xf70 [ 2064.369785] dlm_process_incoming_buffer+0x47d/0xfd0 [dlm] [ 2064.372404] receive_from_sock+0x290/0x770 [dlm] [ 2064.374607] process_recv_sockets+0x32/0x40 [dlm] [ 2064.377290] process_one_work+0x9a8/0x16e0 [ 2064.379357] worker_thread+0x87/0xbf0 [ 2064.381188] kthread+0x3ac/0x490 [ 2064.383460] ret_from_fork+0x22/0x30 [ 2064.385588] [ 2064.386518] Second to last potentially related work creation: [ 2064.389219] kasan_save_stack+0x1c/0x40 [ 2064.391043] __kasan_record_aux_stack+0xaf/0xc0 [ 2064.393303] call_rcu+0x11b/0xf70 [ 2064.394885] dlm_process_incoming_buffer+0x47d/0xfd0 [dlm] [ 2064.397694] receive_from_sock+0x290/0x770 [dlm] [ 2064.399932] process_recv_sockets+0x32/0x40 [dlm] [ 2064.402180] process_one_work+0x9a8/0x16e0 [ 2064.404388] worker_thread+0x87/0xbf0 [ 2064.406124] kthread+0x3ac/0x490 [ 2064.408021] ret_from_fork+0x22/0x30 [ 2064.409834] [ 2064.410599] The buggy address belongs to the object at ffff88800ef22780 [ 2064.410599] which belongs to the cache kmalloc-96 of size 96 [ 2064.416495] The buggy address is located 88 bytes inside of [ 2064.416495] 96-byte region [ffff88800ef22780, ffff88800ef227e0) [ 2064.422045] The buggy address belongs to the page: [ 2064.424635] page:00000000b6bef8bc refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xef22 [ 2064.428970] flags: 0xfffffc0000200(slab|node=0|zone=1|lastcpupid=0x1fffff) [ 2064.432515] raw: 000fffffc0000200 ffffea0000d68b80 0000001400000014 ffff888001041780 [ 2064.436110] raw: 0000000000000000 0000000080200020 00000001ffffffff 0000000000000000 [ 2064.439813] page dumped because: kasan: bad access detected [ 2064.442548] [ 2064.443310] Memory state around the buggy address: [ 2064.445988] ffff88800ef22680: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc [ 2064.449444] ffff88800ef22700: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc [ 2064.452941] >ffff88800ef22780: 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc [ 2064.456383] ^ [ 2064.459386] ffff88800ef22800: 00 00 00 
00 00 00 00 00 00 fc fc fc fc fc fc fc [ 2064.462788] ffff88800ef22880: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc [ 2064.466239] ================================================================== reproducer in python: import argparse import struct import fcntl import os parser = argparse.ArgumentParser() parser.add_argument('-f', '--file', help='file to use fcntl, must be on dlm lock filesystem e.g. gfs2') args = parser.parse_args() f = open(args.file, 'wb+') lockdata = struct.pack('hhllhh', fcntl.F_WRLCK,0,0,0,0,0) fcntl.fcntl(f, fcntl.F_SETLK, lockdata) lockdata = struct.pack('hhllhh', fcntl.F_UNLCK,0,0,0,0,0) fcntl.fcntl(f, fcntl.F_SETLK, lockdata) Fixes: 586759f03e2e ("gfs2: nfs lock support for gfs2") Cc: stable@vger.kernel.org Signed-off-by: Andreas Gruenbacher Signed-off-by: Alexander Aring Signed-off-by: David Teigland Signed-off-by: Greg Kroah-Hartman --- fs/dlm/plock.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c index c38b2b8ffd1d..a10d2bcfe75a 100644 --- a/fs/dlm/plock.c +++ b/fs/dlm/plock.c @@ -23,11 +23,11 @@ struct plock_op { struct list_head list; int done; struct dlm_plock_info info; + int (*callback)(struct file_lock *fl, int result); }; struct plock_xop { struct plock_op xop; - int (*callback)(struct file_lock *fl, int result); void *fl; void *file; struct file_lock flc; @@ -129,19 +129,18 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file, /* fl_owner is lockd which doesn't distinguish processes on the nfs client */ op->info.owner = (__u64) fl->fl_pid; - xop->callback = fl->fl_lmops->lm_grant; + op->callback = fl->fl_lmops->lm_grant; locks_init_lock(&xop->flc); locks_copy_lock(&xop->flc, fl); xop->fl = fl; xop->file = file; } else { op->info.owner = (__u64)(long) fl->fl_owner; - xop->callback = NULL; } send_op(op); - if (xop->callback == NULL) { + if (!op->callback) { rv = wait_event_interruptible(recv_wq, (op->done != 0)); if (rv == -ERESTARTSYS) { log_debug(ls, "dlm_posix_lock: wait killed %llx", @@ -203,7 +202,7 @@ static int dlm_plock_callback(struct plock_op *op) file = xop->file; flc = &xop->flc; fl = xop->fl; - notify = xop->callback; + notify = op->callback; if (op->info.rv) { notify(fl, op->info.rv); @@ -436,10 +435,9 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count, if (op->info.fsid == info.fsid && op->info.number == info.number && op->info.owner == info.owner) { - struct plock_xop *xop = (struct plock_xop *)op; list_del_init(&op->list); memcpy(&op->info, &info, sizeof(info)); - if (xop->callback) + if (op->callback) do_callback = 1; else op->done = 1; From 4ca3ac06e77da77167de9f3e93de7d32f7753636 Mon Sep 17 00:00:00 2001 From: Alexander Aring Date: Fri, 29 Apr 2022 11:06:51 -0400 Subject: [PATCH 0685/1842] dlm: fix missing lkb refcount handling commit 1689c169134f4b5a39156122d799b7dca76d8ddb upstream. We always call hold_lkb(lkb) if we increment lkb->lkb_wait_count. So, we always need to call unhold_lkb(lkb) if we decrement lkb->lkb_wait_count. This patch will add missing unhold_lkb(lkb) if we decrement lkb->lkb_wait_count. In case of setting lkb->lkb_wait_count to zero we need to countdown until reaching zero and call unhold_lkb(lkb). The waiters list unhold_lkb(lkb) can be removed because it's done for the last lkb_wait_count decrement iteration as it's done in _remove_from_waiters(). This issue was discovered by a dlm gfs2 test case which use excessively dlm_unlock(LKF_CANCEL) feature. 
Probably the lkb->lkb_wait_count value never reached above 1 if this feature isn't used and so it was not discovered before. The testcase ended in a rsb on the rsb keep data structure with a refcount of 1 but no lkb was associated with it, which is itself an invalid behaviour. A side effect of that was a condition in which the dlm was sending remove messages in a looping behaviour. With this patch that has not been reproduced. Cc: stable@vger.kernel.org Signed-off-by: Alexander Aring Signed-off-by: David Teigland Signed-off-by: Greg Kroah-Hartman --- fs/dlm/lock.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c index 1e9d8999b939..2ce96a9ce63c 100644 --- a/fs/dlm/lock.c +++ b/fs/dlm/lock.c @@ -1551,6 +1551,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype, lkb->lkb_wait_type = 0; lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL; lkb->lkb_wait_count--; + unhold_lkb(lkb); goto out_del; } @@ -1577,6 +1578,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype, log_error(ls, "remwait error %x reply %d wait_type %d overlap", lkb->lkb_id, mstype, lkb->lkb_wait_type); lkb->lkb_wait_count--; + unhold_lkb(lkb); lkb->lkb_wait_type = 0; } @@ -5312,11 +5314,16 @@ int dlm_recover_waiters_post(struct dlm_ls *ls) lkb->lkb_flags &= ~DLM_IFL_OVERLAP_UNLOCK; lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL; lkb->lkb_wait_type = 0; - lkb->lkb_wait_count = 0; + /* drop all wait_count references we still + * hold a reference for this iteration. + */ + while (lkb->lkb_wait_count) { + lkb->lkb_wait_count--; + unhold_lkb(lkb); + } mutex_lock(&ls->ls_waiters_mutex); list_del_init(&lkb->lkb_wait_reply); mutex_unlock(&ls->ls_waiters_mutex); - unhold_lkb(lkb); /* for waiters list */ if (oc || ou) { /* do an unlock or cancel instead of resending */ From 337e36550788dbe03254f0593a231c1c4873b20d Mon Sep 17 00:00:00 2001 From: Junxiao Bi via Ocfs2-devel Date: Wed, 18 May 2022 16:52:24 -0700 Subject: [PATCH 0686/1842] ocfs2: dlmfs: fix error handling of user_dlm_destroy_lock commit 863e0d81b6683c4cbc588ad831f560c90e494bef upstream. When user_dlm_destroy_lock failed, it didn't clean up the flags it set before exit. For USER_LOCK_IN_TEARDOWN, if this function fails because of lock is still in used, next time when unlink invokes this function, it will return succeed, and then unlink will remove inode and dentry if lock is not in used(file closed), but the dlm lock is still linked in dlm lock resource, then when bast come in, it will trigger a panic due to user-after-free. See the following panic call trace. To fix this, USER_LOCK_IN_TEARDOWN should be reverted if fail. And also error should be returned if USER_LOCK_IN_TEARDOWN is set to let user know that unlink fail. For the case of ocfs2_dlm_unlock failure, besides USER_LOCK_IN_TEARDOWN, USER_LOCK_BUSY is also required to be cleared. Even though spin lock is released in between, but USER_LOCK_IN_TEARDOWN is still set, for USER_LOCK_BUSY, if before every place that waits on this flag, USER_LOCK_IN_TEARDOWN is checked to bail out, that will make sure no flow waits on the busy flag set by user_dlm_destroy_lock(), then we can simplely revert USER_LOCK_BUSY when ocfs2_dlm_unlock fails. Fix user_dlm_cluster_lock() which is the only function not following this. [ 941.336392] (python,26174,16):dlmfs_unlink:562 ERROR: unlink 004fb0000060000b5a90b8c847b72e1, error -16 from destroy [ 989.757536] ------------[ cut here ]------------ [ 989.757709] kernel BUG at fs/ocfs2/dlmfs/userdlm.c:173! 
[ 989.757876] invalid opcode: 0000 [#1] SMP [ 989.758027] Modules linked in: ksplice_2zhuk2jr_ib_ipoib_new(O) ksplice_2zhuk2jr(O) mptctl mptbase xen_netback xen_blkback xen_gntalloc xen_gntdev xen_evtchn cdc_ether usbnet mii ocfs2 jbd2 rpcsec_gss_krb5 auth_rpcgss nfsv4 nfsv3 nfs_acl nfs fscache lockd grace ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs bnx2fc fcoe libfcoe libfc scsi_transport_fc sunrpc ipmi_devintf bridge stp llc rds_rdma rds bonding ib_sdp ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm falcon_lsm_serviceable(PE) falcon_nf_netcontain(PE) mlx4_vnic falcon_kal(E) falcon_lsm_pinned_13402(E) mlx4_ib ib_sa ib_mad ib_core ib_addr xenfs xen_privcmd dm_multipath iTCO_wdt iTCO_vendor_support pcspkr sb_edac edac_core i2c_i801 lpc_ich mfd_core ipmi_ssif i2c_core ipmi_si ipmi_msghandler [ 989.760686] ioatdma sg ext3 jbd mbcache sd_mod ahci libahci ixgbe dca ptp pps_core vxlan udp_tunnel ip6_udp_tunnel megaraid_sas mlx4_core crc32c_intel be2iscsi bnx2i cnic uio cxgb4i cxgb4 cxgb3i libcxgbi ipv6 cxgb3 mdio libiscsi_tcp qla4xxx iscsi_boot_sysfs libiscsi scsi_transport_iscsi wmi dm_mirror dm_region_hash dm_log dm_mod [last unloaded: ksplice_2zhuk2jr_ib_ipoib_old] [ 989.761987] CPU: 10 PID: 19102 Comm: dlm_thread Tainted: P OE 4.1.12-124.57.1.el6uek.x86_64 #2 [ 989.762290] Hardware name: Oracle Corporation ORACLE SERVER X5-2/ASM,MOTHERBOARD,1U, BIOS 30350100 06/17/2021 [ 989.762599] task: ffff880178af6200 ti: ffff88017f7c8000 task.ti: ffff88017f7c8000 [ 989.762848] RIP: e030:[] [] __user_dlm_queue_lockres.part.4+0x76/0x80 [ocfs2_dlmfs] [ 989.763185] RSP: e02b:ffff88017f7cbcb8 EFLAGS: 00010246 [ 989.763353] RAX: 0000000000000000 RBX: ffff880174d48008 RCX: 0000000000000003 [ 989.763565] RDX: 0000000000120012 RSI: 0000000000000003 RDI: ffff880174d48170 [ 989.763778] RBP: ffff88017f7cbcc8 R08: ffff88021f4293b0 R09: 0000000000000000 [ 989.763991] R10: ffff880179c8c000 R11: 0000000000000003 R12: ffff880174d48008 [ 989.764204] R13: 0000000000000003 R14: ffff880179c8c000 R15: ffff88021db7a000 [ 989.764422] FS: 0000000000000000(0000) GS:ffff880247480000(0000) knlGS:ffff880247480000 [ 989.764685] CS: e033 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 989.764865] CR2: ffff8000007f6800 CR3: 0000000001ae0000 CR4: 0000000000042660 [ 989.765081] Stack: [ 989.765167] 0000000000000003 ffff880174d48040 ffff88017f7cbd18 ffffffffc07d455f [ 989.765442] ffff88017f7cbd88 ffffffff816fb639 ffff88017f7cbd38 ffff8800361b5600 [ 989.765717] ffff88021db7a000 ffff88021f429380 0000000000000003 ffffffffc0453020 [ 989.765991] Call Trace: [ 989.766093] [] user_bast+0x5f/0xf0 [ocfs2_dlmfs] [ 989.766287] [] ? schedule_timeout+0x169/0x2d0 [ 989.766475] [] ? o2dlm_lock_ast_wrapper+0x20/0x20 [ocfs2_stack_o2cb] [ 989.766738] [] o2dlm_blocking_ast_wrapper+0x1a/0x20 [ocfs2_stack_o2cb] [ 989.767010] [] dlm_do_local_bast+0x46/0xe0 [ocfs2_dlm] [ 989.767217] [] ? dlm_lockres_calc_usage+0x4c/0x60 [ocfs2_dlm] [ 989.767466] [] dlm_thread+0xa31/0x1140 [ocfs2_dlm] [ 989.767662] [] ? __schedule+0x24a/0x810 [ 989.767834] [] ? __schedule+0x23e/0x810 [ 989.768006] [] ? __schedule+0x24a/0x810 [ 989.768178] [] ? __schedule+0x23e/0x810 [ 989.768349] [] ? __schedule+0x24a/0x810 [ 989.768521] [] ? __schedule+0x23e/0x810 [ 989.768693] [] ? __schedule+0x24a/0x810 [ 989.768893] [] ? __schedule+0x23e/0x810 [ 989.769067] [] ? __schedule+0x24a/0x810 [ 989.769241] [] ? wait_woken+0x90/0x90 [ 989.769411] [] ? dlm_kick_thread+0x80/0x80 [ocfs2_dlm] [ 989.769617] [] kthread+0xcb/0xf0 [ 989.769774] [] ? 
__schedule+0x24a/0x810 [ 989.769945] [] ? __schedule+0x24a/0x810 [ 989.770117] [] ? kthread_create_on_node+0x180/0x180 [ 989.770321] [] ret_from_fork+0x61/0x90 [ 989.770492] [] ? kthread_create_on_node+0x180/0x180 [ 989.770689] Code: d0 00 00 00 f0 45 7d c0 bf 00 20 00 00 48 89 83 c0 00 00 00 48 89 83 c8 00 00 00 e8 55 c1 8c c0 83 4b 04 10 48 83 c4 08 5b 5d c3 <0f> 0b 0f 1f 84 00 00 00 00 00 55 48 89 e5 41 55 41 54 53 48 83 [ 989.771892] RIP [] __user_dlm_queue_lockres.part.4+0x76/0x80 [ocfs2_dlmfs] [ 989.772174] RSP [ 989.772704] ---[ end trace ebd1e38cebcc93a8 ]--- [ 989.772907] Kernel panic - not syncing: Fatal exception [ 989.773173] Kernel Offset: disabled Link: https://lkml.kernel.org/r/20220518235224.87100-2-junxiao.bi@oracle.com Signed-off-by: Junxiao Bi Reviewed-by: Joseph Qi Cc: Mark Fasheh Cc: Joel Becker Cc: Joseph Qi Cc: Changwei Ge Cc: Gang He Cc: Jun Piao Cc: Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- fs/ocfs2/dlmfs/userdlm.c | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/fs/ocfs2/dlmfs/userdlm.c b/fs/ocfs2/dlmfs/userdlm.c index 339f098d9592..8fa289de3939 100644 --- a/fs/ocfs2/dlmfs/userdlm.c +++ b/fs/ocfs2/dlmfs/userdlm.c @@ -435,6 +435,11 @@ int user_dlm_cluster_lock(struct user_lock_res *lockres, } spin_lock(&lockres->l_lock); + if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) { + spin_unlock(&lockres->l_lock); + status = -EAGAIN; + goto bail; + } /* We only compare against the currently granted level * here. If the lock is blocked waiting on a downconvert, @@ -597,7 +602,7 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres) spin_lock(&lockres->l_lock); if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) { spin_unlock(&lockres->l_lock); - return 0; + goto bail; } lockres->l_flags |= USER_LOCK_IN_TEARDOWN; @@ -611,12 +616,17 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres) } if (lockres->l_ro_holders || lockres->l_ex_holders) { + lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN; spin_unlock(&lockres->l_lock); goto bail; } status = 0; if (!(lockres->l_flags & USER_LOCK_ATTACHED)) { + /* + * lock is never requested, leave USER_LOCK_IN_TEARDOWN set + * to avoid new lock request coming in. + */ spin_unlock(&lockres->l_lock); goto bail; } @@ -627,6 +637,10 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres) status = ocfs2_dlm_unlock(conn, &lockres->l_lksb, DLM_LKF_VALBLK); if (status) { + spin_lock(&lockres->l_lock); + lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN; + lockres->l_flags &= ~USER_LOCK_BUSY; + spin_unlock(&lockres->l_lock); user_log_dlm_error("ocfs2_dlm_unlock", status, lockres); goto bail; } From f297dc2364b9a38358ddf92c9812a3d18a102843 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Thu, 14 Apr 2022 12:02:31 +0800 Subject: [PATCH 0687/1842] scsi: dc395x: Fix a missing check on list iterator commit 036a45aa587a10fa2abbd50fbd0f6c4cfc44f69f upstream. The bug is here: p->target_id, p->target_lun); The list iterator 'p' will point to a bogus position containing HEAD if the list is empty or no element is found. This case must be checked before any use of the iterator, otherwise it will lead to an invalid memory access. To fix this bug, add a check. Use a new variable 'iter' as the list iterator, and use the original variable 'p' as a dedicated pointer to point to the found element. Link: https://lore.kernel.org/r/20220414040231.2662-1-xiam0nd.tong@gmail.com Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Cc: stable@vger.kernel.org Signed-off-by: Xiaomeng Tong Signed-off-by: Martin K. 
Petersen Signed-off-by: Greg Kroah-Hartman --- drivers/scsi/dc395x.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c index 6cb48ae8e124..d8967bf72ef2 100644 --- a/drivers/scsi/dc395x.c +++ b/drivers/scsi/dc395x.c @@ -3631,10 +3631,19 @@ static struct DeviceCtlBlk *device_alloc(struct AdapterCtlBlk *acb, #endif if (dcb->target_lun != 0) { /* Copy settings */ - struct DeviceCtlBlk *p; - list_for_each_entry(p, &acb->dcb_list, list) - if (p->target_id == dcb->target_id) + struct DeviceCtlBlk *p = NULL, *iter; + + list_for_each_entry(iter, &acb->dcb_list, list) + if (iter->target_id == dcb->target_id) { + p = iter; break; + } + + if (!p) { + kfree(dcb); + return NULL; + } + dprintkdbg(DBG_1, "device_alloc: <%02i-%i> copy from <%02i-%i>\n", dcb->target_id, dcb->target_lun, From 556e404691ede7bad8aa10814971e3b10ffdec47 Mon Sep 17 00:00:00 2001 From: Manivannan Sadhasivam Date: Wed, 4 May 2022 14:12:10 +0530 Subject: [PATCH 0688/1842] scsi: ufs: qcom: Add a readl() to make sure ref_clk gets enabled commit 8eecddfca30e1651dc1c74531ed5eef21dcce7e3 upstream. In ufs_qcom_dev_ref_clk_ctrl(), it was noted that the ref_clk needs to be stable for at least 1us. Even though there is wmb() to make sure the write gets "completed", there is no guarantee that the write actually reached the UFS device. There is a good chance that the write could be stored in a Write Buffer (WB). In that case, even though the CPU waits for 1us, the ref_clk might not be stable for that period. So lets do a readl() to make sure that the previous write has reached the UFS device before udelay(). Also, the wmb() after writel_relaxed() is not really needed. Both writel() and readl() are ordered on all architectures and the CPU won't speculate instructions after readl() due to the in-built control dependency with read value on weakly ordered architectures. So it can be safely removed. Link: https://lore.kernel.org/r/20220504084212.11605-4-manivannan.sadhasivam@linaro.org Fixes: f06fcc7155dc ("scsi: ufs-qcom: add QUniPro hardware support and power optimizations") Cc: stable@vger.kernel.org Reviewed-by: Bjorn Andersson Signed-off-by: Manivannan Sadhasivam Signed-off-by: Martin K. Petersen Signed-off-by: Greg Kroah-Hartman --- drivers/scsi/ufs/ufs-qcom.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c index 117740b302fa..08331ecbe91f 100644 --- a/drivers/scsi/ufs/ufs-qcom.c +++ b/drivers/scsi/ufs/ufs-qcom.c @@ -664,8 +664,11 @@ static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable) writel_relaxed(temp, host->dev_ref_clk_ctrl_mmio); - /* ensure that ref_clk is enabled/disabled before we return */ - wmb(); + /* + * Make sure the write to ref_clk reaches the destination and + * not stored in a Write Buffer (WB). + */ + readl(host->dev_ref_clk_ctrl_mmio); /* * If we call hibern8 exit after this, we need to make sure that From be585921f29df5422a39c952d188b418ad48ffab Mon Sep 17 00:00:00 2001 From: Dave Airlie Date: Mon, 23 May 2022 10:24:18 +1000 Subject: [PATCH 0689/1842] drm/amdgpu/cs: make commands with 0 chunks illegal behaviour. commit 31ab27b14daaa75541a415c6794d6f3567fea44a upstream. Submitting a cs with 0 chunks, causes an oops later, found trying to execute the wrong userspace driver. 
MESA_LOADER_DRIVER_OVERRIDE=v3d glxinfo [172536.665184] BUG: kernel NULL pointer dereference, address: 00000000000001d8 [172536.665188] #PF: supervisor read access in kernel mode [172536.665189] #PF: error_code(0x0000) - not-present page [172536.665191] PGD 6712a0067 P4D 6712a0067 PUD 5af9ff067 PMD 0 [172536.665195] Oops: 0000 [#1] SMP NOPTI [172536.665197] CPU: 7 PID: 2769838 Comm: glxinfo Tainted: P O 5.10.81 #1-NixOS [172536.665199] Hardware name: To be filled by O.E.M. To be filled by O.E.M./CROSSHAIR V FORMULA-Z, BIOS 2201 03/23/2015 [172536.665272] RIP: 0010:amdgpu_cs_ioctl+0x96/0x1ce0 [amdgpu] [172536.665274] Code: 75 18 00 00 4c 8b b2 88 00 00 00 8b 46 08 48 89 54 24 68 49 89 f7 4c 89 5c 24 60 31 d2 4c 89 74 24 30 85 c0 0f 85 c0 01 00 00 <48> 83 ba d8 01 00 00 00 48 8b b4 24 90 00 00 00 74 16 48 8b 46 10 [172536.665276] RSP: 0018:ffffb47c0e81bbe0 EFLAGS: 00010246 [172536.665277] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000 [172536.665278] RDX: 0000000000000000 RSI: ffffb47c0e81be28 RDI: ffffb47c0e81bd68 [172536.665279] RBP: ffff936524080010 R08: 0000000000000000 R09: ffffb47c0e81be38 [172536.665281] R10: ffff936524080010 R11: ffff936524080000 R12: ffffb47c0e81bc40 [172536.665282] R13: ffffb47c0e81be28 R14: ffff9367bc410000 R15: ffffb47c0e81be28 [172536.665283] FS: 00007fe35e05d740(0000) GS:ffff936c1edc0000(0000) knlGS:0000000000000000 [172536.665284] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [172536.665286] CR2: 00000000000001d8 CR3: 0000000532e46000 CR4: 00000000000406e0 [172536.665287] Call Trace: [172536.665322] ? amdgpu_cs_find_mapping+0x110/0x110 [amdgpu] [172536.665332] drm_ioctl_kernel+0xaa/0xf0 [drm] [172536.665338] drm_ioctl+0x201/0x3b0 [drm] [172536.665369] ? amdgpu_cs_find_mapping+0x110/0x110 [amdgpu] [172536.665372] ? selinux_file_ioctl+0x135/0x230 [172536.665399] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] [172536.665403] __x64_sys_ioctl+0x83/0xb0 [172536.665406] do_syscall_64+0x33/0x40 [172536.665409] entry_SYSCALL_64_after_hwframe+0x44/0xa9 Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2018 Signed-off-by: Dave Airlie Cc: stable@vger.kernel.org Reviewed-by: Alex Deucher Signed-off-by: Alex Deucher Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c index 867fcee6b0d3..ffd8f5601e28 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c @@ -116,7 +116,7 @@ static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs int ret; if (cs->in.num_chunks == 0) - return 0; + return -EINVAL; chunk_array = kmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL); if (!chunk_array) From 436cff507f2a41230baacc3e2ef1d3b2d2653f40 Mon Sep 17 00:00:00 2001 From: Lucas Stach Date: Wed, 23 Mar 2022 17:08:22 +0100 Subject: [PATCH 0690/1842] drm/etnaviv: check for reaped mapping in etnaviv_iommu_unmap_gem MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit e168c25526cd0368af098095c2ded4a008007e1b upstream. When the mapping is already reaped the unmap must be a no-op, as we would otherwise try to remove the mapping twice, corrupting the involved data structures. 
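In other words, the guard added in the diff below makes the unmap a no-op once the mapping has been reaped; sketched here with the names from the diff:

	mutex_lock(&context->lock);
	if (!mapping->context) {		/* already reaped by another thread */
		mutex_unlock(&context->lock);
		return;				/* nothing left to undo */
	}
	/* ... the normal unmap path continues with the lock held ... */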
Cc: stable@vger.kernel.org # 5.4 Signed-off-by: Lucas Stach Reviewed-by: Philipp Zabel Tested-by: Guido Günther Acked-by: Guido Günther Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c index 984569a59a90..9ba2fe48228f 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c @@ -282,6 +282,12 @@ void etnaviv_iommu_unmap_gem(struct etnaviv_iommu_context *context, mutex_lock(&context->lock); + /* Bail if the mapping has been reaped by another thread */ + if (!mapping->context) { + mutex_unlock(&context->lock); + return; + } + /* If the vram node is on the mm, unmap and remove the node */ if (mapping->vram_node.mm == &context->mm) etnaviv_iommu_remove_mapping(context, mapping); From 97a9ec86ccb4e336ecde46db42b59b2ff7e0d719 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Sun, 27 Mar 2022 15:58:24 +0800 Subject: [PATCH 0691/1842] drm/nouveau/clk: Fix an incorrect NULL check on list iterator commit 1c3b2a27def609473ed13b1cd668cb10deab49b4 upstream. The bug is here: if (nvkm_cstate_valid(clk, cstate, max_volt, clk->temp)) return cstate; The list iterator value 'cstate' will *always* be set and non-NULL by list_for_each_entry_from_reverse(), so it is incorrect to assume that the iterator value will be unchanged if the list is empty or no element is found (In fact, it will be a bogus pointer to an invalid structure object containing the HEAD). Also it missed a NULL check at callsite and may lead to invalid memory access after that. To fix this bug, just return 'encoder' when found, otherwise return NULL. And add the NULL check. Cc: stable@vger.kernel.org Fixes: 1f7f3d91ad38a ("drm/nouveau/clk: Respect voltage limits in nvkm_cstate_prog") Signed-off-by: Xiaomeng Tong Reviewed-by: Lyude Paul Signed-off-by: Lyude Paul Link: https://patchwork.freedesktop.org/patch/msgid/20220327075824.11806-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c index dc184e857f85..5214fd0658b8 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c @@ -135,10 +135,10 @@ nvkm_cstate_find_best(struct nvkm_clk *clk, struct nvkm_pstate *pstate, list_for_each_entry_from_reverse(cstate, &pstate->list, head) { if (nvkm_cstate_valid(clk, cstate, max_volt, clk->temp)) - break; + return cstate; } - return cstate; + return NULL; } static struct nvkm_cstate * @@ -169,6 +169,8 @@ nvkm_cstate_prog(struct nvkm_clk *clk, struct nvkm_pstate *pstate, int cstatei) if (!list_empty(&pstate->list)) { cstate = nvkm_cstate_get(clk, pstate, cstatei); cstate = nvkm_cstate_find_best(clk, pstate, cstate); + if (!cstate) + return -EINVAL; } else { cstate = &pstate->base; } From addf0ae792589f33b2d7daabd53e30fc5bc33780 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Sun, 27 Mar 2022 15:39:25 +0800 Subject: [PATCH 0692/1842] drm/nouveau/kms/nv50-: atom: fix an incorrect NULL check on list iterator commit 6ce4431c7ba7954c4fa6a96ce16ca1b2943e1a83 upstream. 
The bug is here: return encoder; The list iterator value 'encoder' will *always* be set and non-NULL by drm_for_each_encoder_mask(), so it is incorrect to assume that the iterator value will be NULL if the list is empty or no element found. Otherwise it will bypass some NULL checks and lead to invalid memory access passing the check. To fix this bug, just return 'encoder' when found, otherwise return NULL. Cc: stable@vger.kernel.org Fixes: 12885ecbfe62d ("drm/nouveau/kms/nvd9-: Add CRC support") Signed-off-by: Xiaomeng Tong Reviewed-by: Lyude Paul [Changed commit title] Signed-off-by: Lyude Paul Link: https://patchwork.freedesktop.org/patch/msgid/20220327073925.11121-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/nouveau/dispnv50/atom.h | 6 +++--- drivers/gpu/drm/nouveau/dispnv50/crc.c | 27 ++++++++++++++++++++----- 2 files changed, 25 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/nouveau/dispnv50/atom.h b/drivers/gpu/drm/nouveau/dispnv50/atom.h index 3d82b3c67dec..93f8f4f64578 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/atom.h +++ b/drivers/gpu/drm/nouveau/dispnv50/atom.h @@ -160,14 +160,14 @@ nv50_head_atom_get(struct drm_atomic_state *state, struct drm_crtc *crtc) static inline struct drm_encoder * nv50_head_atom_get_encoder(struct nv50_head_atom *atom) { - struct drm_encoder *encoder = NULL; + struct drm_encoder *encoder; /* We only ever have a single encoder */ drm_for_each_encoder_mask(encoder, atom->state.crtc->dev, atom->state.encoder_mask) - break; + return encoder; - return encoder; + return NULL; } #define nv50_wndw_atom(p) container_of((p), struct nv50_wndw_atom, state) diff --git a/drivers/gpu/drm/nouveau/dispnv50/crc.c b/drivers/gpu/drm/nouveau/dispnv50/crc.c index 66f32d965c72..5624a716e11c 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/crc.c +++ b/drivers/gpu/drm/nouveau/dispnv50/crc.c @@ -411,9 +411,18 @@ void nv50_crc_atomic_check_outp(struct nv50_atom *atom) struct nv50_head_atom *armh = nv50_head_atom(old_crtc_state); struct nv50_head_atom *asyh = nv50_head_atom(new_crtc_state); struct nv50_outp_atom *outp_atom; - struct nouveau_encoder *outp = - nv50_real_outp(nv50_head_atom_get_encoder(armh)); - struct drm_encoder *encoder = &outp->base.base; + struct nouveau_encoder *outp; + struct drm_encoder *encoder, *enc; + + enc = nv50_head_atom_get_encoder(armh); + if (!enc) + continue; + + outp = nv50_real_outp(enc); + if (!outp) + continue; + + encoder = &outp->base.base; if (!asyh->clr.crc) continue; @@ -464,8 +473,16 @@ void nv50_crc_atomic_set(struct nv50_head *head, struct drm_device *dev = crtc->dev; struct nv50_crc *crc = &head->crc; const struct nv50_crc_func *func = nv50_disp(dev)->core->func->crc; - struct nouveau_encoder *outp = - nv50_real_outp(nv50_head_atom_get_encoder(asyh)); + struct nouveau_encoder *outp; + struct drm_encoder *encoder; + + encoder = nv50_head_atom_get_encoder(asyh); + if (!encoder) + return; + + outp = nv50_real_outp(encoder); + if (!outp) + return; func->set_src(head, outp->or, nv50_crc_source_type(outp, asyh->crc.src), From 495ac7757663c6b61ffa5b7442285629f387a0a6 Mon Sep 17 00:00:00 2001 From: Brian Norris Date: Tue, 1 Mar 2022 18:11:38 -0800 Subject: [PATCH 0693/1842] drm/bridge: analogix_dp: Grab runtime PM reference for DP-AUX commit 8fb6c44fe8468f92ac7b8bbfcca4404a4e88645f upstream. If the display is not enable()d, then we aren't holding a runtime PM reference here. Thus, it's easy to accidentally cause a hang, if user space is poking around at /dev/drm_dp_aux0 at the "wrong" time. 
Let's get a runtime PM reference, and check that we "see" the panel. Don't force any panel power-up, etc., because that can be intrusive, and that's not what other drivers do (see drivers/gpu/drm/bridge/ti-sn65dsi86.c and drivers/gpu/drm/bridge/parade-ps8640.c.) Fixes: 0d97ad03f422 ("drm/bridge: analogix_dp: Remove duplicated code") Cc: Cc: Tomeu Vizoso Signed-off-by: Brian Norris Reviewed-by: Douglas Anderson Signed-off-by: Douglas Anderson Link: https://patchwork.freedesktop.org/patch/msgid/20220301181107.v4.1.I773a08785666ebb236917b0c8e6c05e3de471e75@changeid Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/bridge/analogix/analogix_dp_core.c | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c index 31b4ff60a010..9755672caf1a 100644 --- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c +++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c @@ -1639,8 +1639,19 @@ static ssize_t analogix_dpaux_transfer(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg) { struct analogix_dp_device *dp = to_dp(aux); + int ret; - return analogix_dp_transfer(dp, msg); + pm_runtime_get_sync(dp->dev); + + ret = analogix_dp_detect_hpd(dp); + if (ret) + goto out; + + ret = analogix_dp_transfer(dp, msg); +out: + pm_runtime_put(dp->dev); + + return ret; } struct analogix_dp_device * From e28321e013653d77dcbc015d409e41beb037a639 Mon Sep 17 00:00:00 2001 From: Jani Nikula Date: Fri, 20 May 2022 12:46:00 +0300 Subject: [PATCH 0694/1842] drm/i915/dsi: fix VBT send packet port selection for ICL+ MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 0ea917819d12fed41ea4662cc26ffa0060a5c354 upstream. The VBT send packet port selection was never updated for ICL+ where the 2nd link is on port B instead of port C as in VLV+ DSI. First, single link DSI needs to use the configured port instead of relying on the VBT sequence block port. Remove the hard-coded port C check here and make it generic. For reference, see commit f915084edc5a ("drm/i915: Changes related to the sequence port no for") for the original VLV specific fix. Second, the sequence block port number is either 0 or 1, where 1 indicates the 2nd link. Remove the hard-coded port C here for 2nd link. (This could be a "find second set bit" on DSI ports, but just check the two possible options.) Third, sanity check the result with a warning to avoid a NULL pointer dereference. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/5984 Cc: stable@vger.kernel.org # v4.19+ Cc: Ville Syrjala Signed-off-by: Jani Nikula Reviewed-by: Ville Syrjälä Link: https://patchwork.freedesktop.org/patch/msgid/20220520094600.2066945-1-jani.nikula@intel.com (cherry picked from commit 08c59dde71b73a0ac94e3ed2d431345b01f20485) Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/i915/display/intel_dsi_vbt.c | 33 +++++++++++++------- 1 file changed, 22 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c index eed037ec0b29..8777d35ac7a7 100644 --- a/drivers/gpu/drm/i915/display/intel_dsi_vbt.c +++ b/drivers/gpu/drm/i915/display/intel_dsi_vbt.c @@ -121,9 +121,25 @@ struct i2c_adapter_lookup { #define ICL_GPIO_DDPA_CTRLCLK_2 8 #define ICL_GPIO_DDPA_CTRLDATA_2 9 -static enum port intel_dsi_seq_port_to_port(u8 port) +static enum port intel_dsi_seq_port_to_port(struct intel_dsi *intel_dsi, + u8 seq_port) { - return port ? 
PORT_C : PORT_A; + /* + * If single link DSI is being used on any port, the VBT sequence block + * send packet apparently always has 0 for the port. Just use the port + * we have configured, and ignore the sequence block port. + */ + if (hweight8(intel_dsi->ports) == 1) + return ffs(intel_dsi->ports) - 1; + + if (seq_port) { + if (intel_dsi->ports & PORT_B) + return PORT_B; + else if (intel_dsi->ports & PORT_C) + return PORT_C; + } + + return PORT_A; } static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi, @@ -145,15 +161,10 @@ static const u8 *mipi_exec_send_packet(struct intel_dsi *intel_dsi, seq_port = (flags >> MIPI_PORT_SHIFT) & 3; - /* For DSI single link on Port A & C, the seq_port value which is - * parsed from Sequence Block#53 of VBT has been set to 0 - * Now, read/write of packets for the DSI single link on Port A and - * Port C will based on the DVO port from VBT block 2. - */ - if (intel_dsi->ports == (1 << PORT_C)) - port = PORT_C; - else - port = intel_dsi_seq_port_to_port(seq_port); + port = intel_dsi_seq_port_to_port(intel_dsi, seq_port); + + if (drm_WARN_ON(&dev_priv->drm, !intel_dsi->dsi_hosts[port])) + goto out; dsi_device = intel_dsi->dsi_hosts[port]->device; if (!dsi_device) { From 2401f1cf3dee6974d799a4387f0c5d98cec29326 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Fri, 8 Apr 2022 16:37:28 +0800 Subject: [PATCH 0695/1842] md: fix an incorrect NULL check in does_sb_need_changing commit fc8738343eefc4ea8afb6122826dea48eacde514 upstream. The bug is here: if (!rdev) The list iterator value 'rdev' will *always* be set and non-NULL by rdev_for_each(), so it is incorrect to assume that the iterator value will be NULL if the list is empty or no element found. Otherwise it will bypass the NULL check and lead to invalid memory access passing the check. To fix the bug, use a new variable 'iter' as the list iterator, while using the original variable 'rdev' as a dedicated pointer to point to the found element. Cc: stable@vger.kernel.org Fixes: 2aa82191ac36 ("md-cluster: Perform a lazy update") Acked-by: Guoqing Jiang Signed-off-by: Xiaomeng Tong Acked-by: Goldwyn Rodrigues Signed-off-by: Song Liu Signed-off-by: Greg Kroah-Hartman --- drivers/md/md.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/drivers/md/md.c b/drivers/md/md.c index cc3876500c4b..e0f99c8750dc 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -2648,14 +2648,16 @@ static void sync_sbs(struct mddev *mddev, int nospares) static bool does_sb_need_changing(struct mddev *mddev) { - struct md_rdev *rdev; + struct md_rdev *rdev = NULL, *iter; struct mdp_superblock_1 *sb; int role; /* Find a good rdev */ - rdev_for_each(rdev, mddev) - if ((rdev->raid_disk >= 0) && !test_bit(Faulty, &rdev->flags)) + rdev_for_each(iter, mddev) + if ((iter->raid_disk >= 0) && !test_bit(Faulty, &iter->flags)) { + rdev = iter; break; + } /* No good device found. */ if (!rdev) From b2b01444228d59ba62b75286c84cea40bb064c64 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Fri, 8 Apr 2022 16:47:15 +0800 Subject: [PATCH 0696/1842] md: fix an incorrect NULL check in md_reload_sb commit 64c54d9244a4efe9bc6e9c98e13c4bbb8bb39083 upstream. The bug is here: if (!rdev || rdev->desc_nr != nr) { The list iterator value 'rdev' will *always* be set and non-NULL by rdev_for_each_rcu(), so it is incorrect to assume that the iterator value will be NULL if the list is empty or no element found (In fact, it will be a bogus pointer to an invalid struct object containing the HEAD). 
Otherwise it will bypass the check and lead to invalid memory access passing the check. To fix the bug, use a new variable 'iter' as the list iterator, while using the original variable 'pdev' as a dedicated pointer to point to the found element. Cc: stable@vger.kernel.org Fixes: 70bcecdb1534 ("md-cluster: Improve md_reload_sb to be less error prone") Signed-off-by: Xiaomeng Tong Signed-off-by: Song Liu Signed-off-by: Greg Kroah-Hartman --- drivers/md/md.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/drivers/md/md.c b/drivers/md/md.c index e0f99c8750dc..7a9701adee73 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -9730,16 +9730,18 @@ static int read_rdev(struct mddev *mddev, struct md_rdev *rdev) void md_reload_sb(struct mddev *mddev, int nr) { - struct md_rdev *rdev; + struct md_rdev *rdev = NULL, *iter; int err; /* Find the rdev */ - rdev_for_each_rcu(rdev, mddev) { - if (rdev->desc_nr == nr) + rdev_for_each_rcu(iter, mddev) { + if (iter->desc_nr == nr) { + rdev = iter; break; + } } - if (!rdev || rdev->desc_nr != nr) { + if (!rdev) { pr_warn("%s: %d Could not find rdev with nr %d\n", __func__, __LINE__, nr); return; } From 08788b917b79535303534956384e7a8942df8cc4 Mon Sep 17 00:00:00 2001 From: Tokunori Ikegami Date: Thu, 24 Mar 2022 02:04:55 +0900 Subject: [PATCH 0697/1842] mtd: cfi_cmdset_0002: Move and rename chip_check/chip_ready/chip_good_for_write commit 083084df578a8bdb18334f69e7b32d690aaa3247 upstream. This is a preparation patch for the S29GL064N buffer writes fix. There is no functional change. Link: https://lore.kernel.org/r/b687c259-6413-26c9-d4c9-b3afa69ea124@pengutronix.de/ Fixes: dfeae1073583("mtd: cfi_cmdset_0002: Change write buffer to check correct value") Signed-off-by: Tokunori Ikegami Cc: stable@vger.kernel.org Acked-by: Vignesh Raghavendra Signed-off-by: Miquel Raynal Link: https://lore.kernel.org/linux-mtd/20220323170458.5608-2-ikegami.t@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/mtd/chips/cfi_cmdset_0002.c | 101 ++++++++++------------------ 1 file changed, 35 insertions(+), 66 deletions(-) diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c index 96a27e06401f..a431c82cf7f9 100644 --- a/drivers/mtd/chips/cfi_cmdset_0002.c +++ b/drivers/mtd/chips/cfi_cmdset_0002.c @@ -797,47 +797,11 @@ static struct mtd_info *cfi_amdstd_setup(struct mtd_info *mtd) return NULL; } -/* - * Return true if the chip is ready. - * - * Ready is one of: read mode, query mode, erase-suspend-read mode (in any - * non-suspended sector) and is indicated by no toggle bits toggling. - * - * Note that anything more complicated than checking if no bits are toggling - * (including checking DQ5 for an error status) is tricky to get working - * correctly and is therefore not done (particularly with interleaved chips - * as each chip must be checked independently of the others). 
- */ -static int __xipram chip_ready(struct map_info *map, struct flchip *chip, - unsigned long addr) -{ - struct cfi_private *cfi = map->fldrv_priv; - map_word d, t; - - if (cfi_use_status_reg(cfi)) { - map_word ready = CMD(CFI_SR_DRB); - /* - * For chips that support status register, check device - * ready bit - */ - cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi, - cfi->device_type, NULL); - d = map_read(map, addr); - - return map_word_andequal(map, d, ready, ready); - } - - d = map_read(map, addr); - t = map_read(map, addr); - - return map_word_equal(map, d, t); -} - /* * Return true if the chip is ready and has the correct value. * * Ready is one of: read mode, query mode, erase-suspend-read mode (in any - * non-suspended sector) and it is indicated by no bits toggling. + * non-suspended sector) and is indicated by no toggle bits toggling. * * Error are indicated by toggling bits or bits held with the wrong value, * or with bits toggling. @@ -846,33 +810,36 @@ static int __xipram chip_ready(struct map_info *map, struct flchip *chip, * (including checking DQ5 for an error status) is tricky to get working * correctly and is therefore not done (particularly with interleaved chips * as each chip must be checked independently of the others). - * */ -static int __xipram chip_good(struct map_info *map, struct flchip *chip, - unsigned long addr, map_word expected) +static int __xipram chip_ready(struct map_info *map, struct flchip *chip, + unsigned long addr, map_word *expected) { struct cfi_private *cfi = map->fldrv_priv; - map_word oldd, curd; + map_word d, t; + int ret; if (cfi_use_status_reg(cfi)) { map_word ready = CMD(CFI_SR_DRB); - /* * For chips that support status register, check device * ready bit */ cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi, cfi->device_type, NULL); - curd = map_read(map, addr); + t = map_read(map, addr); - return map_word_andequal(map, curd, ready, ready); + return map_word_andequal(map, t, ready, ready); } - oldd = map_read(map, addr); - curd = map_read(map, addr); + d = map_read(map, addr); + t = map_read(map, addr); - return map_word_equal(map, oldd, curd) && - map_word_equal(map, curd, expected); + ret = map_word_equal(map, d, t); + + if (!ret || !expected) + return ret; + + return map_word_equal(map, t, *expected); } static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr, int mode) @@ -889,7 +856,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr case FL_STATUS: for (;;) { - if (chip_ready(map, chip, adr)) + if (chip_ready(map, chip, adr, NULL)) break; if (time_after(jiffies, timeo)) { @@ -927,7 +894,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr chip->state = FL_ERASE_SUSPENDING; chip->erase_suspended = 1; for (;;) { - if (chip_ready(map, chip, adr)) + if (chip_ready(map, chip, adr, NULL)) break; if (time_after(jiffies, timeo)) { @@ -1458,7 +1425,7 @@ static int do_otp_lock(struct map_info *map, struct flchip *chip, loff_t adr, /* wait for chip to become ready */ timeo = jiffies + msecs_to_jiffies(2); for (;;) { - if (chip_ready(map, chip, adr)) + if (chip_ready(map, chip, adr, NULL)) break; if (time_after(jiffies, timeo)) { @@ -1690,11 +1657,11 @@ static int __xipram do_write_oneword_once(struct map_info *map, } /* - * We check "time_after" and "!chip_good" before checking - * "chip_good" to avoid the failure due to scheduling. 
+ * We check "time_after" and "!chip_ready" before checking + * "chip_ready" to avoid the failure due to scheduling. */ if (time_after(jiffies, timeo) && - !chip_good(map, chip, adr, datum)) { + !chip_ready(map, chip, adr, &datum)) { xip_enable(map, chip, adr); printk(KERN_WARNING "MTD %s(): software timeout\n", __func__); xip_disable(map, chip, adr); @@ -1702,7 +1669,7 @@ static int __xipram do_write_oneword_once(struct map_info *map, break; } - if (chip_good(map, chip, adr, datum)) { + if (chip_ready(map, chip, adr, &datum)) { if (cfi_check_err_status(map, chip, adr)) ret = -EIO; break; @@ -1970,18 +1937,18 @@ static int __xipram do_write_buffer_wait(struct map_info *map, } /* - * We check "time_after" and "!chip_good" before checking - * "chip_good" to avoid the failure due to scheduling. + * We check "time_after" and "!chip_ready" before checking + * "chip_ready" to avoid the failure due to scheduling. */ if (time_after(jiffies, timeo) && - !chip_good(map, chip, adr, datum)) { + !chip_ready(map, chip, adr, &datum)) { pr_err("MTD %s(): software timeout, address:0x%.8lx.\n", __func__, adr); ret = -EIO; break; } - if (chip_good(map, chip, adr, datum)) { + if (chip_ready(map, chip, adr, &datum)) { if (cfi_check_err_status(map, chip, adr)) ret = -EIO; break; @@ -2190,7 +2157,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip, * If the driver thinks the chip is idle, and no toggle bits * are changing, then the chip is actually idle for sure. */ - if (chip->state == FL_READY && chip_ready(map, chip, adr)) + if (chip->state == FL_READY && chip_ready(map, chip, adr, NULL)) return 0; /* @@ -2207,7 +2174,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip, /* wait for the chip to become ready */ for (i = 0; i < jiffies_to_usecs(timeo); i++) { - if (chip_ready(map, chip, adr)) + if (chip_ready(map, chip, adr, NULL)) return 0; udelay(1); @@ -2271,13 +2238,13 @@ static int do_panic_write_oneword(struct map_info *map, struct flchip *chip, map_write(map, datum, adr); for (i = 0; i < jiffies_to_usecs(uWriteTimeout); i++) { - if (chip_ready(map, chip, adr)) + if (chip_ready(map, chip, adr, NULL)) break; udelay(1); } - if (!chip_good(map, chip, adr, datum) || + if (!chip_ready(map, chip, adr, &datum) || cfi_check_err_status(map, chip, adr)) { /* reset on all failures. 
*/ map_write(map, CMD(0xF0), chip->start); @@ -2419,6 +2386,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip) DECLARE_WAITQUEUE(wait, current); int ret; int retry_cnt = 0; + map_word datum = map_word_ff(map); adr = cfi->addr_unlock1; @@ -2473,7 +2441,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip) chip->erase_suspended = 0; } - if (chip_good(map, chip, adr, map_word_ff(map))) { + if (chip_ready(map, chip, adr, &datum)) { if (cfi_check_err_status(map, chip, adr)) ret = -EIO; break; @@ -2518,6 +2486,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip, DECLARE_WAITQUEUE(wait, current); int ret; int retry_cnt = 0; + map_word datum = map_word_ff(map); adr += chip->start; @@ -2572,7 +2541,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip, chip->erase_suspended = 0; } - if (chip_good(map, chip, adr, map_word_ff(map))) { + if (chip_ready(map, chip, adr, &datum)) { if (cfi_check_err_status(map, chip, adr)) ret = -EIO; break; @@ -2766,7 +2735,7 @@ static int __maybe_unused do_ppb_xxlock(struct map_info *map, */ timeo = jiffies + msecs_to_jiffies(2000); /* 2s max (un)locking */ for (;;) { - if (chip_ready(map, chip, adr)) + if (chip_ready(map, chip, adr, NULL)) break; if (time_after(jiffies, timeo)) { From d09dad00574b342e6103071308597de94a4ef4d3 Mon Sep 17 00:00:00 2001 From: Tokunori Ikegami Date: Thu, 24 Mar 2022 02:04:56 +0900 Subject: [PATCH 0698/1842] mtd: cfi_cmdset_0002: Use chip_ready() for write on S29GL064N commit 0a8e98305f63deaf0a799d5cf5532cc83af035d1 upstream. Since commit dfeae1073583("mtd: cfi_cmdset_0002: Change write buffer to check correct value") buffered writes fail on S29GL064N. This is because, on S29GL064N, reads return 0xFF at the end of DQ polling for write completion, where as, chip_good() check expects actual data written to the last location to be returned post DQ polling completion. Fix is to revert to using chip_good() for S29GL064N which only checks for DQ lines to settle down to determine write completion. 
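A standalone sketch, not the mtd driver code: the two completion checks discussed above differ only in whether the settled value is also compared against the data that was written. The function names and the 16-bit bus width are assumptions for illustration; per the changelog, S29GL064N returns 0xFF at the end of DQ polling, so only the toggle-only check is reliable for it.

#include <stdbool.h>
#include <stdint.h>

/* Completion by DQ polling alone: two consecutive reads match, i.e. no
 * status bits are toggling any more. */
bool flash_write_done(const volatile uint16_t *addr)
{
	uint16_t first = *addr;
	uint16_t second = *addr;

	return first == second;
}

/* Stricter check: also require the settled value to equal the data that
 * was written, which parts like S29GL064N do not provide at this point. */
bool flash_write_verified(const volatile uint16_t *addr, uint16_t expected)
{
	uint16_t first = *addr;
	uint16_t second = *addr;

	return first == second && second == expected;
}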
Link: https://lore.kernel.org/r/b687c259-6413-26c9-d4c9-b3afa69ea124@pengutronix.de/ Fixes: dfeae1073583("mtd: cfi_cmdset_0002: Change write buffer to check correct value") Cc: stable@vger.kernel.org Signed-off-by: Tokunori Ikegami Acked-by: Vignesh Raghavendra Signed-off-by: Miquel Raynal Link: https://lore.kernel.org/linux-mtd/20220323170458.5608-3-ikegami.t@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/mtd/chips/cfi_cmdset_0002.c | 42 +++++++++++++++++++++++------ include/linux/mtd/cfi.h | 1 + 2 files changed, 35 insertions(+), 8 deletions(-) diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c index a431c82cf7f9..9bd65f3f805c 100644 --- a/drivers/mtd/chips/cfi_cmdset_0002.c +++ b/drivers/mtd/chips/cfi_cmdset_0002.c @@ -59,6 +59,10 @@ #define CFI_SR_WBASB BIT(3) #define CFI_SR_SLSB BIT(1) +enum cfi_quirks { + CFI_QUIRK_DQ_TRUE_DATA = BIT(0), +}; + static int cfi_amdstd_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *); static int cfi_amdstd_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *); #if !FORCE_WORD_WRITE @@ -432,6 +436,15 @@ static void fixup_s29ns512p_sectors(struct mtd_info *mtd) mtd->name); } +static void fixup_quirks(struct mtd_info *mtd) +{ + struct map_info *map = mtd->priv; + struct cfi_private *cfi = map->fldrv_priv; + + if (cfi->mfr == CFI_MFR_AMD && cfi->id == 0x0c01) + cfi->quirks |= CFI_QUIRK_DQ_TRUE_DATA; +} + /* Used to fix CFI-Tables of chips without Extended Query Tables */ static struct cfi_fixup cfi_nopri_fixup_table[] = { { CFI_MFR_SST, 0x234a, fixup_sst39vf }, /* SST39VF1602 */ @@ -470,6 +483,7 @@ static struct cfi_fixup cfi_fixup_table[] = { #if !FORCE_WORD_WRITE { CFI_MFR_ANY, CFI_ID_ANY, fixup_use_write_buffers }, #endif + { CFI_MFR_ANY, CFI_ID_ANY, fixup_quirks }, { 0, 0, NULL } }; static struct cfi_fixup jedec_fixup_table[] = { @@ -842,6 +856,18 @@ static int __xipram chip_ready(struct map_info *map, struct flchip *chip, return map_word_equal(map, t, *expected); } +static int __xipram chip_good(struct map_info *map, struct flchip *chip, + unsigned long addr, map_word *expected) +{ + struct cfi_private *cfi = map->fldrv_priv; + map_word *datum = expected; + + if (cfi->quirks & CFI_QUIRK_DQ_TRUE_DATA) + datum = NULL; + + return chip_ready(map, chip, addr, datum); +} + static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr, int mode) { DECLARE_WAITQUEUE(wait, current); @@ -1657,11 +1683,11 @@ static int __xipram do_write_oneword_once(struct map_info *map, } /* - * We check "time_after" and "!chip_ready" before checking - * "chip_ready" to avoid the failure due to scheduling. + * We check "time_after" and "!chip_good" before checking + * "chip_good" to avoid the failure due to scheduling. */ if (time_after(jiffies, timeo) && - !chip_ready(map, chip, adr, &datum)) { + !chip_good(map, chip, adr, &datum)) { xip_enable(map, chip, adr); printk(KERN_WARNING "MTD %s(): software timeout\n", __func__); xip_disable(map, chip, adr); @@ -1669,7 +1695,7 @@ static int __xipram do_write_oneword_once(struct map_info *map, break; } - if (chip_ready(map, chip, adr, &datum)) { + if (chip_good(map, chip, adr, &datum)) { if (cfi_check_err_status(map, chip, adr)) ret = -EIO; break; @@ -1937,18 +1963,18 @@ static int __xipram do_write_buffer_wait(struct map_info *map, } /* - * We check "time_after" and "!chip_ready" before checking - * "chip_ready" to avoid the failure due to scheduling. 
+ * We check "time_after" and "!chip_good" before checking + * "chip_good" to avoid the failure due to scheduling. */ if (time_after(jiffies, timeo) && - !chip_ready(map, chip, adr, &datum)) { + !chip_good(map, chip, adr, &datum)) { pr_err("MTD %s(): software timeout, address:0x%.8lx.\n", __func__, adr); ret = -EIO; break; } - if (chip_ready(map, chip, adr, &datum)) { + if (chip_good(map, chip, adr, &datum)) { if (cfi_check_err_status(map, chip, adr)) ret = -EIO; break; diff --git a/include/linux/mtd/cfi.h b/include/linux/mtd/cfi.h index fd1ecb821106..d88bb56c18e2 100644 --- a/include/linux/mtd/cfi.h +++ b/include/linux/mtd/cfi.h @@ -286,6 +286,7 @@ struct cfi_private { map_word sector_erase_cmd; unsigned long chipshift; /* Because they're of the same type */ const char *im_name; /* inter_module name for cmdset_setup */ + unsigned long quirks; struct flchip chips[]; /* per-chip data structure for each chip */ }; From 4005f6a25c0524f7ea7761b6989898fdccd7140f Mon Sep 17 00:00:00 2001 From: Nicolas Dufresne Date: Wed, 6 Apr 2022 21:23:42 +0100 Subject: [PATCH 0699/1842] media: coda: Fix reported H264 profile commit 7110c08ea71953a7fc342f0b76046f72442cf26c upstream. The CODA960 manual states that ASO/FMO features of baseline are not supported, so for this reason this driver should only report constrained baseline support. This fixes negotiation issue with constrained baseline content on GStreamer 1.17.1. ASO/FMO features are unsupported for the encoder and untested for the decoder because there is currently no userspace support. Neither GStreamer parsers nor FFMPEG parsers support ASO/FMO. Cc: stable@vger.kernel.org Fixes: 42a68012e67c2 ("media: coda: add read-only h.264 decoder profile/level controls") Signed-off-by: Nicolas Dufresne Signed-off-by: Ezequiel Garcia Tested-by: Pascal Speck Signed-off-by: Fabio Estevam Reviewed-by: Philipp Zabel Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Greg Kroah-Hartman --- drivers/media/platform/coda/coda-common.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c index 99f6d22e0c3c..41ce1163e7f1 100644 --- a/drivers/media/platform/coda/coda-common.c +++ b/drivers/media/platform/coda/coda-common.c @@ -2343,8 +2343,8 @@ static void coda_encode_ctrls(struct coda_ctx *ctx) V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET, -12, 12, 1, 0); v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops, V4L2_CID_MPEG_VIDEO_H264_PROFILE, - V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE, 0x0, - V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE); + V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE, 0x0, + V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE); if (ctx->dev->devtype->product == CODA_HX4 || ctx->dev->devtype->product == CODA_7541) { v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops, @@ -2425,7 +2425,7 @@ static void coda_decode_ctrls(struct coda_ctx *ctx) ctx->h264_profile_ctrl = v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops, V4L2_CID_MPEG_VIDEO_H264_PROFILE, V4L2_MPEG_VIDEO_H264_PROFILE_HIGH, - ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) | + ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE) | (1 << V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) | (1 << V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)), V4L2_MPEG_VIDEO_H264_PROFILE_HIGH); From 577a959cb0bd75bf1585de5f7f283e974254ba31 Mon Sep 17 00:00:00 2001 From: Nicolas Dufresne Date: Wed, 6 Apr 2022 21:23:43 +0100 Subject: [PATCH 0700/1842] media: coda: Add more H264 levels for CODA960 commit 
eb2fd187abc878a2dfad46902becb74963473c7d upstream. Add H264 level 1.0, 4.1, 4.2 to the list of supported formats. While the hardware does not fully support these levels, it does support most of them. The constraints on frame size and pixel formats already cover the limitation. This fixes negotiation of level on GStreamer 1.17.1. Cc: stable@vger.kernel.org Fixes: 42a68012e67c2 ("media: coda: add read-only h.264 decoder profile/level controls") Suggested-by: Philipp Zabel Signed-off-by: Nicolas Dufresne Signed-off-by: Ezequiel Garcia Signed-off-by: Fabio Estevam Reviewed-by: Philipp Zabel Signed-off-by: Hans Verkuil Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Greg Kroah-Hartman --- drivers/media/platform/coda/coda-common.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c index 41ce1163e7f1..14d4830d5db5 100644 --- a/drivers/media/platform/coda/coda-common.c +++ b/drivers/media/platform/coda/coda-common.c @@ -2358,12 +2358,15 @@ static void coda_encode_ctrls(struct coda_ctx *ctx) if (ctx->dev->devtype->product == CODA_960) { v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops, V4L2_CID_MPEG_VIDEO_H264_LEVEL, - V4L2_MPEG_VIDEO_H264_LEVEL_4_0, - ~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) | + V4L2_MPEG_VIDEO_H264_LEVEL_4_2, + ~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_1_0) | + (1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) | (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_0) | (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_1) | (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_2) | - (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0)), + (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0) | + (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_1) | + (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_2)), V4L2_MPEG_VIDEO_H264_LEVEL_4_0); } v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops, From b67adaec347ddb759d34478da9bf56168798350d Mon Sep 17 00:00:00 2001 From: GUO Zihua Date: Thu, 7 Apr 2022 10:16:19 +0800 Subject: [PATCH 0701/1842] ima: remove the IMA_TEMPLATE Kconfig option commit 891163adf180bc369b2f11c9dfce6d2758d2a5bd upstream. The original 'ima' measurement list template contains a hash, defined as 20 bytes, and a null terminated pathname, limited to 255 characters. Other measurement list templates permit both larger hashes and longer pathnames. When the "ima" template is configured as the default, a new measurement list template (ima_template=) must be specified before specifying a larger hash algorithm (ima_hash=) on the boot command line. To avoid this boot command line ordering issue, remove the legacy "ima" template configuration option, allowing it to still be specified on the boot command line. The root cause of this issue is that during the processing of ima_hash, we would try to check whether the hash algorithm is compatible with the template. If the template is not set at the moment we do the check, we check the algorithm against the configured default template. If the default template is "ima", then we reject any hash algorithm other than sha1 and md5. For example, if the compiled default template is "ima", and the default algorithm is sha1 (which is the current default). In the cmdline, we put in "ima_hash=sha256 ima_template=ima-ng". The expected behavior would be that ima starts with ima-ng as the template and sha256 as the hash algorithm. However, during the processing of "ima_hash=", "ima_template=" has not been processed yet, and hash_setup would check the configured hash algorithm against the compiled default: ima, and reject sha256. 
So at the end, the hash algorithm that is actually used will be sha1. With template "ima" removed from the configured default, we ensure that the default tempalte would at least be "ima-ng" which allows for basically any hash algorithm. This change would not break the algorithm compatibility checks for IMA. Fixes: 4286587dccd43 ("ima: add Kconfig default measurement list template") Signed-off-by: GUO Zihua Cc: Signed-off-by: Mimi Zohar Signed-off-by: Greg Kroah-Hartman --- security/integrity/ima/Kconfig | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig index 9e72edb8d31a..755af0b29e75 100644 --- a/security/integrity/ima/Kconfig +++ b/security/integrity/ima/Kconfig @@ -69,10 +69,9 @@ choice hash, defined as 20 bytes, and a null terminated pathname, limited to 255 characters. The 'ima-ng' measurement list template permits both larger hash digests and longer - pathnames. + pathnames. The configured default template can be replaced + by specifying "ima_template=" on the boot command line. - config IMA_TEMPLATE - bool "ima" config IMA_NG_TEMPLATE bool "ima-ng (default)" config IMA_SIG_TEMPLATE @@ -82,7 +81,6 @@ endchoice config IMA_DEFAULT_TEMPLATE string depends on IMA - default "ima" if IMA_TEMPLATE default "ima-ng" if IMA_NG_TEMPLATE default "ima-sig" if IMA_SIG_TEMPLATE @@ -102,19 +100,19 @@ choice config IMA_DEFAULT_HASH_SHA256 bool "SHA256" - depends on CRYPTO_SHA256=y && !IMA_TEMPLATE + depends on CRYPTO_SHA256=y config IMA_DEFAULT_HASH_SHA512 bool "SHA512" - depends on CRYPTO_SHA512=y && !IMA_TEMPLATE + depends on CRYPTO_SHA512=y config IMA_DEFAULT_HASH_WP512 bool "WP512" - depends on CRYPTO_WP512=y && !IMA_TEMPLATE + depends on CRYPTO_WP512=y config IMA_DEFAULT_HASH_SM3 bool "SM3" - depends on CRYPTO_SM3=y && !IMA_TEMPLATE + depends on CRYPTO_SM3=y endchoice config IMA_DEFAULT_HASH From 09408080adb1c704c66685f3ec4391a4b9490c41 Mon Sep 17 00:00:00 2001 From: Sean Christopherson Date: Wed, 2 Feb 2022 00:49:41 +0000 Subject: [PATCH 0702/1842] Kconfig: Add option for asm goto w/ tied outputs to workaround clang-13 bug commit 1aa0e8b144b6474c4914439d232d15bfe883636b upstream. Add a config option to guard (future) usage of asm_volatile_goto() that includes "tied outputs", i.e. "+" constraints that specify both an input and output parameter. clang-13 has a bug[1] that causes compilation of such inline asm to fail, and KVM wants to use a "+m" constraint to implement a uaccess form of CMPXCHG[2]. E.g. the test code fails with :1:29: error: invalid operand in inline asm: '.long (${1:l}) - .' int foo(int *x) { asm goto (".long (%l[bar]) - .\n": "+m"(*x) ::: bar); return *x; bar: return 0; } ^ :1:29: error: unknown token in expression :1:9: note: instantiated into assembly here .long () - . ^ 2 errors generated. on clang-13, but passes on gcc (with appropriate asm goto support). The bug is fixed in clang-14, but won't be backported to clang-13 as the changes are too invasive/risky. gcc also had a similar bug[3], fixed in gcc-11, where gcc failed to account for its behavior of assigning two numbers to tied outputs (one for input, one for output) when evaluating symbolic references. 
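For illustration, and not taken from the patch: the difference between a plain asm-goto output and a "tied" output is only the constraint string. The first fragment below mirrors the existing CC_HAS_ASM_GOTO_OUTPUT probe; the second uses a "+" constraint so the operand is both input and output, which is what the new symbol guards. Both assume a compiler with asm-goto-output support, and per the above the tied form additionally needs gcc 11+ or clang 14+; the function and label names are arbitrary.

/* plain output: "=r" names an output only */
static inline int plain_output(int x)
{
	asm goto("" : "=r" (x) : : : out);
	return x;
out:
	return 0;
}

/* tied output: "+m" makes *x both an input and an output of the asm */
static inline int tied_output(int *x)
{
	asm goto("" : "+m" (*x) : : : out);
	return *x;
out:
	return 0;
}

Code using the second form would be expected to sit under #ifdef CONFIG_CC_HAS_ASM_GOTO_TIED_OUTPUT, the symbol added below.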
[1] https://github.com/ClangBuiltLinux/linux/issues/1512 [2] https://lore.kernel.org/all/YfMruK8%2F1izZ2VHS@google.com [3] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98096 Suggested-by: Nick Desaulniers Reviewed-by: Nick Desaulniers Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson Message-Id: <20220202004945.2540433-2-seanjc@google.com> Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman --- init/Kconfig | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/init/Kconfig b/init/Kconfig index 13685bffef37..22912631d79b 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -68,6 +68,11 @@ config CC_HAS_ASM_GOTO_OUTPUT depends on CC_HAS_ASM_GOTO def_bool $(success,echo 'int foo(int x) { asm goto ("": "=r"(x) ::: bar); return x; bar: return 0; }' | $(CC) -x c - -c -o /dev/null) +config CC_HAS_ASM_GOTO_TIED_OUTPUT + depends on CC_HAS_ASM_GOTO_OUTPUT + # Detect buggy gcc and clang, fixed in gcc-11 clang-14. + def_bool $(success,echo 'int foo(int *x) { asm goto (".long (%l[bar]) - .\n": "+m"(*x) ::: bar); return *x; bar: return 0; }' | $CC -x c - -c -o /dev/null) + config TOOLS_SUPPORT_RELR def_bool $(success,env "CC=$(CC)" "LD=$(LD)" "NM=$(NM)" "OBJCOPY=$(OBJCOPY)" $(srctree)/scripts/tools-support-relr.sh) From 31dca00d0cc9f4133320d72eb7e3720badc6d6e6 Mon Sep 17 00:00:00 2001 From: Dennis Dalessandro Date: Fri, 20 May 2022 14:37:12 -0400 Subject: [PATCH 0703/1842] RDMA/hfi1: Fix potential integer multiplication overflow errors commit f93e91a0372c922c20d5bee260b0f43b4b8a1bee upstream. When multiplying of different types, an overflow is possible even when storing the result in a larger type. This is because the conversion is done after the multiplication. So arithmetic overflow and thus in incorrect value is possible. Correct an instance of this in the inter packet delay calculation. Fix by ensuring one of the operands is u64 which will promote the other to u64 as well ensuring no overflow. Cc: stable@vger.kernel.org Fixes: 7724105686e7 ("IB/hfi1: add driver files") Link: https://lore.kernel.org/r/20220520183712.48973.29855.stgit@awfm-01.cornelisnetworks.com Reviewed-by: Mike Marciniszyn Signed-off-by: Dennis Dalessandro Signed-off-by: Jason Gunthorpe Signed-off-by: Greg Kroah-Hartman --- drivers/infiniband/hw/hfi1/init.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c index fa2cd76747ff..837293aac68f 100644 --- a/drivers/infiniband/hw/hfi1/init.c +++ b/drivers/infiniband/hw/hfi1/init.c @@ -530,7 +530,7 @@ void set_link_ipg(struct hfi1_pportdata *ppd) u16 shift, mult; u64 src; u32 current_egress_rate; /* Mbits /sec */ - u32 max_pkt_time; + u64 max_pkt_time; /* * max_pkt_time is the maximum packet egress time in units * of the fabric clock period 1/(805 MHz). From df7f0f8be3019816ce7e3085195a0e68d028c4e2 Mon Sep 17 00:00:00 2001 From: Guo Ren Date: Wed, 6 Apr 2022 22:28:43 +0800 Subject: [PATCH 0704/1842] csky: patch_text: Fixup last cpu should be master commit 8c4d16471e2babe9bdfe41d6ef724526629696cb upstream. These patch_text implementations are using stop_machine_cpuslocked infrastructure with atomic cpu_count. The original idea: When the master CPU patch_text, the others should wait for it. But current implementation is using the first CPU as master, which couldn't guarantee the remaining CPUs are waiting. This patch changes the last CPU as the master to solve the potential risk. 
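To make the ordering concrete, here is a sketch of the pattern rather than the csky source: every CPU entering the stop_machine() callback increments a shared counter, and only the CPU that brings it up to num_online_cpus(), which is necessarily the last to arrive, performs the patch, so every other CPU is already parked in the wait loop before any text is modified. patch_cpu_count, patch_text_cb and do_patch are illustrative stand-ins.

#include <linux/atomic.h>
#include <linux/cpumask.h>
#include <asm/processor.h>

static atomic_t patch_cpu_count = ATOMIC_INIT(0);

/* illustrative stand-in for the actual instruction write plus the dcache
 * write-back that makes it visible */
static void do_patch(void *data)
{
}

static int patch_text_cb(void *data)
{
	if (atomic_inc_return(&patch_cpu_count) == num_online_cpus()) {
		/* last CPU in: all others are already spinning below */
		do_patch(data);
		atomic_inc(&patch_cpu_count);	/* release the waiters */
	} else {
		while (atomic_read(&patch_cpu_count) <= num_online_cpus())
			cpu_relax();
	}

	return 0;
}

With the first-CPU-as-master scheme, the master could reach do_patch() while a late CPU was still executing arbitrary code; counting up to the number of online CPUs removes that window.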
Fixes: 33e53ae1ce41 ("csky: Add kprobes supported") Signed-off-by: Guo Ren Signed-off-by: Guo Ren Reviewed-by: Masami Hiramatsu Cc: Signed-off-by: Greg Kroah-Hartman --- arch/csky/kernel/probes/kprobes.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c index 589f090f48b9..556b9ba61ec0 100644 --- a/arch/csky/kernel/probes/kprobes.c +++ b/arch/csky/kernel/probes/kprobes.c @@ -28,7 +28,7 @@ static int __kprobes patch_text_cb(void *priv) struct csky_insn_patch *param = priv; unsigned int addr = (unsigned int)param->addr; - if (atomic_inc_return(¶m->cpu_count) == 1) { + if (atomic_inc_return(¶m->cpu_count) == num_online_cpus()) { *(u16 *) addr = cpu_to_le16(param->opcode); dcache_wb_range(addr, addr + 2); atomic_inc(¶m->cpu_count); From be7ae7cd1c2d2898644ad826c47961a73e51eea3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pali=20Roh=C3=A1r?= Date: Mon, 25 Apr 2022 13:37:05 +0200 Subject: [PATCH 0705/1842] irqchip/armada-370-xp: Do not touch Performance Counter Overflow on A375, A38x, A39x MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit a3d66a76348daf559873f19afc912a2a7c2ccdaf upstream. Register ARMADA_370_XP_INT_FABRIC_MASK_OFFS is Armada 370 and XP specific and on new Armada platforms it has different meaning. It does not configure Performance Counter Overflow interrupt masking. So do not touch this register on non-A370/XP platforms (A375, A38x and A39x). Signed-off-by: Pali Rohár Cc: stable@vger.kernel.org Fixes: 28da06dfd9e4 ("irqchip: armada-370-xp: Enable the PMU interrupts") Reviewed-by: Andrew Lunn Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20220425113706.29310-1-pali@kernel.org Signed-off-by: Greg Kroah-Hartman --- drivers/irqchip/irq-armada-370-xp.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c index 84f2741aaac6..c76fb70c70bb 100644 --- a/drivers/irqchip/irq-armada-370-xp.c +++ b/drivers/irqchip/irq-armada-370-xp.c @@ -308,7 +308,16 @@ static inline int armada_370_xp_msi_init(struct device_node *node, static void armada_xp_mpic_perf_init(void) { - unsigned long cpuid = cpu_logical_map(smp_processor_id()); + unsigned long cpuid; + + /* + * This Performance Counter Overflow interrupt is specific for + * Armada 370 and XP. It is not available on Armada 375, 38x and 39x. + */ + if (!of_machine_is_compatible("marvell,armada-370-xp")) + return; + + cpuid = cpu_logical_map(smp_processor_id()); /* Enable Performance Counter Overflow interrupts */ writel(ARMADA_370_XP_INT_CAUSE_PERF(cpuid), From e87fedad4a004a809df62475744d7c63100e0b2c Mon Sep 17 00:00:00 2001 From: Max Filippov Date: Tue, 26 Apr 2022 09:01:18 -0700 Subject: [PATCH 0706/1842] irqchip: irq-xtensa-mx: fix initial IRQ affinity commit a255ee29252066d621df5d6b420bf534c6ba5bc0 upstream. When irq-xtensa-mx chip is used in non-SMP configuration its irq_set_affinity callback is not called leaving IRQ affinity set empty. As a result IRQ delivery does not work in that configuration. Initialize IRQ affinity of the xtensa MX interrupt distributor to CPU 0 for all external IRQ lines. 
Cc: stable@vger.kernel.org Signed-off-by: Max Filippov Signed-off-by: Greg Kroah-Hartman --- drivers/irqchip/irq-xtensa-mx.c | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-) diff --git a/drivers/irqchip/irq-xtensa-mx.c b/drivers/irqchip/irq-xtensa-mx.c index 27933338f7b3..8c581c985aa7 100644 --- a/drivers/irqchip/irq-xtensa-mx.c +++ b/drivers/irqchip/irq-xtensa-mx.c @@ -151,14 +151,25 @@ static struct irq_chip xtensa_mx_irq_chip = { .irq_set_affinity = xtensa_mx_irq_set_affinity, }; +static void __init xtensa_mx_init_common(struct irq_domain *root_domain) +{ + unsigned int i; + + irq_set_default_host(root_domain); + secondary_init_irq(); + + /* Initialize default IRQ routing to CPU 0 */ + for (i = 0; i < XCHAL_NUM_EXTINTERRUPTS; ++i) + set_er(1, MIROUT(i)); +} + int __init xtensa_mx_init_legacy(struct device_node *interrupt_parent) { struct irq_domain *root_domain = irq_domain_add_legacy(NULL, NR_IRQS - 1, 1, 0, &xtensa_mx_irq_domain_ops, &xtensa_mx_irq_chip); - irq_set_default_host(root_domain); - secondary_init_irq(); + xtensa_mx_init_common(root_domain); return 0; } @@ -168,8 +179,7 @@ static int __init xtensa_mx_init(struct device_node *np, struct irq_domain *root_domain = irq_domain_add_linear(np, NR_IRQS, &xtensa_mx_irq_domain_ops, &xtensa_mx_irq_chip); - irq_set_default_host(root_domain); - secondary_init_irq(); + xtensa_mx_init_common(root_domain); return 0; } IRQCHIP_DECLARE(xtensa_mx_irq_chip, "cdns,xtensa-mx", xtensa_mx_init); From 22741dd048ef6a96610868c3de4aaf777e4e5339 Mon Sep 17 00:00:00 2001 From: Dimitri John Ledkov Date: Thu, 14 Apr 2022 13:50:03 +0100 Subject: [PATCH 0707/1842] cfg80211: declare MODULE_FIRMWARE for regulatory.db commit 7bc7981eeebe1b8e603ad2ffc5e84f4df76920dd upstream. Add MODULE_FIRMWARE declarations for regulatory.db and regulatory.db.p7s such that userspace tooling can discover and include these files. Cc: stable@vger.kernel.org Signed-off-by: Dimitri John Ledkov Link: https://lore.kernel.org/r/20220414125004.267819-1-dimitri.ledkov@canonical.com Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman --- net/wireless/reg.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/net/wireless/reg.c b/net/wireless/reg.c index 6b3386e1d93a..fd848609e656 100644 --- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -787,6 +787,8 @@ static int __init load_builtin_regdb_keys(void) return 0; } +MODULE_FIRMWARE("regulatory.db.p7s"); + static bool regdb_has_valid_signature(const u8 *data, unsigned int size) { const struct firmware *sig; @@ -1058,6 +1060,8 @@ static void regdb_fw_cb(const struct firmware *fw, void *context) release_firmware(fw); } +MODULE_FIRMWARE("regulatory.db"); + static int query_regdb_file(const char *alpha2) { ASSERT_RTNL(); From 873069e393c5e56a95a98b799e69184a85fa6cf6 Mon Sep 17 00:00:00 2001 From: Felix Fietkau Date: Wed, 20 Apr 2022 12:49:07 +0200 Subject: [PATCH 0708/1842] mac80211: upgrade passive scan to active scan on DFS channels after beacon rx commit b041b7b9de6e1d4362de855ab90f9d03ef323edd upstream. In client mode, we can't connect to hidden SSID APs or SSIDs not advertised in beacons on DFS channels, since we're forced to passive scan. Fix this by sending out a probe request immediately after the first beacon, if active scan was requested by the user. 
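A simplified sketch of that flow, with the mac80211 plumbing stripped away and hypothetical names (scan_state, on_beacon_rx, may_send_probe): the scan starts passively on such channels, the receive path records that a beacon proved a transmitter is present, and the scan worker is then allowed to send the probe request it originally wanted to send.

#include <stdbool.h>

struct scan_state {
	bool forced_passive;	/* radar / no-IR channel, no transmitting yet */
	bool have_ssids;	/* the user actually asked for an active scan */
	bool beacon_seen;	/* set from the receive path */
};

/* receive path: a beacon (or probe response) shows the channel is in use,
 * so an active probe is now permissible */
void on_beacon_rx(struct scan_state *s)
{
	if (s->forced_passive && s->have_ssids)
		s->beacon_seen = true;	/* and the scan work is rescheduled */
}

/* scan worker: decide whether the probe request may go out */
bool may_send_probe(const struct scan_state *s)
{
	return !s->forced_passive || s->beacon_seen;
}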
Cc: stable@vger.kernel.org Reported-by: Catrinel Catrinescu Signed-off-by: Felix Fietkau Link: https://lore.kernel.org/r/20220420104907.36275-1-nbd@nbd.name Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman --- net/mac80211/ieee80211_i.h | 5 +++++ net/mac80211/scan.c | 20 ++++++++++++++++++++ 2 files changed, 25 insertions(+) diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index fe8f586886b4..bcc94cc1b620 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -1103,6 +1103,9 @@ struct tpt_led_trigger { * a scan complete for an aborted scan. * @SCAN_HW_CANCELLED: Set for our scan work function when the scan is being * cancelled. + * @SCAN_BEACON_WAIT: Set whenever we're passive scanning because of radar/no-IR + * and could send a probe request after receiving a beacon. + * @SCAN_BEACON_DONE: Beacon received, we can now send a probe request */ enum { SCAN_SW_SCANNING, @@ -1111,6 +1114,8 @@ enum { SCAN_COMPLETED, SCAN_ABORTED, SCAN_HW_CANCELLED, + SCAN_BEACON_WAIT, + SCAN_BEACON_DONE, }; /** diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c index 6b50cb5e0e3c..887f945bb12d 100644 --- a/net/mac80211/scan.c +++ b/net/mac80211/scan.c @@ -277,6 +277,16 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb) if (likely(!sdata1 && !sdata2)) return; + if (test_and_clear_bit(SCAN_BEACON_WAIT, &local->scanning)) { + /* + * we were passive scanning because of radar/no-IR, but + * the beacon/proberesp rx gives us an opportunity to upgrade + * to active scan + */ + set_bit(SCAN_BEACON_DONE, &local->scanning); + ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0); + } + if (ieee80211_is_probe_resp(mgmt->frame_control)) { struct cfg80211_scan_request *scan_req; struct cfg80211_sched_scan_request *sched_scan_req; @@ -783,6 +793,8 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata, IEEE80211_CHAN_RADAR)) || !req->n_ssids) { next_delay = IEEE80211_PASSIVE_CHANNEL_TIME; + if (req->n_ssids) + set_bit(SCAN_BEACON_WAIT, &local->scanning); } else { ieee80211_scan_state_send_probe(local, &next_delay); next_delay = IEEE80211_CHANNEL_TIME; @@ -994,6 +1006,8 @@ static void ieee80211_scan_state_set_channel(struct ieee80211_local *local, !scan_req->n_ssids) { *next_delay = IEEE80211_PASSIVE_CHANNEL_TIME; local->next_scan_state = SCAN_DECISION; + if (scan_req->n_ssids) + set_bit(SCAN_BEACON_WAIT, &local->scanning); return; } @@ -1086,6 +1100,8 @@ void ieee80211_scan_work(struct work_struct *work) goto out; } + clear_bit(SCAN_BEACON_WAIT, &local->scanning); + /* * as long as no delay is required advance immediately * without scheduling a new work @@ -1096,6 +1112,10 @@ void ieee80211_scan_work(struct work_struct *work) goto out_complete; } + if (test_and_clear_bit(SCAN_BEACON_DONE, &local->scanning) && + local->next_scan_state == SCAN_DECISION) + local->next_scan_state = SCAN_SEND_PROBE; + switch (local->next_scan_state) { case SCAN_DECISION: /* if no more bands/channels left, complete scan */ From 7f8fd5dd43cd7306bb8fc519c13bcf1df7de3783 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Fri, 20 May 2022 19:45:36 +0200 Subject: [PATCH 0709/1842] um: chan_user: Fix winch_tramp() return value commit 57ae0b67b747031bc41fb44643aa5344ab58607e upstream. The previous fix here was only partially correct, it did result in returning a proper error value in case of error, but it also clobbered the pid that we need to return from this function (not just zero for success). 
As a result, it returned 0 here, but later this is treated as a pid and used to kill the process, but since it's now 0 we kill(0, SIGKILL), which makes UML kill itself rather than just the helper thread. Fix that and make it more obvious by using a separate variable for the pid. Fixes: ccf1236ecac4 ("um: fix error return code in winch_tramp()") Reported-and-tested-by: Nathan Chancellor Signed-off-by: Johannes Berg Cc: stable@vger.kernel.org Signed-off-by: Richard Weinberger Signed-off-by: Greg Kroah-Hartman --- arch/um/drivers/chan_user.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/arch/um/drivers/chan_user.c b/arch/um/drivers/chan_user.c index 6040817c036f..25727ed648b7 100644 --- a/arch/um/drivers/chan_user.c +++ b/arch/um/drivers/chan_user.c @@ -220,7 +220,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out, unsigned long *stack_out) { struct winch_data data; - int fds[2], n, err; + int fds[2], n, err, pid; char c; err = os_pipe(fds, 1, 1); @@ -238,8 +238,9 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out, * problem with /dev/net/tun, which if held open by this * thread, prevents the TUN/TAP device from being reused. */ - err = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out); - if (err < 0) { + pid = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out); + if (pid < 0) { + err = pid; printk(UM_KERN_ERR "fork of winch_thread failed - errno = %d\n", -err); goto out_close; @@ -263,7 +264,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out, goto out_close; } - return err; + return pid; out_close: close(fds[1]); From cf0dabc37446c5ee538ae7b4c467ab0e53fa5463 Mon Sep 17 00:00:00 2001 From: Vincent Whitchurch Date: Mon, 23 May 2022 16:04:03 +0200 Subject: [PATCH 0710/1842] um: Fix out-of-bounds read in LDT setup commit 2a4a62a14be1947fa945c5c11ebf67326381a568 upstream. syscall_stub_data() expects the data_count parameter to be the number of longs, not bytes. 
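The sizes visible in the report below make the mismatch easy to check by hand; a small standalone example, in which struct ldt_desc_like is a hypothetical 16-byte stand-in for the real descriptor and an 8-byte long is assumed (a 64-bit host, as in the report):

#include <stdio.h>

struct ldt_desc_like {
	unsigned int a, b, c, d;	/* 16 bytes, like the 'desc' object in the report */
};

int main(void)
{
	/* old argument: the byte size rounded up to a multiple of sizeof(long) */
	size_t old_arg = (sizeof(struct ldt_desc_like) + sizeof(long) - 1) &
			 ~(sizeof(long) - 1);				/* 16 */
	/* new argument: the number of longs the descriptor occupies */
	size_t new_arg = sizeof(struct ldt_desc_like) / sizeof(long);	/* 2 */

	/* 16 misread as a count of longs means 16 * 8 = 128 bytes are copied,
	 * matching the "Read of size 128" below */
	printf("old=%zu new=%zu over-read=%zu bytes\n",
	       old_arg, new_arg, old_arg * sizeof(long));
	return 0;
}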
================================================================== BUG: KASAN: stack-out-of-bounds in syscall_stub_data+0x70/0xe0 Read of size 128 at addr 000000006411f6f0 by task swapper/1 CPU: 0 PID: 1 Comm: swapper Not tainted 5.18.0+ #18 Call Trace: show_stack.cold+0x166/0x2a7 __dump_stack+0x3a/0x43 dump_stack_lvl+0x1f/0x27 print_report.cold+0xdb/0xf81 kasan_report+0x119/0x1f0 kasan_check_range+0x3a3/0x440 memcpy+0x52/0x140 syscall_stub_data+0x70/0xe0 write_ldt_entry+0xac/0x190 init_new_ldt+0x515/0x960 init_new_context+0x2c4/0x4d0 mm_init.constprop.0+0x5ed/0x760 mm_alloc+0x118/0x170 0x60033f48 do_one_initcall+0x1d7/0x860 0x60003e7b kernel_init+0x6e/0x3d4 new_thread_handler+0x1e7/0x2c0 The buggy address belongs to stack of task swapper/1 and is located at offset 64 in frame: init_new_ldt+0x0/0x960 This frame has 2 objects: [32, 40) 'addr' [64, 80) 'desc' ================================================================== Fixes: 858259cf7d1c443c83 ("uml: maintain own LDT entries") Signed-off-by: Vincent Whitchurch Cc: stable@vger.kernel.org Signed-off-by: Richard Weinberger Signed-off-by: Greg Kroah-Hartman --- arch/x86/um/ldt.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/x86/um/ldt.c b/arch/x86/um/ldt.c index 3ee234b6234d..255a44dd415a 100644 --- a/arch/x86/um/ldt.c +++ b/arch/x86/um/ldt.c @@ -23,9 +23,11 @@ static long write_ldt_entry(struct mm_id *mm_idp, int func, { long res; void *stub_addr; + + BUILD_BUG_ON(sizeof(*desc) % sizeof(long)); + res = syscall_stub_data(mm_idp, (unsigned long *)desc, - (sizeof(*desc) + sizeof(long) - 1) & - ~(sizeof(long) - 1), + sizeof(*desc) / sizeof(long), addr, &stub_addr); if (!res) { unsigned long args[] = { func, From c26ccbaeb8d8eddd6dc6e78ce2ff44f7a10083b9 Mon Sep 17 00:00:00 2001 From: "Naveen N. Rao" Date: Thu, 19 May 2022 14:42:37 +0530 Subject: [PATCH 0711/1842] kexec_file: drop weak attribute from arch_kexec_apply_relocations[_add] commit 3e35142ef99fe6b4fe5d834ad43ee13cca10a2dc upstream. Since commit d1bcae833b32f1 ("ELF: Don't generate unused section symbols") [1], binutils (v2.36+) started dropping section symbols that it thought were unused. This isn't an issue in general, but with kexec_file.c, gcc is placing kexec_arch_apply_relocations[_add] into a separate .text.unlikely section and the section symbol ".text.unlikely" is being dropped. Due to this, recordmcount is unable to find a non-weak symbol in .text.unlikely to generate a relocation record against. Address this by dropping the weak attribute from these functions. Instead, follow the existing pattern of having architectures #define the name of the function they want to override in their headers. [1] https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=d1bcae833b32f1 [akpm@linux-foundation.org: arch/s390/include/asm/kexec.h needs linux/module.h] Link: https://lkml.kernel.org/r/20220519091237.676736-1-naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Michael Ellerman Signed-off-by: Naveen N. Rao Cc: "Eric W. 
Biederman" Cc: Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- arch/s390/include/asm/kexec.h | 10 ++++++++ arch/x86/include/asm/kexec.h | 8 ++++++ include/linux/kexec.h | 46 +++++++++++++++++++++++++++++------ kernel/kexec_file.c | 34 -------------------------- 4 files changed, 56 insertions(+), 42 deletions(-) diff --git a/arch/s390/include/asm/kexec.h b/arch/s390/include/asm/kexec.h index 7f3c9ac34bd8..63098df81c9f 100644 --- a/arch/s390/include/asm/kexec.h +++ b/arch/s390/include/asm/kexec.h @@ -9,6 +9,8 @@ #ifndef _S390_KEXEC_H #define _S390_KEXEC_H +#include + #include #include #include @@ -83,4 +85,12 @@ struct kimage_arch { extern const struct kexec_file_ops s390_kexec_image_ops; extern const struct kexec_file_ops s390_kexec_elf_ops; +#ifdef CONFIG_KEXEC_FILE +struct purgatory_info; +int arch_kexec_apply_relocations_add(struct purgatory_info *pi, + Elf_Shdr *section, + const Elf_Shdr *relsec, + const Elf_Shdr *symtab); +#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add +#endif #endif /*_S390_KEXEC_H */ diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h index 6802c59e8252..8e80c2655ca9 100644 --- a/arch/x86/include/asm/kexec.h +++ b/arch/x86/include/asm/kexec.h @@ -191,6 +191,14 @@ extern int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, extern void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages); #define arch_kexec_pre_free_pages arch_kexec_pre_free_pages +#ifdef CONFIG_KEXEC_FILE +struct purgatory_info; +int arch_kexec_apply_relocations_add(struct purgatory_info *pi, + Elf_Shdr *section, + const Elf_Shdr *relsec, + const Elf_Shdr *symtab); +#define arch_kexec_apply_relocations_add arch_kexec_apply_relocations_add +#endif #endif typedef void crash_vmclear_fn(void); diff --git a/include/linux/kexec.h b/include/linux/kexec.h index 5f61389f5f36..037192c3a46f 100644 --- a/include/linux/kexec.h +++ b/include/linux/kexec.h @@ -187,14 +187,6 @@ void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name); int arch_kexec_kernel_image_probe(struct kimage *image, void *buf, unsigned long buf_len); void *arch_kexec_kernel_image_load(struct kimage *image); -int arch_kexec_apply_relocations_add(struct purgatory_info *pi, - Elf_Shdr *section, - const Elf_Shdr *relsec, - const Elf_Shdr *symtab); -int arch_kexec_apply_relocations(struct purgatory_info *pi, - Elf_Shdr *section, - const Elf_Shdr *relsec, - const Elf_Shdr *symtab); int arch_kimage_file_post_load_cleanup(struct kimage *image); #ifdef CONFIG_KEXEC_SIG int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf, @@ -223,6 +215,44 @@ extern int crash_exclude_mem_range(struct crash_mem *mem, unsigned long long mend); extern int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map, void **addr, unsigned long *sz); + +#ifndef arch_kexec_apply_relocations_add +/* + * arch_kexec_apply_relocations_add - apply relocations of type RELA + * @pi: Purgatory to be relocated. + * @section: Section relocations applying to. + * @relsec: Section containing RELAs. + * @symtab: Corresponding symtab. + * + * Return: 0 on success, negative errno on error. + */ +static inline int +arch_kexec_apply_relocations_add(struct purgatory_info *pi, Elf_Shdr *section, + const Elf_Shdr *relsec, const Elf_Shdr *symtab) +{ + pr_err("RELA relocation unsupported.\n"); + return -ENOEXEC; +} +#endif + +#ifndef arch_kexec_apply_relocations +/* + * arch_kexec_apply_relocations - apply relocations of type REL + * @pi: Purgatory to be relocated. 
+ * @section: Section relocations applying to. + * @relsec: Section containing RELs. + * @symtab: Corresponding symtab. + * + * Return: 0 on success, negative errno on error. + */ +static inline int +arch_kexec_apply_relocations(struct purgatory_info *pi, Elf_Shdr *section, + const Elf_Shdr *relsec, const Elf_Shdr *symtab) +{ + pr_err("REL relocation unsupported.\n"); + return -ENOEXEC; +} +#endif #endif /* CONFIG_KEXEC_FILE */ #ifdef CONFIG_KEXEC_ELF diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c index aea9104265f2..2e0f0b3fb9ab 100644 --- a/kernel/kexec_file.c +++ b/kernel/kexec_file.c @@ -108,40 +108,6 @@ int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf, } #endif -/* - * arch_kexec_apply_relocations_add - apply relocations of type RELA - * @pi: Purgatory to be relocated. - * @section: Section relocations applying to. - * @relsec: Section containing RELAs. - * @symtab: Corresponding symtab. - * - * Return: 0 on success, negative errno on error. - */ -int __weak -arch_kexec_apply_relocations_add(struct purgatory_info *pi, Elf_Shdr *section, - const Elf_Shdr *relsec, const Elf_Shdr *symtab) -{ - pr_err("RELA relocation unsupported.\n"); - return -ENOEXEC; -} - -/* - * arch_kexec_apply_relocations - apply relocations of type REL - * @pi: Purgatory to be relocated. - * @section: Section relocations applying to. - * @relsec: Section containing RELs. - * @symtab: Corresponding symtab. - * - * Return: 0 on success, negative errno on error. - */ -int __weak -arch_kexec_apply_relocations(struct purgatory_info *pi, Elf_Shdr *section, - const Elf_Shdr *relsec, const Elf_Shdr *symtab) -{ - pr_err("REL relocation unsupported.\n"); - return -ENOEXEC; -} - /* * Free up memory used by kernel, initrd, and command line. This is temporary * memory allocation which is not needed any more after these buffers have From 82c888e51c2176a06f8b4541cf748ee81aac6e7e Mon Sep 17 00:00:00 2001 From: Song Liu Date: Tue, 24 May 2022 10:08:39 -0700 Subject: [PATCH 0712/1842] ftrace: Clean up hash direct_functions on register failures commit 7d54c15cb89a29a5f59e5ffc9ee62e6591769ef1 upstream. We see the following GPF when register_ftrace_direct fails: [ ] general protection fault, probably for non-canonical address \ 0x200000000000010: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI [...] [ ] RIP: 0010:ftrace_find_rec_direct+0x53/0x70 [ ] Code: 48 c1 e0 03 48 03 42 08 48 8b 10 31 c0 48 85 d2 74 [...] [ ] RSP: 0018:ffffc9000138bc10 EFLAGS: 00010206 [ ] RAX: 0000000000000000 RBX: ffffffff813e0df0 RCX: 000000000000003b [ ] RDX: 0200000000000000 RSI: 000000000000000c RDI: ffffffff813e0df0 [ ] RBP: ffffffffa00a3000 R08: ffffffff81180ce0 R09: 0000000000000001 [ ] R10: ffffc9000138bc18 R11: 0000000000000001 R12: ffffffff813e0df0 [ ] R13: ffffffff813e0df0 R14: ffff888171b56400 R15: 0000000000000000 [ ] FS: 00007fa9420c7780(0000) GS:ffff888ff6a00000(0000) knlGS:000000000 [ ] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ ] CR2: 000000000770d000 CR3: 0000000107d50003 CR4: 0000000000370ee0 [ ] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ ] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 [ ] Call Trace: [ ] [ ] register_ftrace_direct+0x54/0x290 [ ] ? render_sigset_t+0xa0/0xa0 [ ] bpf_trampoline_update+0x3f5/0x4a0 [ ] ? 0xffffffffa00a3000 [ ] bpf_trampoline_link_prog+0xa9/0x140 [ ] bpf_tracing_prog_attach+0x1dc/0x450 [ ] bpf_raw_tracepoint_open+0x9a/0x1e0 [ ] ? find_held_lock+0x2d/0x90 [ ] ? lock_release+0x150/0x430 [ ] __sys_bpf+0xbd6/0x2700 [ ] ? 
lock_is_held_type+0xd8/0x130 [ ] __x64_sys_bpf+0x1c/0x20 [ ] do_syscall_64+0x3a/0x80 [ ] entry_SYSCALL_64_after_hwframe+0x44/0xae [ ] RIP: 0033:0x7fa9421defa9 [ ] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 9 f8 [...] [ ] RSP: 002b:00007ffed743bd78 EFLAGS: 00000246 ORIG_RAX: 0000000000000141 [ ] RAX: ffffffffffffffda RBX: 00000000069d2480 RCX: 00007fa9421defa9 [ ] RDX: 0000000000000078 RSI: 00007ffed743bd80 RDI: 0000000000000011 [ ] RBP: 00007ffed743be00 R08: 0000000000bb7270 R09: 0000000000000000 [ ] R10: 00000000069da210 R11: 0000000000000246 R12: 0000000000000001 [ ] R13: 00007ffed743c4b0 R14: 00000000069d2480 R15: 0000000000000001 [ ] [ ] Modules linked in: klp_vm(OK) [ ] ---[ end trace 0000000000000000 ]--- One way to trigger this is: 1. load a livepatch that patches kernel function xxx; 2. run bpftrace -e 'kfunc:xxx {}', this will fail (expected for now); 3. repeat #2 => gpf. This is because the entry is added to direct_functions, but not removed. Fix this by remove the entry from direct_functions when register_ftrace_direct fails. Also remove the last trailing space from ftrace.c, so we don't have to worry about it anymore. Link: https://lkml.kernel.org/r/20220524170839.900849-1-song@kernel.org Cc: stable@vger.kernel.org Fixes: 763e34e74bb7 ("ftrace: Add register_ftrace_direct()") Signed-off-by: Song Liu Signed-off-by: Steven Rostedt (Google) Signed-off-by: Greg Kroah-Hartman --- kernel/trace/ftrace.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c index 4a5d35dc490b..a63713dcd05d 100644 --- a/kernel/trace/ftrace.c +++ b/kernel/trace/ftrace.c @@ -4427,7 +4427,7 @@ int ftrace_func_mapper_add_ip(struct ftrace_func_mapper *mapper, * @ip: The instruction pointer address to remove the data from * * Returns the data if it is found, otherwise NULL. - * Note, if the data pointer is used as the data itself, (see + * Note, if the data pointer is used as the data itself, (see * ftrace_func_mapper_find_ip(), then the return value may be meaningless, * if the data pointer was set to zero. */ @@ -5153,8 +5153,6 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr) __add_hash_entry(direct_functions, entry); ret = ftrace_set_filter_ip(&direct_ops, ip, 0, 0); - if (ret) - remove_hash_entry(direct_functions, entry); if (!ret && !(direct_ops.flags & FTRACE_OPS_FL_ENABLED)) { ret = register_ftrace_function(&direct_ops); @@ -5163,6 +5161,7 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr) } if (ret) { + remove_hash_entry(direct_functions, entry); kfree(entry); if (!direct->count) { list_del_rcu(&direct->next); From e9514bce2fb78edac76db738f5eca66dd1165c19 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Sun, 1 May 2022 21:28:23 +0800 Subject: [PATCH 0713/1842] iommu/msm: Fix an incorrect NULL check on list iterator commit 8b9ad480bd1dd25f4ff4854af5685fa334a2f57a upstream. The bug is here: if (!iommu || iommu->dev->of_node != spec->np) { The list iterator value 'iommu' will *always* be set and non-NULL by list_for_each_entry(), so it is incorrect to assume that the iterator value will be NULL if the list is empty or no element is found (in fact, it will point to a invalid structure object containing HEAD). To fix the bug, use a new value 'iter' as the list iterator, while use the old value 'iommu' as a dedicated variable to point to the found one, and remove the unneeded check for 'iommu->dev->of_node != spec->np' outside the loop. 
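The same shape recurs in several drivers later in this series: the list_for_each_entry() cursor is never NULL after the loop, so "found or not" has to be tracked in a dedicated pointer. A minimal sketch of that pattern, using an invented foo_dev lookup rather than the msm_iommu structures:

    #include <linux/list.h>

    struct foo_dev {
            int id;
            struct list_head node;
    };

    static LIST_HEAD(foo_list);

    /* Returns the matching entry, or NULL if the list holds no match. */
    static struct foo_dev *foo_find(int id)
    {
            struct foo_dev *found = NULL, *iter;

            list_for_each_entry(iter, &foo_list, node) {
                    if (iter->id == id) {
                            found = iter;
                            break;
                    }
            }

            /* After a full walk, 'iter' points at bogus memory around the
             * list head, so only the dedicated 'found' pointer is tested. */
            return found;
    }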
Cc: stable@vger.kernel.org Fixes: f78ebca8ff3d6 ("iommu/msm: Add support for generic master bindings") Signed-off-by: Xiaomeng Tong Link: https://lore.kernel.org/r/20220501132823.12714-1-xiam0nd.tong@gmail.com Signed-off-by: Joerg Roedel Signed-off-by: Greg Kroah-Hartman --- drivers/iommu/msm_iommu.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c index 3615cd6241c4..149f2e53b39d 100644 --- a/drivers/iommu/msm_iommu.c +++ b/drivers/iommu/msm_iommu.c @@ -616,16 +616,19 @@ static void insert_iommu_master(struct device *dev, static int qcom_iommu_of_xlate(struct device *dev, struct of_phandle_args *spec) { - struct msm_iommu_dev *iommu; + struct msm_iommu_dev *iommu = NULL, *iter; unsigned long flags; int ret = 0; spin_lock_irqsave(&msm_iommu_lock, flags); - list_for_each_entry(iommu, &qcom_iommu_devices, dev_node) - if (iommu->dev->of_node == spec->np) + list_for_each_entry(iter, &qcom_iommu_devices, dev_node) { + if (iter->dev->of_node == spec->np) { + iommu = iter; break; + } + } - if (!iommu || iommu->dev->of_node != spec->np) { + if (!iommu) { ret = -ENODEV; goto fail; } From 90ad54714e14933d4210ade416015622b834609b Mon Sep 17 00:00:00 2001 From: Christophe de Dinechin Date: Thu, 14 Apr 2022 17:08:54 +0200 Subject: [PATCH 0714/1842] nodemask.h: fix compilation error with GCC12 commit 37462a920392cb86541650a6f4121155f11f1199 upstream. With gcc version 12.0.1 20220401 (Red Hat 12.0.1-0), building with defconfig results in the following compilation error: | CC mm/swapfile.o | mm/swapfile.c: In function `setup_swap_info': | mm/swapfile.c:2291:47: error: array subscript -1 is below array bounds | of `struct plist_node[]' [-Werror=array-bounds] | 2291 | p->avail_lists[i].prio = 1; | | ~~~~~~~~~~~~~~^~~ | In file included from mm/swapfile.c:16: | ./include/linux/swap.h:292:27: note: while referencing `avail_lists' | 292 | struct plist_node avail_lists[]; /* | | ^~~~~~~~~~~ This is due to the compiler detecting that the mask in node_states[__state] could theoretically be zero, which would lead to first_node() returning -1 through find_first_bit. I believe that the warning/error is legitimate. I first tried adding a test to check that the node mask is not emtpy, since a similar test exists in the case where MAX_NUMNODES == 1. However, adding the if statement causes other warnings to appear in for_each_cpu_node_but, because it introduces a dangling else ambiguity. And unfortunately, GCC is not smart enough to detect that the added test makes the case where (node) == -1 impossible, so it still complains with the same message. This is why I settled on replacing that with a harmless, but relatively useless (node) >= 0 test. Based on the warning for the dangling else, I also decided to fix the case where MAX_NUMNODES == 1 by moving the condition inside the for loop. It will still only be tested once. This ensures that the meaning of an else following for_each_node_mask or derivatives would not silently have a different meaning depending on the configuration. Link: https://lkml.kernel.org/r/20220414150855.2407137-3-dinechin@redhat.com Signed-off-by: Christophe de Dinechin Signed-off-by: Christophe de Dinechin Reviewed-by: Andrew Morton Cc: Ben Segall Cc: "Michael S. 
Tsirkin" Cc: Steven Rostedt Cc: Ingo Molnar Cc: Mel Gorman Cc: Dietmar Eggemann Cc: Vincent Guittot Cc: Paolo Bonzini Cc: Daniel Bristot de Oliveira Cc: Jason Wang Cc: Zhen Lei Cc: Juri Lelli Cc: Peter Zijlstra Cc: Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- include/linux/nodemask.h | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h index ac398e143c9a..843678bfc364 100644 --- a/include/linux/nodemask.h +++ b/include/linux/nodemask.h @@ -375,14 +375,13 @@ static inline void __nodes_fold(nodemask_t *dstp, const nodemask_t *origp, } #if MAX_NUMNODES > 1 -#define for_each_node_mask(node, mask) \ - for ((node) = first_node(mask); \ - (node) < MAX_NUMNODES; \ - (node) = next_node((node), (mask))) +#define for_each_node_mask(node, mask) \ + for ((node) = first_node(mask); \ + (node >= 0) && (node) < MAX_NUMNODES; \ + (node) = next_node((node), (mask))) #else /* MAX_NUMNODES == 1 */ -#define for_each_node_mask(node, mask) \ - if (!nodes_empty(mask)) \ - for ((node) = 0; (node) < 1; (node)++) +#define for_each_node_mask(node, mask) \ + for ((node) = 0; (node) < 1 && !nodes_empty(mask); (node)++) #endif /* MAX_NUMNODES */ /* From 63758dd9595f87c7e7b5f826fd2dcf53d6aff0cf Mon Sep 17 00:00:00 2001 From: Mike Kravetz Date: Tue, 24 May 2022 13:50:03 -0700 Subject: [PATCH 0715/1842] hugetlb: fix huge_pmd_unshare address update commit 48381273f8734d28ef56a5bdf1966dd8530111bc upstream. The routine huge_pmd_unshare() is passed a pointer to an address associated with an area which may be unshared. If unshare is successful this address is updated to 'optimize' callers iterating over huge page addresses. For the optimization to work correctly, address should be updated to the last huge page in the unmapped/unshared area. However, in the common case where the passed address is PUD_SIZE aligned, the address is incorrectly updated to the address of the preceding huge page. That wastes CPU cycles as the unmapped/unshared range is scanned twice. Link: https://lkml.kernel.org/r/20220524205003.126184-1-mike.kravetz@oracle.com Fixes: 39dde65c9940 ("shared page table for hugetlb page") Signed-off-by: Mike Kravetz Acked-by: Muchun Song Cc: Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- mm/hugetlb.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index fce705fc2848..c42c76447e10 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5465,7 +5465,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma, pud_clear(pud); put_page(virt_to_page(ptep)); mm_dec_nr_pmds(mm); - *addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE; + /* + * This update of passed address optimizes loops sequentially + * processing addresses in increments of huge page size (PMD_SIZE + * in this case). By clearing the pud, a PUD_SIZE area is unmapped. + * Update address to the 'last page' in the cleared area so that + * calling loop can move to first page past this area. + */ + *addr |= PUD_SIZE - PMD_SIZE; return 1; } #define want_pmd_share() (1) From d787a57a17cf0e36cfd44659539c60fa18ce8c9d Mon Sep 17 00:00:00 2001 From: Yi Yang Date: Tue, 10 May 2022 16:05:33 +0800 Subject: [PATCH 0716/1842] xtensa/simdisk: fix proc_read_simdisk() commit b011946d039d66bbc7102137e98cc67e1356aa87 upstream. 
The commit a69755b18774 ("xtensa simdisk: switch to proc_create_data()") split read operation into two parts, first retrieving the path when it's non-null and second retrieving the trailing '\n'. However when the path is non-null the first simple_read_from_buffer updates ppos, and the second simple_read_from_buffer returns 0 if ppos is greater than 1 (i.e. almost always). As a result reading from that proc file is almost always empty. Fix it by making a temporary copy of the path with the trailing '\n' and using simple_read_from_buffer on that copy. Cc: stable@vger.kernel.org Fixes: a69755b18774 ("xtensa simdisk: switch to proc_create_data()") Signed-off-by: Yi Yang Signed-off-by: Max Filippov Signed-off-by: Greg Kroah-Hartman --- arch/xtensa/platforms/iss/simdisk.c | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c index 3447556d276d..2b3c829f655b 100644 --- a/arch/xtensa/platforms/iss/simdisk.c +++ b/arch/xtensa/platforms/iss/simdisk.c @@ -213,12 +213,18 @@ static ssize_t proc_read_simdisk(struct file *file, char __user *buf, struct simdisk *dev = PDE_DATA(file_inode(file)); const char *s = dev->filename; if (s) { - ssize_t n = simple_read_from_buffer(buf, size, ppos, - s, strlen(s)); - if (n < 0) - return n; - buf += n; - size -= n; + ssize_t len = strlen(s); + char *temp = kmalloc(len + 2, GFP_KERNEL); + + if (!temp) + return -ENOMEM; + + len = scnprintf(temp, len + 2, "%s\n", s); + len = simple_read_from_buffer(buf, size, ppos, + temp, len); + + kfree(temp); + return len; } return simple_read_from_buffer(buf, size, ppos, "\n", 1); } From 769ec2a824deae2f1268dfda14999a4d14d0d0c5 Mon Sep 17 00:00:00 2001 From: Alexander Wetzel Date: Fri, 22 Apr 2022 16:52:28 +0200 Subject: [PATCH 0717/1842] rtl818x: Prevent using not initialized queues commit 746285cf81dc19502ab238249d75f5990bd2d231 upstream. Using not existing queues can panic the kernel with rtl8180/rtl8185 cards. Ignore the skb priority for those cards, they only have one tx queue. Pierre Asselin (pa@panix.com) reported the kernel crash in the Gentoo forum: https://forums.gentoo.org/viewtopic-t-1147832-postdays-0-postorder-asc-start-25.html He also confirmed that this patch fixes the issue. In summary this happened: After updating wpa_supplicant from 2.9 to 2.10 the kernel crashed with a "divide error: 0000" when connecting to an AP. Control port tx now tries to use IEEE80211_AC_VO for the priority, which wpa_supplicants starts to use in 2.10. Since only the rtl8187se part of the driver supports QoS, the priority of the skb is set to IEEE80211_AC_BE (2) by mac80211 for rtl8180/rtl8185 cards. rtl8180 is then unconditionally reading out the priority and finally crashes on drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c line 544 without this patch: idx = (ring->idx + skb_queue_len(&ring->queue)) % ring->entries "ring->entries" is zero for rtl8180/rtl8185 cards, tx_ring[2] never got initialized. 
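The defensive check the patch adds boils down to a few lines. A sketch of the idea with an invented helper name, independent of the rtl818x ring structures and assuming mac80211's usual AC queue numbering:

    #include <net/mac80211.h>

    /* Map an skb to a tx ring index, but only trust the skb's queue mapping
     * when the hardware registered the full set of QoS (AC) queues;
     * single-queue hardware always uses ring 0. */
    static unsigned int pick_tx_ring(struct ieee80211_hw *hw,
                                     struct sk_buff *skb)
    {
            unsigned int prio = 0;

            if (hw->queues > IEEE80211_AC_BK)
                    prio = skb_get_queue_mapping(skb);

            return prio;
    }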
Cc: stable@vger.kernel.org Reported-by: pa@panix.com Tested-by: pa@panix.com Signed-off-by: Alexander Wetzel Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220422145228.7567-1-alexander@wetzel-home.de Signed-off-by: Greg Kroah-Hartman --- drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c index 2477e18c7cae..025619cd14e8 100644 --- a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c +++ b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c @@ -460,8 +460,10 @@ static void rtl8180_tx(struct ieee80211_hw *dev, struct rtl8180_priv *priv = dev->priv; struct rtl8180_tx_ring *ring; struct rtl8180_tx_desc *entry; + unsigned int prio = 0; unsigned long flags; - unsigned int idx, prio, hw_prio; + unsigned int idx, hw_prio; + dma_addr_t mapping; u32 tx_flags; u8 rc_flags; @@ -470,7 +472,9 @@ static void rtl8180_tx(struct ieee80211_hw *dev, /* do arithmetic and then convert to le16 */ u16 frame_duration = 0; - prio = skb_get_queue_mapping(skb); + /* rtl8180/rtl8185 only has one useable tx queue */ + if (dev->queues > IEEE80211_AC_BK) + prio = skb_get_queue_mapping(skb); ring = &priv->tx_ring[prio]; mapping = dma_map_single(&priv->pdev->dev, skb->data, skb->len, From 8f1bc0edf53c8a964a9b73019c4ba5f977956271 Mon Sep 17 00:00:00 2001 From: Mark Brown Date: Thu, 28 Apr 2022 17:24:44 +0100 Subject: [PATCH 0718/1842] ASoC: rt5514: Fix event generation for "DSP Voice Wake Up" control commit 4213ff556740bb45e2d9ff0f50d056c4e7dd0921 upstream. The driver has a custom put function for "DSP Voice Wake Up" which does not generate event notifications on change, instead returning 0. Since we already exit early in the case that there is no change this can be fixed by unconditionally returning 1 at the end of the function. Signed-off-by: Mark Brown Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220428162444.3883147-1-broonie@kernel.org Signed-off-by: Mark Brown Signed-off-by: Greg Kroah-Hartman --- sound/soc/codecs/rt5514.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c index 7081142a355e..c444a56df95b 100644 --- a/sound/soc/codecs/rt5514.c +++ b/sound/soc/codecs/rt5514.c @@ -419,7 +419,7 @@ static int rt5514_dsp_voice_wake_up_put(struct snd_kcontrol *kcontrol, } } - return 0; + return 1; } static const struct snd_kcontrol_new rt5514_snd_controls[] = { From dc12a64cf850be2a2584084f1aff8b96642e5115 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Mon, 28 Mar 2022 20:28:20 +0800 Subject: [PATCH 0719/1842] carl9170: tx: fix an incorrect use of list iterator commit 54a6f29522da3c914da30e50721dedf51046449a upstream. If the previous list_for_each_entry_continue_rcu() don't exit early (no goto hit inside the loop), the iterator 'cvif' after the loop will be a bogus pointer to an invalid structure object containing the HEAD (&ar->vif_list). As a result, the use of 'cvif' after that will lead to a invalid memory access (i.e., 'cvif->id': the invalid pointer dereference when return back to/after the callsite in the carl9170_update_beacon()). The original intention should have been to return the valid 'cvif' when found in list, NULL otherwise. So just return NULL when no entry found, to fix this bug. 
Cc: stable@vger.kernel.org Fixes: 1f1d9654e183c ("carl9170: refactor carl9170_update_beacon") Signed-off-by: Xiaomeng Tong Acked-by: Christian Lamparter Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20220328122820.1004-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/net/wireless/ath/carl9170/tx.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/wireless/ath/carl9170/tx.c b/drivers/net/wireless/ath/carl9170/tx.c index 235cf77cd60c..43050dc7b98d 100644 --- a/drivers/net/wireless/ath/carl9170/tx.c +++ b/drivers/net/wireless/ath/carl9170/tx.c @@ -1557,6 +1557,9 @@ static struct carl9170_vif_info *carl9170_pick_beaconing_vif(struct ar9170 *ar) goto out; } } while (ar->beacon_enabled && i--); + + /* no entry found in list */ + return NULL; } out: From 4e2fbe8cda17d7cfae7429c6d0ac8a11be50cb15 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Sun, 27 Mar 2022 13:53:55 +0800 Subject: [PATCH 0720/1842] stm: ltdc: fix two incorrect NULL checks on list iterator commit 2e6c86be0e57079d1fb6c7c7e5423db096d0548a upstream. The two bugs are here: if (encoder) { if (bridge && bridge->timings) The list iterator value 'encoder/bridge' will *always* be set and non-NULL by drm_for_each_encoder()/list_for_each_entry(), so it is incorrect to assume that the iterator value will be NULL if the list is empty or no element is found. To fix the bug, use a new variable '*_iter' as the list iterator, while use the old variable 'encoder/bridge' as a dedicated pointer to point to the found element. Cc: stable@vger.kernel.org Fixes: 99e360442f223 ("drm/stm: Fix bus_flags handling") Signed-off-by: Xiaomeng Tong Acked-by: Raphael Gallais-Pou Signed-off-by: Philippe Cornu Link: https://patchwork.freedesktop.org/patch/msgid/20220327055355.3808-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/stm/ltdc.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c index 62488ac14923..089c00a8e7d4 100644 --- a/drivers/gpu/drm/stm/ltdc.c +++ b/drivers/gpu/drm/stm/ltdc.c @@ -527,8 +527,8 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc) struct drm_device *ddev = crtc->dev; struct drm_connector_list_iter iter; struct drm_connector *connector = NULL; - struct drm_encoder *encoder = NULL; - struct drm_bridge *bridge = NULL; + struct drm_encoder *encoder = NULL, *en_iter; + struct drm_bridge *bridge = NULL, *br_iter; struct drm_display_mode *mode = &crtc->state->adjusted_mode; struct videomode vm; u32 hsync, vsync, accum_hbp, accum_vbp, accum_act_w, accum_act_h; @@ -538,15 +538,19 @@ static void ltdc_crtc_mode_set_nofb(struct drm_crtc *crtc) int ret; /* get encoder from crtc */ - drm_for_each_encoder(encoder, ddev) - if (encoder->crtc == crtc) + drm_for_each_encoder(en_iter, ddev) + if (en_iter->crtc == crtc) { + encoder = en_iter; break; + } if (encoder) { /* get bridge from encoder */ - list_for_each_entry(bridge, &encoder->bridge_chain, chain_node) - if (bridge->encoder == encoder) + list_for_each_entry(br_iter, &encoder->bridge_chain, chain_node) + if (br_iter->encoder == encoder) { + bridge = br_iter; break; + } /* Get the connector from encoder */ drm_connector_list_iter_begin(ddev, &iter); From 46c2b5f81c9ef3080ec26e47171c3fcb11bec139 Mon Sep 17 00:00:00 2001 From: Coly Li Date: Tue, 24 May 2022 18:23:33 +0800 Subject: [PATCH 0721/1842] bcache: improve multithreaded bch_btree_check() commit 622536443b6731ec82c563aae7807165adbe9178 upstream. 
Commit 8e7102273f59 ("bcache: make bch_btree_check() to be multithreaded") makes bch_btree_check() much faster when checking all btree nodes during cache device registration. But it isn't in ideal shape yet and can still be improved. This patch does the following things to improve the current parallel btree node check by multiple threads in bch_btree_check(), - Add a read lock on the root node while checking all the btree nodes with multiple threads. Although it is not currently mandatory, it is good to have a read lock in the code logic. - Remove the local variable 'char name[32]', and generate the kernel thread name string directly when calling kthread_run(). - Allocate the local variable "struct btree_check_state check_state" on the stack and avoid unnecessary dynamic memory allocation for it. - Reduce BCH_BTR_CHKTHREAD_MAX from 64 to 12, which is indeed enough. - Increment check_state->started to count a created kernel thread only after it has been created successfully. - When waiting for all checking kernel threads to finish, use wait_event() instead of wait_event_interruptible(). With these changes, the code is clearer, and some potential error conditions are avoided. Fixes: 8e7102273f59 ("bcache: make bch_btree_check() to be multithreaded") Signed-off-by: Coly Li Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220524102336.10684-2-colyli@suse.de Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- drivers/md/bcache/btree.c | 58 ++++++++++++++++++--------------------- drivers/md/bcache/btree.h | 2 +- 2 files changed, 27 insertions(+), 33 deletions(-) diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c index 418914373a51..f64834785c8b 100644 --- a/drivers/md/bcache/btree.c +++ b/drivers/md/bcache/btree.c @@ -2006,8 +2006,7 @@ int bch_btree_check(struct cache_set *c) int i; struct bkey *k = NULL; struct btree_iter iter; - struct btree_check_state *check_state; - char name[32]; + struct btree_check_state check_state; /* check and mark root node keys */ for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid) @@ -2018,63 +2017,58 @@ int bch_btree_check(struct cache_set *c) if (c->root->level == 0) return 0; - check_state = kzalloc(sizeof(struct btree_check_state), GFP_KERNEL); - if (!check_state) - return -ENOMEM; - - check_state->c = c; - check_state->total_threads = bch_btree_chkthread_nr(); - check_state->key_idx = 0; - spin_lock_init(&check_state->idx_lock); - atomic_set(&check_state->started, 0); - atomic_set(&check_state->enough, 0); - init_waitqueue_head(&check_state->wait); + check_state.c = c; + check_state.total_threads = bch_btree_chkthread_nr(); + check_state.key_idx = 0; + spin_lock_init(&check_state.idx_lock); + atomic_set(&check_state.started, 0); + atomic_set(&check_state.enough, 0); + init_waitqueue_head(&check_state.wait); + rw_lock(0, c->root, c->root->level); /* * Run multiple threads to check btree nodes in parallel, * if check_state.enough is non-zero, it means current * running check threads are enough, unncessary to create * more.
*/ - for (i = 0; i < check_state->total_threads; i++) { - /* fetch latest check_state->enough earlier */ + for (i = 0; i < check_state.total_threads; i++) { + /* fetch latest check_state.enough earlier */ smp_mb__before_atomic(); - if (atomic_read(&check_state->enough)) + if (atomic_read(&check_state.enough)) break; - check_state->infos[i].result = 0; - check_state->infos[i].state = check_state; - snprintf(name, sizeof(name), "bch_btrchk[%u]", i); - atomic_inc(&check_state->started); + check_state.infos[i].result = 0; + check_state.infos[i].state = &check_state; - check_state->infos[i].thread = + check_state.infos[i].thread = kthread_run(bch_btree_check_thread, - &check_state->infos[i], - name); - if (IS_ERR(check_state->infos[i].thread)) { + &check_state.infos[i], + "bch_btrchk[%d]", i); + if (IS_ERR(check_state.infos[i].thread)) { pr_err("fails to run thread bch_btrchk[%d]\n", i); for (--i; i >= 0; i--) - kthread_stop(check_state->infos[i].thread); + kthread_stop(check_state.infos[i].thread); ret = -ENOMEM; goto out; } + atomic_inc(&check_state.started); } /* * Must wait for all threads to stop. */ - wait_event_interruptible(check_state->wait, - atomic_read(&check_state->started) == 0); + wait_event(check_state.wait, atomic_read(&check_state.started) == 0); - for (i = 0; i < check_state->total_threads; i++) { - if (check_state->infos[i].result) { - ret = check_state->infos[i].result; + for (i = 0; i < check_state.total_threads; i++) { + if (check_state.infos[i].result) { + ret = check_state.infos[i].result; goto out; } } out: - kfree(check_state); + rw_unlock(0, c->root); return ret; } diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h index 50482107134f..1b5fdbc0d83e 100644 --- a/drivers/md/bcache/btree.h +++ b/drivers/md/bcache/btree.h @@ -226,7 +226,7 @@ struct btree_check_info { int result; }; -#define BCH_BTR_CHKTHREAD_MAX 64 +#define BCH_BTR_CHKTHREAD_MAX 12 struct btree_check_state { struct cache_set *c; int total_threads; From 3f686b249b1c21ff2c4a865db72eef0aecf84a7b Mon Sep 17 00:00:00 2001 From: Coly Li Date: Tue, 24 May 2022 18:23:34 +0800 Subject: [PATCH 0722/1842] bcache: improve multithreaded bch_sectors_dirty_init() commit 4dc34ae1b45fe26e772a44379f936c72623dd407 upstream. Commit b144e45fc576 ("bcache: make bch_sectors_dirty_init() to be multithreaded") makes bch_sectors_dirty_init() to be much faster when counting dirty sectors by iterating all dirty keys in the btree. But it isn't in ideal shape yet, still can be improved. This patch does the following changes to improve current parallel dirty keys iteration on the btree, - Add read lock to root node when multiple threads iterating the btree, to prevent the root node gets split by I/Os from other registered bcache devices. - Remove local variable "char name[32]" and generate kernel thread name string directly when calling kthread_run(). - Allocate "struct bch_dirty_init_state state" directly on stack and avoid the unnecessary dynamic memory allocation for it. - Decrease BCH_DIRTY_INIT_THRD_MAX from 64 to 12 which is enough indeed. - Increase &state->started to count created kernel thread after it succeeds to create. - When wait for all dirty key counting threads to finish, use wait_event() to replace wait_event_interruptible(). With the above changes, the code is more clear, and some potential error conditions are avoided. 
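The thread-accounting discipline both bcache patches adopt, counting only threads that were really created and stopping the already-created ones on failure, can be sketched on its own. The names below (launch_workers, worker[%d]) are invented and this is only a sketch of the error-path shape, not the bcache code:

    #include <linux/kthread.h>
    #include <linux/err.h>

    /* Launch nr worker threads running fn(arg); on failure, stop the ones
     * already created. '*started' only ever counts threads that
     * kthread_run() actually created, which is what tells the caller how
     * many it must later wait for. */
    static int launch_workers(int (*fn)(void *), void *arg,
                              struct task_struct **tasks, int nr, int *started)
    {
            int i;

            *started = 0;
            for (i = 0; i < nr; i++) {
                    tasks[i] = kthread_run(fn, arg, "worker[%d]", i);
                    if (IS_ERR(tasks[i])) {
                            int err = PTR_ERR(tasks[i]);

                            /* unwind the threads that did start */
                            while (--i >= 0)
                                    kthread_stop(tasks[i]);
                            return err;
                    }
                    (*started)++;
            }
            return 0;
    }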
Fixes: b144e45fc576 ("bcache: make bch_sectors_dirty_init() to be multithreaded") Signed-off-by: Coly Li Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220524102336.10684-3-colyli@suse.de Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- drivers/md/bcache/writeback.c | 60 ++++++++++++++--------------------- drivers/md/bcache/writeback.h | 2 +- 2 files changed, 25 insertions(+), 37 deletions(-) diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c index 952253f24175..d8bdadec05dc 100644 --- a/drivers/md/bcache/writeback.c +++ b/drivers/md/bcache/writeback.c @@ -899,10 +899,10 @@ void bch_sectors_dirty_init(struct bcache_device *d) struct btree_iter iter; struct sectors_dirty_init op; struct cache_set *c = d->c; - struct bch_dirty_init_state *state; - char name[32]; + struct bch_dirty_init_state state; /* Just count root keys if no leaf node */ + rw_lock(0, c->root, c->root->level); if (c->root->level == 0) { bch_btree_op_init(&op.op, -1); op.inode = d->id; @@ -912,54 +912,42 @@ void bch_sectors_dirty_init(struct bcache_device *d) for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid) sectors_dirty_init_fn(&op.op, c->root, k); + rw_unlock(0, c->root); return; } - state = kzalloc(sizeof(struct bch_dirty_init_state), GFP_KERNEL); - if (!state) { - pr_warn("sectors dirty init failed: cannot allocate memory\n"); - return; - } + state.c = c; + state.d = d; + state.total_threads = bch_btre_dirty_init_thread_nr(); + state.key_idx = 0; + spin_lock_init(&state.idx_lock); + atomic_set(&state.started, 0); + atomic_set(&state.enough, 0); + init_waitqueue_head(&state.wait); - state->c = c; - state->d = d; - state->total_threads = bch_btre_dirty_init_thread_nr(); - state->key_idx = 0; - spin_lock_init(&state->idx_lock); - atomic_set(&state->started, 0); - atomic_set(&state->enough, 0); - init_waitqueue_head(&state->wait); - - for (i = 0; i < state->total_threads; i++) { - /* Fetch latest state->enough earlier */ + for (i = 0; i < state.total_threads; i++) { + /* Fetch latest state.enough earlier */ smp_mb__before_atomic(); - if (atomic_read(&state->enough)) + if (atomic_read(&state.enough)) break; - state->infos[i].state = state; - atomic_inc(&state->started); - snprintf(name, sizeof(name), "bch_dirty_init[%d]", i); - - state->infos[i].thread = - kthread_run(bch_dirty_init_thread, - &state->infos[i], - name); - if (IS_ERR(state->infos[i].thread)) { + state.infos[i].state = &state; + state.infos[i].thread = + kthread_run(bch_dirty_init_thread, &state.infos[i], + "bch_dirtcnt[%d]", i); + if (IS_ERR(state.infos[i].thread)) { pr_err("fails to run thread bch_dirty_init[%d]\n", i); for (--i; i >= 0; i--) - kthread_stop(state->infos[i].thread); + kthread_stop(state.infos[i].thread); goto out; } + atomic_inc(&state.started); } - /* - * Must wait for all threads to stop. - */ - wait_event_interruptible(state->wait, - atomic_read(&state->started) == 0); - out: - kfree(state); + /* Must wait for all threads to stop. 
*/ + wait_event(state.wait, atomic_read(&state.started) == 0); + rw_unlock(0, c->root); } void bch_cached_dev_writeback_init(struct cached_dev *dc) diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h index 3f1230e22de0..0f1d96920630 100644 --- a/drivers/md/bcache/writeback.h +++ b/drivers/md/bcache/writeback.h @@ -16,7 +16,7 @@ #define BCH_AUTO_GC_DIRTY_THRESHOLD 50 -#define BCH_DIRTY_INIT_THRD_MAX 64 +#define BCH_DIRTY_INIT_THRD_MAX 12 /* * 14 (16384ths) is chosen here as something that each backing device * should be a reasonable fraction of the share, and not to blow up From 0cf22f234ebcab655026cbdb07bbb6e189047e5f Mon Sep 17 00:00:00 2001 From: Coly Li Date: Tue, 24 May 2022 18:23:35 +0800 Subject: [PATCH 0723/1842] bcache: remove incremental dirty sector counting for bch_sectors_dirty_init() commit 80db4e4707e78cb22287da7d058d7274bd4cb370 upstream. After making bch_sectors_dirty_init() multithreaded, the existing incremental dirty sector counting in bch_root_node_dirty_init() doesn't release btree occupation after iterating 500000 (INIT_KEYS_EACH_TIME) bkeys. Because a read lock is added on the btree root node to prevent the btree from being split during the dirty sectors counting, other I/O requesters have no chance to gain the write lock, even after restarting bcache_btree(). That is to say, the incremental dirty sectors counting is incompatible with the multithreaded bch_sectors_dirty_init(). We have to choose one and drop the other. In my testing, with 512-byte random writes, I generate 1.2T of dirty data and a btree with 400K nodes. With a single thread and incremental dirty sectors counting, it takes 30+ minutes to register the backing device. And with multithreaded dirty sectors counting, the backing device registration can be accomplished within 2 minutes. The 30+ minutes vs. 2 minutes difference makes me decide to keep the multithreaded bch_sectors_dirty_init() and drop the incremental dirty sectors counting. This is what this patch does. But INIT_KEYS_EACH_TIME is kept: in sectors_dirty_init_fn() the CPU will be released by cond_resched() after every INIT_KEYS_EACH_TIME keys iterated. This avoids the watchdog reporting a bogus soft lockup warning.
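Keeping INIT_KEYS_EACH_TIME purely as a yield interval is a small, generic pattern. A sketch with an invented loop body, independent of the bcache key iterator:

    #include <linux/sched.h>

    #define KEYS_PER_RESCHED 500000

    /* Yield the CPU every KEYS_PER_RESCHED items so a long in-kernel scan
     * does not trigger a bogus soft-lockup report; no locks are dropped or
     * retaken along the way. */
    static void count_items(unsigned long nr_items)
    {
            unsigned long i;

            for (i = 1; i <= nr_items; i++) {
                    /* ... per-item accounting would go here ... */

                    if (!(i % KEYS_PER_RESCHED))
                            cond_resched();
            }
    }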
Fixes: b144e45fc576 ("bcache: make bch_sectors_dirty_init() to be multithreaded") Signed-off-by: Coly Li Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220524102336.10684-4-colyli@suse.de Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- drivers/md/bcache/writeback.c | 39 +++++++++++------------------------ 1 file changed, 12 insertions(+), 27 deletions(-) diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c index d8bdadec05dc..0145046a45f4 100644 --- a/drivers/md/bcache/writeback.c +++ b/drivers/md/bcache/writeback.c @@ -756,13 +756,11 @@ static int bch_writeback_thread(void *arg) /* Init */ #define INIT_KEYS_EACH_TIME 500000 -#define INIT_KEYS_SLEEP_MS 100 struct sectors_dirty_init { struct btree_op op; unsigned int inode; size_t count; - struct bkey start; }; static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b, @@ -778,11 +776,8 @@ static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b, KEY_START(k), KEY_SIZE(k)); op->count++; - if (atomic_read(&b->c->search_inflight) && - !(op->count % INIT_KEYS_EACH_TIME)) { - bkey_copy_key(&op->start, k); - return -EAGAIN; - } + if (!(op->count % INIT_KEYS_EACH_TIME)) + cond_resched(); return MAP_CONTINUE; } @@ -797,24 +792,16 @@ static int bch_root_node_dirty_init(struct cache_set *c, bch_btree_op_init(&op.op, -1); op.inode = d->id; op.count = 0; - op.start = KEY(op.inode, 0, 0); - do { - ret = bcache_btree(map_keys_recurse, - k, - c->root, - &op.op, - &op.start, - sectors_dirty_init_fn, - 0); - if (ret == -EAGAIN) - schedule_timeout_interruptible( - msecs_to_jiffies(INIT_KEYS_SLEEP_MS)); - else if (ret < 0) { - pr_warn("sectors dirty init failed, ret=%d!\n", ret); - break; - } - } while (ret == -EAGAIN); + ret = bcache_btree(map_keys_recurse, + k, + c->root, + &op.op, + &KEY(op.inode, 0, 0), + sectors_dirty_init_fn, + 0); + if (ret < 0) + pr_warn("sectors dirty init failed, ret=%d!\n", ret); return ret; } @@ -858,7 +845,6 @@ static int bch_dirty_init_thread(void *arg) goto out; } skip_nr--; - cond_resched(); } if (p) { @@ -868,7 +854,6 @@ static int bch_dirty_init_thread(void *arg) p = NULL; prev_idx = cur_idx; - cond_resched(); } out: @@ -907,11 +892,11 @@ void bch_sectors_dirty_init(struct bcache_device *d) bch_btree_op_init(&op.op, -1); op.inode = d->id; op.count = 0; - op.start = KEY(op.inode, 0, 0); for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid) sectors_dirty_init_fn(&op.op, c->root, k); + rw_unlock(0, c->root); return; } From 59afd4f287900c8187e968a4153ed35e6b48efce Mon Sep 17 00:00:00 2001 From: Coly Li Date: Tue, 24 May 2022 18:23:36 +0800 Subject: [PATCH 0724/1842] bcache: avoid journal no-space deadlock by reserving 1 journal bucket commit 32feee36c30ea06e38ccb8ae6e5c44c6eec790a6 upstream. The journal no-space deadlock was reported time to time. Such deadlock can happen in the following situation. When all journal buckets are fully filled by active jset with heavy write I/O load, the cache set registration (after a reboot) will load all active jsets and inserting them into the btree again (which is called journal replay). If a journaled bkey is inserted into a btree node and results btree node split, new journal request might be triggered. For example, the btree grows one more level after the node split, then the root node record in cache device super block will be upgrade by bch_journal_meta() from bch_btree_set_root(). 
But there is no space in the journal buckets, so the journal replay has to wait for a new journal bucket to be reclaimed after at least one journal bucket is replayed. This is one example of how the journal no-space deadlock happens. The solution to avoid the deadlock is to reserve 1 journal bucket at run time, and only permit the reserved journal bucket to be used during the cache set registration procedure for things like journal replay. Then the journal space will never be fully filled, and there is no chance for the journal no-space deadlock to happen anymore. This patch adds a new member "bool do_reserve" in struct journal; it is initialized to 0 (false) when struct journal is allocated, and set to 1 (true) by bch_journal_space_reserve() when all initialization is done in run_cache_set(). At run time, when journal_reclaim() tries to allocate a new journal bucket, free_journal_buckets() is called to check whether there are enough free journal buckets to use. If there is only 1 free journal bucket and journal->do_reserve is 1 (true), the last bucket is reserved and free_journal_buckets() will return 0 to indicate no free journal bucket. Then journal_reclaim() will give up and try again next time to see whether there is a free journal bucket to allocate. By this method, there is always 1 journal bucket reserved at run time. During the cache set registration, journal->do_reserve is 0 (false), so the reserved journal bucket can be used to avoid the no-space deadlock. Reported-by: Nikhil Kshirsagar Signed-off-by: Coly Li Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220524102336.10684-5-colyli@suse.de Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- drivers/md/bcache/journal.c | 31 ++++++++++++++++++++++++++----- drivers/md/bcache/journal.h | 2 ++ drivers/md/bcache/super.c | 1 + 3 files changed, 29 insertions(+), 5 deletions(-) diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c index c6613e817333..aea4833f67fc 100644 --- a/drivers/md/bcache/journal.c +++ b/drivers/md/bcache/journal.c @@ -407,6 +407,11 @@ int bch_journal_replay(struct cache_set *s, struct list_head *list) return ret; } +void bch_journal_space_reserve(struct journal *j) +{ + j->do_reserve = true; +} + /* Journalling */ static void btree_flush_write(struct cache_set *c) @@ -625,12 +630,30 @@ static void do_journal_discard(struct cache *ca) } } +static unsigned int free_journal_buckets(struct cache_set *c) +{ + struct journal *j = &c->journal; + struct cache *ca = c->cache; + struct journal_device *ja = &c->cache->journal; + unsigned int n; + + /* In case njournal_buckets is not power of 2 */ + if (ja->cur_idx >= ja->discard_idx) + n = ca->sb.njournal_buckets + ja->discard_idx - ja->cur_idx; + else + n = ja->discard_idx - ja->cur_idx; + + if (n > (1 + j->do_reserve)) + return n - (1 + j->do_reserve); + + return 0; +} + static void journal_reclaim(struct cache_set *c) { struct bkey *k = &c->journal.key; struct cache *ca = c->cache; uint64_t last_seq; - unsigned int next; struct journal_device *ja = &ca->journal; atomic_t p __maybe_unused; @@ -653,12 +676,10 @@ static void journal_reclaim(struct cache_set *c) if (c->journal.blocks_free) goto out; - next = (ja->cur_idx + 1) % ca->sb.njournal_buckets; /* No space available on this device */ - if (next == ja->discard_idx) + if (!free_journal_buckets(c)) goto out; - ja->cur_idx = next; + ja->cur_idx = (ja->cur_idx + 1) % ca->sb.njournal_buckets; k->ptr[0] = MAKE_PTR(0, bucket_to_sector(c, ca->sb.d[ja->cur_idx]), ca->sb.nr_this_dev); diff --git
a/drivers/md/bcache/journal.h b/drivers/md/bcache/journal.h index f2ea34d5f431..cd316b4a1e95 100644 --- a/drivers/md/bcache/journal.h +++ b/drivers/md/bcache/journal.h @@ -105,6 +105,7 @@ struct journal { spinlock_t lock; spinlock_t flush_write_lock; bool btree_flushing; + bool do_reserve; /* used when waiting because the journal was full */ struct closure_waitlist wait; struct closure io; @@ -182,5 +183,6 @@ int bch_journal_replay(struct cache_set *c, struct list_head *list); void bch_journal_free(struct cache_set *c); int bch_journal_alloc(struct cache_set *c); +void bch_journal_space_reserve(struct journal *j); #endif /* _BCACHE_JOURNAL_H */ diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index 81f1cc5b3499..7f5ea2509643 100644 --- a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -2150,6 +2150,7 @@ static int run_cache_set(struct cache_set *c) flash_devs_run(c); + bch_journal_space_reserve(&c->journal); set_bit(CACHE_SET_RUNNING, &c->flags); return 0; err: From 10c5088a312dd5311da9e13370629972fe42da88 Mon Sep 17 00:00:00 2001 From: Jiri Slaby Date: Tue, 3 May 2022 10:08:03 +0200 Subject: [PATCH 0725/1842] serial: pch: don't overwrite xmit->buf[0] by x_char commit d9f3af4fbb1d955bbaf872d9e76502f6e3e803cb upstream. When x_char is to be sent, the TX path overwrites whatever is in the circular buffer at offset 0 with x_char and sends it using pch_uart_hal_write(). I don't understand how this was supposed to work if xmit->buf[0] already contained some character. It must have been lost. Remove this whole pop_tx_x() concept and do the work directly in the callers. (Without printing anything using dev_dbg().) Cc: Fixes: 3c6a483275f4 (Serial: EG20T: add PCH_UART driver) Signed-off-by: Jiri Slaby Link: https://lore.kernel.org/r/20220503080808.28332-1-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/pch_uart.c | 27 +++++++-------------------- 1 file changed, 7 insertions(+), 20 deletions(-) diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c index a7363bc66c11..351ad0b02029 100644 --- a/drivers/tty/serial/pch_uart.c +++ b/drivers/tty/serial/pch_uart.c @@ -628,22 +628,6 @@ static int push_rx(struct eg20t_port *priv, const unsigned char *buf, return 0; } -static int pop_tx_x(struct eg20t_port *priv, unsigned char *buf) -{ - int ret = 0; - struct uart_port *port = &priv->port; - - if (port->x_char) { - dev_dbg(priv->port.dev, "%s:X character send %02x (%lu)\n", - __func__, port->x_char, jiffies); - buf[0] = port->x_char; - port->x_char = 0; - ret = 1; - } - - return ret; -} - static int dma_push_rx(struct eg20t_port *priv, int size) { int room; @@ -893,9 +877,10 @@ static unsigned int handle_tx(struct eg20t_port *priv) fifo_size = max(priv->fifo_size, 1); tx_empty = 1; - if (pop_tx_x(priv, xmit->buf)) { - pch_uart_hal_write(priv, xmit->buf, 1); + if (port->x_char) { + pch_uart_hal_write(priv, &port->x_char, 1); port->icount.tx++; + port->x_char = 0; tx_empty = 0; fifo_size--; } @@ -950,9 +935,11 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv) } fifo_size = max(priv->fifo_size, 1); - if (pop_tx_x(priv, xmit->buf)) { - pch_uart_hal_write(priv, xmit->buf, 1); + + if (port->x_char) { + pch_uart_hal_write(priv, &port->x_char, 1); port->icount.tx++; + port->x_char = 0; fifo_size--; } From c521f42dd241aacb78599aeb879aac14334fd9be Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Sun, 27 Mar 2022 14:15:16 +0800 Subject: [PATCH 0726/1842] tilcdc: tilcdc_external: fix an incorrect NULL check on list iterator commit 
8b917cbe38e9b0d002492477a9fc2bfee2412ce4 upstream. The bug is here: if (!encoder) { The list iterator value 'encoder' will *always* be set and non-NULL by list_for_each_entry(), so it is incorrect to assume that the iterator value will be NULL if the list is empty or no element is found. To fix the bug, use a new variable 'iter' as the list iterator, while use the original variable 'encoder' as a dedicated pointer to point to the found element. Cc: stable@vger.kernel.org Fixes: ec9eab097a500 ("drm/tilcdc: Add drm bridge support for attaching drm bridge drivers") Signed-off-by: Xiaomeng Tong Reviewed-by: Jyri Sarha Tested-by: Jyri Sarha Signed-off-by: Jyri Sarha Link: https://patchwork.freedesktop.org/patch/msgid/20220327061516.5076-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/tilcdc/tilcdc_external.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/tilcdc/tilcdc_external.c b/drivers/gpu/drm/tilcdc/tilcdc_external.c index b177525588c1..8ece0dd0b469 100644 --- a/drivers/gpu/drm/tilcdc/tilcdc_external.c +++ b/drivers/gpu/drm/tilcdc/tilcdc_external.c @@ -60,11 +60,13 @@ struct drm_connector *tilcdc_encoder_find_connector(struct drm_device *ddev, int tilcdc_add_component_encoder(struct drm_device *ddev) { struct tilcdc_drm_private *priv = ddev->dev_private; - struct drm_encoder *encoder; + struct drm_encoder *encoder = NULL, *iter; - list_for_each_entry(encoder, &ddev->mode_config.encoder_list, head) - if (encoder->possible_crtcs & (1 << priv->crtc->index)) + list_for_each_entry(iter, &ddev->mode_config.encoder_list, head) + if (iter->possible_crtcs & (1 << priv->crtc->index)) { + encoder = iter; break; + } if (!encoder) { dev_err(ddev->dev, "%s: No suitable encoder found\n", __func__); From 591c3481b13fa6d5d03819e90539fa442c6ed2d2 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Sun, 27 Mar 2022 13:20:28 +0800 Subject: [PATCH 0727/1842] gma500: fix an incorrect NULL check on list iterator commit bdef417d84536715145f6dc9cc3275c46f26295a upstream. The bug is here: return crtc; The list iterator value 'crtc' will *always* be set and non-NULL by list_for_each_entry(), so it is incorrect to assume that the iterator value will be NULL if the list is empty or no element is found. To fix the bug, return 'crtc' when found, otherwise return NULL. 
Cc: stable@vger.kernel.org fixes: 89c78134cc54d ("gma500: Add Poulsbo support") Signed-off-by: Xiaomeng Tong Signed-off-by: Patrik Jakobsson Link: https://patchwork.freedesktop.org/patch/msgid/20220327052028.2013-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/gma500/psb_intel_display.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/gma500/psb_intel_display.c b/drivers/gpu/drm/gma500/psb_intel_display.c index 531c5485be17..bd3df5adb96f 100644 --- a/drivers/gpu/drm/gma500/psb_intel_display.c +++ b/drivers/gpu/drm/gma500/psb_intel_display.c @@ -536,14 +536,15 @@ void psb_intel_crtc_init(struct drm_device *dev, int pipe, struct drm_crtc *psb_intel_get_crtc_from_pipe(struct drm_device *dev, int pipe) { - struct drm_crtc *crtc = NULL; + struct drm_crtc *crtc; list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { struct gma_crtc *gma_crtc = to_gma_crtc(crtc); + if (gma_crtc->pipe == pipe) - break; + return crtc; } - return crtc; + return NULL; } int gma_connector_clones(struct drm_device *dev, int type_mask) From 09a84dad95fa0d3ff9c796db219cc3fb90d7b96c Mon Sep 17 00:00:00 2001 From: Kathiravan T Date: Fri, 11 Feb 2022 17:44:15 +0530 Subject: [PATCH 0728/1842] arm64: dts: qcom: ipq8074: fix the sleep clock frequency commit f607dd767f5d6800ffbdce5b99ba81763b023781 upstream. Sleep clock frequency should be 32768Hz. Lets fix it. Cc: stable@vger.kernel.org Fixes: 41dac73e243d ("arm64: dts: Add ipq8074 SoC and HK01 board support") Link: https://lore.kernel.org/all/e2a447f8-6024-0369-f698-2027b6edcf9e@codeaurora.org/ Signed-off-by: Kathiravan T Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/1644581655-11568-1-git-send-email-quic_kathirav@quicinc.com Signed-off-by: Greg Kroah-Hartman --- arch/arm64/boot/dts/qcom/ipq8074.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi index 776a6b0f61a6..dca040f66f5f 100644 --- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi +++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi @@ -13,7 +13,7 @@ / { clocks { sleep_clk: sleep_clk { compatible = "fixed-clock"; - clock-frequency = <32000>; + clock-frequency = <32768>; #clock-cells = <0>; }; From 6f3673c8d8eff0c4ab5a5ee0d3ca9717d85419b4 Mon Sep 17 00:00:00 2001 From: Johan Hovold Date: Wed, 27 Apr 2022 08:32:41 +0200 Subject: [PATCH 0729/1842] phy: qcom-qmp: fix struct clk leak on probe errors commit f0a4bc38a12f5a0cc5ad68670d9480e91e6a94df upstream. Make sure to release the pipe clock reference in case of a late probe error (e.g. probe deferral). Fixes: e78f3d15e115 ("phy: qcom-qmp: new qmp phy driver for qcom-chipsets") Cc: stable@vger.kernel.org # 4.12 Cc: Vivek Gautam Reviewed-by: Bjorn Andersson Signed-off-by: Johan Hovold Link: https://lore.kernel.org/r/20220427063243.32576-2-johan+linaro@kernel.org Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- drivers/phy/qualcomm/phy-qcom-qmp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c index 0cda16846962..cb2d722b1bb4 100644 --- a/drivers/phy/qualcomm/phy-qcom-qmp.c +++ b/drivers/phy/qualcomm/phy-qcom-qmp.c @@ -3789,7 +3789,7 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id, * all phys that don't need this. 
*/ snprintf(prop_name, sizeof(prop_name), "pipe%d", id); - qphy->pipe_clk = of_clk_get_by_name(np, prop_name); + qphy->pipe_clk = devm_get_clk_from_child(dev, np, prop_name); if (IS_ERR(qphy->pipe_clk)) { if (cfg->type == PHY_TYPE_PCIE || cfg->type == PHY_TYPE_USB3) { From 39c61f4f7f6f3005c82a6fa7daa1836692d72805 Mon Sep 17 00:00:00 2001 From: Jonathan Bakker Date: Sun, 27 Mar 2022 11:08:50 -0700 Subject: [PATCH 0730/1842] ARM: dts: s5pv210: Remove spi-cs-high on panel in Aries commit 096f58507374e1293a9e9cff8a1ccd5f37780a20 upstream. Since commit 766c6b63aa04 ("spi: fix client driver breakages when using GPIO descriptors"), the panel has been blank due to an inverted CS GPIO. In order to correct this, drop the spi-cs-high from the panel SPI device. Fixes: 766c6b63aa04 ("spi: fix client driver breakages when using GPIO descriptors") Cc: Signed-off-by: Jonathan Bakker Link: https://lore.kernel.org/r/CY4PR04MB05670C771062570E911AF3B4CB1C9@CY4PR04MB0567.namprd04.prod.outlook.com Signed-off-by: Krzysztof Kozlowski Signed-off-by: Greg Kroah-Hartman --- arch/arm/boot/dts/s5pv210-aries.dtsi | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/arm/boot/dts/s5pv210-aries.dtsi b/arch/arm/boot/dts/s5pv210-aries.dtsi index d9b4c51a00d9..9005f0a23e8f 100644 --- a/arch/arm/boot/dts/s5pv210-aries.dtsi +++ b/arch/arm/boot/dts/s5pv210-aries.dtsi @@ -564,7 +564,6 @@ panel@0 { reset-gpios = <&mp05 5 GPIO_ACTIVE_LOW>; vdd3-supply = <&ldo7_reg>; vci-supply = <&ldo17_reg>; - spi-cs-high; spi-max-frequency = <1200000>; pinctrl-names = "default"; From d6b9b220d10eda36a4094bc5bee23acc1b9f8047 Mon Sep 17 00:00:00 2001 From: Arnd Bergmann Date: Wed, 11 Sep 2019 22:31:51 +0200 Subject: [PATCH 0731/1842] ARM: pxa: maybe fix gpio lookup tables commit 2672a4bff6c03a20d5ae460a091f67ee782c3eff upstream. From inspection I found a couple of GPIO lookups that are listed with device "gpio-pxa", but actually have a number from a different gpio controller. Try to rectify that here, with a guess of what the actual device name is. 
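The corrections below follow the usual gpiod lookup convention: the first argument names the gpiochip that actually owns the line, and the offset is local to that chip rather than a global GPIO number. A sketch of a table written that way, with made-up base and line numbers (only the "pca9555.1" and "spi_gpio" labels come from the patch itself):

    #include <linux/gpio/machine.h>

    #define EXPANDER_GPIO_BASE   160     /* made-up global base of the expander chip */
    #define GPIO_LCD_RESET       165     /* made-up global number of the line */

    /* The lookup names the owning chip ("pca9555.1" here) and passes the
     * chip-relative offset, i.e. global number minus the chip's base. */
    static struct gpiod_lookup_table board_lcd_gpio_table = {
            .dev_id = "spi_gpio",
            .table = {
                    GPIO_LOOKUP("pca9555.1",
                                GPIO_LCD_RESET - EXPANDER_GPIO_BASE,
                                "reset", GPIO_ACTIVE_HIGH),
                    { },
            },
    };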
Acked-by: Robert Jarzmik Reviewed-by: Linus Walleij Cc: stable@vger.kernel.org Signed-off-by: Arnd Bergmann Signed-off-by: Greg Kroah-Hartman --- arch/arm/mach-pxa/cm-x300.c | 8 ++++---- arch/arm/mach-pxa/magician.c | 2 +- arch/arm/mach-pxa/tosa.c | 4 ++-- 3 files changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c index 2e35354b61f5..167e871f059e 100644 --- a/arch/arm/mach-pxa/cm-x300.c +++ b/arch/arm/mach-pxa/cm-x300.c @@ -354,13 +354,13 @@ static struct platform_device cm_x300_spi_gpio = { static struct gpiod_lookup_table cm_x300_spi_gpiod_table = { .dev_id = "spi_gpio", .table = { - GPIO_LOOKUP("gpio-pxa", GPIO_LCD_SCL, + GPIO_LOOKUP("pca9555.1", GPIO_LCD_SCL - GPIO_LCD_BASE, "sck", GPIO_ACTIVE_HIGH), - GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DIN, + GPIO_LOOKUP("pca9555.1", GPIO_LCD_DIN - GPIO_LCD_BASE, "mosi", GPIO_ACTIVE_HIGH), - GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DOUT, + GPIO_LOOKUP("pca9555.1", GPIO_LCD_DOUT - GPIO_LCD_BASE, "miso", GPIO_ACTIVE_HIGH), - GPIO_LOOKUP("gpio-pxa", GPIO_LCD_CS, + GPIO_LOOKUP("pca9555.1", GPIO_LCD_CS - GPIO_LCD_BASE, "cs", GPIO_ACTIVE_HIGH), { }, }, diff --git a/arch/arm/mach-pxa/magician.c b/arch/arm/mach-pxa/magician.c index cd9fa465b9b2..9aee8e0f2bb1 100644 --- a/arch/arm/mach-pxa/magician.c +++ b/arch/arm/mach-pxa/magician.c @@ -681,7 +681,7 @@ static struct platform_device bq24022 = { static struct gpiod_lookup_table bq24022_gpiod_table = { .dev_id = "gpio-regulator", .table = { - GPIO_LOOKUP("gpio-pxa", EGPIO_MAGICIAN_BQ24022_ISET2, + GPIO_LOOKUP("htc-egpio-0", EGPIO_MAGICIAN_BQ24022_ISET2 - MAGICIAN_EGPIO_BASE, NULL, GPIO_ACTIVE_HIGH), GPIO_LOOKUP("gpio-pxa", GPIO30_MAGICIAN_BQ24022_nCHARGE_EN, "enable", GPIO_ACTIVE_LOW), diff --git a/arch/arm/mach-pxa/tosa.c b/arch/arm/mach-pxa/tosa.c index 431709725d02..ded5e343e198 100644 --- a/arch/arm/mach-pxa/tosa.c +++ b/arch/arm/mach-pxa/tosa.c @@ -296,9 +296,9 @@ static struct gpiod_lookup_table tosa_mci_gpio_table = { .table = { GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_nSD_DETECT, "cd", GPIO_ACTIVE_LOW), - GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_SD_WP, + GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_SD_WP - TOSA_SCOOP_GPIO_BASE, "wp", GPIO_ACTIVE_LOW), - GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_PWR_ON, + GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_PWR_ON - TOSA_SCOOP_GPIO_BASE, "power", GPIO_ACTIVE_HIGH), { }, }, From 6182c71a0c04095b526498efcd6d0961f3e1172f Mon Sep 17 00:00:00 2001 From: Steve French Date: Thu, 12 May 2022 10:18:00 -0500 Subject: [PATCH 0732/1842] SMB3: EBADF/EIO errors in rename/open caused by race condition in smb2_compound_op MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 0a55cf74ffb5d004b93647e4389096880ce37d6b upstream. There is a race condition in smb2_compound_op: after_close: num_rqst++; if (cfile) { cifsFileInfo_put(cfile); // sends SMB2_CLOSE to the server cfile = NULL; This is triggered by smb2_query_path_info operation that happens during revalidate_dentry. In smb2_query_path_info, get_readable_path is called to load the cfile, increasing the reference counter. If in the meantime, this reference becomes the very last, this call to cifsFileInfo_put(cfile) will trigger a SMB2_CLOSE request sent to the server just before sending this compound request – and so then the compound request fails either with EBADF/EIO depending on the timing at the server, because the handle is already closed. 
In the first scenario, the race seems to be happening between smb2_query_path_info triggered by the rename operation, and between “cleanup” of asynchronous writes – while fsync(fd) likely waits for the asynchronous writes to complete, releasing the writeback structures can happen after the close(fd) call. So the EBADF/EIO errors will pop up if the timing is such that: 1) There are still outstanding references after close(fd) in the writeback structures 2) smb2_query_path_info successfully fetches the cfile, increasing the refcounter by 1 3) All writeback structures release the same cfile, reducing refcounter to 1 4) smb2_compound_op is called with that cfile In the second scenario, the race seems to be similar – here open triggers the smb2_query_path_info operation, and if all other threads in the meantime decrease the refcounter to 1 similarly to the first scenario, again SMB2_CLOSE will be sent to the server just before issuing the compound request. This case is harder to reproduce. See https://bugzilla.samba.org/show_bug.cgi?id=15051 Cc: stable@vger.kernel.org Fixes: 8de9e86c67ba ("cifs: create a helper to find a writeable handle by path name") Signed-off-by: Ondrej Hubsch Reviewed-by: Ronnie Sahlberg Reviewed-by: Paulo Alcantara (SUSE) Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman --- fs/cifs/smb2inode.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c index a718dc77e604..97cd4df04060 100644 --- a/fs/cifs/smb2inode.c +++ b/fs/cifs/smb2inode.c @@ -371,8 +371,6 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon, num_rqst++; if (cfile) { - cifsFileInfo_put(cfile); - cfile = NULL; rc = compound_send_recv(xid, ses, server, flags, num_rqst - 2, &rqst[1], &resp_buftype[1], From 0ac587c61fc1ddf536cdbe1c239bc536847d5505 Mon Sep 17 00:00:00 2001 From: Akira Yokosawa Date: Wed, 1 Jun 2022 23:34:06 +0900 Subject: [PATCH 0733/1842] docs/conf.py: Cope with removal of language=None in Sphinx 5.0.0 commit 627f01eab93d8671d4e4afee9b148f9998d20e7c upstream. One of the changes in Sphinx 5.0.0 [1] says [sic]: 5.0.0 final - #10474: language does not accept None as it value. The default value of language becomes to 'en' now. [1]: https://www.sphinx-doc.org/en/master/changes.html#release-5-0-0-released-may-30-2022 It results in a new warning from Sphinx 5.0.0 [sic]: WARNING: Invalid configuration value found: 'language = None'. Update your configuration to a valid langauge code. Falling back to 'en' (English). Silence the warning by using 'en'. It works with all the Sphinx versions required for building kernel documentation (1.7.9 or later). Signed-off-by: Akira Yokosawa Link: https://lore.kernel.org/r/bd0c2ddc-2401-03cb-4526-79ca664e1cbe@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Jonathan Corbet Signed-off-by: Greg Kroah-Hartman --- Documentation/conf.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Documentation/conf.py b/Documentation/conf.py index ed2b43ec7754..f47c3f1682db 100644 --- a/Documentation/conf.py +++ b/Documentation/conf.py @@ -176,7 +176,7 @@ finally: # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. 
-language = None +language = 'en' # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: From ec029087dfef70a89c5ff0c6433bd4da211cbbad Mon Sep 17 00:00:00 2001 From: Dinh Nguyen Date: Wed, 11 May 2022 12:54:46 -0500 Subject: [PATCH 0734/1842] dt-bindings: gpio: altera: correct interrupt-cells commit 3a21c3ac93aff7b4522b152399df8f6a041df56d upstream. update documentation to correctly state the interrupt-cells to be 2. Cc: stable@vger.kernel.org Fixes: 4fd9bbc6e071 ("drivers/gpio: Altera soft IP GPIO driver devicetree binding") Signed-off-by: Dinh Nguyen Signed-off-by: Greg Kroah-Hartman --- Documentation/devicetree/bindings/gpio/gpio-altera.txt | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/Documentation/devicetree/bindings/gpio/gpio-altera.txt b/Documentation/devicetree/bindings/gpio/gpio-altera.txt index 146e554b3c67..2a80e272cd66 100644 --- a/Documentation/devicetree/bindings/gpio/gpio-altera.txt +++ b/Documentation/devicetree/bindings/gpio/gpio-altera.txt @@ -9,8 +9,9 @@ Required properties: - The second cell is reserved and is currently unused. - gpio-controller : Marks the device node as a GPIO controller. - interrupt-controller: Mark the device node as an interrupt controller -- #interrupt-cells : Should be 1. The interrupt type is fixed in the hardware. +- #interrupt-cells : Should be 2. The interrupt type is fixed in the hardware. - The first cell is the GPIO offset number within the GPIO controller. + - The second cell is the interrupt trigger type and level flags. - interrupts: Specify the interrupt. - altr,interrupt-type: Specifies the interrupt trigger type the GPIO hardware is synthesized. This field is required if the Altera GPIO controller @@ -38,6 +39,6 @@ gpio_altr: gpio@ff200000 { altr,interrupt-type = ; #gpio-cells = <2>; gpio-controller; - #interrupt-cells = <1>; + #interrupt-cells = <2>; interrupt-controller; }; From 19e5aac38abca5213bab8b9a1dab25b9adf1ff68 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Eugenio=20P=C3=A9rez?= Date: Thu, 19 May 2022 16:59:19 +0200 Subject: [PATCH 0735/1842] vdpasim: allow to enable a vq repeatedly MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 242436973831aa97e8ce19533c6c912ea8def31b upstream. Code must be resilient to enable a queue many times. At the moment the queue is resetting so it's definitely not the expected behavior. v2: set vq->ready = 0 at disable. Fixes: 2c53d0f64c06 ("vdpasim: vDPA device simulator") Cc: stable@vger.kernel.org Signed-off-by: Eugenio Pérez Message-Id: <20220519145919.772896-1-eperezma@redhat.com> Signed-off-by: Michael S. 
Tsirkin Reviewed-by: Stefano Garzarella Signed-off-by: Greg Kroah-Hartman --- drivers/vdpa/vdpa_sim/vdpa_sim.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c index f2ad450db547..e65c0fa95d31 100644 --- a/drivers/vdpa/vdpa_sim/vdpa_sim.c +++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c @@ -473,11 +473,14 @@ static void vdpasim_set_vq_ready(struct vdpa_device *vdpa, u16 idx, bool ready) { struct vdpasim *vdpasim = vdpa_to_sim(vdpa); struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx]; + bool old_ready; spin_lock(&vdpasim->lock); + old_ready = vq->ready; vq->ready = ready; - if (vq->ready) + if (vq->ready && !old_ready) { vdpasim_queue_ready(vdpasim, idx); + } spin_unlock(&vdpasim->lock); } From 77692c02e1517c54f2fd0535f41aa4286ac9f140 Mon Sep 17 00:00:00 2001 From: Tejun Heo Date: Fri, 13 May 2022 20:55:45 -1000 Subject: [PATCH 0736/1842] blk-iolatency: Fix inflight count imbalances and IO hangs on offline commit 8a177a36da6c54c98b8685d4f914cb3637d53c0d upstream. iolatency needs to track the number of inflight IOs per cgroup. As this tracking can be expensive, it is disabled when no cgroup has iolatency configured for the device. To ensure that the inflight counters stay balanced, iolatency_set_limit() freezes the request_queue while manipulating the enabled counter, which ensures that no IO is in flight and thus all counters are zero. Unfortunately, iolatency_set_limit() isn't the only place where the enabled counter is manipulated. iolatency_pd_offline() can also dec the counter and trigger disabling. As this disabling happens without freezing the q, this can easily happen while some IOs are in flight and thus leak the counts. This can be easily demonstrated by turning on iolatency on an one empty cgroup while IOs are in flight in other cgroups and then removing the cgroup. Note that iolatency shouldn't have been enabled elsewhere in the system to ensure that removing the cgroup disables iolatency for the whole device. The following keeps flipping on and off iolatency on sda: echo +io > /sys/fs/cgroup/cgroup.subtree_control while true; do mkdir -p /sys/fs/cgroup/test echo '8:0 target=100000' > /sys/fs/cgroup/test/io.latency sleep 1 rmdir /sys/fs/cgroup/test sleep 1 done and there's concurrent fio generating direct rand reads: fio --name test --filename=/dev/sda --direct=1 --rw=randread \ --runtime=600 --time_based --iodepth=256 --numjobs=4 --bs=4k while monitoring with the following drgn script: while True: for css in css_for_each_descendant_pre(prog['blkcg_root'].css.address_of_()): for pos in hlist_for_each(container_of(css, 'struct blkcg', 'css').blkg_list): blkg = container_of(pos, 'struct blkcg_gq', 'blkcg_node') pd = blkg.pd[prog['blkcg_policy_iolatency'].plid] if pd.value_() == 0: continue iolat = container_of(pd, 'struct iolatency_grp', 'pd') inflight = iolat.rq_wait.inflight.counter.value_() if inflight: print(f'inflight={inflight} {disk_name(blkg.q.disk).decode("utf-8")} ' f'{cgroup_path(css.cgroup).decode("utf-8")}') time.sleep(1) The monitoring output looks like the following: inflight=1 sda /user.slice inflight=1 sda /user.slice ... 
inflight=14 sda /user.slice inflight=13 sda /user.slice inflight=17 sda /user.slice inflight=15 sda /user.slice inflight=18 sda /user.slice inflight=17 sda /user.slice inflight=20 sda /user.slice inflight=19 sda /user.slice <- fio stopped, inflight stuck at 19 inflight=19 sda /user.slice inflight=19 sda /user.slice If a cgroup with stuck inflight ends up getting throttled, the throttled IOs will never get issued as there's no completion event to wake it up leading to an indefinite hang. This patch fixes the bug by unifying enable handling into a work item which is automatically kicked off from iolatency_set_min_lat_nsec() which is called from both iolatency_set_limit() and iolatency_pd_offline() paths. Punting to a work item is necessary as iolatency_pd_offline() is called under spinlocks while freezing a request_queue requires a sleepable context. This also simplifies the code reducing LOC sans the comments and avoids the unnecessary freezes which were happening whenever a cgroup's latency target is newly set or cleared. Signed-off-by: Tejun Heo Cc: Josef Bacik Cc: Liu Bo Fixes: 8c772a9bfc7c ("blk-iolatency: fix IO hang due to negative inflight counter") Cc: stable@vger.kernel.org # v5.0+ Link: https://lore.kernel.org/r/Yn9ScX6Nx2qIiQQi@slm.duckdns.org Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/blk-iolatency.c | 122 ++++++++++++++++++++++-------------------- 1 file changed, 64 insertions(+), 58 deletions(-) diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c index d8b0d8bd132b..74511a060d59 100644 --- a/block/blk-iolatency.c +++ b/block/blk-iolatency.c @@ -86,7 +86,17 @@ struct iolatency_grp; struct blk_iolatency { struct rq_qos rqos; struct timer_list timer; - atomic_t enabled; + + /* + * ->enabled is the master enable switch gating the throttling logic and + * inflight tracking. The number of cgroups which have iolat enabled is + * tracked in ->enable_cnt, and ->enable is flipped on/off accordingly + * from ->enable_work with the request_queue frozen. For details, See + * blkiolatency_enable_work_fn(). 
+ */ + bool enabled; + atomic_t enable_cnt; + struct work_struct enable_work; }; static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos) @@ -94,11 +104,6 @@ static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos) return container_of(rqos, struct blk_iolatency, rqos); } -static inline bool blk_iolatency_enabled(struct blk_iolatency *blkiolat) -{ - return atomic_read(&blkiolat->enabled) > 0; -} - struct child_latency_info { spinlock_t lock; @@ -463,7 +468,7 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio) struct blkcg_gq *blkg = bio->bi_blkg; bool issue_as_root = bio_issue_as_root_blkg(bio); - if (!blk_iolatency_enabled(blkiolat)) + if (!blkiolat->enabled) return; while (blkg && blkg->parent) { @@ -593,7 +598,6 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio) u64 window_start; u64 now; bool issue_as_root = bio_issue_as_root_blkg(bio); - bool enabled = false; int inflight = 0; blkg = bio->bi_blkg; @@ -604,8 +608,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio) if (!iolat) return; - enabled = blk_iolatency_enabled(iolat->blkiolat); - if (!enabled) + if (!iolat->blkiolat->enabled) return; now = ktime_to_ns(ktime_get()); @@ -644,6 +647,7 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos) struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos); del_timer_sync(&blkiolat->timer); + flush_work(&blkiolat->enable_work); blkcg_deactivate_policy(rqos->q, &blkcg_policy_iolatency); kfree(blkiolat); } @@ -715,6 +719,44 @@ static void blkiolatency_timer_fn(struct timer_list *t) rcu_read_unlock(); } +/** + * blkiolatency_enable_work_fn - Enable or disable iolatency on the device + * @work: enable_work of the blk_iolatency of interest + * + * iolatency needs to keep track of the number of in-flight IOs per cgroup. This + * is relatively expensive as it involves walking up the hierarchy twice for + * every IO. Thus, if iolatency is not enabled in any cgroup for the device, we + * want to disable the in-flight tracking. + * + * We have to make sure that the counting is balanced - we don't want to leak + * the in-flight counts by disabling accounting in the completion path while IOs + * are in flight. This is achieved by ensuring that no IO is in flight by + * freezing the queue while flipping ->enabled. As this requires a sleepable + * context, ->enabled flipping is punted to this work function. + */ +static void blkiolatency_enable_work_fn(struct work_struct *work) +{ + struct blk_iolatency *blkiolat = container_of(work, struct blk_iolatency, + enable_work); + bool enabled; + + /* + * There can only be one instance of this function running for @blkiolat + * and it's guaranteed to be executed at least once after the latest + * ->enabled_cnt modification. Acting on the latest ->enable_cnt is + * sufficient. + * + * Also, we know @blkiolat is safe to access as ->enable_work is flushed + * in blkcg_iolatency_exit(). 
+ */ + enabled = atomic_read(&blkiolat->enable_cnt); + if (enabled != blkiolat->enabled) { + blk_mq_freeze_queue(blkiolat->rqos.q); + blkiolat->enabled = enabled; + blk_mq_unfreeze_queue(blkiolat->rqos.q); + } +} + int blk_iolatency_init(struct request_queue *q) { struct blk_iolatency *blkiolat; @@ -740,17 +782,15 @@ int blk_iolatency_init(struct request_queue *q) } timer_setup(&blkiolat->timer, blkiolatency_timer_fn, 0); + INIT_WORK(&blkiolat->enable_work, blkiolatency_enable_work_fn); return 0; } -/* - * return 1 for enabling iolatency, return -1 for disabling iolatency, otherwise - * return 0. - */ -static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val) +static void iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val) { struct iolatency_grp *iolat = blkg_to_lat(blkg); + struct blk_iolatency *blkiolat = iolat->blkiolat; u64 oldval = iolat->min_lat_nsec; iolat->min_lat_nsec = val; @@ -758,13 +798,15 @@ static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val) iolat->cur_win_nsec = min_t(u64, iolat->cur_win_nsec, BLKIOLATENCY_MAX_WIN_SIZE); - if (!oldval && val) - return 1; + if (!oldval && val) { + if (atomic_inc_return(&blkiolat->enable_cnt) == 1) + schedule_work(&blkiolat->enable_work); + } if (oldval && !val) { blkcg_clear_delay(blkg); - return -1; + if (atomic_dec_return(&blkiolat->enable_cnt) == 0) + schedule_work(&blkiolat->enable_work); } - return 0; } static void iolatency_clear_scaling(struct blkcg_gq *blkg) @@ -796,7 +838,6 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf, u64 lat_val = 0; u64 oldval; int ret; - int enable = 0; ret = blkg_conf_prep(blkcg, &blkcg_policy_iolatency, buf, &ctx); if (ret) @@ -831,41 +872,12 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf, blkg = ctx.blkg; oldval = iolat->min_lat_nsec; - enable = iolatency_set_min_lat_nsec(blkg, lat_val); - if (enable) { - if (!blk_get_queue(blkg->q)) { - ret = -ENODEV; - goto out; - } - - blkg_get(blkg); - } - - if (oldval != iolat->min_lat_nsec) { + iolatency_set_min_lat_nsec(blkg, lat_val); + if (oldval != iolat->min_lat_nsec) iolatency_clear_scaling(blkg); - } - ret = 0; out: blkg_conf_finish(&ctx); - if (ret == 0 && enable) { - struct iolatency_grp *tmp = blkg_to_lat(blkg); - struct blk_iolatency *blkiolat = tmp->blkiolat; - - blk_mq_freeze_queue(blkg->q); - - if (enable == 1) - atomic_inc(&blkiolat->enabled); - else if (enable == -1) - atomic_dec(&blkiolat->enabled); - else - WARN_ON_ONCE(1); - - blk_mq_unfreeze_queue(blkg->q); - - blkg_put(blkg); - blk_put_queue(blkg->q); - } return ret ?: nbytes; } @@ -1006,14 +1018,8 @@ static void iolatency_pd_offline(struct blkg_policy_data *pd) { struct iolatency_grp *iolat = pd_to_lat(pd); struct blkcg_gq *blkg = lat_to_blkg(iolat); - struct blk_iolatency *blkiolat = iolat->blkiolat; - int ret; - ret = iolatency_set_min_lat_nsec(blkg, 0); - if (ret == 1) - atomic_inc(&blkiolat->enabled); - if (ret == -1) - atomic_dec(&blkiolat->enabled); + iolatency_set_min_lat_nsec(blkg, 0); iolatency_clear_scaling(blkg); } From 67e3404889cf514a50d3888caed5012f63925e17 Mon Sep 17 00:00:00 2001 From: Mao Jinlong Date: Wed, 9 Mar 2022 06:22:06 -0800 Subject: [PATCH 0737/1842] coresight: core: Fix coresight device probe failure issue commit 8c1d3f79d9ca48e406b78e90e94cf09a8c076bf2 upstream. It is possibe that probe failure issue happens when the device and its child_device's probe happens at the same time. 
In coresight_make_links, has_conns_grp is true for parent, but has_conns_grp is false for child device as has_conns_grp is set to true in coresight_create_conns_sysfs_group. The probe of parent device will fail at this condition. Add has_conns_grp check for child device before make the links and make the process from device_register to connection_create be atomic to avoid this probe failure issue. Cc: stable@vger.kernel.org Suggested-by: Suzuki K Poulose Suggested-by: Mike Leach Signed-off-by: Mao Jinlong Link: https://lore.kernel.org/r/20220309142206.15632-1-quic_jinlmao@quicinc.com [ Added Cc stable ] Signed-off-by: Suzuki K Poulose Signed-off-by: Greg Kroah-Hartman --- drivers/hwtracing/coresight/coresight-core.c | 33 +++++++++++++------- 1 file changed, 22 insertions(+), 11 deletions(-) diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c index 424c296845db..e8a7f47b8fce 100644 --- a/drivers/hwtracing/coresight/coresight-core.c +++ b/drivers/hwtracing/coresight/coresight-core.c @@ -1337,7 +1337,7 @@ static int coresight_fixup_device_conns(struct coresight_device *csdev) continue; conn->child_dev = coresight_find_csdev_by_fwnode(conn->child_fwnode); - if (conn->child_dev) { + if (conn->child_dev && conn->child_dev->has_conns_grp) { ret = coresight_make_links(csdev, conn, conn->child_dev); if (ret) @@ -1486,6 +1486,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc) int nr_refcnts = 1; atomic_t *refcnts = NULL; struct coresight_device *csdev; + bool registered = false; csdev = kzalloc(sizeof(*csdev), GFP_KERNEL); if (!csdev) { @@ -1506,7 +1507,8 @@ struct coresight_device *coresight_register(struct coresight_desc *desc) refcnts = kcalloc(nr_refcnts, sizeof(*refcnts), GFP_KERNEL); if (!refcnts) { ret = -ENOMEM; - goto err_free_csdev; + kfree(csdev); + goto err_out; } csdev->refcnt = refcnts; @@ -1530,6 +1532,13 @@ struct coresight_device *coresight_register(struct coresight_desc *desc) csdev->dev.fwnode = fwnode_handle_get(dev_fwnode(desc->dev)); dev_set_name(&csdev->dev, "%s", desc->name); + /* + * Make sure the device registration and the connection fixup + * are synchronised, so that we don't see uninitialised devices + * on the coresight bus while trying to resolve the connections. + */ + mutex_lock(&coresight_mutex); + ret = device_register(&csdev->dev); if (ret) { put_device(&csdev->dev); @@ -1537,7 +1546,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc) * All resources are free'd explicitly via * coresight_device_release(), triggered from put_device(). */ - goto err_out; + goto out_unlock; } if (csdev->type == CORESIGHT_DEV_TYPE_SINK || @@ -1552,11 +1561,11 @@ struct coresight_device *coresight_register(struct coresight_desc *desc) * from put_device(), which is in turn called from * function device_unregister(). 
*/ - goto err_out; + goto out_unlock; } } - - mutex_lock(&coresight_mutex); + /* Device is now registered */ + registered = true; ret = coresight_create_conns_sysfs_group(csdev); if (!ret) @@ -1566,16 +1575,18 @@ struct coresight_device *coresight_register(struct coresight_desc *desc) if (!ret && cti_assoc_ops && cti_assoc_ops->add) cti_assoc_ops->add(csdev); +out_unlock: mutex_unlock(&coresight_mutex); - if (ret) { + /* Success */ + if (!ret) + return csdev; + + /* Unregister the device if needed */ + if (registered) { coresight_unregister(csdev); return ERR_PTR(ret); } - return csdev; - -err_free_csdev: - kfree(csdev); err_out: /* Cleanup the connection information */ coresight_release_platform_data(NULL, desc->pdata); From 2156dc390402043ba5982489c6625adcb0b0975c Mon Sep 17 00:00:00 2001 From: Johan Hovold Date: Wed, 27 Apr 2022 08:32:42 +0200 Subject: [PATCH 0738/1842] phy: qcom-qmp: fix reset-controller leak on probe errors commit 4d2900f20edfe541f75756a00deeb2ffe7c66bc1 upstream. Make sure to release the lane reset controller in case of a late probe error (e.g. probe deferral). Note that due to the reset controller being defined in devicetree in "lane" child nodes, devm_reset_control_get_exclusive() cannot be used directly. Fixes: e78f3d15e115 ("phy: qcom-qmp: new qmp phy driver for qcom-chipsets") Cc: stable@vger.kernel.org # 4.12 Cc: Vivek Gautam Reviewed-by: Philipp Zabel Signed-off-by: Johan Hovold Reviewed-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220427063243.32576-3-johan+linaro@kernel.org Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- drivers/phy/qualcomm/phy-qcom-qmp.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c index cb2d722b1bb4..ea46950c5d2a 100644 --- a/drivers/phy/qualcomm/phy-qcom-qmp.c +++ b/drivers/phy/qualcomm/phy-qcom-qmp.c @@ -3717,6 +3717,11 @@ static const struct phy_ops qcom_qmp_pcie_ufs_ops = { .owner = THIS_MODULE, }; +static void qcom_qmp_reset_control_put(void *data) +{ + reset_control_put(data); +} + static int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id, void __iomem *serdes, const struct qmp_phy_cfg *cfg) @@ -3811,6 +3816,10 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id, dev_err(dev, "failed to get lane%d reset\n", id); return PTR_ERR(qphy->lane_rst); } + ret = devm_add_action_or_reset(dev, qcom_qmp_reset_control_put, + qphy->lane_rst); + if (ret) + return ret; } if (cfg->type == PHY_TYPE_UFS || cfg->type == PHY_TYPE_PCIE) From 70124d94f4c9164207ab009ac780d4d869ead8aa Mon Sep 17 00:00:00 2001 From: Alex Elder Date: Thu, 26 May 2022 10:23:13 -0500 Subject: [PATCH 0739/1842] net: ipa: fix page free in ipa_endpoint_trans_release() commit 155c0c90bca918de6e4327275dfc1d97fd604115 upstream. Currently the (possibly compound) page used for receive buffers are freed using __free_pages(). But according to this comment above the definition of that function, that's wrong: If you want to use the page's reference count to decide when to free the allocation, you should allocate a compound page, and use put_page() instead of __free_pages(). Convert the call to __free_pages() in ipa_endpoint_trans_release() to use put_page() instead. 
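The rule this fix (and the companion replenish-path fix that follows) relies on can be sketched independently of the driver; the function names and the order parameter below are illustrative assumptions, not the actual ipa_endpoint code. A buffer allocated as a compound page is released by dropping its reference, so no separately remembered order is needed at free time:

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Hedged sketch, not the ipa_endpoint code; rx_order is an assumed parameter. */
    static struct page *example_rx_buffer_alloc(unsigned int rx_order)
    {
    	/* __GFP_COMP allocates one compound page spanning the whole buffer */
    	return alloc_pages(GFP_KERNEL | __GFP_COMP, rx_order);
    }

    static void example_rx_buffer_free(struct page *page)
    {
    	if (page)
    		put_page(page);	/* dropping the last reference frees the compound page */
    }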
Fixes: ed23f02680caa ("net: ipa: define per-endpoint receive buffer size") Signed-off-by: Alex Elder Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/ipa/ipa_endpoint.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c index eb25a13042ea..decee3dbe394 100644 --- a/drivers/net/ipa/ipa_endpoint.c +++ b/drivers/net/ipa/ipa_endpoint.c @@ -1179,7 +1179,7 @@ void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint, struct page *page = trans->data; if (page) - __free_pages(page, get_order(IPA_RX_BUFFER_SIZE)); + put_page(page); } } From d27f0000d7d46e3adcc4c04a2208ae2d7ce711c9 Mon Sep 17 00:00:00 2001 From: Alex Elder Date: Thu, 26 May 2022 10:23:14 -0500 Subject: [PATCH 0740/1842] net: ipa: fix page free in ipa_endpoint_replenish_one() commit 70132763d5d2e94cd185e3aa92ac6a3ba89068fa upstream. Currently the (possibly compound) pages used for receive buffers are freed using __free_pages(). But according to this comment above the definition of that function, that's wrong: If you want to use the page's reference count to decide when to free the allocation, you should allocate a compound page, and use put_page() instead of __free_pages(). Convert the call to __free_pages() in ipa_endpoint_replenish_one() to use put_page() instead. Fixes: 6a606b90153b8 ("net: ipa: allocate transaction in replenish loop") Signed-off-by: Alex Elder Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/ipa/ipa_endpoint.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c index decee3dbe394..7e1208446e04 100644 --- a/drivers/net/ipa/ipa_endpoint.c +++ b/drivers/net/ipa/ipa_endpoint.c @@ -884,7 +884,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint) err_trans_free: gsi_trans_free(trans); err_free_pages: - __free_pages(page, get_order(IPA_RX_BUFFER_SIZE)); + put_page(page); return -ENOMEM; } From af26bfb04a17639b2bb1e9cd6912b4dceefa5e58 Mon Sep 17 00:00:00 2001 From: Jeffrey Mitchell Date: Mon, 6 Jun 2022 17:32:48 +0300 Subject: [PATCH 0741/1842] xfs: set inode size after creating symlink commit 8aa921a95335d0a8c8e2be35a44467e7c91ec3e4 upstream. When XFS creates a new symlink, it writes its size to disk but not to the VFS inode. This causes i_size_read() to return 0 for that symlink until it is re-read from disk, for example when the system is rebooted. I found this inconsistency while protecting directories with eCryptFS. The command "stat path/to/symlink/in/ecryptfs" will report "Size: 0" if the symlink was created after the last reboot on an XFS root. Call i_size_write() in xfs_symlink() Signed-off-by: Jeffrey Mitchell Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Reviewed-by: Brian Foster Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_symlink.c | 1 + 1 file changed, 1 insertion(+) diff --git a/fs/xfs/xfs_symlink.c b/fs/xfs/xfs_symlink.c index 8e88a7ca387e..8d3abf06c54f 100644 --- a/fs/xfs/xfs_symlink.c +++ b/fs/xfs/xfs_symlink.c @@ -300,6 +300,7 @@ xfs_symlink( } ASSERT(pathlen == 0); } + i_size_write(VFS_I(ip), ip->i_d.di_size); /* * Create the directory entry for the symlink. 
From 643ceee253a45ac3e8be5518d5779cb3c9464d13 Mon Sep 17 00:00:00 2001 From: Brian Foster Date: Mon, 6 Jun 2022 17:32:49 +0300 Subject: [PATCH 0742/1842] xfs: sync lazy sb accounting on quiesce of read-only mounts commit 50d25484bebe94320c49dd1347d3330c7063bbdb upstream. xfs_log_sbcount() syncs the superblock specifically to accumulate the in-core percpu superblock counters and commit them to disk. This is required to maintain filesystem consistency across quiesce (freeze, read-only mount/remount) or unmount when lazy superblock accounting is enabled because individual transactions do not update the superblock directly. This mechanism works as expected for writable mounts, but xfs_log_sbcount() skips the update for read-only mounts. Read-only mounts otherwise still allow log recovery and write out an unmount record during log quiesce. If a read-only mount performs log recovery, it can modify the in-core superblock counters and write an unmount record when the filesystem unmounts without ever syncing the in-core counters. This leaves the filesystem with a clean log but in an inconsistent state with regard to lazy sb counters. Update xfs_log_sbcount() to use the same logic xfs_log_unmount_write() uses to determine when to write an unmount record. This ensures that lazy accounting is always synced before the log is cleaned. Refactor this logic into a new helper to distinguish between a writable filesystem and a writable log. Specifically, the log is writable unless the filesystem is mounted with the norecovery mount option, the underlying log device is read-only, or the filesystem is shutdown. Drop the freeze state check because the update is already allowed during the freezing process and no context calls this function on an already frozen fs. Also, retain the shutdown check in xfs_log_unmount_write() to catch the case where the preceding log force might have triggered a shutdown. Signed-off-by: Brian Foster Reviewed-by: Gao Xiang Reviewed-by: Allison Henderson Reviewed-by: Darrick J. Wong Reviewed-by: Bill O'Donnell Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_log.c | 28 ++++++++++++++++++++-------- fs/xfs/xfs_log.h | 1 + fs/xfs/xfs_mount.c | 3 +-- 3 files changed, 22 insertions(+), 10 deletions(-) diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index fa2d05e65ff1..b445e63cbc3c 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -347,6 +347,25 @@ xlog_tic_add_region(xlog_ticket_t *tic, uint len, uint type) tic->t_res_num++; } +bool +xfs_log_writable( + struct xfs_mount *mp) +{ + /* + * Never write to the log on norecovery mounts, if the block device is + * read-only, or if the filesystem is shutdown. Read-only mounts still + * allow internal writes for log recovery and unmount purposes, so don't + * restrict that case here. + */ + if (mp->m_flags & XFS_MOUNT_NORECOVERY) + return false; + if (xfs_readonly_buftarg(mp->m_log->l_targ)) + return false; + if (XFS_FORCED_SHUTDOWN(mp)) + return false; + return true; +} + /* * Replenish the byte reservation required by moving the grant write head. */ @@ -886,15 +905,8 @@ xfs_log_unmount_write( { struct xlog *log = mp->m_log; - /* - * Don't write out unmount record on norecovery mounts or ro devices. - * Or, if we are doing a forced umount (typically because of IO errors). 
- */ - if (mp->m_flags & XFS_MOUNT_NORECOVERY || - xfs_readonly_buftarg(log->l_targ)) { - ASSERT(mp->m_flags & XFS_MOUNT_RDONLY); + if (!xfs_log_writable(mp)) return; - } xfs_log_force(mp, XFS_LOG_SYNC); diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h index 58c3fcbec94a..98c913da7587 100644 --- a/fs/xfs/xfs_log.h +++ b/fs/xfs/xfs_log.h @@ -127,6 +127,7 @@ int xfs_log_reserve(struct xfs_mount *mp, int xfs_log_regrant(struct xfs_mount *mp, struct xlog_ticket *tic); void xfs_log_unmount(struct xfs_mount *mp); int xfs_log_force_umount(struct xfs_mount *mp, int logerror); +bool xfs_log_writable(struct xfs_mount *mp); struct xlog_ticket *xfs_log_ticket_get(struct xlog_ticket *ticket); void xfs_log_ticket_put(struct xlog_ticket *ticket); diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 7110507a2b6b..a62b8a574409 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -1176,8 +1176,7 @@ xfs_fs_writable( int xfs_log_sbcount(xfs_mount_t *mp) { - /* allow this to proceed during the freeze sequence... */ - if (!xfs_fs_writable(mp, SB_FREEZE_COMPLETE)) + if (!xfs_log_writable(mp)) return 0; /* From 893cf5f68a4c31f97929888dfff3d3046996af67 Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Mon, 6 Jun 2022 17:32:50 +0300 Subject: [PATCH 0743/1842] xfs: fix chown leaking delalloc quota blocks when fssetxattr fails commit 1aecf3734a95f3c167d1495550ca57556d33f7ec upstream. While refactoring the quota code to create a function to allocate inode change transactions, I noticed that xfs_qm_vop_chown_reserve does more than just make reservations: it also *modifies* the incore counts directly to handle the owner id change for the delalloc blocks. I then observed that the fssetxattr code continues validating input arguments after making the quota reservation but before dirtying the transaction. If the routine decides to error out, it fails to undo the accounting switch! This leads to incorrect quota reservation and failure down the line. We can fix this by making the reservation function do only that -- for the new dquot, it reserves ondisk and delalloc blocks to the transaction, and the old dquot hangs on to its incore reservation for now. Once we actually switch the dquots, we can then update the incore reservations because we've dirtied the transaction and it's too late to turn back now. No fixes tag because this has been broken since the start of git. Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Reviewed-by: Brian Foster Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_qm.c | 92 +++++++++++++++++++------------------------------ 1 file changed, 35 insertions(+), 57 deletions(-) diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c index b2a9abee8b2b..64e5da33733b 100644 --- a/fs/xfs/xfs_qm.c +++ b/fs/xfs/xfs_qm.c @@ -1785,6 +1785,29 @@ xfs_qm_vop_chown( xfs_trans_mod_dquot(tp, newdq, bfield, ip->i_d.di_nblocks); xfs_trans_mod_dquot(tp, newdq, XFS_TRANS_DQ_ICOUNT, 1); + /* + * Back when we made quota reservations for the chown, we reserved the + * ondisk blocks + delalloc blocks with the new dquot. Now that we've + * switched the dquots, decrease the new dquot's block reservation + * (having already bumped up the real counter) so that we don't have + * any reservation to give back when we commit. + */ + xfs_trans_mod_dquot(tp, newdq, XFS_TRANS_DQ_RES_BLKS, + -ip->i_delayed_blks); + + /* + * Give the incore reservation for delalloc blocks back to the old + * dquot. 
We don't normally handle delalloc quota reservations + * transactionally, so just lock the dquot and subtract from the + * reservation. Dirty the transaction because it's too late to turn + * back now. + */ + tp->t_flags |= XFS_TRANS_DIRTY; + xfs_dqlock(prevdq); + ASSERT(prevdq->q_blk.reserved >= ip->i_delayed_blks); + prevdq->q_blk.reserved -= ip->i_delayed_blks; + xfs_dqunlock(prevdq); + /* * Take an extra reference, because the inode is going to keep * this dquot pointer even after the trans_commit. @@ -1807,84 +1830,39 @@ xfs_qm_vop_chown_reserve( uint flags) { struct xfs_mount *mp = ip->i_mount; - uint64_t delblks; unsigned int blkflags; - struct xfs_dquot *udq_unres = NULL; - struct xfs_dquot *gdq_unres = NULL; - struct xfs_dquot *pdq_unres = NULL; struct xfs_dquot *udq_delblks = NULL; struct xfs_dquot *gdq_delblks = NULL; struct xfs_dquot *pdq_delblks = NULL; - int error; - ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL|XFS_ILOCK_SHARED)); ASSERT(XFS_IS_QUOTA_RUNNING(mp)); - delblks = ip->i_delayed_blks; blkflags = XFS_IS_REALTIME_INODE(ip) ? XFS_QMOPT_RES_RTBLKS : XFS_QMOPT_RES_REGBLKS; if (XFS_IS_UQUOTA_ON(mp) && udqp && - i_uid_read(VFS_I(ip)) != udqp->q_id) { + i_uid_read(VFS_I(ip)) != udqp->q_id) udq_delblks = udqp; - /* - * If there are delayed allocation blocks, then we have to - * unreserve those from the old dquot, and add them to the - * new dquot. - */ - if (delblks) { - ASSERT(ip->i_udquot); - udq_unres = ip->i_udquot; - } - } + if (XFS_IS_GQUOTA_ON(ip->i_mount) && gdqp && - i_gid_read(VFS_I(ip)) != gdqp->q_id) { + i_gid_read(VFS_I(ip)) != gdqp->q_id) gdq_delblks = gdqp; - if (delblks) { - ASSERT(ip->i_gdquot); - gdq_unres = ip->i_gdquot; - } - } if (XFS_IS_PQUOTA_ON(ip->i_mount) && pdqp && - ip->i_d.di_projid != pdqp->q_id) { + ip->i_d.di_projid != pdqp->q_id) pdq_delblks = pdqp; - if (delblks) { - ASSERT(ip->i_pdquot); - pdq_unres = ip->i_pdquot; - } - } - - error = xfs_trans_reserve_quota_bydquots(tp, ip->i_mount, - udq_delblks, gdq_delblks, pdq_delblks, - ip->i_d.di_nblocks, 1, flags | blkflags); - if (error) - return error; /* - * Do the delayed blks reservations/unreservations now. Since, these - * are done without the help of a transaction, if a reservation fails - * its previous reservations won't be automatically undone by trans - * code. So, we have to do it manually here. + * Reserve enough quota to handle blocks on disk and reserved for a + * delayed allocation. We'll actually transfer the delalloc + * reservation between dquots at chown time, even though that part is + * only semi-transactional. */ - if (delblks) { - /* - * Do the reservations first. Unreservation can't fail. - */ - ASSERT(udq_delblks || gdq_delblks || pdq_delblks); - ASSERT(udq_unres || gdq_unres || pdq_unres); - error = xfs_trans_reserve_quota_bydquots(NULL, ip->i_mount, - udq_delblks, gdq_delblks, pdq_delblks, - (xfs_qcnt_t)delblks, 0, flags | blkflags); - if (error) - return error; - xfs_trans_reserve_quota_bydquots(NULL, ip->i_mount, - udq_unres, gdq_unres, pdq_unres, - -((xfs_qcnt_t)delblks), 0, blkflags); - } - - return 0; + return xfs_trans_reserve_quota_bydquots(tp, ip->i_mount, udq_delblks, + gdq_delblks, pdq_delblks, + ip->i_d.di_nblocks + ip->i_delayed_blks, + 1, blkflags | flags); } int From 3d05a855dcf793c2214d2e057ba37aae16e6502b Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Mon, 6 Jun 2022 17:32:51 +0300 Subject: [PATCH 0744/1842] xfs: fix incorrect root dquot corruption error when switching group/project quota types commit 45068063efb7dd0a8d115c106aa05d9ab0946257 upstream. 
While writing up a regression test for broken behavior when a chprojid request fails, I noticed that we were logging corruption notices about the root dquot of the group/project quota file at mount time when testing V4 filesystems. In commit afeda6000b0c, I was trying to improve ondisk dquot validation by making sure that when we load an ondisk dquot into memory on behalf of an incore dquot, the dquot id and type matches. Unfortunately, I forgot that V4 filesystems only have two quota files, and can switch that file between group and project quota types at mount time. When we perform that switch, we'll try to load the default quota limits from the root dquot prior to running quotacheck and log a corruption error when the types don't match. This is inconsequential because quotacheck will reset the second quota file as part of doing the switch, but we shouldn't leave scary messages in the kernel log. Fixes: afeda6000b0c ("xfs: validate ondisk/incore dquot flags") Signed-off-by: Darrick J. Wong Reviewed-by: Brian Foster Reviewed-by: Chandan Babu R Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_dquot.c | 39 +++++++++++++++++++++++++++++++++++++-- 1 file changed, 37 insertions(+), 2 deletions(-) diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c index 1d95ed387d66..80c4579d6835 100644 --- a/fs/xfs/xfs_dquot.c +++ b/fs/xfs/xfs_dquot.c @@ -500,6 +500,42 @@ xfs_dquot_alloc( return dqp; } +/* Check the ondisk dquot's id and type match what the incore dquot expects. */ +static bool +xfs_dquot_check_type( + struct xfs_dquot *dqp, + struct xfs_disk_dquot *ddqp) +{ + uint8_t ddqp_type; + uint8_t dqp_type; + + ddqp_type = ddqp->d_type & XFS_DQTYPE_REC_MASK; + dqp_type = xfs_dquot_type(dqp); + + if (be32_to_cpu(ddqp->d_id) != dqp->q_id) + return false; + + /* + * V5 filesystems always expect an exact type match. V4 filesystems + * expect an exact match for user dquots and for non-root group and + * project dquots. + */ + if (xfs_sb_version_hascrc(&dqp->q_mount->m_sb) || + dqp_type == XFS_DQTYPE_USER || dqp->q_id != 0) + return ddqp_type == dqp_type; + + /* + * V4 filesystems support either group or project quotas, but not both + * at the same time. The non-user quota file can be switched between + * group and project quota uses depending on the mount options, which + * means that we can encounter the other type when we try to load quota + * defaults. Quotacheck will soon reset the the entire quota file + * (including the root dquot) anyway, but don't log scary corruption + * reports to dmesg. + */ + return ddqp_type == XFS_DQTYPE_GROUP || ddqp_type == XFS_DQTYPE_PROJ; +} + /* Copy the in-core quota fields in from the on-disk buffer. */ STATIC int xfs_dquot_from_disk( @@ -512,8 +548,7 @@ xfs_dquot_from_disk( * Ensure that we got the type and ID we were looking for. * Everything else was checked by the dquot buffer verifier. */ - if ((ddqp->d_type & XFS_DQTYPE_REC_MASK) != xfs_dquot_type(dqp) || - be32_to_cpu(ddqp->d_id) != dqp->q_id) { + if (!xfs_dquot_check_type(dqp, ddqp)) { xfs_alert_tag(bp->b_mount, XFS_PTAG_VERIFIER_ERROR, "Metadata corruption detected at %pS, quota %u", __this_address, dqp->q_id); From 0b229d03d05f74044efde7d476de2b6c58bb8444 Mon Sep 17 00:00:00 2001 From: Brian Foster Date: Mon, 6 Jun 2022 17:32:52 +0300 Subject: [PATCH 0745/1842] xfs: restore shutdown check in mapped write fault path commit e4826691cc7e5458bcb659935d0092bcf3f08c20 upstream. 
XFS triggers an iomap warning in the write fault path due to a !PageUptodate() page if a write fault happens to occur on a page that recently failed writeback. The iomap writeback error handling code can clear the Uptodate flag if no portion of the page is submitted for I/O. This is reproduced by fstest generic/019, which combines various forms of I/O with simulated disk failures that inevitably lead to filesystem shutdown (which then unconditionally fails page writeback). This is a regression introduced by commit f150b4234397 ("xfs: split the iomap ops for buffered vs direct writes") due to the removal of a shutdown check and explicit error return in the ->iomap_begin() path used by the write fault path. The explicit error return historically translated to a SIGBUS, but now carries on with iomap processing where it complains about the unexpected state. Restore the shutdown check to xfs_buffered_write_iomap_begin() to restore historical behavior. Fixes: f150b4234397 ("xfs: split the iomap ops for buffered vs direct writes") Signed-off-by: Brian Foster Reviewed-by: Eric Sandeen Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_iomap.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 7b9ff824e82d..74bc2beadc23 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -870,6 +870,9 @@ xfs_buffered_write_iomap_begin( int allocfork = XFS_DATA_FORK; int error = 0; + if (XFS_FORCED_SHUTDOWN(mp)) + return -EIO; + /* we can't use delayed allocations when using extent size hints */ if (xfs_get_extsz_hint(ip)) return xfs_direct_write_iomap_begin(inode, offset, count, From e3ffe7387c706adec8b155bab9252fe91682e35a Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Mon, 6 Jun 2022 17:32:53 +0300 Subject: [PATCH 0746/1842] xfs: force log and push AIL to clear pinned inodes when aborting mount commit d336f7ebc65007f5831e2297e6f3383ae8dbf8ed upstream. If we allocate quota inodes in the process of mounting a filesystem but then decide to abort the mount, it's possible that the quota inodes are sitting around pinned by the log. Now that inode reclaim relies on the AIL to flush inodes, we have to force the log and push the AIL in between releasing the quota inodes and kicking off reclaim to tear down all the incore inodes. Do this by extracting the bits we need from the unmount path and reusing them. As an added bonus, failed writes during a failed mount will not retry forever now. This was originally found during a fuzz test of metadata directories (xfs/1546), but the actual symptom was that reclaim hung up on the quota inodes. Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Reviewed-by: Dave Chinner Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_mount.c | 90 +++++++++++++++++++++++----------------------- 1 file changed, 44 insertions(+), 46 deletions(-) diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index a62b8a574409..44b05e1d5d32 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -631,6 +631,47 @@ xfs_check_summary_counts( return xfs_initialize_perag_data(mp, mp->m_sb.sb_agcount); } +/* + * Flush and reclaim dirty inodes in preparation for unmount. Inodes and + * internal inode structures can be sitting in the CIL and AIL at this point, + * so we need to unpin them, write them back and/or reclaim them before unmount + * can proceed. 
+ * + * An inode cluster that has been freed can have its buffer still pinned in + * memory because the transaction is still sitting in a iclog. The stale inodes + * on that buffer will be pinned to the buffer until the transaction hits the + * disk and the callbacks run. Pushing the AIL will skip the stale inodes and + * may never see the pinned buffer, so nothing will push out the iclog and + * unpin the buffer. + * + * Hence we need to force the log to unpin everything first. However, log + * forces don't wait for the discards they issue to complete, so we have to + * explicitly wait for them to complete here as well. + * + * Then we can tell the world we are unmounting so that error handling knows + * that the filesystem is going away and we should error out anything that we + * have been retrying in the background. This will prevent never-ending + * retries in AIL pushing from hanging the unmount. + * + * Finally, we can push the AIL to clean all the remaining dirty objects, then + * reclaim the remaining inodes that are still in memory at this point in time. + */ +static void +xfs_unmount_flush_inodes( + struct xfs_mount *mp) +{ + xfs_log_force(mp, XFS_LOG_SYNC); + xfs_extent_busy_wait_all(mp); + flush_workqueue(xfs_discard_wq); + + mp->m_flags |= XFS_MOUNT_UNMOUNTING; + + xfs_ail_push_all_sync(mp->m_ail); + cancel_delayed_work_sync(&mp->m_reclaim_work); + xfs_reclaim_inodes(mp); + xfs_health_unmount(mp); +} + /* * This function does the following on an initial mount of a file system: * - reads the superblock from disk and init the mount struct @@ -1005,7 +1046,7 @@ xfs_mountfs( /* Clean out dquots that might be in memory after quotacheck. */ xfs_qm_unmount(mp); /* - * Cancel all delayed reclaim work and reclaim the inodes directly. + * Flush all inode reclamation work and flush the log. * We have to do this /after/ rtunmount and qm_unmount because those * two will have scheduled delayed reclaim for the rt/quota inodes. * @@ -1015,11 +1056,8 @@ xfs_mountfs( * qm_unmount_quotas and therefore rely on qm_unmount to release the * quota inodes. */ - cancel_delayed_work_sync(&mp->m_reclaim_work); - xfs_reclaim_inodes(mp); - xfs_health_unmount(mp); + xfs_unmount_flush_inodes(mp); out_log_dealloc: - mp->m_flags |= XFS_MOUNT_UNMOUNTING; xfs_log_mount_cancel(mp); out_fail_wait: if (mp->m_logdev_targp && mp->m_logdev_targp != mp->m_ddev_targp) @@ -1060,47 +1098,7 @@ xfs_unmountfs( xfs_rtunmount_inodes(mp); xfs_irele(mp->m_rootip); - /* - * We can potentially deadlock here if we have an inode cluster - * that has been freed has its buffer still pinned in memory because - * the transaction is still sitting in a iclog. The stale inodes - * on that buffer will be pinned to the buffer until the - * transaction hits the disk and the callbacks run. Pushing the AIL will - * skip the stale inodes and may never see the pinned buffer, so - * nothing will push out the iclog and unpin the buffer. Hence we - * need to force the log here to ensure all items are flushed into the - * AIL before we go any further. - */ - xfs_log_force(mp, XFS_LOG_SYNC); - - /* - * Wait for all busy extents to be freed, including completion of - * any discard operation. - */ - xfs_extent_busy_wait_all(mp); - flush_workqueue(xfs_discard_wq); - - /* - * We now need to tell the world we are unmounting. This will allow - * us to detect that the filesystem is going away and we should error - * out anything that we have been retrying in the background. This will - * prevent neverending retries in AIL pushing from hanging the unmount. 
- */ - mp->m_flags |= XFS_MOUNT_UNMOUNTING; - - /* - * Flush all pending changes from the AIL. - */ - xfs_ail_push_all_sync(mp->m_ail); - - /* - * Reclaim all inodes. At this point there should be no dirty inodes and - * none should be pinned or locked. Stop background inode reclaim here - * if it is still running. - */ - cancel_delayed_work_sync(&mp->m_reclaim_work); - xfs_reclaim_inodes(mp); - xfs_health_unmount(mp); + xfs_unmount_flush_inodes(mp); xfs_qm_unmount(mp); From f1916a88c89e151fd607a43f89c9dfd0d6b5c03d Mon Sep 17 00:00:00 2001 From: Brian Foster Date: Mon, 6 Jun 2022 17:32:54 +0300 Subject: [PATCH 0747/1842] xfs: consider shutdown in bmapbt cursor delete assert commit 1cd738b13ae9b29e03d6149f0246c61f76e81fcf upstream. The assert in xfs_btree_del_cursor() checks that the bmapbt block allocation field has been handled correctly before the cursor is freed. This field is used for accurate calculation of indirect block reservation requirements (for delayed allocations), for example. generic/019 reproduces a scenario where this assert fails because the filesystem has shutdown while in the middle of a bmbt record insertion. This occurs after a bmbt block has been allocated via the cursor but before the higher level bmap function (i.e. xfs_bmap_add_extent_hole_real()) completes and resets the field. Update the assert to accommodate the transient state if the filesystem has shutdown. While here, clean up the indentation and comments in the function. Signed-off-by: Brian Foster Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/libxfs/xfs_btree.c | 33 ++++++++++++--------------------- 1 file changed, 12 insertions(+), 21 deletions(-) diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c index 2d25bab68764..9f9f9feccbcd 100644 --- a/fs/xfs/libxfs/xfs_btree.c +++ b/fs/xfs/libxfs/xfs_btree.c @@ -353,20 +353,17 @@ xfs_btree_free_block( */ void xfs_btree_del_cursor( - xfs_btree_cur_t *cur, /* btree cursor */ - int error) /* del because of error */ + struct xfs_btree_cur *cur, /* btree cursor */ + int error) /* del because of error */ { - int i; /* btree level */ + int i; /* btree level */ /* - * Clear the buffer pointers, and release the buffers. - * If we're doing this in the face of an error, we - * need to make sure to inspect all of the entries - * in the bc_bufs array for buffers to be unlocked. - * This is because some of the btree code works from - * level n down to 0, and if we get an error along - * the way we won't have initialized all the entries - * down to 0. + * Clear the buffer pointers and release the buffers. If we're doing + * this because of an error, inspect all of the entries in the bc_bufs + * array for buffers to be unlocked. This is because some of the btree + * code works from level n down to 0, and if we get an error along the + * way we won't have initialized all the entries down to 0. */ for (i = 0; i < cur->bc_nlevels; i++) { if (cur->bc_bufs[i]) @@ -374,17 +371,11 @@ xfs_btree_del_cursor( else if (!error) break; } - /* - * Can't free a bmap cursor without having dealt with the - * allocated indirect blocks' accounting. - */ - ASSERT(cur->bc_btnum != XFS_BTNUM_BMAP || - cur->bc_ino.allocated == 0); - /* - * Free the cursor. 
- */ + + ASSERT(cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_ino.allocated == 0 || + XFS_FORCED_SHUTDOWN(cur->bc_mp)); if (unlikely(cur->bc_flags & XFS_BTREE_STAGING)) - kmem_free((void *)cur->bc_ops); + kmem_free(cur->bc_ops); kmem_cache_free(xfs_btree_cur_zone, cur); } From 82b2b60b6745418d34e5fd48948cac853449579f Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Mon, 6 Jun 2022 17:32:55 +0300 Subject: [PATCH 0748/1842] xfs: assert in xfs_btree_del_cursor should take into account error commit 56486f307100e8fc66efa2ebd8a71941fa10bf6f upstream. xfs/538 on a 1kB block filesystem failed with this assert: XFS: Assertion failed: cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_ino.allocated == 0 || xfs_is_shutdown(cur->bc_mp), file: fs/xfs/libxfs/xfs_btree.c, line: 448 The problem was that an allocation failed unexpectedly in xfs_bmbt_alloc_block() after roughly 150,000 minlen allocation error injections, resulting in an EFSCORRUPTED error being returned to xfs_bmapi_write(). The error occurred on extent-to-btree format conversion allocating the new root block: RIP: 0010:xfs_bmbt_alloc_block+0x177/0x210 Call Trace: xfs_btree_new_iroot+0xdf/0x520 xfs_btree_make_block_unfull+0x10d/0x1c0 xfs_btree_insrec+0x364/0x790 xfs_btree_insert+0xaa/0x210 xfs_bmap_add_extent_hole_real+0x1fe/0x9a0 xfs_bmapi_allocate+0x34c/0x420 xfs_bmapi_write+0x53c/0x9c0 xfs_alloc_file_space+0xee/0x320 xfs_file_fallocate+0x36b/0x450 vfs_fallocate+0x148/0x340 __x64_sys_fallocate+0x3c/0x70 do_syscall_64+0x35/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xa Why the allocation failed at this point is unknown, but is likely that we ran the transaction out of reserved space and filesystem out of space with bmbt blocks because of all the minlen allocations being done causing worst case fragmentation of a large allocation. Regardless of the cause, we've then called xfs_bmapi_finish() which calls xfs_btree_del_cursor(cur, error) to tear down the cursor. So we have a failed operation, error != 0, cur->bc_ino.allocated > 0 and the filesystem is still up. The assert fails to take into account that allocation can fail with an error and the transaction teardown will shut the filesystem down if necessary. i.e. the assert needs to check "|| error != 0" as well, because at this point shutdown is pending because the current transaction is dirty.... Signed-off-by: Dave Chinner Reviewed-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Signed-off-by: Dave Chinner Signed-off-by: Amir Goldstein Signed-off-by: Greg Kroah-Hartman --- fs/xfs/libxfs/xfs_btree.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c index 9f9f9feccbcd..98c82f4935e1 100644 --- a/fs/xfs/libxfs/xfs_btree.c +++ b/fs/xfs/libxfs/xfs_btree.c @@ -372,8 +372,14 @@ xfs_btree_del_cursor( break; } + /* + * If we are doing a BMBT update, the number of unaccounted blocks + * allocated during this cursor life time should be zero. If it's not + * zero, then we should be shut down or on our way to shutdown due to + * cancelling a dirty transaction on error. 
+ */ ASSERT(cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_ino.allocated == 0 || - XFS_FORCED_SHUTDOWN(cur->bc_mp)); + XFS_FORCED_SHUTDOWN(cur->bc_mp) || error != 0); if (unlikely(cur->bc_flags & XFS_BTREE_STAGING)) kmem_free(cur->bc_ops); kmem_cache_free(xfs_btree_cur_zone, cur); From ec1378f2fa36f6e4a5042cca5ad6f415038dcda1 Mon Sep 17 00:00:00 2001 From: Waiman Long Date: Fri, 13 May 2022 15:09:28 -0400 Subject: [PATCH 0749/1842] kseltest/cgroup: Make test_stress.sh work if run interactively commit 213adc63dfbcdff9a0c19ec1f2681fda9c05cf6d upstream. Commit 54de76c01239 ("kselftest/cgroup: fix test_stress.sh to use OUTPUT dir") changes the test_core command path from . to $OUTPUT. However, variable OUTPUT may not be defined if the command is run interactively. Fix that by using ${OUTPUT:-.} to cover both cases. Signed-off-by: Waiman Long Signed-off-by: Tejun Heo Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/cgroup/test_stress.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/cgroup/test_stress.sh b/tools/testing/selftests/cgroup/test_stress.sh index 109c044f715f..3c9c4554d5f6 100755 --- a/tools/testing/selftests/cgroup/test_stress.sh +++ b/tools/testing/selftests/cgroup/test_stress.sh @@ -1,4 +1,4 @@ #!/bin/bash # SPDX-License-Identifier: GPL-2.0 -./with_stress.sh -s subsys -s fork ${OUTPUT}/test_core +./with_stress.sh -s subsys -s fork ${OUTPUT:-.}/test_core From b132abaa6515e14e0db292389c25007d666e1925 Mon Sep 17 00:00:00 2001 From: Ziyang Xuan Date: Fri, 15 Oct 2021 10:45:04 +0800 Subject: [PATCH 0750/1842] thermal/core: fix a UAF bug in __thermal_cooling_device_register() commit 0a5c26712f963f0500161a23e0ffff8d29f742ab upstream. When device_register() return failed, program will goto out_kfree_type to release 'cdev->device' by put_device(). That will call thermal_release() to free 'cdev'. But the follow-up processes access 'cdev' continually. That trggers the UAF bug. ==================================================================== BUG: KASAN: use-after-free in __thermal_cooling_device_register+0x75b/0xa90 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014 Call Trace: dump_stack_lvl+0xe2/0x152 print_address_description.constprop.0+0x21/0x140 ? __thermal_cooling_device_register+0x75b/0xa90 kasan_report.cold+0x7f/0x11b ? __thermal_cooling_device_register+0x75b/0xa90 __thermal_cooling_device_register+0x75b/0xa90 ? memset+0x20/0x40 ? __sanitizer_cov_trace_pc+0x1d/0x50 ? __devres_alloc_node+0x130/0x180 devm_thermal_of_cooling_device_register+0x67/0xf0 max6650_probe.cold+0x557/0x6aa ...... Freed by task 258: kasan_save_stack+0x1b/0x40 kasan_set_track+0x1c/0x30 kasan_set_free_info+0x20/0x30 __kasan_slab_free+0x109/0x140 kfree+0x117/0x4c0 thermal_release+0xa0/0x110 device_release+0xa7/0x240 kobject_put+0x1ce/0x540 put_device+0x20/0x30 __thermal_cooling_device_register+0x731/0xa90 devm_thermal_of_cooling_device_register+0x67/0xf0 max6650_probe.cold+0x557/0x6aa [max6650] Do not use 'cdev' again after put_device() to fix the problem like doing in thermal_zone_device_register(). [dlezcano]: as requested by Rafael, change the affectation into two statements. 
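The rule behind the fix above generalizes: once put_device() may have dropped the final reference, the object has to be treated as freed, so anything the error path still needs (here the IDA id) must be copied out beforehand. The following user-space sketch is only an analogue of that pattern, not the thermal code; the struct, the refcounting and the id value are invented for illustration.

#include <stdio.h>
#include <stdlib.h>

struct cooling_obj {
        int refcount;
        int id;
};

/* Drops a reference; frees the object when the last one goes away. */
static void obj_put(struct cooling_obj *c)
{
        if (--c->refcount == 0)
                free(c);
}

/* Error path of a hypothetical registration: cache the id first. */
static int register_fails(void)
{
        struct cooling_obj *c = malloc(sizeof(*c));
        int id;

        if (!c)
                return -1;
        c->refcount = 1;
        c->id = 7;              /* stands in for the IDA-allocated id */

        id = c->id;             /* copy out what cleanup still needs */
        obj_put(c);             /* may free c ... */
        c = NULL;               /* ... so make any later use obvious */

        printf("releasing id %d without touching the freed object\n", id);
        return -1;
}

int main(void)
{
        register_fails();
        return 0;
}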
Fixes: 584837618100 ("thermal/drivers/core: Use a char pointer for the cooling device name") Signed-off-by: Ziyang Xuan Reported-by: kernel test robot Link: https://lore.kernel.org/r/20211015024504.947520-1-william.xuanziyang@huawei.com Signed-off-by: Daniel Lezcano Signed-off-by: Greg Kroah-Hartman --- drivers/thermal/thermal_core.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index 5e48e9cfa044..c0d8c882b247 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -1092,7 +1092,7 @@ __thermal_cooling_device_register(struct device_node *np, { struct thermal_cooling_device *cdev; struct thermal_zone_device *pos = NULL; - int ret; + int id, ret; if (!ops || !ops->get_max_state || !ops->get_cur_state || !ops->set_cur_state) @@ -1106,6 +1106,7 @@ __thermal_cooling_device_register(struct device_node *np, if (ret < 0) goto out_kfree_cdev; cdev->id = ret; + id = ret; cdev->type = kstrdup(type ? type : "", GFP_KERNEL); if (!cdev->type) { @@ -1147,8 +1148,9 @@ __thermal_cooling_device_register(struct device_node *np, thermal_cooling_device_destroy_sysfs(cdev); kfree(cdev->type); put_device(&cdev->device); + cdev = NULL; out_ida_remove: - ida_simple_remove(&thermal_cdev_ida, cdev->id); + ida_simple_remove(&thermal_cdev_ida, id); out_kfree_cdev: return ERR_PTR(ret); } From 54cdc10ac7184f2159a4f5658b497e90244d1516 Mon Sep 17 00:00:00 2001 From: Daniel Lezcano Date: Fri, 19 Mar 2021 21:22:57 +0100 Subject: [PATCH 0751/1842] thermal/core: Fix memory leak in the error path commit d44616c6cc3e35eea03ecfe9040edfa2b486a059 upstream. Fix the following error: smatch warnings: drivers/thermal/thermal_core.c:1020 __thermal_cooling_device_register() warn: possible memory leak of 'cdev' by freeing the cdev when exiting the function in the error path. Fixes: 584837618100 ("thermal/drivers/core: Use a char pointer for the cooling device name") Reported-by: kernel test robot Reported-by: Dan Carpenter Signed-off-by: Daniel Lezcano Link: https://lore.kernel.org/r/20210319202257.890848-1-daniel.lezcano@linaro.org Signed-off-by: Greg Kroah-Hartman --- drivers/thermal/thermal_core.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index c0d8c882b247..dd449945e1e5 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -1152,6 +1152,7 @@ __thermal_cooling_device_register(struct device_node *np, out_ida_remove: ida_simple_remove(&thermal_cdev_ida, id); out_kfree_cdev: + kfree(cdev); return ERR_PTR(ret); } From 7d172b9dc913e161d8ff88770eea01701ff553de Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Mon, 6 Jun 2022 19:56:36 +0200 Subject: [PATCH 0752/1842] bfq: Avoid merging queues with different parents commit c1cee4ab36acef271be9101590756ed0c0c374d9 upstream. It can happen that the parent of a bfqq changes between the moment we decide two queues are worth to merge (and set bic->stable_merge_bfqq) and the moment bfq_setup_merge() is called. This can happen e.g. because the process submitted IO for a different cgroup and thus bfqq got reparented. 
It can even happen that the bfqq we are merging with has parent cgroup that is already offline and going to be destroyed in which case the merge can lead to use-after-free issues such as: BUG: KASAN: use-after-free in __bfq_deactivate_entity+0x9cb/0xa50 Read of size 8 at addr ffff88800693c0c0 by task runc:[2:INIT]/10544 CPU: 0 PID: 10544 Comm: runc:[2:INIT] Tainted: G E 5.15.2-0.g5fb85fd-default #1 openSUSE Tumbleweed (unreleased) f1f3b891c72369aebecd2e43e4641a6358867c70 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a-rebuilt.opensuse.org 04/01/2014 Call Trace: dump_stack_lvl+0x46/0x5a print_address_description.constprop.0+0x1f/0x140 ? __bfq_deactivate_entity+0x9cb/0xa50 kasan_report.cold+0x7f/0x11b ? __bfq_deactivate_entity+0x9cb/0xa50 __bfq_deactivate_entity+0x9cb/0xa50 ? update_curr+0x32f/0x5d0 bfq_deactivate_entity+0xa0/0x1d0 bfq_del_bfqq_busy+0x28a/0x420 ? resched_curr+0x116/0x1d0 ? bfq_requeue_bfqq+0x70/0x70 ? check_preempt_wakeup+0x52b/0xbc0 __bfq_bfqq_expire+0x1a2/0x270 bfq_bfqq_expire+0xd16/0x2160 ? try_to_wake_up+0x4ee/0x1260 ? bfq_end_wr_async_queues+0xe0/0xe0 ? _raw_write_unlock_bh+0x60/0x60 ? _raw_spin_lock_irq+0x81/0xe0 bfq_idle_slice_timer+0x109/0x280 ? bfq_dispatch_request+0x4870/0x4870 __hrtimer_run_queues+0x37d/0x700 ? enqueue_hrtimer+0x1b0/0x1b0 ? kvm_clock_get_cycles+0xd/0x10 ? ktime_get_update_offsets_now+0x6f/0x280 hrtimer_interrupt+0x2c8/0x740 Fix the problem by checking that the parent of the two bfqqs we are merging in bfq_setup_merge() is the same. Link: https://lore.kernel.org/linux-block/20211125172809.GC19572@quack2.suse.cz/ CC: stable@vger.kernel.org Fixes: 430a67f9d616 ("block, bfq: merge bursts of newly-created queues") Tested-by: "yukuai (C)" Signed-off-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220401102752.8599-2-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/bfq-iosched.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index bab5122062f5..e14f421282dd 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -2509,6 +2509,14 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) if (process_refs == 0 || new_process_refs == 0) return NULL; + /* + * Make sure merged queues belong to the same parent. Parents could + * have changed since the time we decided the two queues are suitable + * for merging. + */ + if (new_bfqq->entity.parent != bfqq->entity.parent) + return NULL; + bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d", new_bfqq->pid); From 13599aac1b983341a1240199e461bf1a8ee55dfb Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Mon, 6 Jun 2022 19:56:37 +0200 Subject: [PATCH 0753/1842] bfq: Drop pointless unlock-lock pair commit fc84e1f941b91221092da5b3102ec82da24c5673 upstream. In bfq_insert_request() we unlock bfqd->lock only to call trace_block_rq_insert() and then lock bfqd->lock again. This is really pointless since tracing is disabled if we really care about performance and even if the tracepoint is enabled, it is a quick call. 
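The reasoning is easy to restate outside the block layer: when the call made between the unlock and the re-lock is cheap and safe to run under the lock, dropping the lock only adds two atomic operations and a window for the state to change. A minimal user-space sketch of the simplified shape, with a pthread mutex standing in for bfqd->lock and invented names:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_queued;

/* Stand-in for the tracepoint: cheap and fine to run under the lock. */
static void trace_insert(int nr)
{
        printf("inserted request %d\n", nr);
}

static void insert_request(int nr)
{
        pthread_mutex_lock(&lock);
        /*
         * The old code unlocked here, called trace_insert(), then locked
         * again; keeping the call under the lock removes the pointless
         * unlock-lock pair.
         */
        trace_insert(nr);
        nr_queued++;
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        insert_request(1);
        return nr_queued == 1 ? 0 : 1;
}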
CC: stable@vger.kernel.org Tested-by: "yukuai (C)" Signed-off-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220401102752.8599-5-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/bfq-iosched.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index e14f421282dd..bad088103279 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -5537,11 +5537,8 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, return; } - spin_unlock_irq(&bfqd->lock); - blk_mq_sched_request_inserted(rq); - spin_lock_irq(&bfqd->lock); bfqq = bfq_init_rq(rq); if (!bfqq || at_head || blk_rq_is_passthrough(rq)) { if (at_head) From 80b0a2b3dfea5de3224ba756830b9243709c6e9e Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Mon, 6 Jun 2022 19:56:38 +0200 Subject: [PATCH 0754/1842] bfq: Remove pointless bfq_init_rq() calls commit 5f550ede5edf846ecc0067be1ba80514e6fe7f8e upstream. We call bfq_init_rq() from request merging functions where requests we get should have already gone through bfq_init_rq() during insert and anyway we want to do anything only if the request is already tracked by BFQ. So replace calls to bfq_init_rq() with RQ_BFQQ() instead to simply skip requests untracked by BFQ. We move bfq_init_rq() call in bfq_insert_request() a bit earlier to cover request merging and thus can transfer FIFO position in case of a merge. CC: stable@vger.kernel.org Tested-by: "yukuai (C)" Signed-off-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220401102752.8599-6-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/bfq-iosched.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index bad088103279..3b605d8d99bf 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -2267,8 +2267,6 @@ static int bfq_request_merge(struct request_queue *q, struct request **req, return ELEVATOR_NO_MERGE; } -static struct bfq_queue *bfq_init_rq(struct request *rq); - static void bfq_request_merged(struct request_queue *q, struct request *req, enum elv_merge type) { @@ -2277,7 +2275,7 @@ static void bfq_request_merged(struct request_queue *q, struct request *req, blk_rq_pos(req) < blk_rq_pos(container_of(rb_prev(&req->rb_node), struct request, rb_node))) { - struct bfq_queue *bfqq = bfq_init_rq(req); + struct bfq_queue *bfqq = RQ_BFQQ(req); struct bfq_data *bfqd; struct request *prev, *next_rq; @@ -2329,8 +2327,8 @@ static void bfq_request_merged(struct request_queue *q, struct request *req, static void bfq_requests_merged(struct request_queue *q, struct request *rq, struct request *next) { - struct bfq_queue *bfqq = bfq_init_rq(rq), - *next_bfqq = bfq_init_rq(next); + struct bfq_queue *bfqq = RQ_BFQQ(rq), + *next_bfqq = RQ_BFQQ(next); if (!bfqq) return; @@ -5518,6 +5516,8 @@ static inline void bfq_update_insert_stats(struct request_queue *q, unsigned int cmd_flags) {} #endif /* CONFIG_BFQ_CGROUP_DEBUG */ +static struct bfq_queue *bfq_init_rq(struct request *rq); + static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, bool at_head) { @@ -5532,6 +5532,7 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, bfqg_stats_update_legacy_io(q, rq); #endif spin_lock_irq(&bfqd->lock); + bfqq = bfq_init_rq(rq); if (blk_mq_sched_try_insert_merge(q, rq)) { spin_unlock_irq(&bfqd->lock); return; @@ -5539,7 +5540,6 @@ static void 
bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, blk_mq_sched_request_inserted(rq); - bfqq = bfq_init_rq(rq); if (!bfqq || at_head || blk_rq_is_passthrough(rq)) { if (at_head) list_add(&rq->queuelist, &bfqd->dispatch); From 0285718e28259e41f405a038ee0e6bb984fd1b34 Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Mon, 6 Jun 2022 19:56:39 +0200 Subject: [PATCH 0755/1842] bfq: Get rid of __bio_blkcg() usage commit 4e54a2493e582361adc3bfbf06c7d50d19d18837 upstream. BFQ usage of __bio_blkcg() is a relict from the past. Furthermore if bio would not be associated with any blkcg, the usage of __bio_blkcg() in BFQ is prone to races with the task being migrated between cgroups as __bio_blkcg() calls at different places could return different blkcgs. Convert BFQ to the new situation where bio->bi_blkg is initialized in bio_set_dev() and thus practically always valid. This allows us to save blkcg_gq lookup and noticeably simplify the code. CC: stable@vger.kernel.org Fixes: 0fe061b9f03c ("blkcg: fix ref count issue with bio_blkcg() using task_css") Tested-by: "yukuai (C)" Signed-off-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220401102752.8599-8-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/bfq-cgroup.c | 63 +++++++++++++++++---------------------------- block/bfq-iosched.c | 10 +------ block/bfq-iosched.h | 3 +-- 3 files changed, 25 insertions(+), 51 deletions(-) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 0077f44ad63f..622c9a13e496 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -582,27 +582,11 @@ static void bfq_group_set_parent(struct bfq_group *bfqg, entity->sched_data = &parent->sched_data; } -static struct bfq_group *bfq_lookup_bfqg(struct bfq_data *bfqd, - struct blkcg *blkcg) +static void bfq_link_bfqg(struct bfq_data *bfqd, struct bfq_group *bfqg) { - struct blkcg_gq *blkg; - - blkg = blkg_lookup(blkcg, bfqd->queue); - if (likely(blkg)) - return blkg_to_bfqg(blkg); - return NULL; -} - -struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, - struct blkcg *blkcg) -{ - struct bfq_group *bfqg, *parent; + struct bfq_group *parent; struct bfq_entity *entity; - bfqg = bfq_lookup_bfqg(bfqd, blkcg); - if (unlikely(!bfqg)) - return NULL; - /* * Update chain of bfq_groups as we might be handling a leaf group * which, along with some of its relatives, has not been hooked yet @@ -619,8 +603,15 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, bfq_group_set_parent(curr_bfqg, parent); } } +} - return bfqg; +struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio) +{ + struct blkcg_gq *blkg = bio->bi_blkg; + + if (!blkg) + return bfqd->root_group; + return blkg_to_bfqg(blkg); } /** @@ -696,25 +687,15 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq, * Move bic to blkcg, assuming that bfqd->lock is held; which makes * sure that the reference to cgroup is valid across the call (see * comments in bfq_bic_update_cgroup on this issue) - * - * NOTE: an alternative approach might have been to store the current - * cgroup in bfqq and getting a reference to it, reducing the lookup - * time here, at the price of slightly more complex code. 
*/ -static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd, - struct bfq_io_cq *bic, - struct blkcg *blkcg) +static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd, + struct bfq_io_cq *bic, + struct bfq_group *bfqg) { struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0); struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1); - struct bfq_group *bfqg; struct bfq_entity *entity; - bfqg = bfq_find_set_group(bfqd, blkcg); - - if (unlikely(!bfqg)) - bfqg = bfqd->root_group; - if (async_bfqq) { entity = &async_bfqq->entity; @@ -766,20 +747,24 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd, void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) { struct bfq_data *bfqd = bic_to_bfqd(bic); - struct bfq_group *bfqg = NULL; + struct bfq_group *bfqg = bfq_bio_bfqg(bfqd, bio); uint64_t serial_nr; - rcu_read_lock(); - serial_nr = __bio_blkcg(bio)->css.serial_nr; + serial_nr = bfqg_to_blkg(bfqg)->blkcg->css.serial_nr; /* * Check whether blkcg has changed. The condition may trigger * spuriously on a newly created cic but there's no harm. */ if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr)) - goto out; + return; - bfqg = __bfq_bic_change_cgroup(bfqd, bic, __bio_blkcg(bio)); + /* + * New cgroup for this process. Make sure it is linked to bfq internal + * cgroup hierarchy. + */ + bfq_link_bfqg(bfqd, bfqg); + __bfq_bic_change_cgroup(bfqd, bic, bfqg); /* * Update blkg_path for bfq_log_* functions. We cache this * path, and update it here, for the following @@ -832,8 +817,6 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) */ blkg_path(bfqg_to_blkg(bfqg), bfqg->blkg_path, sizeof(bfqg->blkg_path)); bic->blkcg_serial_nr = serial_nr; -out: - rcu_read_unlock(); } /** @@ -1451,7 +1434,7 @@ void bfq_end_wr_async(struct bfq_data *bfqd) bfq_end_wr_async_queues(bfqd, bfqd->root_group); } -struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, struct blkcg *blkcg) +struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio) { return bfqd->root_group; } diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 3b605d8d99bf..592d32a46c4c 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -5162,14 +5162,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, struct bfq_queue *bfqq; struct bfq_group *bfqg; - rcu_read_lock(); - - bfqg = bfq_find_set_group(bfqd, __bio_blkcg(bio)); - if (!bfqg) { - bfqq = &bfqd->oom_bfqq; - goto out; - } - + bfqg = bfq_bio_bfqg(bfqd, bio); if (!is_sync) { async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class, ioprio); @@ -5213,7 +5206,6 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, out: bfqq->ref++; /* get a process reference to this queue */ bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq, bfqq->ref); - rcu_read_unlock(); return bfqq; } diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h index b116ed48d27d..2a4a6f44efff 100644 --- a/block/bfq-iosched.h +++ b/block/bfq-iosched.h @@ -984,8 +984,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq, void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg); void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio); void bfq_end_wr_async(struct bfq_data *bfqd); -struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, - struct blkcg *blkcg); +struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio); struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg); struct bfq_group *bfqq_group(struct bfq_queue *bfqq); struct bfq_group 
*bfq_create_group_hierarchy(struct bfq_data *bfqd, int node); From 51f724bffa3403a5236597e6b75df7329c1ec6e9 Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Mon, 6 Jun 2022 19:56:40 +0200 Subject: [PATCH 0756/1842] bfq: Make sure bfqg for which we are queueing requests is online commit 075a53b78b815301f8d3dd1ee2cd99554e34f0dd upstream. Bios queued into BFQ IO scheduler can be associated with a cgroup that was already offlined. This may then cause insertion of this bfq_group into a service tree. But this bfq_group will get freed as soon as last bio associated with it is completed leading to use after free issues for service tree users. Fix the problem by making sure we always operate on online bfq_group. If the bfq_group associated with the bio is not online, we pick the first online parent. CC: stable@vger.kernel.org Fixes: e21b7a0b9887 ("block, bfq: add full hierarchical scheduling and cgroups support") Tested-by: "yukuai (C)" Signed-off-by: Jan Kara Reviewed-by: Christoph Hellwig Link: https://lore.kernel.org/r/20220401102752.8599-9-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/bfq-cgroup.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 622c9a13e496..be6733558b83 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -608,10 +608,19 @@ static void bfq_link_bfqg(struct bfq_data *bfqd, struct bfq_group *bfqg) struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio) { struct blkcg_gq *blkg = bio->bi_blkg; + struct bfq_group *bfqg; - if (!blkg) - return bfqd->root_group; - return blkg_to_bfqg(blkg); + while (blkg) { + bfqg = blkg_to_bfqg(blkg); + if (bfqg->online) { + bio_associate_blkg_from_css(bio, &blkg->blkcg->css); + return bfqg; + } + blkg = blkg->parent; + } + bio_associate_blkg_from_css(bio, + &bfqg_to_blkg(bfqd->root_group)->blkcg->css); + return bfqd->root_group; } /** From 6b03dc67dde3811b11125b089bec876f1a9806b7 Mon Sep 17 00:00:00 2001 From: Jan Kara Date: Mon, 6 Jun 2022 19:56:41 +0200 Subject: [PATCH 0757/1842] block: fix bio_clone_blkg_association() to associate with proper blkcg_gq commit 22b106e5355d6e7a9c3b5cb5ed4ef22ae585ea94 upstream. Commit d92c370a16cb ("block: really clone the block cgroup in bio_clone_blkg_association") changed bio_clone_blkg_association() to just clone bio->bi_blkg reference from source to destination bio. This is however wrong if the source and destination bios are against different block devices because struct blkcg_gq is different for each bdev-blkcg pair. This will result in IOs being accounted (and throttled as a result) multiple times against the same device (src bdev) while throttling of the other device (dst bdev) is ignored. In case of BFQ the inconsistency can even result in crashes in bfq_bic_update_cgroup(). Fix the problem by looking up correct blkcg_gq for the cloned bio. 
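The rule the fix enforces is that state keyed on a (cgroup, block device) pair cannot be copied verbatim into a clone that targets a different device; the clone has to look up its own pairing. The user-space sketch below illustrates only that rule with invented structures and is not the block-layer code:

#include <stdio.h>

struct dev_stats {
        const char *name;
        long ios;
};

static struct dev_stats stats[2] = { { "src-dev", 0 }, { "dst-dev", 0 } };

struct io {
        int devno;                      /* device this I/O is aimed at */
        struct dev_stats *acct;         /* per-device accounting handle */
};

/* Correct clone: derive the handle from the clone's own device. */
static void clone_io(struct io *dst, const struct io *src)
{
        dst->acct = &stats[dst->devno];
        (void)src;      /* copying src->acct would charge the wrong device */
}

int main(void)
{
        struct io orig = { .devno = 0, .acct = &stats[0] };
        struct io clone = { .devno = 1, .acct = NULL };

        clone_io(&clone, &orig);
        clone.acct->ios++;

        printf("%s: %ld I/Os, %s: %ld I/Os\n",
               stats[0].name, stats[0].ios, stats[1].name, stats[1].ios);
        return 0;
}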
Reported-by: Logan Gunthorpe Reported-and-tested-by: Donald Buczek Fixes: d92c370a16cb ("block: really clone the block cgroup in bio_clone_blkg_association") CC: stable@vger.kernel.org Reviewed-by: Christoph Hellwig Signed-off-by: Jan Kara Link: https://lore.kernel.org/r/20220602081242.7731-1-jack@suse.cz Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- block/blk-cgroup.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 5b19665bc486..484c6b2dd264 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -1892,12 +1892,8 @@ EXPORT_SYMBOL_GPL(bio_associate_blkg); */ void bio_clone_blkg_association(struct bio *dst, struct bio *src) { - if (src->bi_blkg) { - if (dst->bi_blkg) - blkg_put(dst->bi_blkg); - blkg_get(src->bi_blkg); - dst->bi_blkg = src->bi_blkg; - } + if (src->bi_blkg) + bio_associate_blkg_from_css(dst, &bio_blkcg(src)->css); } EXPORT_SYMBOL_GPL(bio_clone_blkg_association); From 72268945b124cd61336f9b4cac538b0516399a2d Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 7 Jun 2022 10:40:05 +0200 Subject: [PATCH 0758/1842] Revert "random: use static branch for crng_ready()" This reverts upstream commit f5bda35fba615ace70a656d4700423fa6c9bebee from stable. It's not essential and will take some time during 5.19 to work out properly. Signed-off-by: Jason A. Donenfeld --- drivers/char/random.c | 12 ++---------- 1 file changed, 2 insertions(+), 10 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index c206db96f60a..5776dfd4a6fc 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -79,8 +79,7 @@ static enum { CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */ CRNG_READY = 2 /* Fully initialized with POOL_READY_BITS collected */ } crng_init __read_mostly = CRNG_EMPTY; -static DEFINE_STATIC_KEY_FALSE(crng_is_ready); -#define crng_ready() (static_branch_likely(&crng_is_ready) || crng_init >= CRNG_READY) +#define crng_ready() (likely(crng_init >= CRNG_READY)) /* Various types of waiters for crng_init->CRNG_READY transition. */ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait); static struct fasync_struct *fasync; @@ -110,11 +109,6 @@ bool rng_is_initialized(void) } EXPORT_SYMBOL(rng_is_initialized); -static void __cold crng_set_ready(struct work_struct *work) -{ - static_branch_enable(&crng_is_ready); -} - /* Used by wait_for_random_bytes(), and considered an entropy collector, below. */ static void try_to_generate_entropy(void); @@ -268,7 +262,7 @@ static void crng_reseed(void) ++next_gen; WRITE_ONCE(base_crng.generation, next_gen); WRITE_ONCE(base_crng.birth, jiffies); - if (!static_branch_likely(&crng_is_ready)) + if (!crng_ready()) crng_init = CRNG_READY; spin_unlock_irqrestore(&base_crng.lock, flags); memzero_explicit(key, sizeof(key)); @@ -711,7 +705,6 @@ static void extract_entropy(void *buf, size_t len) static void __cold _credit_init_bits(size_t bits) { - static struct execute_work set_ready; unsigned int new, orig, add; unsigned long flags; @@ -727,7 +720,6 @@ static void __cold _credit_init_bits(size_t bits) if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) { crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. 
*/ - execute_in_process_context(crng_set_ready, &set_ready); process_random_ready_list(); wake_up_interruptible(&crng_init_wait); kill_fasync(&fasync, SIGIO, POLL_IN); From bb55ca1612923b06c4d86ab28b8dd8fdca55ced1 Mon Sep 17 00:00:00 2001 From: Xiao Yang Date: Sun, 10 Apr 2022 19:35:13 +0800 Subject: [PATCH 0759/1842] RDMA/rxe: Generate a completion for unsupported/invalid opcode commit 2f917af777011c88e977b9b9a5d00b280d3a59ce upstream. Current rxe_requester() doesn't generate a completion when processing an unsupported/invalid opcode. If rxe driver doesn't support a new opcode (e.g. RDMA Atomic Write) and RDMA library supports it, an application using the new opcode can reproduce this issue. Fix the issue by calling "goto err;". Fixes: 8700e3e7c485 ("Soft RoCE driver") Link: https://lore.kernel.org/r/20220410113513.27537-1-yangx.jy@fujitsu.com Signed-off-by: Xiao Yang Signed-off-by: Jason Gunthorpe Signed-off-by: Greg Kroah-Hartman --- drivers/infiniband/sw/rxe/rxe_req.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c index d4917646641a..69238f856c67 100644 --- a/drivers/infiniband/sw/rxe/rxe_req.c +++ b/drivers/infiniband/sw/rxe/rxe_req.c @@ -650,7 +650,7 @@ int rxe_requester(void *arg) opcode = next_opcode(qp, wqe, wqe->wr.opcode); if (unlikely(opcode < 0)) { wqe->status = IB_WC_LOC_QP_OP_ERR; - goto exit; + goto err; } mask = rxe_opcode[opcode].mask; From 57e561573f2e51f9f53428caa17eae6a7090f0f5 Mon Sep 17 00:00:00 2001 From: "Maciej W. Rozycki" Date: Sun, 1 May 2022 23:14:16 +0100 Subject: [PATCH 0760/1842] MIPS: IP27: Remove incorrect `cpu_has_fpu' override commit 424c3781dd1cb401857585331eaaa425a13f2429 upstream. Remove unsupported forcing of `cpu_has_fpu' to 1, which makes the `nofpu' kernel parameter non-functional, and also causes a link error: ld: arch/mips/kernel/traps.o: in function `trap_init': ./arch/mips/include/asm/msa.h:(.init.text+0x348): undefined reference to `handle_fpe' ld: ./arch/mips/include/asm/msa.h:(.init.text+0x354): undefined reference to `handle_fpe' ld: ./arch/mips/include/asm/msa.h:(.init.text+0x360): undefined reference to `handle_fpe' where the CONFIG_MIPS_FP_SUPPORT configuration option has been disabled. Signed-off-by: Maciej W. Rozycki Reported-by: Stephen Zhang Fixes: 0ebb2f4159af ("MIPS: IP27: Update/restructure CPU overrides") Cc: stable@vger.kernel.org # v4.2+ Signed-off-by: Thomas Bogendoerfer Signed-off-by: Greg Kroah-Hartman --- arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h index 58f829c9b6c7..79d6fd249583 100644 --- a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h +++ b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h @@ -26,7 +26,6 @@ #define cpu_has_3k_cache 0 #define cpu_has_4k_cache 1 #define cpu_has_tx39_cache 0 -#define cpu_has_fpu 1 #define cpu_has_nofpuex 0 #define cpu_has_32fpr 1 #define cpu_has_counter 1 From 96662c77466dfc2285519c87a2b955bb2d4f5278 Mon Sep 17 00:00:00 2001 From: "Maciej W. Rozycki" Date: Sun, 1 May 2022 23:14:22 +0100 Subject: [PATCH 0761/1842] MIPS: IP30: Remove incorrect `cpu_has_fpu' override commit f44b3e74c33fe04defeff24ebcae98c3bcc5b285 upstream. 
Remove unsupported forcing of `cpu_has_fpu' to 1, which makes the `nofpu' kernel parameter non-functional, and also causes a link error: ld: arch/mips/kernel/traps.o: in function `trap_init': ./arch/mips/include/asm/msa.h:(.init.text+0x348): undefined reference to `handle_fpe' ld: ./arch/mips/include/asm/msa.h:(.init.text+0x354): undefined reference to `handle_fpe' ld: ./arch/mips/include/asm/msa.h:(.init.text+0x360): undefined reference to `handle_fpe' where the CONFIG_MIPS_FP_SUPPORT configuration option has been disabled. Signed-off-by: Maciej W. Rozycki Reported-by: Stephen Zhang Fixes: 7505576d1c1a ("MIPS: add support for SGI Octane (IP30)") Cc: stable@vger.kernel.org # v5.5+ Signed-off-by: Thomas Bogendoerfer Signed-off-by: Greg Kroah-Hartman --- arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h index 49a93e82c252..2635b6ba1cb5 100644 --- a/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h +++ b/arch/mips/include/asm/mach-ip30/cpu-feature-overrides.h @@ -29,7 +29,6 @@ #define cpu_has_3k_cache 0 #define cpu_has_4k_cache 1 #define cpu_has_tx39_cache 0 -#define cpu_has_fpu 1 #define cpu_has_nofpuex 0 #define cpu_has_32fpr 1 #define cpu_has_counter 1 From a67100f42665cf7a5ed7821376140f62def0d31e Mon Sep 17 00:00:00 2001 From: Eric Biggers Date: Thu, 19 May 2022 13:44:37 -0700 Subject: [PATCH 0762/1842] ext4: only allow test_dummy_encryption when supported commit 5f41fdaea63ddf96d921ab36b2af4a90ccdb5744 upstream. Make the test_dummy_encryption mount option require that the encrypt feature flag be already enabled on the filesystem, rather than automatically enabling it. Practically, this means that "-O encrypt" will need to be included in MKFS_OPTIONS when running xfstests with the test_dummy_encryption mount option. (ext4/053 also needs an update.) Moreover, as long as the preconditions for test_dummy_encryption are being tightened anyway, take the opportunity to start rejecting it when !CONFIG_FS_ENCRYPTION rather than ignoring it. The motivation for requiring the encrypt feature flag is that: - Having the filesystem auto-enable feature flags is problematic, as it bypasses the usual sanity checks. The specific issue which came up recently is that in kernel versions where ext4 supports casefold but not encrypt+casefold (v5.1 through v5.10), the kernel will happily add the encrypt flag to a filesystem that has the casefold flag, making it unmountable -- but only for subsequent mounts, not the initial one. This confused the casefold support detection in xfstests, causing generic/556 to fail rather than be skipped. - The xfstests-bld test runners (kvm-xfstests et al.) already use the required mkfs flag, so they will not be affected by this change. Only users of test_dummy_encryption alone will be affected. But, this option has always been for testing only, so it should be fine to require that the few users of this option update their test scripts. - f2fs already requires it (for its equivalent feature flag). 
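Stated as code, the change makes the option parser refuse test_dummy_encryption unless the feature bit is already set, instead of setting the bit itself. A minimal user-space sketch of that check, with an invented feature flag and superblock type:

#include <stdio.h>

#define FEATURE_ENCRYPT 0x1u

struct superblock {
        unsigned int features;
};

/* Refuse the test option unless the feature was enabled at mkfs time. */
static int set_test_dummy_encryption(const struct superblock *sb)
{
        if (!(sb->features & FEATURE_ENCRYPT)) {
                fprintf(stderr, "test_dummy_encryption requires encrypt feature\n");
                return -1;
        }
        /* ... set up the dummy policy here ... */
        return 0;
}

int main(void)
{
        struct superblock plain = { .features = 0 };
        struct superblock encrypted = { .features = FEATURE_ENCRYPT };

        printf("without feature: %d\n", set_test_dummy_encryption(&plain));
        printf("with feature:    %d\n", set_test_dummy_encryption(&encrypted));
        return 0;
}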
Signed-off-by: Eric Biggers Reviewed-by: Gabriel Krisman Bertazi Link: https://lore.kernel.org/r/20220519204437.61645-1-ebiggers@kernel.org Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman --- fs/ext4/ext4.h | 6 ------ fs/ext4/super.c | 18 ++++++++++-------- 2 files changed, 10 insertions(+), 14 deletions(-) diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h index 8329961546b5..4ad1c3ce9398 100644 --- a/fs/ext4/ext4.h +++ b/fs/ext4/ext4.h @@ -1419,12 +1419,6 @@ struct ext4_super_block { #ifdef __KERNEL__ -#ifdef CONFIG_FS_ENCRYPTION -#define DUMMY_ENCRYPTION_ENABLED(sbi) ((sbi)->s_dummy_enc_policy.policy != NULL) -#else -#define DUMMY_ENCRYPTION_ENABLED(sbi) (0) -#endif - /* Number of quota types we support */ #define EXT4_MAXQUOTAS 3 diff --git a/fs/ext4/super.c b/fs/ext4/super.c index 050108091f8a..a0af833f7da7 100644 --- a/fs/ext4/super.c +++ b/fs/ext4/super.c @@ -2084,6 +2084,12 @@ static int ext4_set_test_dummy_encryption(struct super_block *sb, struct ext4_sb_info *sbi = EXT4_SB(sb); int err; + if (!ext4_has_feature_encrypt(sb)) { + ext4_msg(sb, KERN_WARNING, + "test_dummy_encryption requires encrypt feature"); + return -1; + } + /* * This mount option is just for testing, and it's not worthwhile to * implement the extra complexity (e.g. RCU protection) that would be @@ -2111,11 +2117,13 @@ static int ext4_set_test_dummy_encryption(struct super_block *sb, return -1; } ext4_msg(sb, KERN_WARNING, "Test dummy encryption mode enabled"); + return 1; #else ext4_msg(sb, KERN_WARNING, - "Test dummy encryption mount option ignored"); + "test_dummy_encryption option not supported"); + return -1; + #endif - return 1; } static int handle_mount_opt(struct super_block *sb, char *opt, int token, @@ -4929,12 +4937,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) goto failed_mount_wq; } - if (DUMMY_ENCRYPTION_ENABLED(sbi) && !sb_rdonly(sb) && - !ext4_has_feature_encrypt(sb)) { - ext4_set_feature_encrypt(sb); - ext4_commit_super(sb, 1); - } - /* * Get the # of file system overhead blocks from the * superblock if present. From 47c1680e51efaf1a9ec09d6bf04cfc261b8270ab Mon Sep 17 00:00:00 2001 From: Jia-Ju Bai Date: Fri, 27 May 2022 23:28:18 +0800 Subject: [PATCH 0763/1842] md: bcache: check the return value of kzalloc() in detached_dev_do_request() commit 40f567bbb3b0639d2ec7d1c6ad4b1b018f80cf19 upstream. The function kzalloc() in detached_dev_do_request() can fail, so its return value should be checked. 
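The shape of the fix is the usual one for allocations on the I/O path: when the allocation fails, complete the request with a resource error instead of dereferencing a NULL pointer. A small user-space sketch of that shape, with invented types rather than the bcache ones:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct request {
        int status;
};

static int handle_request(struct request *rq)
{
        void *priv = calloc(1, 64);

        if (!priv) {
                rq->status = -ENOMEM;   /* fail the request instead of crashing */
                return -1;
        }
        /* ... do the work that needs priv ... */
        free(priv);
        rq->status = 0;
        return 0;
}

int main(void)
{
        struct request rq = { .status = 0 };

        handle_request(&rq);
        printf("status: %d\n", rq.status);
        return 0;
}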
Fixes: bc082a55d25c ("bcache: fix inaccurate io state for detached bcache devices") Reported-by: TOTE Robot Signed-off-by: Jia-Ju Bai Signed-off-by: Coly Li Link: https://lore.kernel.org/r/20220527152818.27545-4-colyli@suse.de Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- drivers/md/bcache/request.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c index 214326383145..97895262fc54 100644 --- a/drivers/md/bcache/request.c +++ b/drivers/md/bcache/request.c @@ -1109,6 +1109,12 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio) * which would call closure_get(&dc->disk.cl) */ ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO); + if (!ddip) { + bio->bi_status = BLK_STS_RESOURCE; + bio->bi_end_io(bio); + return; + } + ddip->d = d; /* Count on the bcache device */ ddip->start_time = part_start_io_acct(d->disk, &ddip->part, bio); From e2e52b40ef1acfd4f2508c32fce1409013909581 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 9 Jun 2022 10:21:31 +0200 Subject: [PATCH 0764/1842] Linux 5.10.121 Link: https://lore.kernel.org/r/20220607164908.521895282@linuxfoundation.org Tested-by: Pavel Machek (CIP) Tested-by: Shuah Khan Tested-by: Sudip Mukherjee Tested-by: Fox Chen Tested-by: Linux Kernel Functional Testing Tested-by: Salvatore Bonaccorso Tested-by: Guenter Roeck Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index fdd2ac273f42..5233d3d9a3b5 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 120 +SUBLEVEL = 121 EXTRAVERSION = NAME = Dare mighty things From 1509d2335db84d8936d5f00e9765ec774732e7c9 Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Sun, 23 Jan 2022 09:40:31 -0800 Subject: [PATCH 0765/1842] pcmcia: db1xxx_ss: restrict to MIPS_DB1XXX boards [ Upstream commit 3928cf08334ed895a31458cbebd8d4ec6d84c080 ] When the MIPS_ALCHEMY board selection is MIPS_XXS1500 instead of MIPS_DB1XXX, the PCMCIA driver 'db1xxx_ss' has build errors due to missing DB1XXX symbols. The PCMCIA driver should be restricted to MIPS_DB1XXX instead of MIPS_ALCHEMY to fix this build error. ERROR: modpost: "bcsr_read" [drivers/pcmcia/db1xxx_ss.ko] undefined! ERROR: modpost: "bcsr_mod" [drivers/pcmcia/db1xxx_ss.ko] undefined! 
Fixes: 42a4f17dc356 ("MIPS: Alchemy: remove SOC_AU1X00 in favor of MIPS_ALCHEMY") Signed-off-by: Randy Dunlap Reported-by: kernel test robot Cc: Arnd Bergmann Cc: Daniel Vetter Cc: Kees Cook Cc: Thomas Bogendoerfer Cc: linux-mips@vger.kernel.org Acked-by: Manuel Lauss Signed-off-by: Dominik Brodowski Signed-off-by: Sasha Levin --- drivers/pcmcia/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/pcmcia/Kconfig b/drivers/pcmcia/Kconfig index 82d10b6661c7..73508fca520c 100644 --- a/drivers/pcmcia/Kconfig +++ b/drivers/pcmcia/Kconfig @@ -151,7 +151,7 @@ config TCIC config PCMCIA_ALCHEMY_DEVBOARD tristate "Alchemy Db/Pb1xxx PCMCIA socket services" - depends on MIPS_ALCHEMY && PCMCIA + depends on MIPS_DB1XXX && PCMCIA help Enable this driver of you want PCMCIA support on your Alchemy Db1000, Db/Pb1100, Db/Pb1500, Db/Pb1550, Db/Pb1200, DB1300 From 4610b067615f3ff97dec9ad576d6031072e0eb18 Mon Sep 17 00:00:00 2001 From: Jakob Koschel Date: Mon, 21 Mar 2022 13:36:26 +0100 Subject: [PATCH 0766/1842] staging: greybus: codecs: fix type confusion of list iterator variable [ Upstream commit 84ef256550196bc06e6849a34224c998b45bd557 ] If the list does not exit early then data == NULL and 'module' does not point to a valid list element. Using 'module' in such a case is not valid and was therefore removed. Fixes: 6dd67645f22c ("greybus: audio: Use single codec driver registration") Reviewed-by: Dan Carpenter Reviewed-by: Vaibhav Agarwal Reviewed-by: Mark Greer Signed-off-by: Jakob Koschel Link: https://lore.kernel.org/r/20220321123626.3068639-1-jakobkoschel@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/staging/greybus/audio_codec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/staging/greybus/audio_codec.c b/drivers/staging/greybus/audio_codec.c index 42ce6c88ea75..4ed29f852c23 100644 --- a/drivers/staging/greybus/audio_codec.c +++ b/drivers/staging/greybus/audio_codec.c @@ -621,8 +621,8 @@ static int gbcodec_mute_stream(struct snd_soc_dai *dai, int mute, int stream) break; } if (!data) { - dev_err(dai->dev, "%s:%s DATA connection missing\n", - dai->name, module->name); + dev_err(dai->dev, "%s DATA connection missing\n", + dai->name); mutex_unlock(&codec->lock); return -ENODEV; } From 03efa70eb0ee1f39665a67ca5ebf08ba935301cf Mon Sep 17 00:00:00 2001 From: Alexandru Tachici Date: Tue, 22 Mar 2022 12:50:24 +0200 Subject: [PATCH 0767/1842] iio: adc: ad7124: Remove shift from scan_type [ Upstream commit fe78ccf79b0e29fd6d8dc2e2c3b0dbeda4ce3ad8 ] The 24 bits data is stored in 32 bits in BE. There is no need to shift it. This confuses user-space apps. 
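For context, user space decodes buffered samples purely from the advertised scan_type, so a stale shift value changes the readings applications compute. The sketch below shows the usual arithmetic for an unsigned big-endian channel; the sample value is made up, and nothing is claimed here about where the converter itself places its bytes, only that with the shift removed user space applies a shift of 0.

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

/* Usual decode for an unsigned, big-endian sample: swap the storage word,
 * apply the advertised shift, then mask down to realbits. */
static uint32_t decode_be_sample(uint32_t raw_be, unsigned int shift,
                                 unsigned int realbits)
{
        uint32_t val = be32toh(raw_be) >> shift;

        return val & ((1u << realbits) - 1);
}

int main(void)
{
        /* A made-up 24-bit reading stored in a 32-bit big-endian word. */
        uint32_t raw = htobe32(0x00123456);

        /* With the bogus shift of 8 dropped, shift is now 0. */
        printf("decoded: 0x%06x\n", decode_be_sample(raw, 0, 24));
        return 0;
}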
Fixes: b3af341bbd966 ("iio: adc: Add ad7124 support") Signed-off-by: Alexandru Tachici Link: https://lore.kernel.org/r/20220322105029.86389-2-alexandru.tachici@analog.com Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- drivers/iio/adc/ad7124.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c index bd3500995037..19ab7d7251bc 100644 --- a/drivers/iio/adc/ad7124.c +++ b/drivers/iio/adc/ad7124.c @@ -170,7 +170,6 @@ static const struct iio_chan_spec ad7124_channel_template = { .sign = 'u', .realbits = 24, .storagebits = 32, - .shift = 8, .endianness = IIO_BE, }, }; From 56ac04f35fc5dc8b5b67a1fa2f7204282aa887d5 Mon Sep 17 00:00:00 2001 From: Jiasheng Jiang Date: Thu, 20 Jan 2022 17:29:36 +0800 Subject: [PATCH 0768/1842] lkdtm/bugs: Check for the NULL pointer after calling kmalloc [ Upstream commit 4a9800c81d2f34afb66b4b42e0330ae8298019a2 ] As the possible failure of the kmalloc(), the not_checked and checked could be NULL pointer. Therefore, it should be better to check it in order to avoid the dereference of the NULL pointer. Also, we need to kfree the 'not_checked' and 'checked' to avoid the memory leak if fails. And since it is just a test, it may directly return without error number. Fixes: ae2e1aad3e48 ("drivers/misc/lkdtm/bugs.c: add arithmetic overflow and array bounds checks") Signed-off-by: Jiasheng Jiang Acked-by: Dan Carpenter Signed-off-by: Kees Cook Link: https://lore.kernel.org/r/20220120092936.1874264-1-jiasheng@iscas.ac.cn Signed-off-by: Sasha Levin --- drivers/misc/lkdtm/bugs.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c index a337f97b30e2..d39b8139b096 100644 --- a/drivers/misc/lkdtm/bugs.c +++ b/drivers/misc/lkdtm/bugs.c @@ -231,6 +231,11 @@ void lkdtm_ARRAY_BOUNDS(void) not_checked = kmalloc(sizeof(*not_checked) * 2, GFP_KERNEL); checked = kmalloc(sizeof(*checked) * 2, GFP_KERNEL); + if (!not_checked || !checked) { + kfree(not_checked); + kfree(checked); + return; + } pr_info("Array access within bounds ...\n"); /* For both, touch all bytes in the actual member size. */ From ee6c33b29e624f515202a31bf6ef0437f26a1867 Mon Sep 17 00:00:00 2001 From: Wang Weiyang Date: Mon, 28 Mar 2022 19:58:44 +0800 Subject: [PATCH 0769/1842] tty: goldfish: Use tty_port_destroy() to destroy port [ Upstream commit 507b05063d1b7a1fcb9f7d7c47586fc4f3508f98 ] In goldfish_tty_probe(), the port initialized through tty_port_init() should be destroyed in error paths.In goldfish_tty_remove(), qtty->port also should be destroyed or else might leak resources. Fix the above by calling tty_port_destroy(). 
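The pattern is the standard init/destroy pairing: whatever probe() sets up must be torn down on every probe() error path and again in remove(). A compact user-space sketch of that pairing, with an invented port structure and helpers:

#include <stdio.h>
#include <stdlib.h>

struct port {
        char *buf;
};

static int port_init(struct port *p)
{
        p->buf = malloc(64);
        return p->buf ? 0 : -1;
}

static void port_destroy(struct port *p)
{
        free(p->buf);
        p->buf = NULL;
}

static struct port the_port;

static int driver_probe(int later_step_fails)
{
        if (port_init(&the_port))
                return -1;
        if (later_step_fails) {
                port_destroy(&the_port);        /* unwind on the error path */
                return -1;
        }
        return 0;
}

static void driver_remove(void)
{
        port_destroy(&the_port);                /* and again on removal */
}

int main(void)
{
        if (!driver_probe(0))
                driver_remove();
        driver_probe(1);
        return 0;
}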
Fixes: 666b7793d4bf ("goldfish: tty driver") Reviewed-by: Jiri Slaby Signed-off-by: Wang Weiyang Link: https://lore.kernel.org/r/20220328115844.86032-1-wangweiyang2@huawei.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/goldfish.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c index c8c5cdfc5e19..abc84d84f638 100644 --- a/drivers/tty/goldfish.c +++ b/drivers/tty/goldfish.c @@ -407,6 +407,7 @@ static int goldfish_tty_probe(struct platform_device *pdev) err_tty_register_device_failed: free_irq(irq, qtty); err_dec_line_count: + tty_port_destroy(&qtty->port); goldfish_tty_current_line_count--; if (goldfish_tty_current_line_count == 0) goldfish_tty_delete_driver(); @@ -428,6 +429,7 @@ static int goldfish_tty_remove(struct platform_device *pdev) iounmap(qtty->base); qtty->base = NULL; free_irq(qtty->irq, pdev); + tty_port_destroy(&qtty->port); goldfish_tty_current_line_count--; if (goldfish_tty_current_line_count == 0) goldfish_tty_delete_driver(); From 11bc6eff3abcc45a8d978852e01ab2a7a9615f44 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 7 Mar 2022 10:51:35 +0000 Subject: [PATCH 0770/1842] tty: serial: owl: Fix missing clk_disable_unprepare() in owl_uart_probe [ Upstream commit bcea0f547ec1a2ee44d429aaf0334633e386e67c ] Fix the missing clk_disable_unprepare() before return from owl_uart_probe() in the error handling case. Fixes: abf42d2f333b ("tty: serial: owl: add "much needed" clk_prepare_enable()") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220307105135.11698-1-linmq006@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/owl-uart.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c index c149f8c30007..a0d4bffe70bd 100644 --- a/drivers/tty/serial/owl-uart.c +++ b/drivers/tty/serial/owl-uart.c @@ -695,6 +695,7 @@ static int owl_uart_probe(struct platform_device *pdev) owl_port->port.uartclk = clk_get_rate(owl_port->clk); if (owl_port->port.uartclk == 0) { dev_err(&pdev->dev, "clock rate is zero\n"); + clk_disable_unprepare(owl_port->clk); return -EINVAL; } owl_port->port.flags = UPF_BOOT_AUTOCONF | UPF_IOREMAP | UPF_LOW_LATENCY; From e27376f5aade9c545c5f964ee9b0d9f23105f2df Mon Sep 17 00:00:00 2001 From: Daniel Gibson Date: Wed, 30 Mar 2022 01:58:10 +0200 Subject: [PATCH 0771/1842] tty: n_tty: Restore EOF push handling behavior [ Upstream commit 65a8b287023da68c4550deab5c764e6891cf1caf ] TTYs in ICANON mode have a special case that allows "pushing" a line without a regular EOL character (like newline), by using EOF (the EOT character - ASCII 0x4) as a pseudo-EOL. It is silently discarded, so the reader of the PTS will receive the line *without* EOF or any other terminating character. This special case has an edge case: What happens if the readers buffer is the same size as the line (without EOF)? Will they be able to tell if the whole line is received, i.e. if the next read() will return more of the same line or the next line? There are two possibilities, that both have (dis)advantages: 1. The next read() returns 0. FreeBSD (13.0) and OSX (10.11) do this. Advantage: The reader can interpret this as "the line is over". Disadvantage: read() returning 0 means EOF, the reader could also interpret it as "there's no more data" and stop reading or even close the PT. 2. The next read() returns the next line, the EOF is silently discarded. 
Solaris (or at least OpenIndiana 2021.10) does this, Linux has done do this since commit 40d5e0905a03 ("n_tty: Fix EOF push handling"); this behavior was recently broken by commit 359303076163 ("tty: n_tty: do not look ahead for EOL character past the end of the buffer"). Advantage: read() won't return 0 (EOF), reader less likely to be confused (and things like `while(read(..)>0)` don't break) Disadvantage: The reader can't really know if the read() continues the last line (that filled the whole read buffer) or starts a new line. As both options are defensible (and are used by other Unix-likes), it's best to stick to the "old" behavior since "n_tty: Fix EOF push handling" of 2013, i.e. silently discard that EOF. This patch - that I actually got from Linus for testing and only modified slightly - restores that behavior by skipping an EOF character if it's the next character after reading is done. Based on a patch from Linus Torvalds. Link: https://bugzilla.kernel.org/show_bug.cgi?id=215611 Fixes: 359303076163 ("tty: n_tty: do not look ahead for EOL character past the end of the buffer") Cc: Peter Hurley Cc: Greg Kroah-Hartman Cc: Jiri Slaby Reviewed-and-tested-by: Daniel Gibson Acked-by: Linus Torvalds Signed-off-by: Daniel Gibson Link: https://lore.kernel.org/r/20220329235810.452513-2-daniel@gibson.sh Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/n_tty.c | 38 +++++++++++++++++++++++++++++++++++++- 1 file changed, 37 insertions(+), 1 deletion(-) diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c index 58190135efb7..12dde01e576b 100644 --- a/drivers/tty/n_tty.c +++ b/drivers/tty/n_tty.c @@ -2073,6 +2073,35 @@ static bool canon_copy_from_read_buf(struct tty_struct *tty, return ldata->read_tail != canon_head; } +/* + * If we finished a read at the exact location of an + * EOF (special EOL character that's a __DISABLED_CHAR) + * in the stream, silently eat the EOF. + */ +static void canon_skip_eof(struct tty_struct *tty) +{ + struct n_tty_data *ldata = tty->disc_data; + size_t tail, canon_head; + + canon_head = smp_load_acquire(&ldata->canon_head); + tail = ldata->read_tail; + + // No data? + if (tail == canon_head) + return; + + // See if the tail position is EOF in the circular buffer + tail &= (N_TTY_BUF_SIZE - 1); + if (!test_bit(tail, ldata->read_flags)) + return; + if (read_buf(ldata, tail) != __DISABLED_CHAR) + return; + + // Clear the EOL bit, skip the EOF char. + clear_bit(tail, ldata->read_flags); + smp_store_release(&ldata->read_tail, ldata->read_tail + 1); +} + /** * job_control - check job control * @tty: tty @@ -2142,7 +2171,14 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, */ if (*cookie) { if (ldata->icanon && !L_EXTPROC(tty)) { - if (canon_copy_from_read_buf(tty, &kb, &nr)) + /* + * If we have filled the user buffer, see + * if we should skip an EOF character before + * releasing the lock and returning done. + */ + if (!nr) + canon_skip_eof(tty); + else if (canon_copy_from_read_buf(tty, &kb, &nr)) return kb - kbuf; } else { if (copy_from_read_buf(tty, &kb, &nr)) From 4e3a2d77bd0b83442156fdfa74a8d098511634cd Mon Sep 17 00:00:00 2001 From: Sherry Sun Date: Mon, 21 Mar 2022 19:22:11 +0800 Subject: [PATCH 0772/1842] tty: serial: fsl_lpuart: fix potential bug when using both of_alias_get_id and ida_simple_get [ Upstream commit f398e0aa325c61fa20903833a5b534ecb8e6e418 ] Now fsl_lpuart driver use both of_alias_get_id() and ida_simple_get() in .probe(), which has the potential bug. 
For example, when remove the lpuart7 alias in dts, of_alias_get_id() will return error, then call ida_simple_get() to allocate the id 0 for lpuart7, this may confilct with the lpuart4 which has alias 0. aliases { ... serial0 = &lpuart4; serial1 = &lpuart5; serial2 = &lpuart6; serial3 = &lpuart7; } So remove the ida_simple_get() in .probe(), return an error directly when calling of_alias_get_id() fails, which is consistent with other uart drivers behavior. Fixes: 3bc3206e1c0f ("serial: fsl_lpuart: Remove the alias node dependence") Signed-off-by: Sherry Sun Link: https://lore.kernel.org/r/20220321112211.8895-1-sherry.sun@nxp.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/fsl_lpuart.c | 24 ++++-------------------- 1 file changed, 4 insertions(+), 20 deletions(-) diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c index b9f8add284e3..52a603a6f9b8 100644 --- a/drivers/tty/serial/fsl_lpuart.c +++ b/drivers/tty/serial/fsl_lpuart.c @@ -229,8 +229,6 @@ /* IMX lpuart has four extra unused regs located at the beginning */ #define IMX_REG_OFF 0x10 -static DEFINE_IDA(fsl_lpuart_ida); - enum lpuart_type { VF610_LPUART, LS1021A_LPUART, @@ -265,7 +263,6 @@ struct lpuart_port { int rx_dma_rng_buf_len; unsigned int dma_tx_nents; wait_queue_head_t dma_wait; - bool id_allocated; }; struct lpuart_soc_data { @@ -2638,23 +2635,18 @@ static int lpuart_probe(struct platform_device *pdev) ret = of_alias_get_id(np, "serial"); if (ret < 0) { - ret = ida_simple_get(&fsl_lpuart_ida, 0, UART_NR, GFP_KERNEL); - if (ret < 0) { - dev_err(&pdev->dev, "port line is full, add device failed\n"); - return ret; - } - sport->id_allocated = true; + dev_err(&pdev->dev, "failed to get alias id, errno %d\n", ret); + return ret; } if (ret >= ARRAY_SIZE(lpuart_ports)) { dev_err(&pdev->dev, "serial%d out of range\n", ret); - ret = -EINVAL; - goto failed_out_of_range; + return -EINVAL; } sport->port.line = ret; ret = lpuart_enable_clks(sport); if (ret) - goto failed_clock_enable; + return ret; sport->port.uartclk = lpuart_get_baud_clk_rate(sport); lpuart_ports[sport->port.line] = sport; @@ -2697,10 +2689,6 @@ static int lpuart_probe(struct platform_device *pdev) failed_attach_port: failed_irq_request: lpuart_disable_clks(sport); -failed_clock_enable: -failed_out_of_range: - if (sport->id_allocated) - ida_simple_remove(&fsl_lpuart_ida, sport->port.line); return ret; } @@ -2710,9 +2698,6 @@ static int lpuart_remove(struct platform_device *pdev) uart_remove_one_port(&lpuart_reg, &sport->port); - if (sport->id_allocated) - ida_simple_remove(&fsl_lpuart_ida, sport->port.line); - lpuart_disable_clks(sport); if (sport->dma_tx_chan) @@ -2842,7 +2827,6 @@ static int __init lpuart_serial_init(void) static void __exit lpuart_serial_exit(void) { - ida_destroy(&fsl_lpuart_ida); platform_driver_unregister(&lpuart_driver); uart_unregister_driver(&lpuart_reg); } From bcbb795a9e78180d74c6ab21518da87e803dfdce Mon Sep 17 00:00:00 2001 From: Hangyu Hua Date: Tue, 12 Apr 2022 10:02:57 +0800 Subject: [PATCH 0773/1842] usb: usbip: fix a refcount leak in stub_probe() [ Upstream commit 9ec4cbf1cc55d126759051acfe328d489c5d6e60 ] usb_get_dev() is called in stub_device_alloc(). When stub_probe() fails after that, usb_put_dev() needs to be called to release the reference. Fix this by moving usb_put_dev() to sdev_free error path handling. Find this by code review. 
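The fix is an instance of the goto-ladder rule: a reference (or any resource) is released at the unwind label matching the point where it was taken, so every failure after the acquisition, and none before it, drops it exactly once. A stand-alone user-space sketch of that ordering, with plain allocations standing in for the device reference:

#include <stdio.h>
#include <stdlib.h>

static int probe(int fail_late)
{
        char *ref, *priv;

        ref = malloc(16);               /* step 1: take the reference */
        if (!ref)
                return -1;

        priv = malloc(16);              /* step 2: allocate private data */
        if (!priv)
                goto err_put_ref;

        if (fail_late)                  /* a later step fails */
                goto err_free_priv;

        free(priv);                     /* in this sketch, tear down right away */
        free(ref);
        return 0;

err_free_priv:
        free(priv);
err_put_ref:
        free(ref);                      /* dropped at the level it was taken */
        return -1;
}

int main(void)
{
        printf("ok: %d, failure: %d\n", probe(0), probe(1));
        return 0;
}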
Fixes: 3ff67445750a ("usbip: fix error handling in stub_probe()") Reviewed-by: Shuah Khan Signed-off-by: Hangyu Hua Link: https://lore.kernel.org/r/20220412020257.9767-1-hbh25y@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/usbip/stub_dev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c index d8d3892e5a69..3c6d452e3bf4 100644 --- a/drivers/usb/usbip/stub_dev.c +++ b/drivers/usb/usbip/stub_dev.c @@ -393,7 +393,6 @@ static int stub_probe(struct usb_device *udev) err_port: dev_set_drvdata(&udev->dev, NULL); - usb_put_dev(udev); /* we already have busid_priv, just lock busid_lock */ spin_lock(&busid_priv->busid_lock); @@ -408,6 +407,7 @@ static int stub_probe(struct usb_device *udev) put_busid_priv(busid_priv); sdev_free: + usb_put_dev(udev); stub_device_free(sdev); return rc; From e100742823c3fc586b709e042d69f1f20a5cee9e Mon Sep 17 00:00:00 2001 From: Niels Dossche Date: Tue, 12 Apr 2022 18:50:55 +0200 Subject: [PATCH 0774/1842] usb: usbip: add missing device lock on tweak configuration cmd [ Upstream commit d088fabace2ca337b275d1d4b36db4fe7771e44f ] The function documentation of usb_set_configuration says that its callers should hold the device lock. This lock is held for all callsites except tweak_set_configuration_cmd. The code path can be executed for example when attaching a remote USB device. The solution is to surround the call with the device lock. This bug was found using my experimental own-developed static analysis tool, which reported the missing lock on v5.17.2. I manually verified this bug report by doing code review as well. I runtime checked that the required lock is not held. I compiled and runtime tested this on x86_64 with a USB mouse. After applying this patch, my analyser no longer reports this potential bug. Fixes: 2c8c98158946 ("staging: usbip: let client choose device configuration") Reviewed-by: Shuah Khan Signed-off-by: Niels Dossche Link: https://lore.kernel.org/r/20220412165055.257113-1-dossche.niels@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/usbip/stub_rx.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c index 325c22008e53..5dd41e8215e0 100644 --- a/drivers/usb/usbip/stub_rx.c +++ b/drivers/usb/usbip/stub_rx.c @@ -138,7 +138,9 @@ static int tweak_set_configuration_cmd(struct urb *urb) req = (struct usb_ctrlrequest *) urb->setup_packet; config = le16_to_cpu(req->wValue); + usb_lock_device(sdev->udev); err = usb_set_configuration(sdev->udev, config); + usb_unlock_device(sdev->udev); if (err && err != -ENODEV) dev_err(&sdev->udev->dev, "can't set config #%d, error %d\n", config, err); From 6b7cf2212223b5128f68358e7b22b5de89ae2ec1 Mon Sep 17 00:00:00 2001 From: Lin Ma Date: Tue, 12 Apr 2022 22:43:59 +0800 Subject: [PATCH 0775/1842] USB: storage: karma: fix rio_karma_init return [ Upstream commit b92ffb1eddd9a66a90defc556dcbf65a43c196c7 ] The function rio_karma_init() should return -ENOMEM instead of value 0 (USB_STOR_TRANSPORT_GOOD) when allocation fails. Similarly, it should return -EIO when rio_karma_send_command() fails.
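The contract being restored is that an init helper returns 0 only on success and a negative errno otherwise, so a failed setup is never mistaken for a working one. A small user-space sketch of that contract; the helper and its failure injection are invented:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns 0 on success, -ENOMEM or -EIO on failure, never 0 on failure. */
static int init_helper(void **out, int command_ok)
{
        void *data = calloc(1, 32);

        if (!data)
                return -ENOMEM;

        if (!command_ok) {
                free(data);
                return -EIO;
        }

        *out = data;
        return 0;
}

int main(void)
{
        void *priv = NULL;

        printf("success path: %d\n", init_helper(&priv, 1));
        free(priv);
        printf("failure path: %d\n", init_helper(&priv, 0));
        return 0;
}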
Fixes: dfe0d3ba20e8 ("USB Storage: add rio karma eject support") Acked-by: Alan Stern Signed-off-by: Lin Ma Link: https://lore.kernel.org/r/20220412144359.28447-1-linma@zju.edu.cn Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/storage/karma.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/drivers/usb/storage/karma.c b/drivers/usb/storage/karma.c index 05cec81dcd3f..38ddfedef629 100644 --- a/drivers/usb/storage/karma.c +++ b/drivers/usb/storage/karma.c @@ -174,24 +174,25 @@ static void rio_karma_destructor(void *extra) static int rio_karma_init(struct us_data *us) { - int ret = 0; struct karma_data *data = kzalloc(sizeof(struct karma_data), GFP_NOIO); if (!data) - goto out; + return -ENOMEM; data->recv = kmalloc(RIO_RECV_LEN, GFP_NOIO); if (!data->recv) { kfree(data); - goto out; + return -ENOMEM; } us->extra = data; us->extra_destructor = rio_karma_destructor; - ret = rio_karma_send_command(RIO_ENTER_STORAGE, us); - data->in_storage = (ret == 0); -out: - return ret; + if (rio_karma_send_command(RIO_ENTER_STORAGE, us)) + return -EIO; + + data->in_storage = 1; + + return 0; } static struct scsi_host_template karma_host_template; From b382c0c3b8cc33465f23827f9f21218a0d8ad0cf Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 9 Mar 2022 11:10:33 +0000 Subject: [PATCH 0776/1842] usb: musb: Fix missing of_node_put() in omap2430_probe [ Upstream commit 424bef51fa530389b0b9008c9e144e40c10e8458 ] The device_node pointer is returned by of_parse_phandle() with refcount incremented. We should use of_node_put() on it when done. Fixes: 8934d3e4d0e7 ("usb: musb: omap2430: Don't use omap_get_control_dev()") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220309111033.24487-1-linmq006@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/musb/omap2430.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c index 4232f1ce3fbf..1d435e4ee857 100644 --- a/drivers/usb/musb/omap2430.c +++ b/drivers/usb/musb/omap2430.c @@ -360,6 +360,7 @@ static int omap2430_probe(struct platform_device *pdev) control_node = of_parse_phandle(np, "ctrl-module", 0); if (control_node) { control_pdev = of_find_device_by_node(control_node); + of_node_put(control_node); if (!control_pdev) { dev_err(&pdev->dev, "Failed to get control device\n"); ret = -EINVAL; From f9782b26d6f1134923d7232c82f06999ad0a09c7 Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Fri, 22 Apr 2022 08:48:18 +0200 Subject: [PATCH 0777/1842] staging: fieldbus: Fix the error handling path in anybuss_host_common_probe() [ Upstream commit 7079b3483a17be2cfba64cbd4feb1b7ae07f1ea7 ] If device_register() fails, device_unregister() should not be called because it will free some resources that are not allocated. put_device() should be used instead. 
Fixes: 308ee87a2f1e ("staging: fieldbus: anybus-s: support HMS Anybus-S bus") Signed-off-by: Christophe JAILLET Link: https://lore.kernel.org/r/5401a519608d6e1a4e7435c20f4f20b0c5c36c23.1650610082.git.christophe.jaillet@wanadoo.fr Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/staging/fieldbus/anybuss/host.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/staging/fieldbus/anybuss/host.c b/drivers/staging/fieldbus/anybuss/host.c index 549cb7d51af8..2a20a1767d77 100644 --- a/drivers/staging/fieldbus/anybuss/host.c +++ b/drivers/staging/fieldbus/anybuss/host.c @@ -1384,7 +1384,7 @@ anybuss_host_common_probe(struct device *dev, goto err_device; return cd; err_device: - device_unregister(&cd->client->dev); + put_device(&cd->client->dev); err_kthread: kthread_stop(cd->qthread); err_reset: From 572211d631d7665c6690b5a6cb80436f8c368dc1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Uwe=20Kleine-K=C3=B6nig?= Date: Fri, 8 Apr 2022 17:22:38 +0200 Subject: [PATCH 0778/1842] pwm: lp3943: Fix duty calculation in case period was clamped MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 5e3b07ca5cc78cd4a987e78446849e41288d87cb ] The hardware only supports periods <= 1.6 ms and if a bigger period is requested it is clamped to 1.6 ms. In this case duty_cycle might be bigger than 1.6 ms and then the duty cycle register is written with a value bigger than LP3943_MAX_DUTY. So clamp duty_cycle accordingly. Fixes: af66b3c0934e ("pwm: Add LP3943 PWM driver") Signed-off-by: Uwe Kleine-König Signed-off-by: Thierry Reding Signed-off-by: Sasha Levin --- drivers/pwm/pwm-lp3943.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/pwm/pwm-lp3943.c b/drivers/pwm/pwm-lp3943.c index bf3f14fb5f24..05e4120fd702 100644 --- a/drivers/pwm/pwm-lp3943.c +++ b/drivers/pwm/pwm-lp3943.c @@ -125,6 +125,7 @@ static int lp3943_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm, if (err) return err; + duty_ns = min(duty_ns, period_ns); val = (u8)(duty_ns * LP3943_MAX_DUTY / period_ns); return lp3943_write_byte(lp3943, reg_duty, val); From 8ae4fed195c064c08330d685fc7737137f74043b Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Fri, 22 Apr 2022 12:53:26 +0200 Subject: [PATCH 0779/1842] rpmsg: qcom_smd: Fix irq_of_parse_and_map() return value [ Upstream commit 1a358d35066487d228a68303d808bc4721c6b1b9 ] The irq_of_parse_and_map() returns 0 on failure, not a negative ERRNO. 
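As a side note from the editor, not part of any patch here: the pitfall above is that irq_of_parse_and_map() signals failure with 0 rather than a negative errno, so a '< 0' test is dead code. The standalone C sketch below models that convention with an invented helper, fake_parse_and_map(); a follow-up patch later in this series also corrects the error code that gets propagated.

#include <stdio.h>

/* Invented stand-in for an irq_of_parse_and_map()-style API: failure
 * is reported as 0, never as a negative errno.
 */
static unsigned int fake_parse_and_map(int present)
{
	return present ? 42u : 0u;
}

int main(void)
{
	int irq = fake_parse_and_map(0);	/* failure case */

	if (irq < 0)		/* never true: the failure value is 0 */
		printf("'< 0' check fired\n");

	if (!irq)		/* the check the patch switches to */
		printf("'!irq' fired; propagate -EINVAL, not irq\n");

	return 0;
}
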
Fixes: 53e2822e56c7 ("rpmsg: Introduce Qualcomm SMD backend") Signed-off-by: Krzysztof Kozlowski Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220422105326.78713-1-krzysztof.kozlowski@linaro.org Signed-off-by: Sasha Levin --- drivers/rpmsg/qcom_smd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c index 19903de6268d..db5f6009fb49 100644 --- a/drivers/rpmsg/qcom_smd.c +++ b/drivers/rpmsg/qcom_smd.c @@ -1388,7 +1388,7 @@ static int qcom_smd_parse_edge(struct device *dev, edge->name = node->name; irq = irq_of_parse_and_map(node, 0); - if (irq < 0) { + if (!irq) { dev_err(dev, "required smd interrupt missing\n"); ret = irq; goto put_node; From 2a1bf8e5ad61f54ba2a52dd3bab1e9f94cdc0ff6 Mon Sep 17 00:00:00 2001 From: Zheng Yongjun Date: Fri, 22 Apr 2022 06:26:52 +0000 Subject: [PATCH 0780/1842] usb: dwc3: pci: Fix pm_runtime_get_sync() error checking [ Upstream commit a03e2ddab8e735e2cc315609b297b300e9cc60d2 ] If the device is already in a runtime PM enabled state pm_runtime_get_sync() will return 1, so a test for negative value should be used to check for errors. Fixes: 8eed00b237a28 ("usb: dwc3: pci: Runtime resume child device from wq") Signed-off-by: Zheng Yongjun Link: https://lore.kernel.org/r/20220422062652.10575-1-zhengyongjun3@huawei.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/dwc3/dwc3-pci.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c index 98df8d52c765..a5a8c5712bce 100644 --- a/drivers/usb/dwc3/dwc3-pci.c +++ b/drivers/usb/dwc3/dwc3-pci.c @@ -213,7 +213,7 @@ static void dwc3_pci_resume_work(struct work_struct *work) int ret; ret = pm_runtime_get_sync(&dwc3->dev); - if (ret) { + if (ret < 0) { pm_runtime_put_sync_autosuspend(&dwc3->dev); return; } From 70ece3c5ec4f490470329da4e940d7b57b53ff33 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Sun, 27 Mar 2022 14:22:02 +0800 Subject: [PATCH 0781/1842] misc: fastrpc: fix an incorrect NULL check on list iterator [ Upstream commit 5ac11fe03a0a83042d1a040dbce4fa2fb5521e23 ] The bug is here: if (!buf) { The list iterator value 'buf' will *always* be set and non-NULL by list_for_each_entry(), so it is incorrect to assume that the iterator value will be NULL if the list is empty (in this case, the check 'if (!buf) {' will always be false and never exit expectly). To fix the bug, use a new variable 'iter' as the list iterator, while use the original variable 'buf' as a dedicated pointer to point to the found element. 
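Editor's illustration, separate from the fastrpc patch itself: the standalone C sketch below re-creates a tiny list_for_each_entry()/container_of() pair in userspace to show why the loop cursor is never NULL after the walk, and why the fix records the match in a dedicated 'found' pointer instead. The list and field names are invented.

#include <stdio.h>
#include <stddef.h>

/* Minimal userspace model of the kernel's embedded circular list. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define list_for_each_entry(pos, head, member)				\
	for (pos = container_of((head)->next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = container_of(pos->member.next, typeof(*pos), member))

struct buf {
	unsigned long raddr;
	struct list_head node;
};

int main(void)
{
	struct list_head mmaps = { &mmaps, &mmaps };	/* empty list */
	struct buf *buf, *found = NULL;

	list_for_each_entry(buf, &mmaps, node) {
		if (buf->raddr == 3) {
			found = buf;
			break;
		}
	}

	/* 'buf' is not NULL here: with nothing matched it holds a bogus
	 * address derived from the list head (do not dereference it), so
	 * 'if (!buf)' can never mean "not found".  Test the dedicated
	 * pointer instead, as the fix does.
	 */
	printf("cursor %p is not NULL, found is %s\n",
	       (void *)buf, found ? "set" : "NULL");
	return 0;
}
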
Fixes: 2419e55e532de ("misc: fastrpc: add mmap/unmap support") Signed-off-by: Xiaomeng Tong Link: https://lore.kernel.org/r/20220327062202.5720-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/misc/fastrpc.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c index d0471fec37fb..65f24b6150aa 100644 --- a/drivers/misc/fastrpc.c +++ b/drivers/misc/fastrpc.c @@ -1349,17 +1349,18 @@ static int fastrpc_req_munmap_impl(struct fastrpc_user *fl, struct fastrpc_req_munmap *req) { struct fastrpc_invoke_args args[1] = { [0] = { 0 } }; - struct fastrpc_buf *buf, *b; + struct fastrpc_buf *buf = NULL, *iter, *b; struct fastrpc_munmap_req_msg req_msg; struct device *dev = fl->sctx->dev; int err; u32 sc; spin_lock(&fl->lock); - list_for_each_entry_safe(buf, b, &fl->mmaps, node) { - if ((buf->raddr == req->vaddrout) && (buf->size == req->size)) + list_for_each_entry_safe(iter, b, &fl->mmaps, node) { + if ((iter->raddr == req->vaddrout) && (iter->size == req->size)) { + buf = iter; break; - buf = NULL; + } } spin_unlock(&fl->lock); From 7027c890ff6b252407256b2019c1f184969c9ab7 Mon Sep 17 00:00:00 2001 From: Xiaomeng Tong Date: Thu, 14 Apr 2022 11:56:09 +0800 Subject: [PATCH 0782/1842] firmware: stratix10-svc: fix a missing check on list iterator [ Upstream commit 5a0793ac66ac0e254d292f129a4d6c526f9f2aff ] The bug is here: pmem->vaddr = NULL; The list iterator 'pmem' will point to a bogus position containing HEAD if the list is empty or no element is found. This case must be checked before any use of the iterator, otherwise it will lead to a invalid memory access. To fix this bug, just gen_pool_free/set NULL/list_del() and return when found, otherwise list_del HEAD and return; Fixes: 7ca5ce896524f ("firmware: add Intel Stratix10 service layer driver") Signed-off-by: Xiaomeng Tong Link: https://lore.kernel.org/r/20220414035609.2239-1-xiam0nd.tong@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/firmware/stratix10-svc.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c index 53c7e3f8cfde..7dd0ac1a0cfc 100644 --- a/drivers/firmware/stratix10-svc.c +++ b/drivers/firmware/stratix10-svc.c @@ -941,17 +941,17 @@ EXPORT_SYMBOL_GPL(stratix10_svc_allocate_memory); void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr) { struct stratix10_svc_data_mem *pmem; - size_t size = 0; list_for_each_entry(pmem, &svc_data_mem, node) if (pmem->vaddr == kaddr) { - size = pmem->size; - break; + gen_pool_free(chan->ctrl->genpool, + (unsigned long)kaddr, pmem->size); + pmem->vaddr = NULL; + list_del(&pmem->node); + return; } - gen_pool_free(chan->ctrl->genpool, (unsigned long)kaddr, size); - pmem->vaddr = NULL; - list_del(&pmem->node); + list_del(&svc_data_mem); } EXPORT_SYMBOL_GPL(stratix10_svc_free_memory); From 6f01c0fb8e44b7ac9dad9e728e44157cbf7b3d4a Mon Sep 17 00:00:00 2001 From: Bjorn Andersson Date: Fri, 22 Apr 2022 15:23:47 -0700 Subject: [PATCH 0783/1842] usb: typec: mux: Check dev_set_name() return value [ Upstream commit b9fa0292490db39d6542f514117333d366ec0011 ] It's possible that dev_set_name() returns -ENOMEM, catch and handle this. 
Fixes: 3370db35193b ("usb: typec: Registering real device entries for the muxes") Reported-by: Andy Shevchenko Reviewed-by: Andy Shevchenko Acked-by: Heikki Krogerus Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220422222351.1297276-4-bjorn.andersson@linaro.org Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/typec/mux.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/drivers/usb/typec/mux.c b/drivers/usb/typec/mux.c index b9035c3407b5..a6e5028105d4 100644 --- a/drivers/usb/typec/mux.c +++ b/drivers/usb/typec/mux.c @@ -127,8 +127,11 @@ typec_switch_register(struct device *parent, sw->dev.class = &typec_mux_class; sw->dev.type = &typec_switch_dev_type; sw->dev.driver_data = desc->drvdata; - dev_set_name(&sw->dev, "%s-switch", - desc->name ? desc->name : dev_name(parent)); + ret = dev_set_name(&sw->dev, "%s-switch", desc->name ? desc->name : dev_name(parent)); + if (ret) { + put_device(&sw->dev); + return ERR_PTR(ret); + } ret = device_add(&sw->dev); if (ret) { @@ -331,8 +334,11 @@ typec_mux_register(struct device *parent, const struct typec_mux_desc *desc) mux->dev.class = &typec_mux_class; mux->dev.type = &typec_mux_dev_type; mux->dev.driver_data = desc->drvdata; - dev_set_name(&mux->dev, "%s-mux", - desc->name ? desc->name : dev_name(parent)); + ret = dev_set_name(&mux->dev, "%s-mux", desc->name ? desc->name : dev_name(parent)); + if (ret) { + put_device(&mux->dev); + return ERR_PTR(ret); + } ret = device_add(&mux->dev); if (ret) { From 43823ceb26e6ea12ea1284a2e130f8a4ac886ff7 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 12 Apr 2022 06:51:45 +0000 Subject: [PATCH 0784/1842] iio: adc: stmpe-adc: Fix wait_for_completion_timeout return value check [ Upstream commit d345b23200bcdbd2bd3582213d738c258b77718f ] wait_for_completion_timeout() returns unsigned long not long. it returns 0 if timed out, and positive if completed. 
The check for <= 0 is ambiguous and should be == 0 here indicating timeout which is the only error case Fixes: e813dde6f833 ("iio: stmpe-adc: Use wait_for_completion_timeout") Signed-off-by: Miaoqian Lin Reviewed-by: Philippe Schenker Link: https://lore.kernel.org/r/20220412065150.14486-1-linmq006@gmail.com Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- drivers/iio/adc/stmpe-adc.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/iio/adc/stmpe-adc.c b/drivers/iio/adc/stmpe-adc.c index fba659bfdb40..64305d9fa560 100644 --- a/drivers/iio/adc/stmpe-adc.c +++ b/drivers/iio/adc/stmpe-adc.c @@ -61,7 +61,7 @@ struct stmpe_adc { static int stmpe_read_voltage(struct stmpe_adc *info, struct iio_chan_spec const *chan, int *val) { - long ret; + unsigned long ret; mutex_lock(&info->lock); @@ -79,7 +79,7 @@ static int stmpe_read_voltage(struct stmpe_adc *info, ret = wait_for_completion_timeout(&info->completion, STMPE_ADC_TIMEOUT); - if (ret <= 0) { + if (ret == 0) { stmpe_reg_write(info->stmpe, STMPE_REG_ADC_INT_STA, STMPE_ADC_CH(info->channel)); mutex_unlock(&info->lock); @@ -96,7 +96,7 @@ static int stmpe_read_voltage(struct stmpe_adc *info, static int stmpe_read_temp(struct stmpe_adc *info, struct iio_chan_spec const *chan, int *val) { - long ret; + unsigned long ret; mutex_lock(&info->lock); @@ -114,7 +114,7 @@ static int stmpe_read_temp(struct stmpe_adc *info, ret = wait_for_completion_timeout(&info->completion, STMPE_ADC_TIMEOUT); - if (ret <= 0) { + if (ret == 0) { mutex_unlock(&info->lock); return -ETIMEDOUT; } From b30f2315a3a611ed1d8c3e5e76b101daae5df67b Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 12 Apr 2022 06:42:09 +0000 Subject: [PATCH 0785/1842] iio: proximity: vl53l0x: Fix return value check of wait_for_completion_timeout [ Upstream commit 50f2959113cb6756ffd73c4fedc712cf2661f711 ] wait_for_completion_timeout() returns unsigned long not int. It returns 0 if timed out, and positive if completed. The check for <= 0 is ambiguous and should be == 0 here indicating timeout which is the only error case. 
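Editor's sketch, not taken from either driver: fake_wait_for_completion_timeout() below is an invented stand-in with the same contract (unsigned long, 0 on timeout, positive on completion), showing why '== 0' is the only meaningful failure test.

#include <stdio.h>

/* Invented stand-in with the documented contract: returns 0 on
 * timeout, otherwise the (positive) time left; it can never be
 * negative, so a signed '< 0' or '<= 0' branch is misleading.
 */
static unsigned long fake_wait_for_completion_timeout(int completed)
{
	return completed ? 10UL : 0UL;
}

int main(void)
{
	unsigned long time_left = fake_wait_for_completion_timeout(0);

	if (time_left == 0)
		printf("timed out: return -ETIMEDOUT\n");
	else
		printf("completed with %lu jiffies left\n", time_left);

	return 0;
}
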
Fixes: 3cef2e31b54b ("iio: proximity: vl53l0x: Add IRQ support") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220412064210.10734-1-linmq006@gmail.com Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- drivers/iio/proximity/vl53l0x-i2c.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/iio/proximity/vl53l0x-i2c.c b/drivers/iio/proximity/vl53l0x-i2c.c index 235e125aeb3a..3d3ab86423ee 100644 --- a/drivers/iio/proximity/vl53l0x-i2c.c +++ b/drivers/iio/proximity/vl53l0x-i2c.c @@ -104,6 +104,7 @@ static int vl53l0x_read_proximity(struct vl53l0x_data *data, u16 tries = 20; u8 buffer[12]; int ret; + unsigned long time_left; ret = i2c_smbus_write_byte_data(client, VL_REG_SYSRANGE_START, 1); if (ret < 0) @@ -112,10 +113,8 @@ static int vl53l0x_read_proximity(struct vl53l0x_data *data, if (data->client->irq) { reinit_completion(&data->completion); - ret = wait_for_completion_timeout(&data->completion, HZ/10); - if (ret < 0) - return ret; - else if (ret == 0) + time_left = wait_for_completion_timeout(&data->completion, HZ/10); + if (time_left == 0) return -ETIMEDOUT; vl53l0x_clear_irq(data); From 3747429834d2a01944e61d5a1b00f80fb52b0d78 Mon Sep 17 00:00:00 2001 From: Cixi Geng Date: Tue, 19 Apr 2022 22:24:53 +0800 Subject: [PATCH 0786/1842] iio: adc: sc27xx: fix read big scale voltage not right [ Upstream commit ad930a75613282400179361e220e58b87386b8c7 ] Fix wrong configuration value of SC27XX_ADC_SCALE_MASK and SC27XX_ADC_SCALE_SHIFT by spec documetation. Fixes: 5df362a6cf49c (iio: adc: Add Spreadtrum SC27XX PMICs ADC support) Signed-off-by: Cixi Geng Reviewed-by: Baolin Wang Link: https://lore.kernel.org/r/20220419142458.884933-3-gengcixi@gmail.com Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- drivers/iio/adc/sc27xx_adc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/iio/adc/sc27xx_adc.c b/drivers/iio/adc/sc27xx_adc.c index aa32a1f385e2..2c0d0d1634c8 100644 --- a/drivers/iio/adc/sc27xx_adc.c +++ b/drivers/iio/adc/sc27xx_adc.c @@ -36,8 +36,8 @@ /* Bits and mask definition for SC27XX_ADC_CH_CFG register */ #define SC27XX_ADC_CHN_ID_MASK GENMASK(4, 0) -#define SC27XX_ADC_SCALE_MASK GENMASK(10, 8) -#define SC27XX_ADC_SCALE_SHIFT 8 +#define SC27XX_ADC_SCALE_MASK GENMASK(10, 9) +#define SC27XX_ADC_SCALE_SHIFT 9 /* Bits definitions for SC27XX_ADC_INT_EN registers */ #define SC27XX_ADC_IRQ_EN BIT(0) From c10333c4519aedcdb54de48d45d6708142b0da0c Mon Sep 17 00:00:00 2001 From: Cixi Geng Date: Tue, 19 Apr 2022 22:24:54 +0800 Subject: [PATCH 0787/1842] iio: adc: sc27xx: Fine tune the scale calibration values [ Upstream commit 5a7a184b11c6910f47600ff5cbbee34168f701a8 ] Small adjustment the scale calibration value for the sc2731, use new name sc2731_[big|small]_scale_graph_calib, and remove the origin [big|small]_scale_graph_calib struct for unused. 
Fixes: 8ba0dbfd07a35 (iio: adc: sc27xx: Add ADC scale calibration) Signed-off-by: Cixi Geng Link: https://lore.kernel.org/r/20220419142458.884933-4-gengcixi@gmail.com Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- drivers/iio/adc/sc27xx_adc.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/drivers/iio/adc/sc27xx_adc.c b/drivers/iio/adc/sc27xx_adc.c index 2c0d0d1634c8..2b463e1cf1c7 100644 --- a/drivers/iio/adc/sc27xx_adc.c +++ b/drivers/iio/adc/sc27xx_adc.c @@ -103,14 +103,14 @@ static struct sc27xx_adc_linear_graph small_scale_graph = { 100, 341, }; -static const struct sc27xx_adc_linear_graph big_scale_graph_calib = { - 4200, 856, - 3600, 733, +static const struct sc27xx_adc_linear_graph sc2731_big_scale_graph_calib = { + 4200, 850, + 3600, 728, }; -static const struct sc27xx_adc_linear_graph small_scale_graph_calib = { - 1000, 833, - 100, 80, +static const struct sc27xx_adc_linear_graph sc2731_small_scale_graph_calib = { + 1000, 838, + 100, 84, }; static int sc27xx_adc_get_calib_data(u32 calib_data, int calib_adc) @@ -130,11 +130,11 @@ static int sc27xx_adc_scale_calibration(struct sc27xx_adc_data *data, size_t len; if (big_scale) { - calib_graph = &big_scale_graph_calib; + calib_graph = &sc2731_big_scale_graph_calib; graph = &big_scale_graph; cell_name = "big_scale_calib"; } else { - calib_graph = &small_scale_graph_calib; + calib_graph = &sc2731_small_scale_graph_calib; graph = &small_scale_graph; cell_name = "small_scale_calib"; } From 52f327a45c5bb3b0bfc976770b866a8c7a573ed7 Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Sat, 23 Apr 2022 11:39:32 +0200 Subject: [PATCH 0788/1842] rpmsg: qcom_smd: Fix returning 0 if irq_of_parse_and_map() fails [ Upstream commit 59d6f72f6f9c92fec8757d9e29527da828e9281f ] irq_of_parse_and_map() returns 0 on failure, so this should not be passed further as error return code. Fixes: 1a358d350664 ("rpmsg: qcom_smd: Fix irq_of_parse_and_map() return value") Signed-off-by: Krzysztof Kozlowski Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220423093932.32136-1-krzysztof.kozlowski@linaro.org Signed-off-by: Sasha Levin --- drivers/rpmsg/qcom_smd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c index db5f6009fb49..a4db9f6100d2 100644 --- a/drivers/rpmsg/qcom_smd.c +++ b/drivers/rpmsg/qcom_smd.c @@ -1390,7 +1390,7 @@ static int qcom_smd_parse_edge(struct device *dev, irq = irq_of_parse_and_map(node, 0); if (!irq) { dev_err(dev, "required smd interrupt missing\n"); - ret = irq; + ret = -EINVAL; goto put_node; } From ab35308bbd33262976f0850074c6561dd3921c10 Mon Sep 17 00:00:00 2001 From: Johan Hovold Date: Mon, 2 May 2022 15:31:29 +0200 Subject: [PATCH 0789/1842] phy: qcom-qmp: fix pipe-clock imbalance on power-on failure [ Upstream commit 5e73b2d9867998278479ccc065a8a8227a5513ef ] Make sure to disable the pipe clock also if ufs-reset deassertion fails during power on. Note that the ufs-reset is asserted in qcom_qmp_phy_com_exit(). 
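Editor's aside, not part of the qcom-qmp patch: the sketch below models the goto-based unwind pattern the fix relies on, with invented resource names; jumping to a label further down the unwind chain than intended is exactly how an already-enabled clock can be left running.

#include <stdio.h>

static int acquire(const char *what, int ok)
{
	printf("enable %s\n", what);
	return ok ? 0 : -1;
}

static void release(const char *what)
{
	printf("disable %s\n", what);
}

/* Every failure jumps to the label that undoes exactly what has been
 * enabled so far, in reverse order of acquisition.
 */
static int power_on(int fail_late)
{
	int ret;

	ret = acquire("regulators", 1);
	if (ret)
		return ret;

	ret = acquire("clock", 1);
	if (ret)
		goto err_regulators;

	ret = acquire("reset deassert", !fail_late);
	if (ret)
		goto err_clock;	/* must not skip disabling the clock */

	return 0;

err_clock:
	release("clock");
err_regulators:
	release("regulators");
	return ret;
}

int main(void)
{
	power_on(1);
	return 0;
}
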
Fixes: c9b589791fc1 ("phy: qcom: Utilize UFS reset controller") Cc: Evan Green Signed-off-by: Johan Hovold Link: https://lore.kernel.org/r/20220502133130.4125-2-johan+linaro@kernel.org Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin --- drivers/phy/qualcomm/phy-qcom-qmp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c index ea46950c5d2a..afcc82ab3202 100644 --- a/drivers/phy/qualcomm/phy-qcom-qmp.c +++ b/drivers/phy/qualcomm/phy-qcom-qmp.c @@ -3141,7 +3141,7 @@ static int qcom_qmp_phy_power_on(struct phy *phy) ret = reset_control_deassert(qmp->ufs_reset); if (ret) - goto err_lane_rst; + goto err_pcs_ready; qcom_qmp_phy_configure(pcs_misc, cfg->regs, cfg->pcs_misc_tbl, cfg->pcs_misc_tbl_num); From 47ebc50dc2a7d1e64a2268f6e21f62e1409d30e9 Mon Sep 17 00:00:00 2001 From: "Maciej W. Rozycki" Date: Fri, 29 Apr 2022 21:40:18 +0100 Subject: [PATCH 0790/1842] serial: sifive: Report actual baud base rather than fixed 115200 [ Upstream commit 0a7ff843d507ce2cca2c3b7e169ee56e28133530 ] The base baud value reported is supposed to be the highest baud rate that can be set for a serial port. The SiFive FU740-C000 SOC's on-chip UART supports baud rates of up to 1/16 of the input clock rate, which is the bus clock `tlclk'[1], often at 130MHz in the case of the HiFive Unmatched board. However the sifive UART driver reports a fixed value of 115200 instead: 10010000.serial: ttySIF0 at MMIO 0x10010000 (irq = 1, base_baud = 115200) is a SiFive UART v0 10011000.serial: ttySIF1 at MMIO 0x10011000 (irq = 2, base_baud = 115200) is a SiFive UART v0 even though we already support setting higher baud rates, e.g.: $ tty /dev/ttySIF1 $ stty speed 230400 The baud base value is computed by the serial core by dividing the UART clock recorded in `struct uart_port' by 16, which is also the minimum value of the clock divider supported, so correct the baud base value reported by setting the UART clock recorded to the input clock rate rather than 115200: 10010000.serial: ttySIF0 at MMIO 0x10010000 (irq = 1, base_baud = 8125000) is a SiFive UART v0 10011000.serial: ttySIF1 at MMIO 0x10011000 (irq = 2, base_baud = 8125000) is a SiFive UART v0 References: [1] "SiFive FU740-C000 Manual", v1p3, SiFive, Inc., August 13, 2021, Section 16.9 "Baud Rate Divisor Register (div)", pp.143-144 Signed-off-by: Maciej W. Rozycki Fixes: 1f1496a923b6 ("riscv: Fix sifive serial driver") Link: https://lore.kernel.org/r/alpine.DEB.2.21.2204291656280.9383@angie.orcam.me.uk Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/sifive.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c index 214bf3086c68..24036a02a424 100644 --- a/drivers/tty/serial/sifive.c +++ b/drivers/tty/serial/sifive.c @@ -999,7 +999,7 @@ static int sifive_serial_probe(struct platform_device *pdev) /* Set up clock divider */ ssp->clkin_rate = clk_get_rate(ssp->clk); ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE; - ssp->port.uartclk = ssp->baud_rate * 16; + ssp->port.uartclk = ssp->clkin_rate; __ssp_update_div(ssp); platform_set_drvdata(pdev, ssp); From 96414e2cdc2840752a14bfca995578cccbaf2761 Mon Sep 17 00:00:00 2001 From: "Guilherme G. 
Piccoli" Date: Wed, 27 Apr 2022 19:49:03 -0300 Subject: [PATCH 0791/1842] coresight: cpu-debug: Replace mutex with mutex_trylock on panic notifier [ Upstream commit 1adff542d67a2ed1120955cb219bfff8a9c53f59 ] The panic notifier infrastructure executes registered callbacks when a panic event happens - such callbacks are executed in atomic context, with interrupts and preemption disabled in the running CPU and all other CPUs disabled. That said, mutexes in such context are not a good idea. This patch replaces a regular mutex with a mutex_trylock safer approach; given the nature of the mutex used in the driver, it should be pretty uncommon being unable to acquire such mutex in the panic path, hence no functional change should be observed (and if it is, that would be likely a deadlock with the regular mutex). Fixes: 2227b7c74634 ("coresight: add support for CPU debug module") Cc: Leo Yan Cc: Mathieu Poirier Cc: Mike Leach Cc: Suzuki K Poulose Signed-off-by: Guilherme G. Piccoli Reviewed-by: Suzuki K Poulose Signed-off-by: Suzuki K Poulose Link: https://lore.kernel.org/r/20220427224924.592546-10-gpiccoli@igalia.com Signed-off-by: Sasha Levin --- drivers/hwtracing/coresight/coresight-cpu-debug.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c index 2dcf13de751f..1e98562f4287 100644 --- a/drivers/hwtracing/coresight/coresight-cpu-debug.c +++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c @@ -379,9 +379,10 @@ static int debug_notifier_call(struct notifier_block *self, int cpu; struct debug_drvdata *drvdata; - mutex_lock(&debug_lock); + /* Bail out if we can't acquire the mutex or the functionality is off */ + if (!mutex_trylock(&debug_lock)) + return NOTIFY_DONE; - /* Bail out if the functionality is disabled */ if (!debug_enable) goto skip_dump; @@ -400,7 +401,7 @@ static int debug_notifier_call(struct notifier_block *self, skip_dump: mutex_unlock(&debug_lock); - return 0; + return NOTIFY_DONE; } static struct notifier_block debug_notifier = { From cfe8a0967d6ea6dbc133da7df4a50ad7b4b5c60b Mon Sep 17 00:00:00 2001 From: Li Jun Date: Tue, 19 Apr 2022 20:44:08 +0800 Subject: [PATCH 0792/1842] extcon: ptn5150: Add queue work sync before driver release [ Upstream commit 782cd939cbe0f569197cd1c9b0477ee213167f04 ] Add device managed action to sync pending queue work, otherwise the queued work may run after the work is destroyed. 
Fixes: 4ed754de2d66 ("extcon: Add support for ptn5150 extcon driver") Reviewed-by: Krzysztof Kozlowski Signed-off-by: Li Jun Signed-off-by: Chanwoo Choi Signed-off-by: Sasha Levin --- drivers/extcon/extcon-ptn5150.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/extcon/extcon-ptn5150.c b/drivers/extcon/extcon-ptn5150.c index 5b9a3cf8df26..2a7874108df8 100644 --- a/drivers/extcon/extcon-ptn5150.c +++ b/drivers/extcon/extcon-ptn5150.c @@ -194,6 +194,13 @@ static int ptn5150_init_dev_type(struct ptn5150_info *info) return 0; } +static void ptn5150_work_sync_and_put(void *data) +{ + struct ptn5150_info *info = data; + + cancel_work_sync(&info->irq_work); +} + static int ptn5150_i2c_probe(struct i2c_client *i2c) { struct device *dev = &i2c->dev; @@ -284,6 +291,10 @@ static int ptn5150_i2c_probe(struct i2c_client *i2c) if (ret) return -EINVAL; + ret = devm_add_action_or_reset(dev, ptn5150_work_sync_and_put, info); + if (ret) + return ret; + /* * Update current extcon state if for example OTG connection was there * before the probe From 5b3e990f85eb034faa461e691e719e8ce9e2a3c8 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 16 May 2022 11:20:10 +0400 Subject: [PATCH 0793/1842] soc: rockchip: Fix refcount leak in rockchip_grf_init [ Upstream commit 9b59588d8be91c96bfb0371e912ceb4f16315dbf ] of_find_matching_node_and_match returns a node pointer with refcount incremented, we should use of_node_put() on it when done. Add missing of_node_put() to avoid refcount leak. Fixes: 4c58063d4258 ("soc: rockchip: add driver handling grf setup") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220516072013.19731-1-linmq006@gmail.com Signed-off-by: Heiko Stuebner Signed-off-by: Sasha Levin --- drivers/soc/rockchip/grf.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/soc/rockchip/grf.c b/drivers/soc/rockchip/grf.c index 494cf2b5bf7b..343ff61ccccb 100644 --- a/drivers/soc/rockchip/grf.c +++ b/drivers/soc/rockchip/grf.c @@ -148,12 +148,14 @@ static int __init rockchip_grf_init(void) return -ENODEV; if (!match || !match->data) { pr_err("%s: missing grf data\n", __func__); + of_node_put(np); return -EINVAL; } grf_info = match->data; grf = syscon_node_to_regmap(np); + of_node_put(np); if (IS_ERR(grf)) { pr_err("%s: could not get grf syscon\n", __func__); return PTR_ERR(grf); From d54a51b51851ead86292a0db4a7184d77623bfe2 Mon Sep 17 00:00:00 2001 From: Samuel Holland Date: Sun, 8 May 2022 20:21:21 -0500 Subject: [PATCH 0794/1842] clocksource/drivers/riscv: Events are stopped during CPU suspend [ Upstream commit 232ccac1bd9b5bfe73895f527c08623e7fa0752d ] Some implementations of the SBI time extension depend on hart-local state (for example, CSRs) that are lost or hardware that is powered down when a CPU is suspended. To be safe, the clockevents driver cannot assume that timer IRQs will be received during CPU suspend. 
Fixes: 62b019436814 ("clocksource: new RISC-V SBI timer driver") Signed-off-by: Samuel Holland Reviewed-by: Anup Patel Link: https://lore.kernel.org/r/20220509012121.40031-1-samuel@sholland.org Signed-off-by: Daniel Lezcano Signed-off-by: Sasha Levin --- drivers/clocksource/timer-riscv.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c index c51c5ed15aa7..0e7748df4be3 100644 --- a/drivers/clocksource/timer-riscv.c +++ b/drivers/clocksource/timer-riscv.c @@ -32,7 +32,7 @@ static int riscv_clock_next_event(unsigned long delta, static unsigned int riscv_clock_event_irq; static DEFINE_PER_CPU(struct clock_event_device, riscv_clock_event) = { .name = "riscv_timer_clockevent", - .features = CLOCK_EVT_FEAT_ONESHOT, + .features = CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_C3STOP, .rating = 100, .set_next_event = riscv_clock_next_event, }; From 82bfea344e8f7e9a0e0b1bf9af27552baa756620 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Thu, 5 May 2022 20:50:43 +0800 Subject: [PATCH 0795/1842] rtc: mt6397: check return value after calling platform_get_resource() [ Upstream commit d3b43eb505bffb8e4cdf6800c15660c001553fe6 ] It will cause null-ptr-deref if platform_get_resource() returns NULL, we need check the return value. Fixes: fc2979118f3f ("rtc: mediatek: Add MT6397 RTC driver") Signed-off-by: Yang Yingliang Reviewed-by: AngeloGioacchino Del Regno Signed-off-by: Alexandre Belloni Link: https://lore.kernel.org/r/20220505125043.1594771-1-yangyingliang@huawei.com Signed-off-by: Sasha Levin --- drivers/rtc/rtc-mt6397.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/rtc/rtc-mt6397.c b/drivers/rtc/rtc-mt6397.c index 1894aded4c85..acfcb378767d 100644 --- a/drivers/rtc/rtc-mt6397.c +++ b/drivers/rtc/rtc-mt6397.c @@ -269,6 +269,8 @@ static int mtk_rtc_probe(struct platform_device *pdev) return -ENOMEM; res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) + return -EINVAL; rtc->addr_base = res->start; rtc->data = of_device_get_match_data(&pdev->dev); From 260792d5c9d688fe03b719b96878a35ed56a5a19 Mon Sep 17 00:00:00 2001 From: John Ogness Date: Sun, 8 May 2022 12:41:47 +0206 Subject: [PATCH 0796/1842] serial: meson: acquire port->lock in startup() [ Upstream commit 589f892ac8ef244e47c5a00ffd8605daa1eaef8e ] The uart_ops startup() callback is called without interrupts disabled and without port->lock locked, relatively late during the boot process (from the call path of console_on_rootfs()). If the device is a console, it was already previously registered and could be actively printing messages. Since the startup() callback is reading/writing registers used by the console write() callback (AML_UART_CONTROL), its access must be synchronized using the port->lock. Currently it is not. The startup() callback is the only function that explicitly enables interrupts. Without the synchronization, it is possible that interrupts become accidentally permanently disabled. CPU0 CPU1 meson_serial_console_write meson_uart_startup -------------------------- ------------------ spin_lock(port->lock) val = readl(AML_UART_CONTROL) uart_console_write() writel(INT_EN, AML_UART_CONTROL) writel(val, AML_UART_CONTROL) spin_unlock(port->lock) Add port->lock synchronization to meson_uart_startup() to avoid racing with meson_serial_console_write(). Also add detailed comments to meson_uart_reset() explaining why it is *not* using port->lock synchronization. 
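Editor's model rather than driver code: the pthread sketch below stands in for the spinlocked register accesses the patch adds, showing the shape of the fix, i.e. both read-modify-write paths serialized by one lock so neither can restore a stale control value. All names and bit positions are illustrative.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t port_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int control;	/* shared "control register" */

static void *startup_path(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&port_lock);
	control |= 1u << 27 | 1u << 28;		/* "enable RX/TX IRQs" */
	pthread_mutex_unlock(&port_lock);
	return NULL;
}

static void console_write_path(void)
{
	pthread_mutex_lock(&port_lock);
	unsigned int saved = control;		/* save ... */
	control &= ~(1u << 31);			/* ... tweak for the write ... */
	control = saved;			/* ... and restore */
	pthread_mutex_unlock(&port_lock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, startup_path, NULL);
	console_write_path();
	pthread_join(t, NULL);
	printf("control = %#x\n", control);
	return 0;
}
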
Link: https://lore.kernel.org/lkml/2a82eae7-a256-f70c-fd82-4e510750906e@samsung.com Fixes: ff7693d079e5 ("ARM: meson: serial: add MesonX SoC on-chip uart driver") Reported-by: Marek Szyprowski Tested-by: Marek Szyprowski Reviewed-by: Petr Mladek Reviewed-by: Jiri Slaby Acked-by: Neil Armstrong Signed-off-by: John Ogness Link: https://lore.kernel.org/r/20220508103547.626355-1-john.ogness@linutronix.de Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/meson_uart.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/drivers/tty/serial/meson_uart.c b/drivers/tty/serial/meson_uart.c index d2c08b760f83..91b7359b79a2 100644 --- a/drivers/tty/serial/meson_uart.c +++ b/drivers/tty/serial/meson_uart.c @@ -255,6 +255,14 @@ static const char *meson_uart_type(struct uart_port *port) return (port->type == PORT_MESON) ? "meson_uart" : NULL; } +/* + * This function is called only from probe() using a temporary io mapping + * in order to perform a reset before setting up the device. Since the + * temporarily mapped region was successfully requested, there can be no + * console on this port at this time. Hence it is not necessary for this + * function to acquire the port->lock. (Since there is no console on this + * port at this time, the port->lock is not initialized yet.) + */ static void meson_uart_reset(struct uart_port *port) { u32 val; @@ -269,9 +277,12 @@ static void meson_uart_reset(struct uart_port *port) static int meson_uart_startup(struct uart_port *port) { + unsigned long flags; u32 val; int ret = 0; + spin_lock_irqsave(&port->lock, flags); + val = readl(port->membase + AML_UART_CONTROL); val |= AML_UART_CLEAR_ERR; writel(val, port->membase + AML_UART_CONTROL); @@ -287,6 +298,8 @@ static int meson_uart_startup(struct uart_port *port) val = (AML_UART_RECV_IRQ(1) | AML_UART_XMIT_IRQ(port->fifosize / 2)); writel(val, port->membase + AML_UART_MISC); + spin_unlock_irqrestore(&port->lock, flags); + ret = request_irq(port->irq, meson_uart_interrupt, 0, port->name, port); From 5cd331bcf094e4261aba68555ad1fc85e20f12e2 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Fri, 13 May 2022 16:46:43 +0300 Subject: [PATCH 0797/1842] serial: 8250_fintek: Check SER_RS485_RTS_* only with RS485 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit af0179270977508df6986b51242825d7edd59caf ] SER_RS485_RTS_ON_SEND and SER_RS485_RTS_AFTER_SEND relate to behavior within RS485 operation. The driver checks if they have the same value which is not possible to realize with the hardware. The check is taken regardless of SER_RS485_ENABLED flag and -EINVAL is returned when the check fails, which creates problems. This check makes it unnecessarily complicated to turn RS485 mode off as simple zeroed serial_rs485 struct will trigger that equal values check. In addition, the driver itself memsets its rs485 structure to zero when RS485 is disabled but if userspace would try to make an TIOCSRS485 ioctl() call with the very same struct, it would end up failing with -EINVAL which doesn't make much sense. Resolve the problem by moving the check inside SER_RS485_ENABLED block. 
Fixes: 7ecc77011c6f ("serial: 8250_fintek: Return -EINVAL on invalid configuration") Cc: Ricardo Ribalda Delgado Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/035c738-8ea5-8b17-b1d7-84a7b3aeaa51@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/8250/8250_fintek.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c index 251f0018ae8c..dba5950b8d0e 100644 --- a/drivers/tty/serial/8250/8250_fintek.c +++ b/drivers/tty/serial/8250/8250_fintek.c @@ -200,12 +200,12 @@ static int fintek_8250_rs485_config(struct uart_port *port, if (!pdata) return -EINVAL; - /* Hardware do not support same RTS level on send and receive */ - if (!(rs485->flags & SER_RS485_RTS_ON_SEND) == - !(rs485->flags & SER_RS485_RTS_AFTER_SEND)) - return -EINVAL; if (rs485->flags & SER_RS485_ENABLED) { + /* Hardware do not support same RTS level on send and receive */ + if (!(rs485->flags & SER_RS485_RTS_ON_SEND) == + !(rs485->flags & SER_RS485_RTS_AFTER_SEND)) + return -EINVAL; memset(rs485->padding, 0, sizeof(rs485->padding)); config |= RS485_URA; } else { From ff4ce2979b5d9e405af72260553531c86cfe8ffb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Thu, 19 May 2022 11:18:01 +0300 Subject: [PATCH 0798/1842] serial: digicolor-usart: Don't allow CS5-6 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit fd63031b8c0763addcecdefe0e0c59d49646204e ] Only CS7 and CS8 seem supported but CSIZE is not sanitized to CS8 in the default: block. Set CSIZE correctly so that userspace knows the effective value. Incorrect CSIZE also results in miscalculation of the frame bits in tty_get_char_size() or in its predecessor where the roughly the same code is directly within uart_update_timeout(). Fixes: 5930cb3511df (serial: driver for Conexant Digicolor USART) Acked-by: Baruch Siach Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220519081808.3776-3-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/digicolor-usart.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/tty/serial/digicolor-usart.c b/drivers/tty/serial/digicolor-usart.c index c7f81aa1ce91..5fea9bf86e85 100644 --- a/drivers/tty/serial/digicolor-usart.c +++ b/drivers/tty/serial/digicolor-usart.c @@ -309,6 +309,8 @@ static void digicolor_uart_set_termios(struct uart_port *port, case CS8: default: config |= UA_CONFIG_CHAR_LEN; + termios->c_cflag &= ~CSIZE; + termios->c_cflag |= CS8; break; } From 22e975796f89166ffdf4726f43e49e56eed8f9e7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Thu, 19 May 2022 11:18:02 +0300 Subject: [PATCH 0799/1842] serial: rda-uart: Don't allow CS5-6 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 098333a9c7d12bb3ce44c82f08b4d810c44d31b0 ] Only CS7 and CS8 are supported but CSIZE is not sanitized after fallthrough from CS5 or CS6 to CS7. Set CSIZE correctly so that userspace knows the effective value. Incorrect CSIZE also results in miscalculation of the frame bits in tty_get_char_size() or in its predecessor where the roughly the same code is directly within uart_update_timeout(). 
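Editor's illustration of the sanitization idiom these serial patches apply, not code from any of them: when only 8-bit words are supported, the driver both falls back to CS8 and writes that back into c_cflag so userspace sees the effective setting.

#include <stdio.h>
#include <termios.h>

int main(void)
{
	struct termios t = { 0 };

	/* Userspace asked for a word size the hardware cannot do. */
	t.c_cflag = (t.c_cflag & ~CSIZE) | CS5;

	/* The pattern the patches add: fall back to CS8 *and* report it. */
	if ((t.c_cflag & CSIZE) != CS8) {
		t.c_cflag &= ~CSIZE;
		t.c_cflag |= CS8;
	}

	printf("effective CSIZE is CS8: %s\n",
	       (t.c_cflag & CSIZE) == CS8 ? "yes" : "no");
	return 0;
}
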
Fixes: c10b13325ced (tty: serial: Add RDA8810PL UART driver) Cc: Manivannan Sadhasivam Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220519081808.3776-4-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/rda-uart.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/tty/serial/rda-uart.c b/drivers/tty/serial/rda-uart.c index 85366e059258..a45069e7ebea 100644 --- a/drivers/tty/serial/rda-uart.c +++ b/drivers/tty/serial/rda-uart.c @@ -262,6 +262,8 @@ static void rda_uart_set_termios(struct uart_port *port, fallthrough; case CS7: ctrl &= ~RDA_UART_DBITS_8; + termios->c_cflag &= ~CSIZE; + termios->c_cflag |= CS7; break; default: ctrl |= RDA_UART_DBITS_8; From 00456b932e16289de09508a570c6a6dcb0b1ebaf Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Thu, 19 May 2022 11:18:03 +0300 Subject: [PATCH 0800/1842] serial: txx9: Don't allow CS5-6 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 79ac88655dc0551e3571ad16bdabdbe65d61553e ] Only CS7 and CS8 are supported but CSIZE is not sanitized with CS5 or CS6 to CS8. Set CSIZE correctly so that userspace knows the effective value. Incorrect CSIZE also results in miscalculation of the frame bits in tty_get_char_size() or in its predecessor where the roughly the same code is directly within uart_update_timeout(). Fixes: 1da177e4c3f4 (Linux-2.6.12-rc2) Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220519081808.3776-5-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/serial_txx9.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/tty/serial/serial_txx9.c b/drivers/tty/serial/serial_txx9.c index 7a07e7272de1..7beec331010c 100644 --- a/drivers/tty/serial/serial_txx9.c +++ b/drivers/tty/serial/serial_txx9.c @@ -644,6 +644,8 @@ serial_txx9_set_termios(struct uart_port *port, struct ktermios *termios, case CS6: /* not supported */ case CS8: cval |= TXX9_SILCR_UMODE_8BIT; + termios->c_cflag &= ~CSIZE; + termios->c_cflag |= CS8; break; } From a9f6bee486e7b33579dceddc99bcb30152c77454 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Thu, 19 May 2022 11:18:04 +0300 Subject: [PATCH 0801/1842] serial: sh-sci: Don't allow CS5-6 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 9b87162de8be26bf3156460b37deee6399fd0fcb ] Only CS7 and CS8 seem supported but CSIZE is not sanitized from CS5 or CS6 to CS8. Set CSIZE correctly so that userspace knows the effective value. Incorrect CSIZE also results in miscalculation of the frame bits in tty_get_char_size() or in its predecessor where the roughly the same code is directly within uart_update_timeout(). 
Fixes: 1da177e4c3f4 (Linux-2.6.12-rc2) Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220519081808.3776-6-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/sh-sci.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c index f700bfaef129..8d924727d6f0 100644 --- a/drivers/tty/serial/sh-sci.c +++ b/drivers/tty/serial/sh-sci.c @@ -2392,8 +2392,12 @@ static void sci_set_termios(struct uart_port *port, struct ktermios *termios, int best_clk = -1; unsigned long flags; - if ((termios->c_cflag & CSIZE) == CS7) + if ((termios->c_cflag & CSIZE) == CS7) { smr_val |= SCSMR_CHR; + } else { + termios->c_cflag &= ~CSIZE; + termios->c_cflag |= CS8; + } if (termios->c_cflag & PARENB) smr_val |= SCSMR_PE; if (termios->c_cflag & PARODD) From afcfc3183cfdae02962c2194f1a207934a2cce1e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Thu, 19 May 2022 11:18:05 +0300 Subject: [PATCH 0802/1842] serial: sifive: Sanitize CSIZE and c_iflag MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit c069d2756c01ed36121fae6a42c14fdf1325c71d ] Only CS8 is supported but CSIZE was not sanitized to CS8. Set CSIZE correctly so that userspace knows the effective value. Incorrect CSIZE also results in miscalculation of the frame bits in tty_get_char_size() or in its predecessor where the roughly the same code is directly within uart_update_timeout(). Similarly, INPCK, PARMRK, and BRKINT are reported textually unsupported but were not cleared in termios c_iflag which is the machine-readable format. Fixes: 45c054d0815b (tty: serial: add driver for the SiFive UART) Cc: Paul Walmsley Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220519081808.3776-7-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/sifive.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c index 24036a02a424..91952be01074 100644 --- a/drivers/tty/serial/sifive.c +++ b/drivers/tty/serial/sifive.c @@ -667,12 +667,16 @@ static void sifive_serial_set_termios(struct uart_port *port, int rate; char nstop; - if ((termios->c_cflag & CSIZE) != CS8) + if ((termios->c_cflag & CSIZE) != CS8) { dev_err_once(ssp->port.dev, "only 8-bit words supported\n"); + termios->c_cflag &= ~CSIZE; + termios->c_cflag |= CS8; + } if (termios->c_iflag & (INPCK | PARMRK)) dev_err_once(ssp->port.dev, "parity checking not supported\n"); if (termios->c_iflag & BRKINT) dev_err_once(ssp->port.dev, "BREAK detection not supported\n"); + termios->c_iflag &= ~(INPCK|PARMRK|BRKINT); /* Set number of stop bits */ nstop = (termios->c_cflag & CSTOPB) ? 2 : 1; From b7e560d2ffbe996447aefb67fe403aaa3de5a246 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Thu, 19 May 2022 11:18:06 +0300 Subject: [PATCH 0803/1842] serial: st-asc: Sanitize CSIZE and correct PARENB for CS7 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 52bb1cb7118564166b04d52387bd8403632f5190 ] Only CS7 and CS8 seem supported but CSIZE is not sanitized from CS5 or CS6 to CS8. In addition, ASC_CTL_MODE_7BIT_PAR suggests that CS7 has to have parity, thus add PARENB. 
Incorrect CSIZE results in miscalculation of the frame bits in tty_get_char_size() or in its predecessor where the roughly the same code is directly within uart_update_timeout(). Fixes: c4b058560762 (serial:st-asc: Add ST ASC driver.) Cc: Srinivas Kandagatla Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220519081808.3776-8-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/st-asc.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/tty/serial/st-asc.c b/drivers/tty/serial/st-asc.c index e7048515a79c..97d36f870f64 100644 --- a/drivers/tty/serial/st-asc.c +++ b/drivers/tty/serial/st-asc.c @@ -535,10 +535,14 @@ static void asc_set_termios(struct uart_port *port, struct ktermios *termios, /* set character length */ if ((cflag & CSIZE) == CS7) { ctrl_val |= ASC_CTL_MODE_7BIT_PAR; + cflag |= PARENB; } else { ctrl_val |= (cflag & PARENB) ? ASC_CTL_MODE_8BIT_PAR : ASC_CTL_MODE_8BIT; + cflag &= ~CSIZE; + cflag |= CS8; } + termios->c_cflag = cflag; /* set stop bit */ ctrl_val |= (cflag & CSTOPB) ? ASC_CTL_STOP_2BIT : ASC_CTL_STOP_1BIT; From 94acaaad470ee30e7228ad0be57b5242518f88e6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Thu, 19 May 2022 11:18:07 +0300 Subject: [PATCH 0804/1842] serial: stm32-usart: Correct CSIZE, bits, and parity MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 1deeda8d2877c18bc2b9eeee10dd6d2628852848 ] Add CSIZE sanitization for unsupported CSIZE configurations. In addition, if parity is asked for but CSx was unsupported, the sensible result is CS8+parity which requires setting USART_CR1_M0 like with 9 bits. Incorrect CSIZE results in miscalculation of the frame bits in tty_get_char_size() or in its predecessor where the roughly the same code is directly within uart_update_timeout(). Fixes: c8a9d043947b (serial: stm32: fix word length configuration) Cc: Erwan Le Ray Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220519081808.3776-9-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/stm32-usart.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c index 6afae051ba8d..8cd9e5b077b6 100644 --- a/drivers/tty/serial/stm32-usart.c +++ b/drivers/tty/serial/stm32-usart.c @@ -810,13 +810,22 @@ static void stm32_usart_set_termios(struct uart_port *port, * CS8 or (CS7 + parity), 8 bits word aka [M1:M0] = 0b00 * M0 and M1 already cleared by cr1 initialization. */ - if (bits == 9) + if (bits == 9) { cr1 |= USART_CR1_M0; - else if ((bits == 7) && cfg->has_7bits_data) + } else if ((bits == 7) && cfg->has_7bits_data) { cr1 |= USART_CR1_M1; - else if (bits != 8) + } else if (bits != 8) { dev_dbg(port->dev, "Unsupported data bits config: %u bits\n" , bits); + cflag &= ~CSIZE; + cflag |= CS8; + termios->c_cflag = cflag; + bits = 8; + if (cflag & PARENB) { + bits++; + cr1 |= USART_CR1_M0; + } + } if (ofs->rtor != UNDEF_REG && (stm32_port->rx_ch || stm32_port->fifoen)) { From 985706bd3bbeffc8737bc05965ca8d24837bc7db Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 11 May 2022 11:14:19 +0400 Subject: [PATCH 0805/1842] firmware: dmi-sysfs: Fix memory leak in dmi_sysfs_register_handle [ Upstream commit 660ba678f9998aca6db74f2dd912fa5124f0fa31 ] kobject_init_and_add() takes reference even when it fails. 
According to the doc of kobject_init_and_add() If this function returns an error, kobject_put() must be called to properly clean up the memory associated with the object. Fix this issue by calling kobject_put(). Fixes: 948af1f0bbc8 ("firmware: Basic dmi-sysfs support") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220511071421.9769-1-linmq006@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/firmware/dmi-sysfs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/firmware/dmi-sysfs.c b/drivers/firmware/dmi-sysfs.c index 8b8127fa8955..4a93fb490cb4 100644 --- a/drivers/firmware/dmi-sysfs.c +++ b/drivers/firmware/dmi-sysfs.c @@ -603,7 +603,7 @@ static void __init dmi_sysfs_register_handle(const struct dmi_header *dh, "%d-%d", dh->type, entry->instance); if (*ret) { - kfree(entry); + kobject_put(&entry->kobj); return; } From 7a6337bfedc5c736ca8ec3dce2cd883e296f7762 Mon Sep 17 00:00:00 2001 From: Tony Lindgren Date: Thu, 12 May 2022 08:30:21 +0300 Subject: [PATCH 0806/1842] bus: ti-sysc: Fix warnings for unbind for serial [ Upstream commit c337125b8834f9719dfda0e40b25eaa266f1b8cf ] We can get "failed to disable" clock_unprepare warnings on unbind at least for the serial console device if the unbind is done before the device has been idled. As some devices are using deferred idle, we must check the status for pending idle work to idle the device. Fixes: 76f0f772e469 ("bus: ti-sysc: Improve handling for no-reset-on-init and no-idle-on-init") Cc: Romain Naour Reviewed-by: Romain Naour Signed-off-by: Tony Lindgren Link: https://lore.kernel.org/r/20220512053021.61650-1-tony@atomide.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/bus/ti-sysc.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c index ac559c262033..4ee20be76508 100644 --- a/drivers/bus/ti-sysc.c +++ b/drivers/bus/ti-sysc.c @@ -3291,7 +3291,9 @@ static int sysc_remove(struct platform_device *pdev) struct sysc *ddata = platform_get_drvdata(pdev); int error; - cancel_delayed_work_sync(&ddata->idle_work); + /* Device can still be enabled, see deferred idle quirk in probe */ + if (cancel_delayed_work_sync(&ddata->idle_work)) + ti_sysc_idle(&ddata->idle_work.work); error = pm_runtime_get_sync(ddata->dev); if (error < 0) { From 823f24f2e329babd0330200d0b74882516fe57f4 Mon Sep 17 00:00:00 2001 From: Schspa Shi Date: Fri, 13 May 2022 19:24:44 +0800 Subject: [PATCH 0807/1842] driver: base: fix UAF when driver_attach failed [ Upstream commit 310862e574001a97ad02272bac0fd13f75f42a27 ] When driver_attach(drv); failed, the driver_private will be freed. But it has been added to the bus, which caused a UAF. To fix it, we need to delete it from the bus when failed. 
Fixes: 190888ac01d0 ("driver core: fix possible missing of device probe") Signed-off-by: Schspa Shi Link: https://lore.kernel.org/r/20220513112444.45112-1-schspa@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/base/bus.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/base/bus.c b/drivers/base/bus.c index a9c23ecebc7c..df85e928b97f 100644 --- a/drivers/base/bus.c +++ b/drivers/base/bus.c @@ -621,7 +621,7 @@ int bus_add_driver(struct device_driver *drv) if (drv->bus->p->drivers_autoprobe) { error = driver_attach(drv); if (error) - goto out_unregister; + goto out_del_list; } module_add_driver(drv->owner, drv); @@ -648,6 +648,8 @@ int bus_add_driver(struct device_driver *drv) return 0; +out_del_list: + klist_del(&priv->knode_bus); out_unregister: kobject_put(&priv->kobj); /* drv->p is freed in driver_release() */ From 36ee9ffca8ef56c302f2855c4a5fccf61c0c1ada Mon Sep 17 00:00:00 2001 From: Zhang Wensheng Date: Wed, 18 May 2022 15:45:16 +0800 Subject: [PATCH 0808/1842] driver core: fix deadlock in __device_attach [ Upstream commit b232b02bf3c205b13a26dcec08e53baddd8e59ed ] In __device_attach function, The lock holding logic is as follows: ... __device_attach device_lock(dev) // get lock dev async_schedule_dev(__device_attach_async_helper, dev); // func async_schedule_node async_schedule_node_domain(func) entry = kzalloc(sizeof(struct async_entry), GFP_ATOMIC); /* when fail or work limit, sync to execute func, but __device_attach_async_helper will get lock dev as well, which will lead to A-A deadlock. */ if (!entry || atomic_read(&entry_count) > MAX_WORK) { func; else queue_work_node(node, system_unbound_wq, &entry->work) device_unlock(dev) As shown above, when it is allowed to do async probes, because of out of memory or work limit, async work is not allowed, to do sync execute instead. it will lead to A-A deadlock because of __device_attach_async_helper getting lock dev. To fix the deadlock, move the async_schedule_dev outside device_lock, as we can see, in async_schedule_node_domain, the parameter of queue_work_node is system_unbound_wq, so it can accept concurrent operations. which will also not change the code logic, and will not lead to deadlock. 
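Editor's model of the locking shape of this fix, not the driver code itself: decide under the lock whether async probing is needed, drop the lock, and only then call the scheduling helper, whose synchronous fallback may take the same lock. The names and the pthread mutex are stand-ins for device_lock()/async_schedule_dev().

#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>

static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

/* May be invoked synchronously by the "scheduler" fallback; it takes
 * the device lock itself, so it must never run while the caller still
 * holds that lock.
 */
static void attach_async_helper(void)
{
	pthread_mutex_lock(&dev_lock);
	printf("probing\n");
	pthread_mutex_unlock(&dev_lock);
}

static void device_attach(void)
{
	bool async = false;

	pthread_mutex_lock(&dev_lock);
	async = true;			/* decision is made under the lock */
	pthread_mutex_unlock(&dev_lock);

	if (async)			/* hand-off happens after unlock */
		attach_async_helper();
}

int main(void)
{
	device_attach();
	return 0;
}
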
Fixes: 765230b5f084 ("driver-core: add asynchronous probing support for drivers") Signed-off-by: Zhang Wensheng Link: https://lore.kernel.org/r/20220518074516.1225580-1-zhangwensheng5@huawei.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/base/dd.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/base/dd.c b/drivers/base/dd.c index 2728223c1fbc..4f4e8aedbd2c 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -897,6 +897,7 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie) static int __device_attach(struct device *dev, bool allow_async) { int ret = 0; + bool async = false; device_lock(dev); if (dev->p->dead) { @@ -935,7 +936,7 @@ static int __device_attach(struct device *dev, bool allow_async) */ dev_dbg(dev, "scheduling asynchronous probe\n"); get_device(dev); - async_schedule_dev(__device_attach_async_helper, dev); + async = true; } else { pm_request_idle(dev); } @@ -945,6 +946,8 @@ static int __device_attach(struct device *dev, bool allow_async) } out_unlock: device_unlock(dev); + if (async) + async_schedule_dev(__device_attach_async_helper, dev); return ret; } From b3354f2046ccae8a8c941511dfc22f03a859500c Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 12 Apr 2022 07:08:23 +0000 Subject: [PATCH 0809/1842] watchdog: rti-wdt: Fix pm_runtime_get_sync() error checking [ Upstream commit b3ac0c58fa8934926360268f3d89ec7680644d7b ] If the device is already in a runtime PM enabled state pm_runtime_get_sync() will return 1, so a test for negative value should be used to check for errors. Fixes: 2d63908bdbfb ("watchdog: Add K3 RTI watchdog support") Signed-off-by: Miaoqian Lin Reviewed-by: Guenter Roeck Link: https://lore.kernel.org/r/20220412070824.23708-1-linmq006@gmail.com Signed-off-by: Guenter Roeck Signed-off-by: Wim Van Sebroeck Signed-off-by: Sasha Levin --- drivers/watchdog/rti_wdt.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c index ae7f9357bb87..46c2a4bd9ebe 100644 --- a/drivers/watchdog/rti_wdt.c +++ b/drivers/watchdog/rti_wdt.c @@ -227,7 +227,7 @@ static int rti_wdt_probe(struct platform_device *pdev) pm_runtime_enable(dev); ret = pm_runtime_get_sync(dev); - if (ret) { + if (ret < 0) { pm_runtime_put_noidle(dev); pm_runtime_disable(&pdev->dev); return dev_err_probe(dev, ret, "runtime pm failed\n"); From 910b1cdf6c50ae8fb222e46657d04fb181577017 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 11 May 2022 15:42:03 +0400 Subject: [PATCH 0810/1842] watchdog: ts4800_wdt: Fix refcount leak in ts4800_wdt_probe [ Upstream commit 5d24df3d690809952528e7a19a43d84bc5b99d44 ] of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. Add missing of_node_put() in some error paths. 
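Editor's toy model, not kernel code: a minimal get/put counter standing in for of_parse_phandle()/of_node_put(), to show that every exit path after a successful get, including early error returns, must drop the reference - the leak class fixed here and in the omap2430 and rockchip grf patches earlier in this series.

#include <stdio.h>

struct node { int refcount; };

static struct node *node_get(struct node *n) { n->refcount++; return n; }
static void node_put(struct node *n) { n->refcount--; }

static int probe(struct node *syscon, int fail_early)
{
	struct node *np = node_get(syscon);	/* like of_parse_phandle() */

	if (fail_early) {
		node_put(np);	/* without this, the reference leaks */
		return -1;
	}

	node_put(np);		/* normal path: done with the node */
	return 0;
}

int main(void)
{
	struct node syscon = { .refcount = 1 };

	probe(&syscon, 1);
	probe(&syscon, 0);
	printf("refcount back to %d (a leak would leave it higher)\n",
	       syscon.refcount);
	return 0;
}
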
Fixes: bf9006399939 ("watchdog: ts4800: add driver for TS-4800 watchdog") Signed-off-by: Miaoqian Lin Reviewed-by: Guenter Roeck Link: https://lore.kernel.org/r/20220511114203.47420-1-linmq006@gmail.com Signed-off-by: Guenter Roeck Signed-off-by: Wim Van Sebroeck Signed-off-by: Sasha Levin --- drivers/watchdog/ts4800_wdt.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/watchdog/ts4800_wdt.c b/drivers/watchdog/ts4800_wdt.c index c137ad2bd5c3..0ea554c7cda5 100644 --- a/drivers/watchdog/ts4800_wdt.c +++ b/drivers/watchdog/ts4800_wdt.c @@ -125,13 +125,16 @@ static int ts4800_wdt_probe(struct platform_device *pdev) ret = of_property_read_u32_index(np, "syscon", 1, ®); if (ret < 0) { dev_err(dev, "no offset in syscon\n"); + of_node_put(syscon_np); return ret; } /* allocate memory for watchdog struct */ wdt = devm_kzalloc(dev, sizeof(*wdt), GFP_KERNEL); - if (!wdt) + if (!wdt) { + of_node_put(syscon_np); return -ENOMEM; + } /* set regmap and offset to know where to write */ wdt->feed_offset = reg; From 1d7361679f0a8d7debbd5e6a76b8782899a19d56 Mon Sep 17 00:00:00 2001 From: Shengjiu Wang Date: Mon, 23 May 2022 13:44:21 +0800 Subject: [PATCH 0811/1842] ASoC: fsl_sai: Fix FSL_SAI_xDR/xFR definition [ Upstream commit e4dd748dc87cf431af7b3954963be0d9f6150217 ] There are multiple xDR and xFR registers, the index is from 0 to 7. FSL_SAI_xDR and FSL_SAI_xFR is abandoned, replace them with FSL_SAI_xDR0 and FSL_SAI_xFR0. Fixes: 4f7a0728b530 ("ASoC: fsl_sai: Add support for SAI new version") Signed-off-by: Shengjiu Wang Link: https://lore.kernel.org/r/1653284661-18964-1-git-send-email-shengjiu.wang@nxp.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/fsl/fsl_sai.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h index 4bbcd0dbe8f1..8923c680f0e0 100644 --- a/sound/soc/fsl/fsl_sai.h +++ b/sound/soc/fsl/fsl_sai.h @@ -80,8 +80,8 @@ #define FSL_SAI_xCR3(tx, ofs) (tx ? FSL_SAI_TCR3(ofs) : FSL_SAI_RCR3(ofs)) #define FSL_SAI_xCR4(tx, ofs) (tx ? FSL_SAI_TCR4(ofs) : FSL_SAI_RCR4(ofs)) #define FSL_SAI_xCR5(tx, ofs) (tx ? FSL_SAI_TCR5(ofs) : FSL_SAI_RCR5(ofs)) -#define FSL_SAI_xDR(tx, ofs) (tx ? FSL_SAI_TDR(ofs) : FSL_SAI_RDR(ofs)) -#define FSL_SAI_xFR(tx, ofs) (tx ? FSL_SAI_TFR(ofs) : FSL_SAI_RFR(ofs)) +#define FSL_SAI_xDR0(tx) (tx ? FSL_SAI_TDR0 : FSL_SAI_RDR0) +#define FSL_SAI_xFR0(tx) (tx ? FSL_SAI_TFR0 : FSL_SAI_RFR0) #define FSL_SAI_xMR(tx) (tx ? FSL_SAI_TMR : FSL_SAI_RMR) /* SAI Transmit/Receive Control Register */ From e892a7e60f1f508262c0fe6c84f2486c69594745 Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Fri, 22 Apr 2022 12:41:01 +0200 Subject: [PATCH 0812/1842] clocksource/drivers/oxnas-rps: Fix irq_of_parse_and_map() return value [ Upstream commit 9c04a8ff03def4df3f81219ffbe1ec9b44ff5348 ] The irq_of_parse_and_map() returns 0 on failure, not a negative ERRNO. 
Fixes: 89355274e1f7 ("clocksource/drivers/oxnas-rps: Add Oxford Semiconductor RPS Dual Timer") Signed-off-by: Krzysztof Kozlowski Reviewed-by: Neil Armstrong Link: https://lore.kernel.org/r/20220422104101.55754-1-krzysztof.kozlowski@linaro.org Signed-off-by: Daniel Lezcano Signed-off-by: Sasha Levin --- drivers/clocksource/timer-oxnas-rps.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/clocksource/timer-oxnas-rps.c b/drivers/clocksource/timer-oxnas-rps.c index 56c0cc32d0ac..d514b44e67dd 100644 --- a/drivers/clocksource/timer-oxnas-rps.c +++ b/drivers/clocksource/timer-oxnas-rps.c @@ -236,7 +236,7 @@ static int __init oxnas_rps_timer_init(struct device_node *np) } rps->irq = irq_of_parse_and_map(np, 0); - if (rps->irq < 0) { + if (!rps->irq) { ret = -EINVAL; goto err_iomap; } From ee89d7fd49de438e2eb435f9b14c0d514ff5bed6 Mon Sep 17 00:00:00 2001 From: Jann Horn Date: Tue, 17 May 2022 16:30:47 +0200 Subject: [PATCH 0813/1842] s390/crypto: fix scatterwalk_unmap() callers in AES-GCM [ Upstream commit bd52cd5e23f134019b23f0c389db0f9a436e4576 ] The argument of scatterwalk_unmap() is supposed to be the void* that was returned by the previous scatterwalk_map() call. The s390 AES-GCM implementation was instead passing the pointer to the struct scatter_walk. This doesn't actually break anything because scatterwalk_unmap() only uses its argument under CONFIG_HIGHMEM and ARCH_HAS_FLUSH_ON_KUNMAP. Fixes: bf7fa038707c ("s390/crypto: add s390 platform specific aes gcm support.") Signed-off-by: Jann Horn Acked-by: Harald Freudenberger Link: https://lore.kernel.org/r/20220517143047.3054498-1-jannh@google.com Signed-off-by: Heiko Carstens Signed-off-by: Sasha Levin --- arch/s390/crypto/aes_s390.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c index 73044634d342..812730e6bfff 100644 --- a/arch/s390/crypto/aes_s390.c +++ b/arch/s390/crypto/aes_s390.c @@ -700,7 +700,7 @@ static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw, unsigned int nbytes) { gw->walk_bytes_remain -= nbytes; - scatterwalk_unmap(&gw->walk); + scatterwalk_unmap(gw->walk_ptr); scatterwalk_advance(&gw->walk, nbytes); scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain); gw->walk_ptr = NULL; @@ -775,7 +775,7 @@ static int gcm_out_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded) goto out; } - scatterwalk_unmap(&gw->walk); + scatterwalk_unmap(gw->walk_ptr); gw->walk_ptr = NULL; gw->ptr = gw->buf; From 503a3fd6466d517c489c1f5fa000929852e2815b Mon Sep 17 00:00:00 2001 From: Vincent Ray Date: Wed, 25 May 2022 17:17:46 -0700 Subject: [PATCH 0814/1842] net: sched: fixed barrier to prevent skbuff sticking in qdisc backlog [ Upstream commit a54ce3703613e41fe1d98060b62ec09a3984dc28 ] In qdisc_run_begin(), smp_mb__before_atomic() used before test_bit() does not provide any ordering guarantee as test_bit() is not an atomic operation. This, added to the fact that the spin_trylock() call at the beginning of qdisc_run_begin() does not guarantee acquire semantics if it does not grab the lock, makes it possible for the following statement : if (test_bit(__QDISC_STATE_MISSED, &qdisc->state)) to be executed before an enqueue operation called before qdisc_run_begin(). As a result the following race can happen : CPU 1 CPU 2 qdisc_run_begin() qdisc_run_begin() /* true */ set(MISSED) . /* returns false */ . . /* sees MISSED = 1 */ . /* so qdisc not empty */ . __qdisc_run() . . . pfifo_fast_dequeue() ----> /* may be done here */ . 
| . clear(MISSED) | . . | . smp_mb __after_atomic(); | . . | . /* recheck the queue */ | . /* nothing => exit */ | enqueue(skb1) | . | qdisc_run_begin() | . | spin_trylock() /* fail */ | . | smp_mb__before_atomic() /* not enough */ | . ---- if (test_bit(MISSED)) return false; /* exit */ In the above scenario, CPU 1 and CPU 2 both try to grab the qdisc->seqlock at the same time. Only CPU 2 succeeds and enters the bypass code path, where it emits its skb then calls __qdisc_run(). CPU1 fails, sets MISSED and goes down the traditionnal enqueue() + dequeue() code path. But when executing qdisc_run_begin() for the second time, after enqueuing its skbuff, it sees the MISSED bit still set (by itself) and consequently chooses to exit early without setting it again nor trying to grab the spinlock again. Meanwhile CPU2 has seen MISSED = 1, cleared it, checked the queue and found it empty, so it returned. At the end of the sequence, we end up with skb1 enqueued in the backlog, both CPUs out of __dev_xmit_skb(), the MISSED bit not set, and no __netif_schedule() called made. skb1 will now linger in the qdisc until somebody later performs a full __qdisc_run(). Associated to the bypass capacity of the qdisc, and the ability of the TCP layer to avoid resending packets which it knows are still in the qdisc, this can lead to serious traffic "holes" in a TCP connection. We fix this by replacing the smp_mb__before_atomic() / test_bit() / set_bit() / smp_mb__after_atomic() sequence inside qdisc_run_begin() by a single test_and_set_bit() call, which is more concise and enforces the needed memory barriers. Fixes: 89837eb4b246 ("net: sched: add barrier to ensure correct ordering for lockless qdisc") Signed-off-by: Vincent Ray Signed-off-by: Eric Dumazet Link: https://lore.kernel.org/r/20220526001746.2437669-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- include/net/sch_generic.h | 36 ++++++++---------------------------- 1 file changed, 8 insertions(+), 28 deletions(-) diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index 1042c449e7db..769764bda7a8 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -163,37 +163,17 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc) if (spin_trylock(&qdisc->seqlock)) goto nolock_empty; - /* Paired with smp_mb__after_atomic() to make sure - * STATE_MISSED checking is synchronized with clearing - * in pfifo_fast_dequeue(). + /* No need to insist if the MISSED flag was already set. + * Note that test_and_set_bit() also gives us memory ordering + * guarantees wrt potential earlier enqueue() and below + * spin_trylock(), both of which are necessary to prevent races */ - smp_mb__before_atomic(); - - /* If the MISSED flag is set, it means other thread has - * set the MISSED flag before second spin_trylock(), so - * we can return false here to avoid multi cpus doing - * the set_bit() and second spin_trylock() concurrently. - */ - if (test_bit(__QDISC_STATE_MISSED, &qdisc->state)) + if (test_and_set_bit(__QDISC_STATE_MISSED, &qdisc->state)) return false; - /* Set the MISSED flag before the second spin_trylock(), - * if the second spin_trylock() return false, it means - * other cpu holding the lock will do dequeuing for us - * or it will see the MISSED flag set after releasing - * lock and reschedule the net_tx_action() to do the - * dequeuing. 
- */ - set_bit(__QDISC_STATE_MISSED, &qdisc->state); - - /* spin_trylock() only has load-acquire semantic, so use - * smp_mb__after_atomic() to ensure STATE_MISSED is set - * before doing the second spin_trylock(). - */ - smp_mb__after_atomic(); - - /* Retry again in case other CPU may not see the new flag - * after it releases the lock at the end of qdisc_run_end(). + /* Try to take the lock again to make sure that we will either + * grab it or the CPU that still has it will see MISSED set + * when testing it in qdisc_run_end() */ if (!spin_trylock(&qdisc->seqlock)) return false; From 71ae30662ec610b92644d13f79c78f76f17873b3 Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Thu, 26 May 2022 11:02:42 +0300 Subject: [PATCH 0815/1842] net: ethernet: mtk_eth_soc: out of bounds read in mtk_hwlro_get_fdir_entry() [ Upstream commit e7e7104e2d5ddf3806a28695670f21bef471f1e1 ] The "fsp->location" variable comes from user via ethtool_get_rxnfc(). Check that it is valid to prevent an out of bounds read. Fixes: 7aab747e5563 ("net: ethernet: mediatek: add ethtool functions to configure RX flows of HW LRO") Signed-off-by: Dan Carpenter Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 7d7dc0754a3a..789642647cd3 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1966,6 +1966,9 @@ static int mtk_hwlro_get_fdir_entry(struct net_device *dev, struct ethtool_rx_flow_spec *fsp = (struct ethtool_rx_flow_spec *)&cmd->fs; + if (fsp->location >= ARRAY_SIZE(mac->hwlro_ip)) + return -EINVAL; + /* only tcp dst ipv4 is meaningful, others are meaningless */ fsp->flow_type = TCP_V4_FLOW; fsp->h_u.tcp_ip4_spec.ip4dst = ntohl(mac->hwlro_ip[fsp->location]); From f7ba2cc57f404d2d9f26fb85bd3833d35a477829 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 26 May 2022 12:52:08 +0400 Subject: [PATCH 0816/1842] net: ethernet: ti: am65-cpsw-nuss: Fix some refcount leaks [ Upstream commit 5dd89d2fc438457811cbbec07999ce0d80051ff5 ] of_get_child_by_name() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. am65_cpsw_init_cpts() and am65_cpsw_nuss_probe() don't release the refcount in error case. Add missing of_node_put() to avoid refcount leak. Fixes: b1f66a5bee07 ("net: ethernet: ti: am65-cpsw-nuss: enable packet timestamping support") Fixes: 93a76530316a ("net: ethernet: ti: introduce am65x/j721e gigabit eth subsystem driver") Signed-off-by: Miaoqian Lin Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/ti/am65-cpsw-nuss.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c index 0805edef5625..059d68d48f1e 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c @@ -1716,6 +1716,7 @@ static int am65_cpsw_init_cpts(struct am65_cpsw_common *common) if (IS_ERR(cpts)) { int ret = PTR_ERR(cpts); + of_node_put(node); if (ret == -EOPNOTSUPP) { dev_info(dev, "cpts disabled\n"); return 0; @@ -2064,9 +2065,9 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev) if (!node) return -ENOENT; common->port_num = of_get_child_count(node); + of_node_put(node); if (common->port_num < 1 || common->port_num > AM65_CPSW_MAX_PORTS) return -ENOENT; - of_node_put(node); if (common->port_num != 1) return -EOPNOTSUPP; From 42658e47f1abbbe592007d3ba303de466114d0bb Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 26 May 2022 18:52:08 +0400 Subject: [PATCH 0817/1842] net: dsa: mv88e6xxx: Fix refcount leak in mv88e6xxx_mdios_register MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 02ded5a173619b11728b8bf75a3fd995a2c1ff28 ] of_get_child_by_name() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. mv88e6xxx_mdio_register() pass the device node to of_mdiobus_register(). We don't need the device node after it. Add missing of_node_put() to avoid refcount leak. Fixes: a3c53be55c95 ("net: dsa: mv88e6xxx: Support multiple MDIO busses") Signed-off-by: Miaoqian Lin Reviewed-by: Marek Behún Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/dsa/mv88e6xxx/chip.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c index e79a808375fc..7b7a8a74405d 100644 --- a/drivers/net/dsa/mv88e6xxx/chip.c +++ b/drivers/net/dsa/mv88e6xxx/chip.c @@ -3148,6 +3148,7 @@ static int mv88e6xxx_mdios_register(struct mv88e6xxx_chip *chip, */ child = of_get_child_by_name(np, "mdio"); err = mv88e6xxx_mdio_register(chip, child, false); + of_node_put(child); if (err) return err; From 741e49eacdcd31ec9b448a77b1ab9219a00c1db9 Mon Sep 17 00:00:00 2001 From: Alexander Lobakin Date: Tue, 24 May 2022 17:27:18 +0200 Subject: [PATCH 0818/1842] modpost: fix removing numeric suffixes [ Upstream commit b5beffa20d83c4e15306c991ffd00de0d8628338 ] With the `-z unique-symbol` linker flag or any similar mechanism, it is possible to trigger the following: ERROR: modpost: "param_set_uint.0" [vmlinux] is a static EXPORT_SYMBOL The reason is that for now the condition from remove_dot(): if (m && (s[n + m] == '.' || s[n + m] == 0)) which was designed to test if it's a dot or a '\0' after the suffix is never satisfied. This is due to that `s[n + m]` always points to the last digit of a numeric suffix, not on the symbol next to it (from a custom debug print added to modpost): param_set_uint.0, s[n + m] is '0', s[n + m + 1] is '\0' So it's off-by-one and was like that since 2014. Fix this for the sake of any potential upcoming features, but don't bother stable-backporting, as it's well hidden -- apart from that LD flag, it can be triggered only with GCC LTO which never landed upstream. 
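Working through the indices for the quoted example makes the off-by-one concrete:

  s = "param_set_uint.0"              (s[14] == '.', s[15] == '0', s[16] == '\0')
  n = 14                              (offset of the '.' found earlier in remove_dot())
  m = strspn(s + n + 1, "0123456789") = 1
  old check: s[n + m]     -> s[15] == '0', neither '.' nor '\0', so the suffix is never stripped
  new check: s[n + m + 1] -> s[16] == '\0', so s[n] is cleared and the name becomes "param_set_uint"

This matches the debug print quoted above.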
Fixes: fcd38ed0ff26 ("scripts: modpost: fix compilation warning") Signed-off-by: Alexander Lobakin Reviewed-by: Petr Mladek Signed-off-by: Masahiro Yamada Signed-off-by: Sasha Levin --- scripts/mod/modpost.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c index e08f75aed429..a21aa74b4948 100644 --- a/scripts/mod/modpost.c +++ b/scripts/mod/modpost.c @@ -1982,7 +1982,7 @@ static char *remove_dot(char *s) if (n && s[n]) { size_t m = strspn(s + n + 1, "0123456789"); - if (m && (s[n + m] == '.' || s[n + m] == 0)) + if (m && (s[n + m + 1] == '.' || s[n + m + 1] == 0)) s[n] = 0; } return s; From 3252d327f977b14663a10967f3b0930d6c325687 Mon Sep 17 00:00:00 2001 From: Baokun Li Date: Tue, 12 Apr 2022 17:38:16 +0800 Subject: [PATCH 0819/1842] jffs2: fix memory leak in jffs2_do_fill_super [ Upstream commit c14adb1cf70a984ed081c67e9d27bc3caad9537c ] If jffs2_iget() or d_make_root() in jffs2_do_fill_super() returns an error, we can observe the following kmemleak report: -------------------------------------------- unreferenced object 0xffff888105a65340 (size 64): comm "mount", pid 710, jiffies 4302851558 (age 58.239s) hex dump (first 32 bytes): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ backtrace: [] kmem_cache_alloc_trace+0x475/0x8a0 [] jffs2_sum_init+0x96/0x1a0 [] jffs2_do_mount_fs+0x745/0x2120 [] jffs2_do_fill_super+0x35c/0x810 [] jffs2_fill_super+0x2b9/0x3b0 [...] unreferenced object 0xffff8881bd7f0000 (size 65536): comm "mount", pid 710, jiffies 4302851558 (age 58.239s) hex dump (first 32 bytes): bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb ................ bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb ................ backtrace: [] kmalloc_order+0xda/0x110 [] kmalloc_order_trace+0x21/0x130 [] __kmalloc+0x711/0x8a0 [] jffs2_sum_init+0xd9/0x1a0 [] jffs2_do_mount_fs+0x745/0x2120 [] jffs2_do_fill_super+0x35c/0x810 [] jffs2_fill_super+0x2b9/0x3b0 [...] -------------------------------------------- This is because the resources allocated in jffs2_sum_init() are not released. Call jffs2_sum_exit() to release these resources to solve the problem. Fixes: e631ddba5887 ("[JFFS2] Add erase block summary support (mount time improvement)") Signed-off-by: Baokun Li Signed-off-by: Richard Weinberger Signed-off-by: Sasha Levin --- fs/jffs2/fs.c | 1 + 1 file changed, 1 insertion(+) diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c index 7170de78cd26..db210989784d 100644 --- a/fs/jffs2/fs.c +++ b/fs/jffs2/fs.c @@ -603,6 +603,7 @@ int jffs2_do_fill_super(struct super_block *sb, struct fs_context *fc) jffs2_free_raw_node_refs(c); kvfree(c->blocks); jffs2_clear_xattr_subsystem(c); + jffs2_sum_exit(c); out_inohash: kfree(c->inocache_list); out_wbuf: From f413e4d7cdf3d837583e9f4b3019a3d828ac7aad Mon Sep 17 00:00:00 2001 From: Zhihao Cheng Date: Tue, 10 May 2022 20:31:24 +0800 Subject: [PATCH 0820/1842] ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty [ Upstream commit d09e9a2bddba6c48e0fddb16c4383172ac593251 ] There at least 6 PEBs reserved on UBI device: 1. EBA_RESERVED_PEBS[1] 2. WL_RESERVED_PEBS[1] 3. UBI_LAYOUT_VOLUME_EBS[2] 4. MIN_FASTMAP_RESERVED_PEBS[2] When all ubi volumes take all their PEBs, there are 3 (EBA_RESERVED_PEBS + WL_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS - MIN_FASTMAP_TAKEN_PEBS[1]) free PEBs. 
Since commit f9c34bb529975fe ("ubi: Fix producing anchor PEBs") and commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules") applied, there is only 1 (3 - FASTMAP_ANCHOR_PEBS[1] - FASTMAP_NEXT_ANCHOR_PEBS[1]) free PEB to fill pool and wl_pool, after filling pool, wl_pool is always empty. So, UBI could be stuck in an infinite loop: ubi_thread system_wq wear_leveling_worker <-------------------------------------------------- get_peb_for_wl | // fm_wl_pool, used = size = 0 | schedule_work(&ubi->fm_work) | | update_fastmap_work_fn | ubi_update_fastmap | ubi_refill_pools | // ubi->free_count - ubi->beb_rsvd_pebs < 5 | // wl_pool is not filled with any PEBs | schedule_erase(old_fm_anchor) | ubi_ensure_anchor_pebs | __schedule_ubi_work(wear_leveling_worker) | | __erase_worker | ensure_wear_leveling | __schedule_ubi_work(wear_leveling_worker) -------------------------- , which cause high cpu usage of ubi_bgt: top - 12:10:42 up 5 min, 2 users, load average: 1.76, 0.68, 0.27 Tasks: 123 total, 3 running, 54 sleeping, 0 stopped, 0 zombie PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1589 root 20 0 0 0 0 R 45.0 0.0 0:38.86 ubi_bgt0d 319 root 20 0 0 0 0 I 15.2 0.0 0:15.29 kworker/0:3-eve 371 root 20 0 0 0 0 I 14.9 0.0 0:12.85 kworker/3:3-eve 20 root 20 0 0 0 0 I 11.3 0.0 0:05.33 kworker/1:0-eve 202 root 20 0 0 0 0 I 11.3 0.0 0:04.93 kworker/2:3-eve In commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules"), there are three key changes: 1) Choose the fastmap anchor when the most free PEBs are available. 2) Enable anchor move within the anchor area again as it is useful for distributing wear. 3) Import a candidate fm anchor and check this PEB's erase count during wear leveling. If the wear leveling limit is exceeded, use the used anchor area PEB with the lowest erase count to replace it. The anchor candidate can be removed, we can check fm_anchor PEB's erase count during wear leveling. Fix it by: 1) Removing 'fm_next_anchor' and check 'fm_anchor' during wear leveling. 2) Preferentially filling one free peb into fm_wl_pool in condition of ubi->free_count > ubi->beb_rsvd_pebs, then try to reserve enough free count for fastmap non anchor pebs after the above prerequisites are met. Then, there are at least 1 PEB in pool and 1 PEB in wl_pool after calling ubi_refill_pools() with all erase works done. Fetch a reproducer in [Link]. Fixes: 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs ... 
rules") Link: https://bugzilla.kernel.org/show_bug.cgi?id=215407 Signed-off-by: Zhihao Cheng Signed-off-by: Richard Weinberger Signed-off-by: Sasha Levin --- drivers/mtd/ubi/fastmap-wl.c | 69 ++++++++++++++++++++++++------------ drivers/mtd/ubi/fastmap.c | 11 ------ drivers/mtd/ubi/ubi.h | 4 +-- drivers/mtd/ubi/wl.c | 19 +++++----- 4 files changed, 57 insertions(+), 46 deletions(-) diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c index 28f55f9cf715..053ab52668e8 100644 --- a/drivers/mtd/ubi/fastmap-wl.c +++ b/drivers/mtd/ubi/fastmap-wl.c @@ -97,6 +97,33 @@ struct ubi_wl_entry *ubi_wl_get_fm_peb(struct ubi_device *ubi, int anchor) return e; } +/* + * has_enough_free_count - whether ubi has enough free pebs to fill fm pools + * @ubi: UBI device description object + * @is_wl_pool: whether UBI is filling wear leveling pool + * + * This helper function checks whether there are enough free pebs (deducted + * by fastmap pebs) to fill fm_pool and fm_wl_pool, above rule works after + * there is at least one of free pebs is filled into fm_wl_pool. + * For wear leveling pool, UBI should also reserve free pebs for bad pebs + * handling, because there maybe no enough free pebs for user volumes after + * producing new bad pebs. + */ +static bool has_enough_free_count(struct ubi_device *ubi, bool is_wl_pool) +{ + int fm_used = 0; // fastmap non anchor pebs. + int beb_rsvd_pebs; + + if (!ubi->free.rb_node) + return false; + + beb_rsvd_pebs = is_wl_pool ? ubi->beb_rsvd_pebs : 0; + if (ubi->fm_wl_pool.size > 0 && !(ubi->ro_mode || ubi->fm_disabled)) + fm_used = ubi->fm_size / ubi->leb_size - 1; + + return ubi->free_count - beb_rsvd_pebs > fm_used; +} + /** * ubi_refill_pools - refills all fastmap PEB pools. * @ubi: UBI device description object @@ -120,21 +147,17 @@ void ubi_refill_pools(struct ubi_device *ubi) wl_tree_add(ubi->fm_anchor, &ubi->free); ubi->free_count++; } - if (ubi->fm_next_anchor) { - wl_tree_add(ubi->fm_next_anchor, &ubi->free); - ubi->free_count++; - } - /* All available PEBs are in ubi->free, now is the time to get + /* + * All available PEBs are in ubi->free, now is the time to get * the best anchor PEBs. */ ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1); - ubi->fm_next_anchor = ubi_wl_get_fm_peb(ubi, 1); for (;;) { enough = 0; if (pool->size < pool->max_size) { - if (!ubi->free.rb_node) + if (!has_enough_free_count(ubi, false)) break; e = wl_get_wle(ubi); @@ -147,8 +170,7 @@ void ubi_refill_pools(struct ubi_device *ubi) enough++; if (wl_pool->size < wl_pool->max_size) { - if (!ubi->free.rb_node || - (ubi->free_count - ubi->beb_rsvd_pebs < 5)) + if (!has_enough_free_count(ubi, true)) break; e = find_wl_entry(ubi, &ubi->free, WL_FREE_MAX_DIFF); @@ -286,20 +308,26 @@ static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi) int ubi_ensure_anchor_pebs(struct ubi_device *ubi) { struct ubi_work *wrk; + struct ubi_wl_entry *anchor; spin_lock(&ubi->wl_lock); - /* Do we have a next anchor? */ - if (!ubi->fm_next_anchor) { - ubi->fm_next_anchor = ubi_wl_get_fm_peb(ubi, 1); - if (!ubi->fm_next_anchor) - /* Tell wear leveling to produce a new anchor PEB */ - ubi->fm_do_produce_anchor = 1; + /* Do we already have an anchor? */ + if (ubi->fm_anchor) { + spin_unlock(&ubi->wl_lock); + return 0; } - /* Do wear leveling to get a new anchor PEB or check the - * existing next anchor candidate. 
- */ + /* See if we can find an anchor PEB on the list of free PEBs */ + anchor = ubi_wl_get_fm_peb(ubi, 1); + if (anchor) { + ubi->fm_anchor = anchor; + spin_unlock(&ubi->wl_lock); + return 0; + } + + ubi->fm_do_produce_anchor = 1; + /* No luck, trigger wear leveling to produce a new anchor PEB. */ if (ubi->wl_scheduled) { spin_unlock(&ubi->wl_lock); return 0; @@ -381,11 +409,6 @@ static void ubi_fastmap_close(struct ubi_device *ubi) ubi->fm_anchor = NULL; } - if (ubi->fm_next_anchor) { - return_unused_peb(ubi, ubi->fm_next_anchor); - ubi->fm_next_anchor = NULL; - } - if (ubi->fm) { for (i = 0; i < ubi->fm->used_blocks; i++) kfree(ubi->fm->e[i]); diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c index 6b5f1ffd961b..6e95c4b1473e 100644 --- a/drivers/mtd/ubi/fastmap.c +++ b/drivers/mtd/ubi/fastmap.c @@ -1230,17 +1230,6 @@ static int ubi_write_fastmap(struct ubi_device *ubi, fm_pos += sizeof(*fec); ubi_assert(fm_pos <= ubi->fm_size); } - if (ubi->fm_next_anchor) { - fec = (struct ubi_fm_ec *)(fm_raw + fm_pos); - - fec->pnum = cpu_to_be32(ubi->fm_next_anchor->pnum); - set_seen(ubi, ubi->fm_next_anchor->pnum, seen_pebs); - fec->ec = cpu_to_be32(ubi->fm_next_anchor->ec); - - free_peb_count++; - fm_pos += sizeof(*fec); - ubi_assert(fm_pos <= ubi->fm_size); - } fmh->free_peb_count = cpu_to_be32(free_peb_count); ubi_for_each_used_peb(ubi, wl_e, tmp_rb) { diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h index c2da77163f94..da0bee13fe7f 100644 --- a/drivers/mtd/ubi/ubi.h +++ b/drivers/mtd/ubi/ubi.h @@ -491,8 +491,7 @@ struct ubi_debug_info { * @fm_work: fastmap work queue * @fm_work_scheduled: non-zero if fastmap work was scheduled * @fast_attach: non-zero if UBI was attached by fastmap - * @fm_anchor: The new anchor PEB used during fastmap update - * @fm_next_anchor: An anchor PEB candidate for the next time fastmap is updated + * @fm_anchor: The next anchor PEB to use for fastmap * @fm_do_produce_anchor: If true produce an anchor PEB in wl * * @used: RB-tree of used physical eraseblocks @@ -603,7 +602,6 @@ struct ubi_device { int fm_work_scheduled; int fast_attach; struct ubi_wl_entry *fm_anchor; - struct ubi_wl_entry *fm_next_anchor; int fm_do_produce_anchor; /* Wear-leveling sub-system's stuff */ diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c index 7847de75a74c..820b5c1c8e8e 100644 --- a/drivers/mtd/ubi/wl.c +++ b/drivers/mtd/ubi/wl.c @@ -688,16 +688,16 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk, #ifdef CONFIG_MTD_UBI_FASTMAP e1 = find_anchor_wl_entry(&ubi->used); - if (e1 && ubi->fm_next_anchor && - (ubi->fm_next_anchor->ec - e1->ec >= UBI_WL_THRESHOLD)) { + if (e1 && ubi->fm_anchor && + (ubi->fm_anchor->ec - e1->ec >= UBI_WL_THRESHOLD)) { ubi->fm_do_produce_anchor = 1; - /* fm_next_anchor is no longer considered a good anchor - * candidate. + /* + * fm_anchor is no longer considered a good anchor. * NULL assignment also prevents multiple wear level checks * of this PEB. 
*/ - wl_tree_add(ubi->fm_next_anchor, &ubi->free); - ubi->fm_next_anchor = NULL; + wl_tree_add(ubi->fm_anchor, &ubi->free); + ubi->fm_anchor = NULL; ubi->free_count++; } @@ -1086,12 +1086,13 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk) if (!err) { spin_lock(&ubi->wl_lock); - if (!ubi->fm_disabled && !ubi->fm_next_anchor && + if (!ubi->fm_disabled && !ubi->fm_anchor && e->pnum < UBI_FM_MAX_START) { - /* Abort anchor production, if needed it will be + /* + * Abort anchor production, if needed it will be * enabled again in the wear leveling started below. */ - ubi->fm_next_anchor = e; + ubi->fm_anchor = e; ubi->fm_do_produce_anchor = 0; } else { wl_tree_add(e, &ubi->free); From 6d8d3f68cbecfd31925796f0fb668eb21ab06734 Mon Sep 17 00:00:00 2001 From: Zhihao Cheng Date: Tue, 10 May 2022 20:31:26 +0800 Subject: [PATCH 0821/1842] ubi: ubi_create_volume: Fix use-after-free when volume creation failed [ Upstream commit 8c03a1c21d72210f81cb369cc528e3fde4b45411 ] There is an use-after-free problem for 'eba_tbl' in ubi_create_volume()'s error handling path: ubi_eba_replace_table(vol, eba_tbl) vol->eba_tbl = tbl out_mapping: ubi_eba_destroy_table(eba_tbl) // Free 'eba_tbl' out_unlock: put_device(&vol->dev) vol_release kfree(tbl->entries) // UAF Fix it by removing redundant 'eba_tbl' releasing. Fetch a reproducer in [Link]. Fixes: 493cfaeaa0c9b ("mtd: utilize new cdev_device_add helper function") Link: https://bugzilla.kernel.org/show_bug.cgi?id=215965 Signed-off-by: Zhihao Cheng Signed-off-by: Richard Weinberger Signed-off-by: Sasha Levin --- drivers/mtd/ubi/vmt.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c index 1bc7b3a05604..6ea95ade4ca6 100644 --- a/drivers/mtd/ubi/vmt.c +++ b/drivers/mtd/ubi/vmt.c @@ -309,7 +309,6 @@ int ubi_create_volume(struct ubi_device *ubi, struct ubi_mkvol_req *req) ubi->volumes[vol_id] = NULL; ubi->vol_count -= 1; spin_unlock(&ubi->volumes_lock); - ubi_eba_destroy_table(eba_tbl); out_acc: spin_lock(&ubi->volumes_lock); ubi->rsvd_pebs -= vol->reserved_pebs; From 8f49e1694cbc29e76d5028267c1978cc2630e494 Mon Sep 17 00:00:00 2001 From: Menglong Dong Date: Tue, 24 May 2022 10:12:27 +0800 Subject: [PATCH 0822/1842] bpf: Fix probe read error in ___bpf_prog_run() [ Upstream commit caff1fa4118cec4dfd4336521ebd22a6408a1e3e ] I think there is something wrong with BPF_PROBE_MEM in ___bpf_prog_run() in big-endian machine. Let's make a test and see what will happen if we want to load a 'u16' with BPF_PROBE_MEM. Let's make the src value '0x0001', the value of dest register will become 0x0001000000000000, as the value will be loaded to the first 2 byte of DST with following code: bpf_probe_read_kernel(&DST, SIZE, (const void *)(long) (SRC + insn->off)); Obviously, the value in DST is not correct. In fact, we can compare BPF_PROBE_MEM with LDX_MEM_H: DST = *(SIZE *)(unsigned long) (SRC + insn->off); If the memory load is done by LDX_MEM_H, the value in DST will be 0x1 now. 
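The effect is easy to reproduce with a small user-space sketch (plain C, not kernel code) that mimics what the interpreter does for a 2-byte BPF_PROBE_MEM load:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          uint16_t src = 0x0001;
          uint64_t dst = 0;

          /* old behaviour: copy the 2 source bytes to the start of the u64 register */
          memcpy(&dst, &src, sizeof(src));
          printf("raw:          0x%016llx\n", (unsigned long long)dst);

          /* the fix: reinterpret the copied bytes with the load width */
          dst = *(uint16_t *)&dst;
          printf("reinterpret:  0x%016llx\n", (unsigned long long)dst);
          return 0;
  }

On a little-endian host both lines print 0x1; on a big-endian host the first line prints 0x0001000000000000, the bad value described above, while the second prints 0x1, as LDX_MEM_H would.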
And I think this error results in the test case 'test_bpf_sk_storage_map' failing: test_bpf_sk_storage_map:PASS:bpf_iter_bpf_sk_storage_map__open_and_load 0 nsec test_bpf_sk_storage_map:PASS:socket 0 nsec test_bpf_sk_storage_map:PASS:map_update 0 nsec test_bpf_sk_storage_map:PASS:socket 0 nsec test_bpf_sk_storage_map:PASS:map_update 0 nsec test_bpf_sk_storage_map:PASS:socket 0 nsec test_bpf_sk_storage_map:PASS:map_update 0 nsec test_bpf_sk_storage_map:PASS:attach_iter 0 nsec test_bpf_sk_storage_map:PASS:create_iter 0 nsec test_bpf_sk_storage_map:PASS:read 0 nsec test_bpf_sk_storage_map:FAIL:ipv6_sk_count got 0 expected 3 $10/26 bpf_iter/bpf_sk_storage_map:FAIL The code of the test case is simply, it will load sk->sk_family to the register with BPF_PROBE_MEM and check if it is AF_INET6. With this patch, now the test case 'bpf_iter' can pass: $10 bpf_iter:OK Fixes: 2a02759ef5f8 ("bpf: Add support for BTF pointers to interpreter") Signed-off-by: Menglong Dong Signed-off-by: Daniel Borkmann Reviewed-by: Jiang Biao Reviewed-by: Hao Peng Cc: Ilya Leoshkevich Link: https://lore.kernel.org/bpf/20220524021228.533216-1-imagedong@tencent.com Signed-off-by: Sasha Levin --- kernel/bpf/core.c | 14 +++++--------- 1 file changed, 5 insertions(+), 9 deletions(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index d3a1f25f8ec2..845a4c052433 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1653,6 +1653,11 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) CONT; \ LDX_MEM_##SIZEOP: \ DST = *(SIZE *)(unsigned long) (SRC + insn->off); \ + CONT; \ + LDX_PROBE_MEM_##SIZEOP: \ + bpf_probe_read_kernel(&DST, sizeof(SIZE), \ + (const void *)(long) (SRC + insn->off)); \ + DST = *((SIZE *)&DST); \ CONT; LDST(B, u8) @@ -1660,15 +1665,6 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) LDST(W, u32) LDST(DW, u64) #undef LDST -#define LDX_PROBE(SIZEOP, SIZE) \ - LDX_PROBE_MEM_##SIZEOP: \ - bpf_probe_read_kernel(&DST, SIZE, (const void *)(long) (SRC + insn->off)); \ - CONT; - LDX_PROBE(B, 1) - LDX_PROBE(H, 2) - LDX_PROBE(W, 4) - LDX_PROBE(DW, 8) -#undef LDX_PROBE STX_XADD_W: /* lock xadd *(u32 *)(dst_reg + off16) += src_reg */ atomic_add((u32) SRC, (atomic_t *)(unsigned long) From b97550e380ca21db3a3cb6586bdef798cc2ae1d1 Mon Sep 17 00:00:00 2001 From: Heinrich Schuchardt Date: Sat, 28 May 2022 03:41:32 +0200 Subject: [PATCH 0823/1842] riscv: read-only pages should not be writable [ Upstream commit 630f972d76d6460235e84e1aa034ee06f9c8c3a9 ] If EFI pages are marked as read-only, we should remove the _PAGE_WRITE flag. The current code overwrites an unused value. 
Fixes: b91540d52a08b ("RISC-V: Add EFI runtime services") Signed-off-by: Heinrich Schuchardt Link: https://lore.kernel.org/r/20220528014132.91052-1-heinrich.schuchardt@canonical.com Signed-off-by: Ard Biesheuvel Signed-off-by: Sasha Levin --- arch/riscv/kernel/efi.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/riscv/kernel/efi.c b/arch/riscv/kernel/efi.c index 024159298231..1aa540350abd 100644 --- a/arch/riscv/kernel/efi.c +++ b/arch/riscv/kernel/efi.c @@ -65,7 +65,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data) if (md->attribute & EFI_MEMORY_RO) { val = pte_val(pte) & ~_PAGE_WRITE; - val = pte_val(pte) | _PAGE_READ; + val |= _PAGE_READ; pte = __pte(val); } if (md->attribute & EFI_MEMORY_XP) { From 630e0a10c020472e58aad5c2b5a05a7c59364723 Mon Sep 17 00:00:00 2001 From: Guangguan Wang Date: Sat, 28 May 2022 14:54:57 +0800 Subject: [PATCH 0824/1842] net/smc: fixes for converting from "struct smc_cdc_tx_pend **" to "struct smc_wr_tx_pend_priv *" [ Upstream commit e225c9a5a74b12e9ef8516f30a3db2c7eb866ee1 ] "struct smc_cdc_tx_pend **" can not directly convert to "struct smc_wr_tx_pend_priv *". Fixes: 2bced6aefa3d ("net/smc: put slot when connection is killed") Signed-off-by: Guangguan Wang Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/smc/smc_cdc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c index 0c490cdde6a4..94503f36b9a6 100644 --- a/net/smc/smc_cdc.c +++ b/net/smc/smc_cdc.c @@ -72,7 +72,7 @@ int smc_cdc_get_free_slot(struct smc_connection *conn, /* abnormal termination */ if (!rc) smc_wr_tx_put_slot(link, - (struct smc_wr_tx_pend_priv *)pend); + (struct smc_wr_tx_pend_priv *)(*pend)); rc = -EPIPE; } return rc; From 7ac3a034d96a7c592462ec6750a20d2aa88521ae Mon Sep 17 00:00:00 2001 From: Yu Xiao Date: Fri, 27 May 2022 20:24:24 +0200 Subject: [PATCH 0825/1842] nfp: only report pause frame configuration for physical device [ Upstream commit 0649e4d63420ebc8cbebef3e9d39e12ffc5eb9fa ] Only report pause frame configuration for physical device. Logical port of both PCI PF and PCI VF do not support it. Fixes: 9fdc5d85a8fe ("nfp: update ethtool reporting of pauseframe control") Signed-off-by: Yu Xiao Signed-off-by: Simon Horman Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c index cd0c9623f7dd..e0b801d10739 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c @@ -286,8 +286,6 @@ nfp_net_get_link_ksettings(struct net_device *netdev, /* Init to unknowns */ ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE); - ethtool_link_ksettings_add_link_mode(cmd, supported, Pause); - ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause); cmd->base.port = PORT_OTHER; cmd->base.speed = SPEED_UNKNOWN; cmd->base.duplex = DUPLEX_UNKNOWN; @@ -295,6 +293,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev, port = nfp_port_from_netdev(netdev); eth_port = nfp_port_get_eth_port(port); if (eth_port) { + ethtool_link_ksettings_add_link_mode(cmd, supported, Pause); + ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause); cmd->base.autoneg = eth_port->aneg != NFP_ANEG_DISABLED ? 
AUTONEG_ENABLE : AUTONEG_DISABLE; nfp_net_set_fec_link_mode(eth_port, cmd); From 8f81a4113e1e574d2cbde4f2cd599380a9189c0f Mon Sep 17 00:00:00 2001 From: Martin Habets Date: Fri, 27 May 2022 10:05:28 +0200 Subject: [PATCH 0826/1842] sfc: fix considering that all channels have TX queues MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 2e102b53f8a778f872dc137f4c7ac548705817aa ] Normally, all channels have RX and TX queues, but this is not true if modparam efx_separate_tx_channels=1 is used. In that cases, some channels only have RX queues and others only TX queues (or more preciselly, they have them allocated, but not initialized). Fix efx_channel_has_tx_queues to return the correct value for this case too. Messages shown at probe time before the fix: sfc 0000:03:00.0 ens6f0np0: MC command 0x82 inlen 544 failed rc=-22 (raw=0) arg=0 ------------[ cut here ]------------ netdevice: ens6f0np0: failed to initialise TXQ -1 WARNING: CPU: 1 PID: 626 at drivers/net/ethernet/sfc/ef10.c:2393 efx_ef10_tx_init+0x201/0x300 [sfc] [...] stripped RIP: 0010:efx_ef10_tx_init+0x201/0x300 [sfc] [...] stripped Call Trace: efx_init_tx_queue+0xaa/0xf0 [sfc] efx_start_channels+0x49/0x120 [sfc] efx_start_all+0x1f8/0x430 [sfc] efx_net_open+0x5a/0xe0 [sfc] __dev_open+0xd0/0x190 __dev_change_flags+0x1b3/0x220 dev_change_flags+0x21/0x60 [...] stripped Messages shown at remove time before the fix: sfc 0000:03:00.0 ens6f0np0: failed to flush 10 queues sfc 0000:03:00.0 ens6f0np0: failed to flush queues Fixes: 8700aff08984 ("sfc: fix channel allocation with brute force") Reported-by: Tianhao Zhao Signed-off-by: Martin Habets Tested-by: Íñigo Huguet Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/sfc/net_driver.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h index 9f7dfdf708cf..8aecb4bd2c0d 100644 --- a/drivers/net/ethernet/sfc/net_driver.h +++ b/drivers/net/ethernet/sfc/net_driver.h @@ -1522,7 +1522,7 @@ static inline bool efx_channel_is_xdp_tx(struct efx_channel *channel) static inline bool efx_channel_has_tx_queues(struct efx_channel *channel) { - return true; + return channel && channel->channel >= channel->efx->tx_channel_offset; } static inline unsigned int efx_channel_num_tx_queues(struct efx_channel *channel) From bf2af9b24313553f3f0b30443220ab0ac8595d2d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C3=8D=C3=B1igo=20Huguet?= Date: Fri, 27 May 2022 10:05:29 +0200 Subject: [PATCH 0827/1842] sfc: fix wrong tx channel offset with efx_separate_tx_channels MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit c308dfd1b43ef0d4c3e57b741bb3462eb7a7f4a2 ] tx_channel_offset is calculated in efx_allocate_msix_channels, but it is also calculated again in efx_set_channels because it was originally done there, and when efx_allocate_msix_channels was introduced it was forgotten to be removed from efx_set_channels. Moreover, the old calculation is wrong when using efx_separate_tx_channels because now we can have XDP channels after the TX channels, so n_channels - n_tx_channels doesn't point to the first TX channel. Remove the old calculation from efx_set_channels, and add the initialization of this variable if MSI or legacy interrupts are used, next to the initialization of the rest of the related variables, where it was missing. 
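A concrete (hypothetical) layout shows why the old formula breaks: with efx_separate_tx_channels=1, one RX channel, one TX channel and two XDP channels, the channels are laid out as [RX][TX][XDP][XDP], so n_channels = 4 and n_channels - n_tx_channels = 3, which points at an XDP channel instead of the TX channel at index 1. The correct offset is 1, right after the RX channels, which is what efx_allocate_msix_channels already computes and what this patch now also sets in the MSI and legacy paths.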
Fixes: 3990a8fffbda ("sfc: allocate channels for XDP tx queues") Reported-by: Tianhao Zhao Signed-off-by: Íñigo Huguet Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/sfc/efx_channels.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c index 2ab8571ef1cc..d0f1b2dc7dff 100644 --- a/drivers/net/ethernet/sfc/efx_channels.c +++ b/drivers/net/ethernet/sfc/efx_channels.c @@ -287,6 +287,7 @@ int efx_probe_interrupts(struct efx_nic *efx) efx->n_channels = 1; efx->n_rx_channels = 1; efx->n_tx_channels = 1; + efx->tx_channel_offset = 0; efx->n_xdp_channels = 0; efx->xdp_channel_offset = efx->n_channels; rc = pci_enable_msi(efx->pci_dev); @@ -307,6 +308,7 @@ int efx_probe_interrupts(struct efx_nic *efx) efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0); efx->n_rx_channels = 1; efx->n_tx_channels = 1; + efx->tx_channel_offset = 1; efx->n_xdp_channels = 0; efx->xdp_channel_offset = efx->n_channels; efx->legacy_irq = efx->pci_dev->irq; @@ -858,10 +860,6 @@ int efx_set_channels(struct efx_nic *efx) int xdp_queue_number; int rc; - efx->tx_channel_offset = - efx_separate_tx_channels ? - efx->n_channels - efx->n_tx_channels : 0; - if (efx->xdp_tx_queue_count) { EFX_WARN_ON_PARANOID(efx->xdp_tx_queues); From ea5edd015feb1824002579513e351002bde9226e Mon Sep 17 00:00:00 2001 From: Leon Romanovsky Date: Tue, 24 May 2022 15:59:27 +0300 Subject: [PATCH 0828/1842] net/mlx5: Don't use already freed action pointer [ Upstream commit 80b2bd737d0e833e6a2b77e482e5a714a79c86a4 ] The call to mlx5dr_action_destroy() releases "action" memory. That pointer is set to miss_action later and generates the following smatch error: drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c:53 set_miss_action() warn: 'action' was already freed. Make sure that the pointer is always valid by setting NULL after destroy. Fixes: 6a48faeeca10 ("net/mlx5: Add direct rule fs_cmd implementation") Reported-by: Dan Carpenter Signed-off-by: Leon Romanovsky Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c index 96c39a17d026..b227fa9ada46 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c @@ -43,11 +43,10 @@ static int set_miss_action(struct mlx5_flow_root_namespace *ns, err = mlx5dr_table_set_miss_action(ft->fs_dr_table.dr_table, action); if (err && action) { err = mlx5dr_action_destroy(action); - if (err) { - action = NULL; - mlx5_core_err(ns->dev, "Failed to destroy action (%d)\n", - err); - } + if (err) + mlx5_core_err(ns->dev, + "Failed to destroy action (%d)\n", err); + action = NULL; } ft->fs_dr_table.miss_action = action; if (old_miss_action) { From b50eef7a38ed074b0c4a958155da0e103aa6419f Mon Sep 17 00:00:00 2001 From: Changcheng Liu Date: Tue, 26 Apr 2022 21:28:14 +0800 Subject: [PATCH 0829/1842] net/mlx5: correct ECE offset in query qp output [ Upstream commit 3fc2a9e89b3508a5cc0c324f26d7b4740ba8c456 ] ECE field should be after opt_param_mask in query qp output. 
Fixes: 6b646a7e4af6 ("net/mlx5: Add ability to read and write ECE options") Signed-off-by: Changcheng Liu Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- include/linux/mlx5/mlx5_ifc.h | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h index eba1f1cbc9fb..6ca97729b54a 100644 --- a/include/linux/mlx5/mlx5_ifc.h +++ b/include/linux/mlx5/mlx5_ifc.h @@ -4877,12 +4877,11 @@ struct mlx5_ifc_query_qp_out_bits { u8 syndrome[0x20]; - u8 reserved_at_40[0x20]; - u8 ece[0x20]; + u8 reserved_at_40[0x40]; u8 opt_param_mask[0x20]; - u8 reserved_at_a0[0x20]; + u8 ece[0x20]; struct mlx5_ifc_qpc_bits qpc; From e9fe72b95d7f538a956dbe7455e81dc5ebbfb0e6 Mon Sep 17 00:00:00 2001 From: Maxim Mikityanskiy Date: Mon, 23 May 2022 15:39:13 +0300 Subject: [PATCH 0830/1842] net/mlx5e: Update netdev features after changing XDP state [ Upstream commit f6279f113ad593971999c877eb69dc3d36a75894 ] Some features (LRO, HW GRO) conflict with XDP. If there is an attempt to enable such features while XDP is active, they will be set to `off [requested on]`. In order to activate these features after XDP is turned off, the driver needs to call netdev_update_features(). This commit adds this missing call after XDP state changes. Fixes: cf6e34c8c22f ("net/mlx5e: Properly block LRO when XDP is enabled") Fixes: b0617e7b3500 ("net/mlx5e: Properly block HW GRO when XDP is enabled") Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/en_main.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index d9cc0ed6c5f7..cfc3bfcb04a2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -4576,6 +4576,11 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) unlock: mutex_unlock(&priv->state_lock); + + /* Need to fix some features. */ + if (!err) + netdev_update_features(netdev); + return err; } From e05dd93826e1a0f9dedb6bcf8f99ba17462ae08f Mon Sep 17 00:00:00 2001 From: Guoju Fang Date: Sat, 28 May 2022 18:16:28 +0800 Subject: [PATCH 0831/1842] net: sched: add barrier to fix packet stuck problem for lockless qdisc [ Upstream commit 2e8728c955ce0624b958eee6e030a37aca3a5d86 ] In qdisc_run_end(), the spin_unlock() only has store-release semantic, which guarantees all earlier memory access are visible before it. But the subsequent test_bit() has no barrier semantics so may be reordered ahead of the spin_unlock(). The store-load reordering may cause a packet stuck problem. The concurrent operations can be described as below, CPU 0 | CPU 1 qdisc_run_end() | qdisc_run_begin() . | . ----> /* may be reorderd here */ | . | . | . | spin_unlock() | set_bit() | . | smp_mb__after_atomic() ---- test_bit() | spin_trylock() . | . Consider the following sequence of events: CPU 0 reorder test_bit() ahead and see MISSED = 0 CPU 1 calls set_bit() CPU 1 calls spin_trylock() and return fail CPU 0 executes spin_unlock() At the end of the sequence, CPU 0 calls spin_unlock() and does nothing because it see MISSED = 0. The skb on CPU 1 has beed enqueued but no one take it, until the next cpu pushing to the qdisc (if ever ...) will notice and dequeue it. This patch fix this by adding one explicit barrier. 
As spin_unlock() and test_bit() ordering is a store-load ordering, a full memory barrier smp_mb() is needed here. Fixes: a90c57f2cedd ("net: sched: fix packet stuck problem for lockless qdisc") Signed-off-by: Guoju Fang Link: https://lore.kernel.org/r/20220528101628.120193-1-gjfang@linux.alibaba.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- include/net/sch_generic.h | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index 769764bda7a8..bed2387af456 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -197,6 +197,12 @@ static inline void qdisc_run_end(struct Qdisc *qdisc) if (qdisc->flags & TCQ_F_NOLOCK) { spin_unlock(&qdisc->seqlock); + /* spin_unlock() only has store-release semantic. The unlock + * and test_bit() ordering is a store-load ordering, so a full + * memory barrier is needed here. + */ + smp_mb(); + if (unlikely(test_bit(__QDISC_STATE_MISSED, &qdisc->state))) { clear_bit(__QDISC_STATE_MISSED, &qdisc->state); From 0a0f7f84148445c9f02f226928803a870139d820 Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Mon, 30 May 2022 14:37:13 -0700 Subject: [PATCH 0832/1842] tcp: tcp_rtx_synack() can be called from process context [ Upstream commit 0a375c822497ed6ad6b5da0792a12a6f1af10c0b ] Laurent reported the enclosed report [1] This bug triggers with following coditions: 0) Kernel built with CONFIG_DEBUG_PREEMPT=y 1) A new passive FastOpen TCP socket is created. This FO socket waits for an ACK coming from client to be a complete ESTABLISHED one. 2) A socket operation on this socket goes through lock_sock() release_sock() dance. 3) While the socket is owned by the user in step 2), a retransmit of the SYN is received and stored in socket backlog. 4) At release_sock() time, the socket backlog is processed while in process context. 5) A SYNACK packet is cooked in response of the SYN retransmit. 6) -> tcp_rtx_synack() is called in process context. Before blamed commit, tcp_rtx_synack() was always called from BH handler, from a timer handler. Fix this by using TCP_INC_STATS() & NET_INC_STATS() which do not assume caller is in non preemptible context. [1] BUG: using __this_cpu_add() in preemptible [00000000] code: epollpep/2180 caller is tcp_rtx_synack.part.0+0x36/0xc0 CPU: 10 PID: 2180 Comm: epollpep Tainted: G OE 5.16.0-0.bpo.4-amd64 #1 Debian 5.16.12-1~bpo11+1 Hardware name: Supermicro SYS-5039MC-H8TRF/X11SCD-F, BIOS 1.7 11/23/2021 Call Trace: dump_stack_lvl+0x48/0x5e check_preemption_disabled+0xde/0xe0 tcp_rtx_synack.part.0+0x36/0xc0 tcp_rtx_synack+0x8d/0xa0 ? kmem_cache_alloc+0x2e0/0x3e0 ? apparmor_file_alloc_security+0x3b/0x1f0 inet_rtx_syn_ack+0x16/0x30 tcp_check_req+0x367/0x610 tcp_rcv_state_process+0x91/0xf60 ? get_nohz_timer_target+0x18/0x1a0 ? lock_timer_base+0x61/0x80 ? preempt_count_add+0x68/0xa0 tcp_v4_do_rcv+0xbd/0x270 __release_sock+0x6d/0xb0 release_sock+0x2b/0x90 sock_setsockopt+0x138/0x1140 ? __sys_getsockname+0x7e/0xc0 ? 
aa_sk_perm+0x3e/0x1a0 __sys_setsockopt+0x198/0x1e0 __x64_sys_setsockopt+0x21/0x30 do_syscall_64+0x38/0xc0 entry_SYSCALL_64_after_hwframe+0x44/0xae Fixes: 168a8f58059a ("tcp: TCP Fast Open Server - main code path") Signed-off-by: Eric Dumazet Reported-by: Laurent Fasnacht Acked-by: Neal Cardwell Link: https://lore.kernel.org/r/20220530213713.601888-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index e37ad0b3645c..8634a5c853f5 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -4115,8 +4115,8 @@ int tcp_rtx_synack(const struct sock *sk, struct request_sock *req) res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL, NULL); if (!res) { - __TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS); - __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS); + TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS); + NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS); if (unlikely(tcp_passive_fastopen(sk))) tcp_sk(sk)->total_retrans++; trace_tcp_retransmit_synack(sk, req); From 04622d631826ba483ae3a0b8a71c745d8e21453d Mon Sep 17 00:00:00 2001 From: Haibo Chen Date: Mon, 30 May 2022 18:48:48 +0800 Subject: [PATCH 0833/1842] gpio: pca953x: use the correct register address to do regcache sync [ Upstream commit 43624eda86c98b0de726d0b6f2516ccc3ef7313f ] For regcache_sync_region, need to use pca953x_recalc_addr() to get the real register address. Fixes: b76574300504 ("gpio: pca953x: Restore registers after suspend/resume cycle") Signed-off-by: Haibo Chen Signed-off-by: Bartosz Golaszewski Signed-off-by: Sasha Levin --- drivers/gpio/gpio-pca953x.c | 19 +++++++++++-------- 1 file changed, 11 insertions(+), 8 deletions(-) diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c index e936e1eb1f95..bb4ca064447e 100644 --- a/drivers/gpio/gpio-pca953x.c +++ b/drivers/gpio/gpio-pca953x.c @@ -1107,20 +1107,21 @@ static int pca953x_regcache_sync(struct device *dev) { struct pca953x_chip *chip = dev_get_drvdata(dev); int ret; + u8 regaddr; /* * The ordering between direction and output is important, * sync these registers first and only then sync the rest. 
*/ - ret = regcache_sync_region(chip->regmap, chip->regs->direction, - chip->regs->direction + NBANK(chip)); + regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0); + ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip)); if (ret) { dev_err(dev, "Failed to sync GPIO dir registers: %d\n", ret); return ret; } - ret = regcache_sync_region(chip->regmap, chip->regs->output, - chip->regs->output + NBANK(chip)); + regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0); + ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip)); if (ret) { dev_err(dev, "Failed to sync GPIO out registers: %d\n", ret); return ret; @@ -1128,16 +1129,18 @@ static int pca953x_regcache_sync(struct device *dev) #ifdef CONFIG_GPIO_PCA953X_IRQ if (chip->driver_data & PCA_PCAL) { - ret = regcache_sync_region(chip->regmap, PCAL953X_IN_LATCH, - PCAL953X_IN_LATCH + NBANK(chip)); + regaddr = pca953x_recalc_addr(chip, PCAL953X_IN_LATCH, 0); + ret = regcache_sync_region(chip->regmap, regaddr, + regaddr + NBANK(chip)); if (ret) { dev_err(dev, "Failed to sync INT latch registers: %d\n", ret); return ret; } - ret = regcache_sync_region(chip->regmap, PCAL953X_INT_MASK, - PCAL953X_INT_MASK + NBANK(chip)); + regaddr = pca953x_recalc_addr(chip, PCAL953X_INT_MASK, 0); + ret = regcache_sync_region(chip->regmap, regaddr, + regaddr + NBANK(chip)); if (ret) { dev_err(dev, "Failed to sync INT mask registers: %d\n", ret); From d2e297eaf456265d256fef3506438effe71d9381 Mon Sep 17 00:00:00 2001 From: David Howells Date: Tue, 31 May 2022 09:30:40 +0100 Subject: [PATCH 0834/1842] afs: Fix infinite loop found by xfstest generic/676 [ Upstream commit 17eabd42560f4636648ad65ba5b20228071e2363 ] In AFS, a directory is handled as a file that the client downloads and parses locally for the purposes of performing lookup and getdents operations. The in-kernel afs filesystem has a number of functions that do this. A directory file is arranged as a series of 2K blocks divided into 32-byte slots, where a directory entry occupies one or more slots, plus each block starts with one or more metadata blocks. When parsing a block, if the last slots are occupied by a dirent that occupies more than a single slot and the file position points at a slot that's not the initial one, the logic in afs_dir_iterate_block() that skips over it won't advance the file pointer to the end of it. This will cause an infinite loop in getdents() as it will keep retrying that block and failing to advance beyond the final entry. Fix this by advancing the file pointer if the next entry will be beyond it when we skip a block. 
This was found by the generic/676 xfstest but can also be triggered with something like: ~/xfstests-dev/src/t_readdir_3 /xfstest.test/z 4000 1 Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: David Howells Reviewed-by: Marc Dionne Tested-by: Marc Dionne cc: linux-afs@lists.infradead.org Link: http://lore.kernel.org/r/165391973497.110268.2939296942213894166.stgit@warthog.procyon.org.uk/ Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- fs/afs/dir.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/fs/afs/dir.c b/fs/afs/dir.c index 262c0ae505af..159795059547 100644 --- a/fs/afs/dir.c +++ b/fs/afs/dir.c @@ -412,8 +412,11 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode, } /* skip if starts before the current position */ - if (offset < curr) + if (offset < curr) { + if (next > curr) + ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent); continue; + } /* found the next entry */ if (!dir_emit(ctx, dire->u.name, nlen, From c1f0187025905e9981000d44a92e159468b561a8 Mon Sep 17 00:00:00 2001 From: Damien Le Moal Date: Wed, 1 Jun 2022 15:25:43 +0900 Subject: [PATCH 0835/1842] scsi: sd: Fix potential NULL pointer dereference [ Upstream commit 05fbde3a77a4f1d62e4c4428f384288c1f1a0be5 ] If sd_probe() sees an early error before sdkp->device is initialized, sd_zbc_release_disk() is called. This causes a NULL pointer dereference when sd_is_zoned() is called inside that function. Avoid this by removing the call to sd_zbc_release_disk() in sd_probe() error path. This change is safe and does not result in zone information memory leakage because the zone information for a zoned disk is allocated only when sd_revalidate_disk() is called, at which point sdkp->disk_dev is fully set, resulting in sd_disk_release() being called when needed to cleanup a disk zone information using sd_zbc_release_disk(). Link: https://lore.kernel.org/r/20220601062544.905141-2-damien.lemoal@opensource.wdc.com Fixes: 89d947561077 ("sd: Implement support for ZBC devices") Reported-by: Dongliang Mu Suggested-by: Christoph Hellwig Reviewed-by: Christoph Hellwig Signed-off-by: Damien Le Moal Signed-off-by: Martin K. 
Petersen Signed-off-by: Sasha Levin --- drivers/scsi/sd.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c index 56e291708587..bd068d3bb455 100644 --- a/drivers/scsi/sd.c +++ b/drivers/scsi/sd.c @@ -3511,7 +3511,6 @@ static int sd_probe(struct device *dev) out_put: put_disk(gd); out_free: - sd_zbc_release_disk(sdkp); kfree(sdkp); out: scsi_autopm_put_device(sdp); From b8fac8e321044a9ac50f7185b4e9d91a7745e4b0 Mon Sep 17 00:00:00 2001 From: Hoang Le Date: Thu, 2 Jun 2022 13:30:53 +0700 Subject: [PATCH 0836/1842] tipc: check attribute length for bearer name [ Upstream commit 7f36f798f89bf32c0164049cb0e3fd1af613d0bb ] syzbot reported uninit-value: ===================================================== BUG: KMSAN: uninit-value in string_nocheck lib/vsprintf.c:644 [inline] BUG: KMSAN: uninit-value in string+0x4f9/0x6f0 lib/vsprintf.c:725 string_nocheck lib/vsprintf.c:644 [inline] string+0x4f9/0x6f0 lib/vsprintf.c:725 vsnprintf+0x2222/0x3650 lib/vsprintf.c:2806 vprintk_store+0x537/0x2150 kernel/printk/printk.c:2158 vprintk_emit+0x28b/0xab0 kernel/printk/printk.c:2256 vprintk_default+0x86/0xa0 kernel/printk/printk.c:2283 vprintk+0x15f/0x180 kernel/printk/printk_safe.c:50 _printk+0x18d/0x1cf kernel/printk/printk.c:2293 tipc_enable_bearer net/tipc/bearer.c:371 [inline] __tipc_nl_bearer_enable+0x2022/0x22a0 net/tipc/bearer.c:1033 tipc_nl_bearer_enable+0x6c/0xb0 net/tipc/bearer.c:1042 genl_family_rcv_msg_doit net/netlink/genetlink.c:731 [inline] - Do sanity check the attribute length for TIPC_NLA_BEARER_NAME. - Do not use 'illegal name' in printing message. Reported-by: syzbot+e820fdc8ce362f2dea51@syzkaller.appspotmail.com Fixes: cb30a63384bc ("tipc: refactor function tipc_enable_bearer()") Acked-by: Jon Maloy Signed-off-by: Hoang Le Link: https://lore.kernel.org/r/20220602063053.5892-1-hoang.h.le@dektech.com.au Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/tipc/bearer.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c index 6911f1cab206..72c31ef985eb 100644 --- a/net/tipc/bearer.c +++ b/net/tipc/bearer.c @@ -249,9 +249,8 @@ static int tipc_enable_bearer(struct net *net, const char *name, u32 i; if (!bearer_name_validate(name, &b_names)) { - errstr = "illegal name"; NL_SET_ERR_MSG(extack, "Illegal name"); - goto rejected; + return res; } if (prio > TIPC_MAX_LINK_PRI && prio != TIPC_MEDIA_LINK_PRI) { From 71cbce75031aed26c72c2dc8a83111d181685f1b Mon Sep 17 00:00:00 2001 From: Saravana Kannan Date: Fri, 3 Jun 2022 13:31:37 +0200 Subject: [PATCH 0837/1842] driver core: Fix wait_for_device_probe() & deferred_probe_timeout interaction [ Upstream commit 5ee76c256e928455212ab759c51d198fedbe7523 ] Mounting NFS rootfs was timing out when deferred_probe_timeout was non-zero [1]. This was because ip_auto_config() initcall times out waiting for the network interfaces to show up when deferred_probe_timeout was non-zero. While ip_auto_config() calls wait_for_device_probe() to make sure any currently running deferred probe work or asynchronous probe finishes, that wasn't sufficient to account for devices being deferred until deferred_probe_timeout. Commit 35a672363ab3 ("driver core: Ensure wait_for_device_probe() waits until the deferred_probe_timeout fires") tried to fix that by making sure wait_for_device_probe() waits for deferred_probe_timeout to expire before returning. 
However, if wait_for_device_probe() is called from the kernel_init() context: - Before deferred_probe_initcall() [2], it causes the boot process to hang due to a deadlock. - After deferred_probe_initcall() [3], it blocks kernel_init() from continuing till deferred_probe_timeout expires and beats the point of deferred_probe_timeout that's trying to wait for userspace to load modules. Neither of this is good. So revert the changes to wait_for_device_probe(). [1] - https://lore.kernel.org/lkml/TYAPR01MB45443DF63B9EF29054F7C41FD8C60@TYAPR01MB4544.jpnprd01.prod.outlook.com/ [2] - https://lore.kernel.org/lkml/YowHNo4sBjr9ijZr@dev-arch.thelio-3990X/ [3] - https://lore.kernel.org/lkml/Yo3WvGnNk3LvLb7R@linutronix.de/ Fixes: 35a672363ab3 ("driver core: Ensure wait_for_device_probe() waits until the deferred_probe_timeout fires") Cc: John Stultz Cc: "David S. Miller" Cc: Alexey Kuznetsov Cc: Hideaki YOSHIFUJI Cc: Jakub Kicinski Cc: Rob Herring Cc: Geert Uytterhoeven Cc: Yoshihiro Shimoda Cc: Robin Murphy Cc: Andy Shevchenko Cc: Sudeep Holla Cc: Andy Shevchenko Cc: Naresh Kamboju Cc: Basil Eljuse Cc: Ferry Toth Cc: Arnd Bergmann Cc: Anders Roxell Cc: linux-pm@vger.kernel.org Reported-by: Nathan Chancellor Reported-by: Sebastian Andrzej Siewior Tested-by: Geert Uytterhoeven Acked-by: John Stultz Signed-off-by: Saravana Kannan Link: https://lore.kernel.org/r/20220526034609.480766-2-saravanak@google.com Signed-off-by: Greg Kroah-Hartman Reviewed-by: Rafael J. Wysocki Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- drivers/base/dd.c | 5 ----- 1 file changed, 5 deletions(-) diff --git a/drivers/base/dd.c b/drivers/base/dd.c index 4f4e8aedbd2c..f9d9f1ad9215 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -250,7 +250,6 @@ DEFINE_SHOW_ATTRIBUTE(deferred_devs); int driver_deferred_probe_timeout; EXPORT_SYMBOL_GPL(driver_deferred_probe_timeout); -static DECLARE_WAIT_QUEUE_HEAD(probe_timeout_waitqueue); static int __init deferred_probe_timeout_setup(char *str) { @@ -302,7 +301,6 @@ static void deferred_probe_timeout_work_func(struct work_struct *work) list_for_each_entry(p, &deferred_probe_pending_list, deferred_probe) dev_info(p->device, "deferred probe pending\n"); mutex_unlock(&deferred_probe_mutex); - wake_up_all(&probe_timeout_waitqueue); } static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func); @@ -706,9 +704,6 @@ int driver_probe_done(void) */ void wait_for_device_probe(void) { - /* wait for probe timeout */ - wait_event(probe_timeout_waitqueue, !driver_deferred_probe_timeout); - /* wait for the deferred probe workqueue to finish */ flush_work(&deferred_probe_work); From 32be2b805a1a13ccc68bd209ec3ae198dd3ba5d6 Mon Sep 17 00:00:00 2001 From: Leo Yan Date: Mon, 30 May 2022 16:42:53 +0800 Subject: [PATCH 0838/1842] perf c2c: Fix sorting in percent_rmt_hitm_cmp() [ Upstream commit b24192a17337abbf3f44aaa75e15df14a2d0016e ] The function percent_rmt_hitm_cmp() wrongly uses local HITMs for sorting remote HITMs. Since this function is to sort cache lines for remote HITMs, this patch changes to use 'rmt_hitm' field for correct sorting. 
Fixes: 9cb3500afc0980c5 ("perf c2c report: Add hitm/store percent related sort keys") Signed-off-by: Leo Yan Acked-by: Namhyung Kim Cc: Alexander Shishkin Cc: Ingo Molnar Cc: Jiri Olsa Cc: Joe Mario Cc: Mark Rutland Cc: Peter Zijlstra Link: https://lore.kernel.org/r/20220530084253.750190-1-leo.yan@linaro.org Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin --- tools/perf/builtin-c2c.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c index 7f7111d4b3ad..fb7d01f3961b 100644 --- a/tools/perf/builtin-c2c.c +++ b/tools/perf/builtin-c2c.c @@ -918,8 +918,8 @@ percent_rmt_hitm_cmp(struct perf_hpp_fmt *fmt __maybe_unused, double per_left; double per_right; - per_left = PERCENT(left, lcl_hitm); - per_right = PERCENT(right, lcl_hitm); + per_left = PERCENT(left, rmt_hitm); + per_right = PERCENT(right, rmt_hitm); return per_left - per_right; } From 76b226eaf0550c6acf9830ef732b6063bfeeb504 Mon Sep 17 00:00:00 2001 From: Dave Jiang Date: Mon, 11 Apr 2022 15:09:38 -0700 Subject: [PATCH 0839/1842] dmaengine: idxd: set DMA_INTERRUPT cap bit [ Upstream commit 4e5a4eb20393b851590b4465f1197a8041c2076b ] Even though idxd driver has always supported interrupt, it never actually set the DMA_INTERRUPT cap bit. Rectify this mistake so the interrupt capability is advertised. Reported-by: Ben Walker Signed-off-by: Dave Jiang Link: https://lore.kernel.org/r/164971497859.2201379.17925303210723708961.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin --- drivers/dma/idxd/dma.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c index aa7435555de9..d53ce22b4b8f 100644 --- a/drivers/dma/idxd/dma.c +++ b/drivers/dma/idxd/dma.c @@ -188,6 +188,7 @@ int idxd_register_dma_device(struct idxd_device *idxd) INIT_LIST_HEAD(&dma->channels); dma->dev = dev; + dma_cap_set(DMA_INTERRUPT, dma->cap_mask); dma_cap_set(DMA_PRIVATE, dma->cap_mask); dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask); dma->device_release = idxd_dma_release; From c667b3872a4c435a3f29d4e15971cd8c378b0043 Mon Sep 17 00:00:00 2001 From: Gong Yuanjun Date: Thu, 7 Apr 2022 12:26:57 +0800 Subject: [PATCH 0840/1842] mips: cpc: Fix refcount leak in mips_cpc_default_phys_base [ Upstream commit 4107fa700f314592850e2c64608f6ede4c077476 ] Add the missing of_node_put() to release the refcount incremented by of_find_compatible_node(). Signed-off-by: Gong Yuanjun Reviewed-by: Serge Semin Signed-off-by: Thomas Bogendoerfer Signed-off-by: Sasha Levin --- arch/mips/kernel/mips-cpc.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/mips/kernel/mips-cpc.c b/arch/mips/kernel/mips-cpc.c index 8d2535123f11..d005be84c482 100644 --- a/arch/mips/kernel/mips-cpc.c +++ b/arch/mips/kernel/mips-cpc.c @@ -27,6 +27,7 @@ phys_addr_t __weak mips_cpc_default_phys_base(void) cpc_node = of_find_compatible_node(of_root, NULL, "mti,mips-cpc"); if (cpc_node) { err = of_address_to_resource(cpc_node, 0, &res); + of_node_put(cpc_node); if (!err) return res.start; } From 2f452a33067d5f6b3c45a3ae3b6b107478d4416b Mon Sep 17 00:00:00 2001 From: Masami Hiramatsu Date: Wed, 6 Apr 2022 11:30:59 +0900 Subject: [PATCH 0841/1842] bootconfig: Make the bootconfig.o as a normal object file [ Upstream commit 6014a23638cdee63a71ef13c51d7c563eb5829ee ] Since the APIs defined in the bootconfig.o are not individually used, it is meaningless to build it as library by lib-y. Use obj-y for that. 
Link: https://lkml.kernel.org/r/164921225875.1090670.15565363126983098971.stgit@devnote2 Cc: Padmanabha Srinivasaiah Cc: Jonathan Corbet Cc: Randy Dunlap Cc: Nick Desaulniers Cc: Sami Tolvanen Cc: Nathan Chancellor Cc: Linux Kbuild mailing list Reported-by: Masahiro Yamada Signed-off-by: Masami Hiramatsu Signed-off-by: Steven Rostedt (Google) Signed-off-by: Sasha Levin --- lib/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/Makefile b/lib/Makefile index d415fc7067c5..69b8217652ed 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -274,7 +274,7 @@ $(foreach file, $(libfdt_files), \ $(eval CFLAGS_$(file) = -I $(srctree)/scripts/dtc/libfdt)) lib-$(CONFIG_LIBFDT) += $(libfdt_files) -lib-$(CONFIG_BOOT_CONFIG) += bootconfig.o +obj-$(CONFIG_BOOT_CONFIG) += bootconfig.o obj-$(CONFIG_RBTREE_TEST) += rbtree_test.o obj-$(CONFIG_INTERVAL_TREE_TEST) += interval_tree_test.o From 1788e6dbb61286215442b1af99e51405a6206762 Mon Sep 17 00:00:00 2001 From: Jun Miao Date: Tue, 19 Apr 2022 09:39:10 +0800 Subject: [PATCH 0842/1842] tracing: Fix sleeping function called from invalid context on RT kernel [ Upstream commit 12025abdc8539ed9d5014e2d647a3fd1bd3de5cd ] When setting bootparams="trace_event=initcall:initcall_start tp_printk=1" in the cmdline, the output_printk() was called, and the spin_lock_irqsave() was called in the atomic and irq disable interrupt context suitation. On the PREEMPT_RT kernel, these locks are replaced with sleepable rt-spinlock, so the stack calltrace will be triggered. Fix it by raw_spin_lock_irqsave when PREEMPT_RT and "trace_event=initcall:initcall_start tp_printk=1" enabled. BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:46 in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1, name: swapper/0 preempt_count: 2, expected: 0 RCU nest depth: 0, expected: 0 Preemption disabled at: [] try_to_wake_up+0x7e/0xba0 CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.17.1-rt17+ #19 34c5812404187a875f32bee7977f7367f9679ea7 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 Call Trace: dump_stack_lvl+0x60/0x8c dump_stack+0x10/0x12 __might_resched.cold+0x11d/0x155 rt_spin_lock+0x40/0x70 trace_event_buffer_commit+0x2fa/0x4c0 ? map_vsyscall+0x93/0x93 trace_event_raw_event_initcall_start+0xbe/0x110 ? perf_trace_initcall_finish+0x210/0x210 ? probe_sched_wakeup+0x34/0x40 ? ttwu_do_wakeup+0xda/0x310 ? trace_hardirqs_on+0x35/0x170 ? map_vsyscall+0x93/0x93 do_one_initcall+0x217/0x3c0 ? trace_event_raw_event_initcall_level+0x170/0x170 ? push_cpu_stop+0x400/0x400 ? cblist_init_generic+0x241/0x290 kernel_init_freeable+0x1ac/0x347 ? _raw_spin_unlock_irq+0x65/0x80 ? 
rest_init+0xf0/0xf0 kernel_init+0x1e/0x150 ret_from_fork+0x22/0x30 Link: https://lkml.kernel.org/r/20220419013910.894370-1-jun.miao@intel.com Signed-off-by: Jun Miao Signed-off-by: Steven Rostedt (Google) Signed-off-by: Sasha Levin --- kernel/trace/trace.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 953dd9568dd7..124e3e25e155 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -2784,7 +2784,7 @@ trace_event_buffer_lock_reserve(struct trace_buffer **current_rb, } EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve); -static DEFINE_SPINLOCK(tracepoint_iter_lock); +static DEFINE_RAW_SPINLOCK(tracepoint_iter_lock); static DEFINE_MUTEX(tracepoint_printk_mutex); static void output_printk(struct trace_event_buffer *fbuffer) @@ -2812,14 +2812,14 @@ static void output_printk(struct trace_event_buffer *fbuffer) event = &fbuffer->trace_file->event_call->event; - spin_lock_irqsave(&tracepoint_iter_lock, flags); + raw_spin_lock_irqsave(&tracepoint_iter_lock, flags); trace_seq_init(&iter->seq); iter->ent = fbuffer->entry; event_call->event.funcs->trace(iter, 0, event); trace_seq_putc(&iter->seq, 0); printk("%s", iter->seq.buffer); - spin_unlock_irqrestore(&tracepoint_iter_lock, flags); + raw_spin_unlock_irqrestore(&tracepoint_iter_lock, flags); } int tracepoint_printk_sysctl(struct ctl_table *table, int write, From 9e801c891aa2a002aa2b4259459c1d09f24e75b9 Mon Sep 17 00:00:00 2001 From: Mark-PK Tsai Date: Tue, 26 Apr 2022 20:24:06 +0800 Subject: [PATCH 0843/1842] tracing: Avoid adding tracer option before update_tracer_options [ Upstream commit ef9188bcc6ca1d8a2ad83e826b548e6820721061 ] To prepare for support asynchronous tracer_init_tracefs initcall, avoid calling create_trace_option_files before __update_tracer_options. Otherwise, create_trace_option_files will show warning because some tracers in trace_types list are already in tr->topts. For example, hwlat_tracer call register_tracer in late_initcall, and global_trace.dir is already created in tracing_init_dentry, hwlat_tracer will be put into tr->topts. Then if the __update_tracer_options is executed after hwlat_tracer registered, create_trace_option_files find that hwlat_tracer is already in tr->topts. Link: https://lkml.kernel.org/r/20220426122407.17042-2-mark-pk.tsai@mediatek.com Link: https://lore.kernel.org/lkml/20220322133339.GA32582@xsang-OptiPlex-9020/ Reported-by: kernel test robot Signed-off-by: Mark-PK Tsai Signed-off-by: Steven Rostedt (Google) Signed-off-by: Sasha Levin --- kernel/trace/trace.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 124e3e25e155..50200898410d 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -5907,12 +5907,18 @@ static void tracing_set_nop(struct trace_array *tr) tr->current_trace = &nop_trace; } +static bool tracer_options_updated; + static void add_tracer_options(struct trace_array *tr, struct tracer *t) { /* Only enable if the directory has been created already. 
*/ if (!tr->dir) return; + /* Only create trace option files after update_tracer_options finish */ + if (!tracer_options_updated) + return; + create_trace_option_files(tr, t); } @@ -8649,6 +8655,7 @@ static void __update_tracer_options(struct trace_array *tr) static void update_tracer_options(struct trace_array *tr) { mutex_lock(&trace_types_lock); + tracer_options_updated = true; __update_tracer_options(tr); mutex_unlock(&trace_types_lock); } From 3660db29b0305f9a1d95979c7af0f5db6ea99f5d Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Mon, 25 Apr 2022 19:41:36 +0800 Subject: [PATCH 0844/1842] iommu/arm-smmu: fix possible null-ptr-deref in arm_smmu_device_probe() [ Upstream commit d9ed8af1dee37f181096631fb03729ece98ba816 ] It will cause null-ptr-deref when using 'res', if platform_get_resource() returns NULL, so move using 'res' after devm_ioremap_resource() that will check it to avoid null-ptr-deref. And use devm_platform_get_and_ioremap_resource() to simplify code. Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220425114136.2649310-1-yangyingliang@huawei.com Signed-off-by: Will Deacon Signed-off-by: Sasha Levin --- drivers/iommu/arm/arm-smmu/arm-smmu.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c index df24bbe3ea4f..6b41fe229a05 100644 --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c @@ -2123,11 +2123,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev) if (err) return err; - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - ioaddr = res->start; - smmu->base = devm_ioremap_resource(dev, res); + smmu->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); if (IS_ERR(smmu->base)) return PTR_ERR(smmu->base); + ioaddr = res->start; /* * The resource size should effectively match the value of SMMU_TOP; * stash that temporarily until we know PAGESIZE to validate it with. From 54c1e0e3bbcab2abe25b2874a43050ae5df87831 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Mon, 25 Apr 2022 19:45:25 +0800 Subject: [PATCH 0845/1842] iommu/arm-smmu-v3: check return value after calling platform_get_resource() [ Upstream commit b131fa8c1d2afd05d0b7598621114674289c2fbb ] It will cause null-ptr-deref if platform_get_resource() returns NULL, we need check the return value. Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220425114525.2651143-1-yangyingliang@huawei.com Signed-off-by: Will Deacon Signed-off-by: Sasha Levin --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 483c1362cc4a..bc4cbc7542ce 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3512,6 +3512,8 @@ static int arm_smmu_device_probe(struct platform_device *pdev) /* Base address */ res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) + return -EINVAL; if (resource_size(res) < arm_smmu_resource_size(smmu)) { dev_err(dev, "MMIO region too small (%pr)\n", res); return -EINVAL; From 32bea51fe4c6e92c00403739f7547c89219bea88 Mon Sep 17 00:00:00 2001 From: Dongliang Mu Date: Fri, 15 Apr 2022 21:19:02 +0800 Subject: [PATCH 0846/1842] f2fs: remove WARN_ON in f2fs_is_valid_blkaddr [ Upstream commit dc2f78e2d4cc844a1458653d57ce1b54d4a29f21 ] Syzbot triggers two WARNs in f2fs_is_valid_blkaddr and __is_bitmap_valid. 
For example, in f2fs_is_valid_blkaddr, if type is DATA_GENERIC_ENHANCE or DATA_GENERIC_ENHANCE_READ, it invokes WARN_ON if blkaddr is not in the right range. The call trace is as follows: f2fs_get_node_info+0x45f/0x1070 read_node_page+0x577/0x1190 __get_node_page.part.0+0x9e/0x10e0 __get_node_page f2fs_get_node_page+0x109/0x180 do_read_inode f2fs_iget+0x2a5/0x58b0 f2fs_fill_super+0x3b39/0x7ca0 Fix these two WARNs by replacing WARN_ON with dump_stack. Reported-by: syzbot+763ae12a2ede1d99d4dc@syzkaller.appspotmail.com Signed-off-by: Dongliang Mu Reviewed-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Sasha Levin --- fs/f2fs/checkpoint.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c index 77f30320f862..1c49b9959b32 100644 --- a/fs/f2fs/checkpoint.c +++ b/fs/f2fs/checkpoint.c @@ -148,7 +148,7 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr, f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d", blkaddr, exist); set_sbi_flag(sbi, SBI_NEED_FSCK); - WARN_ON(1); + dump_stack(); } return exist; } @@ -186,7 +186,7 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi, f2fs_warn(sbi, "access invalid blkaddr:%u", blkaddr); set_sbi_flag(sbi, SBI_NEED_FSCK); - WARN_ON(1); + dump_stack(); return false; } else { return __is_bitmap_valid(sbi, blkaddr, type); From 344a55ccf5ec14aea4bd7e3f0c1d1134bc71b0c3 Mon Sep 17 00:00:00 2001 From: Lucas Tanure Date: Wed, 13 Apr 2022 10:14:10 +0100 Subject: [PATCH 0847/1842] i2c: cadence: Increase timeout per message if necessary [ Upstream commit 96789dce043f5bff8b7d62aa28d52a7c59403a84 ] Timeout as 1 second sets an upper limit on the length of the transfer executed, but there is no maximum length of a write or read message set in i2c_adapter_quirks for this controller. This upper limit affects devices that require sending large firmware blobs over I2C. To remove that limitation, calculate the minimal time necessary, plus some wiggle room, for every message and use it instead of the default one second, if more than one second. 
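To get a feel for the numbers, here is a stand-alone user-space sketch of the timeout arithmetic the patch introduces; the blob size and bus clock below are hypothetical, and the driver itself converts the result to jiffies via msecs_to_jiffies() as the hunk that follows shows.

#include <stdio.h>

int main(void)
{
        /* hypothetical transfer: 64 KiB firmware blob on a 100 kHz bus */
        unsigned long len = 65536;
        unsigned long i2c_clk_hz = 100000;
        unsigned long bits_per_byte = 8;

        /* minimal on-the-wire time in ms, plus 500 ms of wiggle room */
        unsigned long msg_timeout_ms =
                (1000UL * len * bits_per_byte) / i2c_clk_hz + 500;

        printf("message needs ~%lu ms, default adapter timeout is 1000 ms\n",
               msg_timeout_ms);
        return 0;
}

For this made-up transfer the minimal time is well over five seconds, which is why a fixed one-second timeout aborts large firmware downloads.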
Signed-off-by: Lucas Tanure Acked-by: Michal Simek Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-cadence.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c index c1bbc4caeb5c..50e3ddba52ba 100644 --- a/drivers/i2c/busses/i2c-cadence.c +++ b/drivers/i2c/busses/i2c-cadence.c @@ -724,7 +724,7 @@ static void cdns_i2c_master_reset(struct i2c_adapter *adap) static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg, struct i2c_adapter *adap) { - unsigned long time_left; + unsigned long time_left, msg_timeout; u32 reg; id->p_msg = msg; @@ -749,8 +749,16 @@ static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg, else cdns_i2c_msend(id); + /* Minimal time to execute this message */ + msg_timeout = msecs_to_jiffies((1000 * msg->len * BITS_PER_BYTE) / id->i2c_clk); + /* Plus some wiggle room */ + msg_timeout += msecs_to_jiffies(500); + + if (msg_timeout < adap->timeout) + msg_timeout = adap->timeout; + /* Wait for the signal of completion */ - time_left = wait_for_completion_timeout(&id->xfer_done, adap->timeout); + time_left = wait_for_completion_timeout(&id->xfer_done, msg_timeout); if (time_left == 0) { cdns_i2c_master_reset(adap); dev_err(id->adap.dev.parent, From d99f04df32369ced1aef1e8277f55c684d667c30 Mon Sep 17 00:00:00 2001 From: Greg Ungerer Date: Wed, 20 Apr 2022 23:27:47 +1000 Subject: [PATCH 0848/1842] m68knommu: set ZERO_PAGE() to the allocated zeroed page [ Upstream commit dc068f46217970d9516f16cd37972a01d50dc055 ] The non-MMU m68k pagetable ZERO_PAGE() macro is being set to the somewhat non-sensical value of "virt_to_page(0)". The zeroth page is not in any way guaranteed to be a page full of "0". So the result is that ZERO_PAGE() will almost certainly contain random values. We already allocate a real "empty_zero_page" in the mm setup code shared between MMU m68k and non-MMU m68k. It is just not hooked up to the ZERO_PAGE() macro for the non-MMU m68k case. Fix ZERO_PAGE() to use the allocated "empty_zero_page" pointer. I am not aware of any specific issues caused by the old code. Link: https://lore.kernel.org/linux-m68k/2a462b23-5b8e-bbf4-ec7d-778434a3b9d7@google.com/T/#t Reported-by: Hugh Dickens Signed-off-by: Greg Ungerer Signed-off-by: Sasha Levin --- arch/m68k/include/asm/pgtable_no.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/m68k/include/asm/pgtable_no.h b/arch/m68k/include/asm/pgtable_no.h index 87151d67d91e..bce5ca56c388 100644 --- a/arch/m68k/include/asm/pgtable_no.h +++ b/arch/m68k/include/asm/pgtable_no.h @@ -42,7 +42,8 @@ extern void paging_init(void); * ZERO_PAGE is a global shared page that is always zero: used * for zero-mapped memory areas etc.. */ -#define ZERO_PAGE(vaddr) (virt_to_page(0)) +extern void *empty_zero_page; +#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page)) /* * All 32bit addresses are effectively valid for vmalloc... 
From 2c08cae19d5d0d15d6daf3fb781d0d1d980b5a93 Mon Sep 17 00:00:00 2001 From: Greg Ungerer Date: Fri, 13 May 2022 17:27:39 +1000 Subject: [PATCH 0849/1842] m68knommu: fix undefined reference to `_init_sp' [ Upstream commit a71b9e66fee47c59b3ec34e652b5c23bc6550794 ] When configuring a nommu classic m68k system enabling the uboot parameter passing support (CONFIG_UBOOT) will produce the following compile error: m68k-linux-ld: arch/m68k/kernel/uboot.o: in function `process_uboot_commandline': uboot.c:(.init.text+0x32): undefined reference to `_init_sp' The logic to support this option is only used on ColdFire based platforms (in its head.S startup code). So make the selection of this option depend on building for a ColdFire based platform. Reported-by: kernel test robot Reviewed-by: Geert Uytterhoeven Acked-by: Geert Uytterhoeven Signed-off-by: Greg Ungerer Signed-off-by: Sasha Levin --- arch/m68k/Kconfig.machine | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine index 51a878803fb6..16730561d166 100644 --- a/arch/m68k/Kconfig.machine +++ b/arch/m68k/Kconfig.machine @@ -321,6 +321,7 @@ comment "Machine Options" config UBOOT bool "Support for U-Boot command line parameters" + depends on COLDFIRE help If you say Y here kernel will try to collect command line parameters from the initial u-boot stack. From 95a0ba85c1b51b36e909841c02d205cd223ab753 Mon Sep 17 00:00:00 2001 From: Radhey Shyam Pandey Date: Tue, 10 May 2022 12:42:40 +0530 Subject: [PATCH 0850/1842] dmaengine: zynqmp_dma: In struct zynqmp_dma_chan fix desc_size data type [ Upstream commit f9a9f43a62a04ec3183fb0da9226c7706eed0115 ] In zynqmp_dma_alloc/free_chan_resources functions there is a potential overflow in the below expressions. dma_alloc_coherent(chan->dev, (2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS), &chan->desc_pool_p, GFP_KERNEL); dma_free_coherent(chan->dev,(2 * ZYNQMP_DMA_DESC_SIZE(chan) * ZYNQMP_DMA_NUM_DESCS), chan->desc_pool_v, chan->desc_pool_p); The arguments desc_size and ZYNQMP_DMA_NUM_DESCS were 32 bit. Though this overflow condition is not observed but it is a potential problem in the case of 32-bit multiplication. Hence fix it by changing the desc_size data type to size_t. In addition to coverity fix it also reuse ZYNQMP_DMA_DESC_SIZE macro in dma_alloc_coherent API argument. Addresses-Coverity: Event overflow_before_widen. 
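The "overflow before widen" class Coverity flagged can be reproduced in a few lines of plain C. The descriptor size and count below are hypothetical (much larger than real ZDMA descriptors), and the sketch assumes a 64-bit host where size_t is wider than u32.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t desc_size = 0x01000000;        /* hypothetical descriptor size */
        uint32_t num_descs = 0x200;             /* hypothetical descriptor count */

        /* both operands are 32-bit, so the multiply wraps before widening */
        size_t truncated = 2 * desc_size * num_descs;
        /* widening one operand first keeps the full 64-bit product */
        size_t widened   = 2 * (size_t)desc_size * num_descs;

        printf("overflow before widen: %zu, widened first: %zu\n",
               truncated, widened);
        return 0;
}

Changing desc_size to size_t in the driver has the same effect as the explicit cast here: the multiplication is carried out at the wider type.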
Signed-off-by: Radhey Shyam Pandey Link: https://lore.kernel.org/r/1652166762-18317-2-git-send-email-radhey.shyam.pandey@xilinx.com Signed-off-by: Vinod Koul Signed-off-by: Sasha Levin --- drivers/dma/xilinx/zynqmp_dma.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c index 5fecf5aa6e85..7e6be076e9d3 100644 --- a/drivers/dma/xilinx/zynqmp_dma.c +++ b/drivers/dma/xilinx/zynqmp_dma.c @@ -232,7 +232,7 @@ struct zynqmp_dma_chan { bool is_dmacoherent; struct tasklet_struct tasklet; bool idle; - u32 desc_size; + size_t desc_size; bool err; u32 bus_width; u32 src_burst_len; @@ -490,7 +490,8 @@ static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan) } chan->desc_pool_v = dma_alloc_coherent(chan->dev, - (2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS), + (2 * ZYNQMP_DMA_DESC_SIZE(chan) * + ZYNQMP_DMA_NUM_DESCS), &chan->desc_pool_p, GFP_KERNEL); if (!chan->desc_pool_v) return -ENOMEM; From 0ee5b9644f06b4d3cdcd9544f43f63312e425a4c Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Sat, 14 May 2022 10:08:14 -0400 Subject: [PATCH 0851/1842] NFSv4: Don't hold the layoutget locks across multiple RPC calls [ Upstream commit 6949493884fe88500de4af182588e071cf1544ee ] When doing layoutget as part of the open() compound, we have to be careful to release the layout locks before we can call any further RPC calls, such as setattr(). The reason is that those calls could trigger a recall, which could deadlock. Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- fs/nfs/nfs4proc.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index b6d60e69043a..b22da4e3165b 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -3086,6 +3086,10 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata, } out: + if (opendata->lgp) { + nfs4_lgopen_release(opendata->lgp); + opendata->lgp = NULL; + } if (!opendata->cancelled) nfs4_sequence_free_slot(&opendata->o_res.seq_res); return ret; From c09b873f3f39ef36d0cb75ed853dbba99f109afb Mon Sep 17 00:00:00 2001 From: Saurabh Sengar Date: Wed, 27 Apr 2022 06:47:53 -0700 Subject: [PATCH 0852/1842] video: fbdev: hyperv_fb: Allow resolutions with size > 64 MB for Gen1 [ Upstream commit c4b4d7047f16a8d138ce76da65faefb7165736f2 ] This patch fixes a bug where GEN1 VMs doesn't allow resolutions greater than 64 MB size (eg 7680x4320). Unnecessary PCI check limits Gen1 VRAM to legacy PCI BAR size only (ie 64MB). Thus any, resolution requesting greater then 64MB (eg 7680x4320) would fail. MMIO region assigning this memory shouldn't be limited by PCI bar size. 
Signed-off-by: Saurabh Sengar Reviewed-by: Dexuan Cui Signed-off-by: Helge Deller Signed-off-by: Sasha Levin --- drivers/video/fbdev/hyperv_fb.c | 19 +------------------ 1 file changed, 1 insertion(+), 18 deletions(-) diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c index 3c309ab20887..40baa79f8046 100644 --- a/drivers/video/fbdev/hyperv_fb.c +++ b/drivers/video/fbdev/hyperv_fb.c @@ -1008,7 +1008,6 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info) struct pci_dev *pdev = NULL; void __iomem *fb_virt; int gen2vm = efi_enabled(EFI_BOOT); - resource_size_t pot_start, pot_end; phys_addr_t paddr; int ret; @@ -1059,23 +1058,7 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info) dio_fb_size = screen_width * screen_height * screen_depth / 8; - if (gen2vm) { - pot_start = 0; - pot_end = -1; - } else { - if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM) || - pci_resource_len(pdev, 0) < screen_fb_size) { - pr_err("Resource not available or (0x%lx < 0x%lx)\n", - (unsigned long) pci_resource_len(pdev, 0), - (unsigned long) screen_fb_size); - goto err1; - } - - pot_end = pci_resource_end(pdev, 0); - pot_start = pot_end - screen_fb_size + 1; - } - - ret = vmbus_allocate_mmio(&par->mem, hdev, pot_start, pot_end, + ret = vmbus_allocate_mmio(&par->mem, hdev, 0, -1, screen_fb_size, 0x100000, true); if (ret != 0) { pr_err("Unable to allocate framebuffer memory\n"); From 8b3d5bafb188ab32c5684ccc12ddf80be83eeac0 Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Fri, 13 May 2022 18:05:41 +0800 Subject: [PATCH 0853/1842] video: fbdev: pxa3xx-gcu: release the resources correctly in pxa3xx_gcu_probe/remove() [ Upstream commit d87ad457f7e1b8d2492ca5b1531eb35030a1cc8f ] In pxa3xx_gcu_probe(), the sequence of error lable is wrong, it will leads some resource leaked, so adjust the sequence to handle the error correctly, and if pxa3xx_gcu_add_buffer() fails, pxa3xx_gcu_free_buffers() need be called. In pxa3xx_gcu_remove(), add missing clk_disable_unpreprare(). 
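The ordering rule the fix restores is the usual goto-unwind convention: release resources in the reverse order they were acquired, and let each error label undo only what has already succeeded. A minimal, compilable sketch with placeholder resources (not the driver's real helpers):

#include <stdio.h>

static int acquire_a(void)  { return 0; }
static void release_a(void) { puts("release a"); }
static int acquire_b(void)  { return 0; }
static void release_b(void) { puts("release b"); }
static int acquire_c(void)  { return -1; }      /* pretend the last step fails */

static int example_probe(void)
{
        int ret;

        ret = acquire_a();
        if (ret)
                return ret;

        ret = acquire_b();
        if (ret)
                goto err_release_a;

        ret = acquire_c();
        if (ret)
                goto err_release_b;             /* unwind b, then fall through to a */

        return 0;

err_release_b:
        release_b();
err_release_a:
        release_a();
        return ret;
}

int main(void)
{
        return example_probe() ? 1 : 0;
}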
Signed-off-by: Yang Yingliang Signed-off-by: Helge Deller Signed-off-by: Sasha Levin --- drivers/video/fbdev/pxa3xx-gcu.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/drivers/video/fbdev/pxa3xx-gcu.c b/drivers/video/fbdev/pxa3xx-gcu.c index 4279e13a3b58..9421d14d0eb0 100644 --- a/drivers/video/fbdev/pxa3xx-gcu.c +++ b/drivers/video/fbdev/pxa3xx-gcu.c @@ -650,6 +650,7 @@ static int pxa3xx_gcu_probe(struct platform_device *pdev) for (i = 0; i < 8; i++) { ret = pxa3xx_gcu_add_buffer(dev, priv); if (ret) { + pxa3xx_gcu_free_buffers(dev, priv); dev_err(dev, "failed to allocate DMA memory\n"); goto err_disable_clk; } @@ -666,15 +667,15 @@ static int pxa3xx_gcu_probe(struct platform_device *pdev) SHARED_SIZE, irq); return 0; -err_free_dma: - dma_free_coherent(dev, SHARED_SIZE, - priv->shared, priv->shared_phys); +err_disable_clk: + clk_disable_unprepare(priv->clk); err_misc_deregister: misc_deregister(&priv->misc_dev); -err_disable_clk: - clk_disable_unprepare(priv->clk); +err_free_dma: + dma_free_coherent(dev, SHARED_SIZE, + priv->shared, priv->shared_phys); return ret; } @@ -687,6 +688,7 @@ static int pxa3xx_gcu_remove(struct platform_device *pdev) pxa3xx_gcu_wait_idle(priv); misc_deregister(&priv->misc_dev); dma_free_coherent(dev, SHARED_SIZE, priv->shared, priv->shared_phys); + clk_disable_unprepare(priv->clk); pxa3xx_gcu_free_buffers(dev, priv); return 0; From 8dbae5affbdbf524b48000f9d357925bb001e5f4 Mon Sep 17 00:00:00 2001 From: Kinglong Mee Date: Sun, 22 May 2022 20:36:48 +0800 Subject: [PATCH 0854/1842] xprtrdma: treat all calls not a bcall when bc_serv is NULL [ Upstream commit 11270e7ca268e8d61b5d9e5c3a54bd1550642c9c ] When a rdma server returns a fault format reply, nfs v3 client may treats it as a bcall when bc service is not exist. The debug message at rpcrdma_bc_receive_call are, [56579.837169] RPC: rpcrdma_bc_receive_call: callback XID 00000001, length=20 [56579.837174] RPC: rpcrdma_bc_receive_call: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 04 After that, rpcrdma_bc_receive_call will meets NULL pointer as, [ 226.057890] BUG: unable to handle kernel NULL pointer dereference at 00000000000000c8 ... [ 226.058704] RIP: 0010:_raw_spin_lock+0xc/0x20 ... [ 226.059732] Call Trace: [ 226.059878] rpcrdma_bc_receive_call+0x138/0x327 [rpcrdma] [ 226.060011] __ib_process_cq+0x89/0x170 [ib_core] [ 226.060092] ib_cq_poll_work+0x26/0x80 [ib_core] [ 226.060257] process_one_work+0x1a7/0x360 [ 226.060367] ? create_worker+0x1a0/0x1a0 [ 226.060440] worker_thread+0x30/0x390 [ 226.060500] ? create_worker+0x1a0/0x1a0 [ 226.060574] kthread+0x116/0x130 [ 226.060661] ? kthread_flush_work_fn+0x10/0x10 [ 226.060724] ret_from_fork+0x35/0x40 ... Signed-off-by: Kinglong Mee Reviewed-by: Chuck Lever Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- net/sunrpc/xprtrdma/rpc_rdma.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c index ca267a855a12..b8174c77dfe1 100644 --- a/net/sunrpc/xprtrdma/rpc_rdma.c +++ b/net/sunrpc/xprtrdma/rpc_rdma.c @@ -1137,6 +1137,7 @@ static bool rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep) #if defined(CONFIG_SUNRPC_BACKCHANNEL) { + struct rpc_xprt *xprt = &r_xprt->rx_xprt; struct xdr_stream *xdr = &rep->rr_stream; __be32 *p; @@ -1160,6 +1161,10 @@ rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep) if (*p != cpu_to_be32(RPC_CALL)) return false; + /* No bc service. 
*/ + if (xprt->bc_serv == NULL) + return false; + /* Now that we are sure this is a backchannel call, * advance to the RPC header. */ From 9edafbc7ec29a36655df43349c2ec569902261f0 Mon Sep 17 00:00:00 2001 From: Florian Westphal Date: Wed, 1 Jun 2022 10:47:35 +0200 Subject: [PATCH 0855/1842] netfilter: nat: really support inet nat without l3 address [ Upstream commit 282e5f8fe907dc3f2fbf9f2103b0e62ffc3a68a5 ] When no l3 address is given, priv->family is set to NFPROTO_INET and the evaluation function isn't called. Call it too so l4-only rewrite can work. Also add a test case for this. Fixes: a33f387ecd5aa ("netfilter: nft_nat: allow to specify layer 4 protocol NAT only") Reported-by: Yi Chen Signed-off-by: Florian Westphal Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- net/netfilter/nft_nat.c | 3 +- tools/testing/selftests/netfilter/nft_nat.sh | 43 ++++++++++++++++++++ 2 files changed, 45 insertions(+), 1 deletion(-) diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c index ea53fd999f46..6a4a5ac88db7 100644 --- a/net/netfilter/nft_nat.c +++ b/net/netfilter/nft_nat.c @@ -341,7 +341,8 @@ static void nft_nat_inet_eval(const struct nft_expr *expr, { const struct nft_nat *priv = nft_expr_priv(expr); - if (priv->family == nft_pf(pkt)) + if (priv->family == nft_pf(pkt) || + priv->family == NFPROTO_INET) nft_nat_eval(expr, regs, pkt); } diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh index d7e07f4c3d7f..4e15e8167310 100755 --- a/tools/testing/selftests/netfilter/nft_nat.sh +++ b/tools/testing/selftests/netfilter/nft_nat.sh @@ -374,6 +374,45 @@ EOF return $lret } +test_local_dnat_portonly() +{ + local family=$1 + local daddr=$2 + local lret=0 + local sr_s + local sr_r + +ip netns exec "$ns0" nft -f /dev/stdin < Date: Mon, 30 May 2022 18:40:06 +0200 Subject: [PATCH 0856/1842] netfilter: nf_tables: delete flowtable hooks via transaction list [ Upstream commit b6d9014a3335194590abdd2a2471ef5147a67645 ] Remove inactive bool field in nft_hook object that was introduced in abadb2f865d7 ("netfilter: nf_tables: delete devices from flowtable"). Move stale flowtable hooks to transaction list instead. Deleting twice the same device does not result in ENOENT. 
Fixes: abadb2f865d7 ("netfilter: nf_tables: delete devices from flowtable") Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- include/net/netfilter/nf_tables.h | 1 - net/netfilter/nf_tables_api.c | 31 ++++++------------------------- 2 files changed, 6 insertions(+), 26 deletions(-) diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index 76bfb6cd5815..b7907385a02f 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -1013,7 +1013,6 @@ struct nft_stats { struct nft_hook { struct list_head list; - bool inactive; struct nf_hook_ops ops; struct rcu_head rcu; }; diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index ea162e36e0e4..a5779790e337 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -1733,7 +1733,6 @@ static struct nft_hook *nft_netdev_hook_alloc(struct net *net, goto err_hook_dev; } hook->ops.dev = dev; - hook->inactive = false; return hook; @@ -6880,6 +6879,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx, { const struct nlattr * const *nla = ctx->nla; struct nft_flowtable_hook flowtable_hook; + LIST_HEAD(flowtable_del_list); struct nft_hook *this, *hook; struct nft_trans *trans; int err; @@ -6895,7 +6895,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx, err = -ENOENT; goto err_flowtable_del_hook; } - hook->inactive = true; + list_move(&hook->list, &flowtable_del_list); } trans = nft_trans_alloc(ctx, NFT_MSG_DELFLOWTABLE, @@ -6908,6 +6908,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx, nft_trans_flowtable(trans) = flowtable; nft_trans_flowtable_update(trans) = true; INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans)); + list_splice(&flowtable_del_list, &nft_trans_flowtable_hooks(trans)); nft_flowtable_hook_release(&flowtable_hook); list_add_tail(&trans->list, &ctx->net->nft.commit_list); @@ -6915,13 +6916,7 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx, return 0; err_flowtable_del_hook: - list_for_each_entry(this, &flowtable_hook.list, list) { - hook = nft_hook_list_find(&flowtable->hook_list, this); - if (!hook) - break; - - hook->inactive = false; - } + list_splice(&flowtable_del_list, &flowtable->hook_list); nft_flowtable_hook_release(&flowtable_hook); return err; @@ -7771,17 +7766,6 @@ void nft_chain_del(struct nft_chain *chain) list_del_rcu(&chain->list); } -static void nft_flowtable_hooks_del(struct nft_flowtable *flowtable, - struct list_head *hook_list) -{ - struct nft_hook *hook, *next; - - list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { - if (hook->inactive) - list_move(&hook->list, hook_list); - } -} - static void nf_tables_module_autoload_cleanup(struct net *net) { struct nft_module_request *req, *next; @@ -8045,8 +8029,6 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb) break; case NFT_MSG_DELFLOWTABLE: if (nft_trans_flowtable_update(trans)) { - nft_flowtable_hooks_del(nft_trans_flowtable(trans), - &nft_trans_flowtable_hooks(trans)); nf_tables_flowtable_notify(&trans->ctx, nft_trans_flowtable(trans), &nft_trans_flowtable_hooks(trans), @@ -8124,7 +8106,6 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action) { struct nft_trans *trans, *next; struct nft_trans_elem *te; - struct nft_hook *hook; if (action == NFNL_ABORT_VALIDATE && nf_tables_validate(net) < 0) @@ -8242,8 +8223,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action) break; case NFT_MSG_DELFLOWTABLE: if (nft_trans_flowtable_update(trans)) { - 
list_for_each_entry(hook, &nft_trans_flowtable(trans)->hook_list, list) - hook->inactive = false; + list_splice(&nft_trans_flowtable_hooks(trans), + &nft_trans_flowtable(trans)->hook_list); } else { trans->ctx.table->use++; nft_clear(trans->ctx.net, nft_trans_flowtable(trans)); From 7fd03e34f01fef1d060b5fb7bedb6831be3cb42f Mon Sep 17 00:00:00 2001 From: Michael Ellerman Date: Thu, 2 Jun 2022 00:31:14 +1000 Subject: [PATCH 0857/1842] powerpc/kasan: Force thread size increase with KASAN [ Upstream commit 3e8635fb2e072672cbc650989ffedf8300ad67fb ] KASAN causes increased stack usage, which can lead to stack overflows. The logic in Kconfig to suggest a larger default doesn't work if a user has CONFIG_EXPERT enabled and has an existing .config with a smaller value. Follow the lead of x86 and arm64, and force the thread size to be increased when KASAN is enabled. That also has the effect of enlarging the stack for 64-bit KASAN builds, which is also desirable. Fixes: edbadaf06710 ("powerpc/kasan: Fix stack overflow by increasing THREAD_SHIFT") Reported-by: Erhard Furtner Reported-by: Christophe Leroy [mpe: Use MIN_THREAD_SHIFT as suggested by Christophe] Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220601143114.133524-1-mpe@ellerman.id.au Signed-off-by: Sasha Levin --- arch/powerpc/Kconfig | 1 - arch/powerpc/include/asm/thread_info.h | 10 ++++++++-- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 5afa0ebd78ca..78dd6be8b31d 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -786,7 +786,6 @@ config THREAD_SHIFT range 13 15 default "15" if PPC_256K_PAGES default "14" if PPC64 - default "14" if KASAN default "13" help Used to define the stack size. The default is almost always what you diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h index 46a210b03d2b..6de3517bea94 100644 --- a/arch/powerpc/include/asm/thread_info.h +++ b/arch/powerpc/include/asm/thread_info.h @@ -14,10 +14,16 @@ #ifdef __KERNEL__ -#if defined(CONFIG_VMAP_STACK) && CONFIG_THREAD_SHIFT < PAGE_SHIFT +#ifdef CONFIG_KASAN +#define MIN_THREAD_SHIFT (CONFIG_THREAD_SHIFT + 1) +#else +#define MIN_THREAD_SHIFT CONFIG_THREAD_SHIFT +#endif + +#if defined(CONFIG_VMAP_STACK) && MIN_THREAD_SHIFT < PAGE_SHIFT #define THREAD_SHIFT PAGE_SHIFT #else -#define THREAD_SHIFT CONFIG_THREAD_SHIFT +#define THREAD_SHIFT MIN_THREAD_SHIFT #endif #define THREAD_SIZE (1 << THREAD_SHIFT) From ec5548066d34b1b72b738315b0215bfb49b5b74b Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Wed, 1 Jun 2022 17:49:36 +0200 Subject: [PATCH 0858/1842] netfilter: nf_tables: always initialize flowtable hook list in transaction [ Upstream commit 2c9e4559773c261900c674a86b8e455911675d71 ] The hook list is used if nft_trans_flowtable_update(trans) == true. However, initialize this list for other cases for safety reasons. 
Fixes: 78d9f48f7f44 ("netfilter: nf_tables: add devices to existing flowtable") Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- net/netfilter/nf_tables_api.c | 1 + 1 file changed, 1 insertion(+) diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index a5779790e337..b90e45f1ffa0 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -481,6 +481,7 @@ static int nft_trans_flowtable_add(struct nft_ctx *ctx, int msg_type, if (msg_type == NFT_MSG_NEWFLOWTABLE) nft_activate_next(ctx->net, flowtable); + INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans)); nft_trans_flowtable(trans) = flowtable; list_add_tail(&trans->list, &ctx->net->nft.commit_list); From 19cb3ece14547cb1ca2021798aaf49a3f82643d1 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 1 Jun 2022 12:59:26 +0400 Subject: [PATCH 0859/1842] ata: pata_octeon_cf: Fix refcount leak in octeon_cf_probe [ Upstream commit 10d6bdf532902be1d8aa5900b3c03c5671612aa2 ] of_find_device_by_node() takes reference, we should use put_device() to release it when not need anymore. Add missing put_device() to avoid refcount leak. Fixes: 43f01da0f279 ("MIPS/OCTEON/ata: Convert pata_octeon_cf.c to use device tree.") Signed-off-by: Miaoqian Lin Reviewed-by: Sergey Shtylyov Signed-off-by: Damien Le Moal Signed-off-by: Sasha Levin --- drivers/ata/pata_octeon_cf.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c index b5a3f710d76d..4cc8a1027888 100644 --- a/drivers/ata/pata_octeon_cf.c +++ b/drivers/ata/pata_octeon_cf.c @@ -888,12 +888,14 @@ static int octeon_cf_probe(struct platform_device *pdev) int i; res_dma = platform_get_resource(dma_dev, IORESOURCE_MEM, 0); if (!res_dma) { + put_device(&dma_dev->dev); of_node_put(dma_node); return -EINVAL; } cf_port->dma_base = (u64)devm_ioremap(&pdev->dev, res_dma->start, resource_size(res_dma)); if (!cf_port->dma_base) { + put_device(&dma_dev->dev); of_node_put(dma_node); return -EINVAL; } @@ -903,6 +905,7 @@ static int octeon_cf_probe(struct platform_device *pdev) irq = i; irq_handler = octeon_cf_interrupt; } + put_device(&dma_dev->dev); } of_node_put(dma_node); } From 67e2d448733c2480ea5f5e9def6392f3615c836d Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Sun, 5 Jun 2022 13:40:06 +0200 Subject: [PATCH 0860/1842] netfilter: nf_tables: release new hooks on unsupported flowtable flags [ Upstream commit c271cc9febaaa1bcbc0842d1ee30466aa6148ea8 ] Release the list of new hooks that are pending to be registered in case that unsupported flowtable flags are provided. 
Fixes: 78d9f48f7f44 ("netfilter: nf_tables: add devices to existing flowtable") Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- net/netfilter/nf_tables_api.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index b90e45f1ffa0..2872722488c9 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -6694,11 +6694,15 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh, if (nla[NFTA_FLOWTABLE_FLAGS]) { flags = ntohl(nla_get_be32(nla[NFTA_FLOWTABLE_FLAGS])); - if (flags & ~NFT_FLOWTABLE_MASK) - return -EOPNOTSUPP; + if (flags & ~NFT_FLOWTABLE_MASK) { + err = -EOPNOTSUPP; + goto err_flowtable_update_hook; + } if ((flowtable->data.flags & NFT_FLOWTABLE_HW_OFFLOAD) ^ - (flags & NFT_FLOWTABLE_HW_OFFLOAD)) - return -EOPNOTSUPP; + (flags & NFT_FLOWTABLE_HW_OFFLOAD)) { + err = -EOPNOTSUPP; + goto err_flowtable_update_hook; + } } else { flags = flowtable->data.flags; } From 330c0c6cd2150a2d7f47af16aa590078b0d2f736 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Mon, 6 Jun 2022 17:15:57 +0200 Subject: [PATCH 0861/1842] netfilter: nf_tables: memleak flow rule from commit path [ Upstream commit 9dd732e0bdf538b1b76dc7c157e2b5e560ff30d3 ] Abort path release flow rule object, however, commit path does not. Update code to destroy these objects before releasing the transaction. Fixes: c9626a2cbdb2 ("netfilter: nf_tables: add hardware offload support") Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- net/netfilter/nf_tables_api.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 2872722488c9..8507c1bdd736 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -7587,6 +7587,9 @@ static void nft_commit_release(struct nft_trans *trans) nf_tables_chain_destroy(&trans->ctx); break; case NFT_MSG_DELRULE: + if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD) + nft_flow_rule_destroy(nft_trans_flow_rule(trans)); + nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans)); break; case NFT_MSG_DELSET: @@ -7946,6 +7949,9 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb) nf_tables_rule_notify(&trans->ctx, nft_trans_rule(trans), NFT_MSG_NEWRULE); + if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD) + nft_flow_rule_destroy(nft_trans_flow_rule(trans)); + nft_trans_destroy(trans); break; case NFT_MSG_DELRULE: From 86c87d2c0338a5e84e50a312943bca1f33bd8164 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Mon, 6 Jun 2022 17:31:29 +0200 Subject: [PATCH 0862/1842] netfilter: nf_tables: bail out early if hardware offload is not supported [ Upstream commit 3a41c64d9c1185a2f3a184015e2a9b78bfc99c71 ] If user requests for NFT_CHAIN_HW_OFFLOAD, then check if either device provides the .ndo_setup_tc interface or there is an indirect flow block that has been registered. Otherwise, bail out early from the preparation phase. Moreover, validate that family == NFPROTO_NETDEV and hook is NF_NETDEV_INGRESS. 
Fixes: c9626a2cbdb2 ("netfilter: nf_tables: add hardware offload support") Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- include/net/flow_offload.h | 1 + include/net/netfilter/nf_tables_offload.h | 2 +- net/core/flow_offload.c | 6 ++++++ net/netfilter/nf_tables_api.c | 2 +- net/netfilter/nf_tables_offload.c | 23 ++++++++++++++++++++++- 5 files changed, 31 insertions(+), 3 deletions(-) diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h index 010d58159887..9a58274e6217 100644 --- a/include/net/flow_offload.h +++ b/include/net/flow_offload.h @@ -568,5 +568,6 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch, enum tc_setup_type type, void *data, struct flow_block_offload *bo, void (*cleanup)(struct flow_block_cb *block_cb)); +bool flow_indr_dev_exists(void); #endif /* _NET_FLOW_OFFLOAD_H */ diff --git a/include/net/netfilter/nf_tables_offload.h b/include/net/netfilter/nf_tables_offload.h index 7a453a35a41d..1058f38e2aca 100644 --- a/include/net/netfilter/nf_tables_offload.h +++ b/include/net/netfilter/nf_tables_offload.h @@ -91,7 +91,7 @@ int nft_flow_rule_offload_commit(struct net *net); NFT_OFFLOAD_MATCH(__key, __base, __field, __len, __reg) \ memset(&(__reg)->mask, 0xff, (__reg)->len); -int nft_chain_offload_priority(struct nft_base_chain *basechain); +bool nft_chain_offload_support(const struct nft_base_chain *basechain); int nft_offload_init(void); void nft_offload_exit(void); diff --git a/net/core/flow_offload.c b/net/core/flow_offload.c index e3f0d5906811..8d958290b7d2 100644 --- a/net/core/flow_offload.c +++ b/net/core/flow_offload.c @@ -566,3 +566,9 @@ int flow_indr_dev_setup_offload(struct net_device *dev, struct Qdisc *sch, return list_empty(&bo->cb_list) ? -EOPNOTSUPP : 0; } EXPORT_SYMBOL(flow_indr_dev_setup_offload); + +bool flow_indr_dev_exists(void) +{ + return !list_empty(&flow_block_indr_dev_list); +} +EXPORT_SYMBOL(flow_indr_dev_exists); diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 8507c1bdd736..0c56a90c3f08 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -1963,7 +1963,7 @@ static int nft_basechain_init(struct nft_base_chain *basechain, u8 family, chain->flags |= NFT_CHAIN_BASE | flags; basechain->policy = NF_ACCEPT; if (chain->flags & NFT_CHAIN_HW_OFFLOAD && - nft_chain_offload_priority(basechain) < 0) + !nft_chain_offload_support(basechain)) return -EOPNOTSUPP; flow_block_init(&basechain->flow_block); diff --git a/net/netfilter/nf_tables_offload.c b/net/netfilter/nf_tables_offload.c index 839fd09f1bb4..4e99b1731b3f 100644 --- a/net/netfilter/nf_tables_offload.c +++ b/net/netfilter/nf_tables_offload.c @@ -208,7 +208,7 @@ static int nft_setup_cb_call(enum tc_setup_type type, void *type_data, return 0; } -int nft_chain_offload_priority(struct nft_base_chain *basechain) +static int nft_chain_offload_priority(const struct nft_base_chain *basechain) { if (basechain->ops.priority <= 0 || basechain->ops.priority > USHRT_MAX) @@ -217,6 +217,27 @@ int nft_chain_offload_priority(struct nft_base_chain *basechain) return 0; } +bool nft_chain_offload_support(const struct nft_base_chain *basechain) +{ + struct net_device *dev; + struct nft_hook *hook; + + if (nft_chain_offload_priority(basechain) < 0) + return false; + + list_for_each_entry(hook, &basechain->hook_list, list) { + if (hook->ops.pf != NFPROTO_NETDEV || + hook->ops.hooknum != NF_NETDEV_INGRESS) + return false; + + dev = hook->ops.dev; + if (!dev->netdev_ops->ndo_setup_tc && 
!flow_indr_dev_exists()) + return false; + } + + return true; +} + static void nft_flow_cls_offload_setup(struct flow_cls_offload *cls_flow, const struct nft_base_chain *basechain, const struct nft_rule *rule, From be9581f4fda795aa0e18cdc333efc1e447e1a55c Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Mon, 6 Jun 2022 13:59:20 +0900 Subject: [PATCH 0863/1842] xen: unexport __init-annotated xen_xlate_map_ballooned_pages() [ Upstream commit dbac14a5a05ff8e1ce7c0da0e1f520ce39ec62ea ] EXPORT_SYMBOL and __init is a bad combination because the .init.text section is freed up after the initialization. Hence, modules cannot use symbols annotated __init. The access to a freed symbol may end up with kernel panic. modpost used to detect it, but it has been broken for a decade. Recently, I fixed modpost so it started to warn it again, then this showed up in linux-next builds. There are two ways to fix it: - Remove __init - Remove EXPORT_SYMBOL I chose the latter for this case because none of the in-tree call-sites (arch/arm/xen/enlighten.c, arch/x86/xen/grant-table.c) is compiled as modular. Fixes: 243848fc018c ("xen/grant-table: Move xlated_setup_gnttab_pages to common place") Reported-by: Stephen Rothwell Signed-off-by: Masahiro Yamada Reviewed-by: Oleksandr Tyshchenko Acked-by: Stefano Stabellini Link: https://lore.kernel.org/r/20220606045920.4161881-1-masahiroy@kernel.org Signed-off-by: Juergen Gross Signed-off-by: Sasha Levin --- drivers/xen/xlate_mmu.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c index 34742c6e189e..f17c4c03db30 100644 --- a/drivers/xen/xlate_mmu.c +++ b/drivers/xen/xlate_mmu.c @@ -261,7 +261,6 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt, return 0; } -EXPORT_SYMBOL_GPL(xen_xlate_map_ballooned_pages); struct remap_pfn { struct mm_struct *mm; From c61848500a3fd6867dfa4834b8c7f97133eceb9f Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Sun, 5 Jun 2022 16:23:25 -0700 Subject: [PATCH 0864/1842] af_unix: Fix a data-race in unix_dgram_peer_wake_me(). [ Upstream commit 662a80946ce13633ae90a55379f1346c10f0c432 ] unix_dgram_poll() calls unix_dgram_peer_wake_me() without `other`'s lock held and check if its receive queue is full. Here we need to use unix_recvq_full_lockless() instead of unix_recvq_full(), otherwise KCSAN will report a data-race. Fixes: 7d267278a9ec ("unix: avoid use-after-free in ep_remove_wait_queue") Signed-off-by: Kuniyuki Iwashima Link: https://lore.kernel.org/r/20220605232325.11804-1-kuniyu@amazon.com Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- net/unix/af_unix.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c index b7edca89e0ba..28721e9575b7 100644 --- a/net/unix/af_unix.c +++ b/net/unix/af_unix.c @@ -438,7 +438,7 @@ static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other) * -ECONNREFUSED. Otherwise, if we haven't queued any skbs * to other and its full, we will hang waiting for POLLOUT. 
*/ - if (unix_recvq_full(other) && !sock_flag(other, SOCK_DEAD)) + if (unix_recvq_full_lockless(other) && !sock_flag(other, SOCK_DEAD)) return 1; if (connected) From 0cf7aaff290cdc4d7cee683d4a18138b0dacac48 Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Tue, 31 May 2022 14:51:13 -0700 Subject: [PATCH 0865/1842] bpf, arm64: Clear prog->jited_len along prog->jited [ Upstream commit 10f3b29c65bb2fe0d47c2945cd0b4087be1c5218 ] syzbot reported an illegal copy_to_user() attempt from bpf_prog_get_info_by_fd() [1] There was no repro yet on this bug, but I think that commit 0aef499f3172 ("mm/usercopy: Detect vmalloc overruns") is exposing a prior bug in bpf arm64. bpf_prog_get_info_by_fd() looks at prog->jited_len to determine if the JIT image can be copied out to user space. My theory is that syzbot managed to get a prog where prog->jited_len has been set to 43, while prog->bpf_func has ben cleared. It is not clear why copy_to_user(uinsns, NULL, ulen) is triggering this particular warning. I thought find_vma_area(NULL) would not find a vm_struct. As we do not hold vmap_area_lock spinlock, it might be possible that the found vm_struct was garbage. [1] usercopy: Kernel memory exposure attempt detected from vmalloc (offset 792633534417210172, size 43)! kernel BUG at mm/usercopy.c:101! Internal error: Oops - BUG: 0 [#1] PREEMPT SMP Modules linked in: CPU: 0 PID: 25002 Comm: syz-executor.1 Not tainted 5.18.0-syzkaller-10139-g8291eaafed36 #0 Hardware name: linux,dummy-virt (DT) pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : usercopy_abort+0x90/0x94 mm/usercopy.c:101 lr : usercopy_abort+0x90/0x94 mm/usercopy.c:89 sp : ffff80000b773a20 x29: ffff80000b773a30 x28: faff80000b745000 x27: ffff80000b773b48 x26: 0000000000000000 x25: 000000000000002b x24: 0000000000000000 x23: 00000000000000e0 x22: ffff80000b75db67 x21: 0000000000000001 x20: 000000000000002b x19: ffff80000b75db3c x18: 00000000fffffffd x17: 2820636f6c6c616d x16: 76206d6f72662064 x15: 6574636574656420 x14: 74706d6574746120 x13: 2129333420657a69 x12: 73202c3237313031 x11: 3237313434333533 x10: 3336323937207465 x9 : 657275736f707865 x8 : ffff80000a30c550 x7 : ffff80000b773830 x6 : ffff80000b773830 x5 : 0000000000000000 x4 : ffff00007fbbaa10 x3 : 0000000000000000 x2 : 0000000000000000 x1 : f7ff000028fc0000 x0 : 0000000000000064 Call trace: usercopy_abort+0x90/0x94 mm/usercopy.c:89 check_heap_object mm/usercopy.c:186 [inline] __check_object_size mm/usercopy.c:252 [inline] __check_object_size+0x198/0x36c mm/usercopy.c:214 check_object_size include/linux/thread_info.h:199 [inline] check_copy_size include/linux/thread_info.h:235 [inline] copy_to_user include/linux/uaccess.h:159 [inline] bpf_prog_get_info_by_fd.isra.0+0xf14/0xfdc kernel/bpf/syscall.c:3993 bpf_obj_get_info_by_fd+0x12c/0x510 kernel/bpf/syscall.c:4253 __sys_bpf+0x900/0x2150 kernel/bpf/syscall.c:4956 __do_sys_bpf kernel/bpf/syscall.c:5021 [inline] __se_sys_bpf kernel/bpf/syscall.c:5019 [inline] __arm64_sys_bpf+0x28/0x40 kernel/bpf/syscall.c:5019 __invoke_syscall arch/arm64/kernel/syscall.c:38 [inline] invoke_syscall+0x48/0x114 arch/arm64/kernel/syscall.c:52 el0_svc_common.constprop.0+0x44/0xec arch/arm64/kernel/syscall.c:142 do_el0_svc+0xa0/0xc0 arch/arm64/kernel/syscall.c:206 el0_svc+0x44/0xb0 arch/arm64/kernel/entry-common.c:624 el0t_64_sync_handler+0x1ac/0x1b0 arch/arm64/kernel/entry-common.c:642 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:581 Code: aa0003e3 d00038c0 91248000 97fff65f (d4210000) Fixes: db496944fdaa ("bpf: arm64: add JIT support for 
multi-function programs") Reported-by: syzbot Signed-off-by: Eric Dumazet Signed-off-by: Daniel Borkmann Acked-by: Song Liu Link: https://lore.kernel.org/bpf/20220531215113.1100754-1-eric.dumazet@gmail.com Signed-off-by: Alexei Starovoitov Signed-off-by: Sasha Levin --- arch/arm64/net/bpf_jit_comp.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c index 9c6cab71ba98..18627cbd6da4 100644 --- a/arch/arm64/net/bpf_jit_comp.c +++ b/arch/arm64/net/bpf_jit_comp.c @@ -1111,6 +1111,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) bpf_jit_binary_free(header); prog->bpf_func = NULL; prog->jited = 0; + prog->jited_len = 0; goto out_off; } bpf_jit_binary_lock_ro(header); From c2ae49a113a5344232f1ebb93bcf18bbd11e9c39 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Sun, 5 Jun 2022 11:23:34 +0400 Subject: [PATCH 0866/1842] net: dsa: lantiq_gswip: Fix refcount leak in gswip_gphy_fw_list [ Upstream commit 0737e018a05e2aa352828c52bdeed3b02cff2930 ] Every iteration of for_each_available_child_of_node() decrements the reference count of the previous node. when breaking early from a for_each_available_child_of_node() loop, we need to explicitly call of_node_put() on the gphy_fw_np. Add missing of_node_put() to avoid refcount leak. Fixes: 14fceff4771e ("net: dsa: Add Lantiq / Intel DSA driver for vrx200") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220605072335.11257-1-linmq006@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/dsa/lantiq_gswip.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c index 4abae06499a9..70895e480683 100644 --- a/drivers/net/dsa/lantiq_gswip.c +++ b/drivers/net/dsa/lantiq_gswip.c @@ -1981,8 +1981,10 @@ static int gswip_gphy_fw_list(struct gswip_priv *priv, for_each_available_child_of_node(gphy_fw_list_np, gphy_fw_np) { err = gswip_gphy_fw_probe(priv, &priv->gphy_fw[i], gphy_fw_np, i); - if (err) + if (err) { + of_node_put(gphy_fw_np); goto remove_gphy; + } i++; } From 3d8122e1692bc1997300e608c83af2309719f018 Mon Sep 17 00:00:00 2001 From: Gal Pressman Date: Mon, 6 Jun 2022 14:57:18 +0300 Subject: [PATCH 0867/1842] net/mlx4_en: Fix wrong return value on ioctl EEPROM query failure [ Upstream commit f5826c8c9d57210a17031af5527056eefdc2b7eb ] The ioctl EEPROM query wrongly returns success on read failures, fix that by returning the appropriate error code. 
Fixes: 7202da8b7f71 ("ethtool, net/mlx4_en: Cable info, get_module_info/eeprom ethtool support") Signed-off-by: Gal Pressman Signed-off-by: Tariq Toukan Link: https://lore.kernel.org/r/20220606115718.14233-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx4/en_ethtool.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c index 01275c376721..962851000ace 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c @@ -2099,7 +2099,7 @@ static int mlx4_en_get_module_eeprom(struct net_device *dev, en_err(priv, "mlx4_get_module_info i(%d) offset(%d) bytes_to_read(%d) - FAILED (0x%x)\n", i, offset, ee->len - i, ret); - return 0; + return ret; } i += ret; From b585b87fd5c736522ee24b735ea893321f2cad49 Mon Sep 17 00:00:00 2001 From: Chuck Lever Date: Tue, 7 Jun 2022 16:47:52 -0400 Subject: [PATCH 0868/1842] SUNRPC: Fix the calculation of xdr->end in xdr_get_next_encode_buffer() [ Upstream commit 6c254bf3b637dd4ef4f78eb78c7447419c0161d7 ] I found that NFSD's new NFSv3 READDIRPLUS XDR encoder was screwing up right at the end of the page array. xdr_get_next_encode_buffer() does not compute the value of xdr->end correctly: * The check to see if we're on the final available page in xdr->buf needs to account for the space consumed by @nbytes. * The new xdr->end value needs to account for the portion of @nbytes that is to be encoded into the previous buffer. Fixes: 2825a7f90753 ("nfsd4: allow encoding across page boundaries") Signed-off-by: Chuck Lever Reviewed-by: NeilBrown Reviewed-by: J. Bruce Fields Signed-off-by: Sasha Levin --- net/sunrpc/xdr.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c index 71e03b930b70..c8ed6d3d5762 100644 --- a/net/sunrpc/xdr.c +++ b/net/sunrpc/xdr.c @@ -752,7 +752,11 @@ static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr, */ xdr->p = (void *)p + frag2bytes; space_left = xdr->buf->buflen - xdr->buf->len; - xdr->end = (void *)p + min_t(int, space_left, PAGE_SIZE); + if (space_left - nbytes >= PAGE_SIZE) + xdr->end = (void *)p + PAGE_SIZE; + else + xdr->end = (void *)p + space_left - frag1bytes; + xdr->buf->page_len += frag2bytes; xdr->buf->len += nbytes; return p; From 7759c3222815b945a94b212bc0c6cdec475cfec2 Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Mon, 6 Jun 2022 13:53:53 +0900 Subject: [PATCH 0869/1842] net: mdio: unexport __init-annotated mdio_bus_init() [ Upstream commit 35b42dce619701f1300fb8498dae82c9bb1f0263 ] EXPORT_SYMBOL and __init is a bad combination because the .init.text section is freed up after the initialization. Hence, modules cannot use symbols annotated __init. The access to a freed symbol may end up with kernel panic. modpost used to detect it, but it has been broken for a decade. Recently, I fixed modpost so it started to warn it again, then this showed up in linux-next builds. There are two ways to fix it: - Remove __init - Remove EXPORT_SYMBOL I chose the latter for this case because the only in-tree call-site, drivers/net/phy/phy_device.c is never compiled as modular. 
(CONFIG_PHYLIB is boolean) Fixes: 90eff9096c01 ("net: phy: Allow splitting MDIO bus/device support from PHYs") Reported-by: Stephen Rothwell Signed-off-by: Masahiro Yamada Reviewed-by: Florian Fainelli Reviewed-by: Russell King (Oracle) Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/phy/mdio_bus.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c index c416ab1d2b00..c1cbdac4b376 100644 --- a/drivers/net/phy/mdio_bus.c +++ b/drivers/net/phy/mdio_bus.c @@ -1008,7 +1008,6 @@ int __init mdio_bus_init(void) return ret; } -EXPORT_SYMBOL_GPL(mdio_bus_init); #if IS_ENABLED(CONFIG_PHYLIB) void mdio_bus_exit(void) From be3884d5cd04ccd58294b83a02d70b7c5fca19d3 Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Mon, 6 Jun 2022 13:53:54 +0900 Subject: [PATCH 0870/1842] net: xfrm: unexport __init-annotated xfrm4_protocol_init() [ Upstream commit 4a388f08d8784af48f352193d2b72aaf167a57a1 ] EXPORT_SYMBOL and __init is a bad combination because the .init.text section is freed up after the initialization. Hence, modules cannot use symbols annotated __init. The access to a freed symbol may end up with kernel panic. modpost used to detect it, but it has been broken for a decade. Recently, I fixed modpost so it started to warn it again, then this showed up in linux-next builds. There are two ways to fix it: - Remove __init - Remove EXPORT_SYMBOL I chose the latter for this case because the only in-tree call-site, net/ipv4/xfrm4_policy.c is never compiled as modular. (CONFIG_XFRM is boolean) Fixes: 2f32b51b609f ("xfrm: Introduce xfrm_input_afinfo to access the the callbacks properly") Reported-by: Stephen Rothwell Signed-off-by: Masahiro Yamada Acked-by: Steffen Klassert Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/ipv4/xfrm4_protocol.c | 1 - 1 file changed, 1 deletion(-) diff --git a/net/ipv4/xfrm4_protocol.c b/net/ipv4/xfrm4_protocol.c index ea595c8549c7..cfd46222ef91 100644 --- a/net/ipv4/xfrm4_protocol.c +++ b/net/ipv4/xfrm4_protocol.c @@ -307,4 +307,3 @@ void __init xfrm4_protocol_init(void) { xfrm_input_register_afinfo(&xfrm4_input_afinfo); } -EXPORT_SYMBOL(xfrm4_protocol_init); From 5d9c1b081ad28c852a97e10dd75412546497694a Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Mon, 6 Jun 2022 13:53:55 +0900 Subject: [PATCH 0871/1842] net: ipv6: unexport __init-annotated seg6_hmac_init() [ Upstream commit 5801f064e35181c71857a80ff18af4dbec3c5f5c ] EXPORT_SYMBOL and __init is a bad combination because the .init.text section is freed up after the initialization. Hence, modules cannot use symbols annotated __init. The access to a freed symbol may end up with kernel panic. modpost used to detect it, but it has been broken for a decade. Recently, I fixed modpost so it started to warn it again, then this showed up in linux-next builds. There are two ways to fix it: - Remove __init - Remove EXPORT_SYMBOL I chose the latter for this case because the caller (net/ipv6/seg6.c) and the callee (net/ipv6/seg6_hmac.c) belong to the same module. It seems an internal function call in ipv6.ko. 
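As a hedged illustration of the hazard these unexport patches address (not actual kernel code), the problem looks roughly like the sketch below; my_proto_init() and my_module_init() are made-up names.

#include <linux/init.h>
#include <linux/module.h>

/* In the core kernel: __init places this function in .init.text,
 * which is freed once boot-time initialization completes. */
int __init my_proto_init(void)
{
	return 0;
}
EXPORT_SYMBOL(my_proto_init);	/* bad: the symbol outlives its code */

/* In a separate loadable module, possibly inserted long after boot: */
static int __init my_module_init(void)
{
	return my_proto_init();	/* calls into freed .init.text, possible panic */
}
module_init(my_module_init);
MODULE_LICENSE("GPL");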
Fixes: bf355b8d2c30 ("ipv6: sr: add core files for SR HMAC support") Reported-by: Stephen Rothwell Signed-off-by: Masahiro Yamada Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/ipv6/seg6_hmac.c | 1 - 1 file changed, 1 deletion(-) diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c index 85dddfe3a2c6..b9179708e3c1 100644 --- a/net/ipv6/seg6_hmac.c +++ b/net/ipv6/seg6_hmac.c @@ -400,7 +400,6 @@ int __init seg6_hmac_init(void) { return seg6_hmac_init_algo(); } -EXPORT_SYMBOL(seg6_hmac_init); int __net_init seg6_hmac_net_init(struct net *net) { From 652418d82b7db399d658eb1452ab97545dc6160e Mon Sep 17 00:00:00 2001 From: Feras Daoud Date: Sat, 19 Mar 2022 21:47:48 +0200 Subject: [PATCH 0872/1842] net/mlx5: Rearm the FW tracer after each tracer event [ Upstream commit 8bf94e6414c9481bfa28269022688ab445d0081d ] The current design does not arm the tracer if traces are available before the tracer string database is fully loaded, leading to an unfunctional tracer. This fix will rearm the tracer every time the FW triggers tracer event regardless of the tracer strings database status. Fixes: c71ad41ccb0c ("net/mlx5: FW tracer, events handling") Signed-off-by: Feras Daoud Signed-off-by: Roy Novich Reviewed-by: Moshe Shemesh Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c index 857be86b4a11..e8a4adccd2b2 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c @@ -675,6 +675,9 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work) if (!tracer->owner) return; + if (unlikely(!tracer->str_db.loaded)) + goto arm; + block_count = tracer->buff.size / TRACER_BLOCK_SIZE_BYTE; start_offset = tracer->buff.consumer_index * TRACER_BLOCK_SIZE_BYTE; @@ -732,6 +735,7 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work) &tmp_trace_block[TRACES_PER_BLOCK - 1]); } +arm: mlx5_fw_tracer_arm(dev); } @@ -1138,8 +1142,7 @@ static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void queue_work(tracer->work_queue, &tracer->ownership_change_work); break; case MLX5_TRACER_SUBTYPE_TRACES_AVAILABLE: - if (likely(tracer->str_db.loaded)) - queue_work(tracer->work_queue, &tracer->handle_traces_work); + queue_work(tracer->work_queue, &tracer->handle_traces_work); break; default: mlx5_core_dbg(dev, "FWTracer: Event with unrecognized subtype: sub_type %d\n", From 1981cd7a774e2e028cbc4f5eddfee196ea6381f8 Mon Sep 17 00:00:00 2001 From: Mark Bloch Date: Mon, 30 May 2022 10:46:59 +0300 Subject: [PATCH 0873/1842] net/mlx5: fs, fail conflicting actions [ Upstream commit 8fa5e7b20e01042b14f8cd684d2da9b638460c74 ] When combining two steering rules into one check not only do they share the same actions but those actions are also the same. This resolves an issue where when creating two different rules with the same match the actions are overwritten and one of the rules is deleted a FW syndrome can be seen in dmesg. 
mlx5_core 0000:03:00.0: mlx5_cmd_check:819:(pid 2105): DEALLOC_MODIFY_HEADER_CONTEXT(0x941) op_mod(0x0) failed, status bad resource state(0x9), syndrome (0x1ab444) Fixes: 0d235c3fabb7 ("net/mlx5: Add hash table to search FTEs in a flow-group") Signed-off-by: Mark Bloch Reviewed-by: Maor Gottlieb Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- .../net/ethernet/mellanox/mlx5/core/fs_core.c | 35 +++++++++++++++++-- 1 file changed, 32 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c index 15472fb15d7d..4bdcceffe9d3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c @@ -1520,9 +1520,22 @@ static struct mlx5_flow_rule *find_flow_rule(struct fs_fte *fte, return NULL; } -static bool check_conflicting_actions(u32 action1, u32 action2) +static bool check_conflicting_actions_vlan(const struct mlx5_fs_vlan *vlan0, + const struct mlx5_fs_vlan *vlan1) { - u32 xored_actions = action1 ^ action2; + return vlan0->ethtype != vlan1->ethtype || + vlan0->vid != vlan1->vid || + vlan0->prio != vlan1->prio; +} + +static bool check_conflicting_actions(const struct mlx5_flow_act *act1, + const struct mlx5_flow_act *act2) +{ + u32 action1 = act1->action; + u32 action2 = act2->action; + u32 xored_actions; + + xored_actions = action1 ^ action2; /* if one rule only wants to count, it's ok */ if (action1 == MLX5_FLOW_CONTEXT_ACTION_COUNT || @@ -1539,6 +1552,22 @@ static bool check_conflicting_actions(u32 action1, u32 action2) MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2)) return true; + if (action1 & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT && + act1->pkt_reformat != act2->pkt_reformat) + return true; + + if (action1 & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR && + act1->modify_hdr != act2->modify_hdr) + return true; + + if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH && + check_conflicting_actions_vlan(&act1->vlan[0], &act2->vlan[0])) + return true; + + if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2 && + check_conflicting_actions_vlan(&act1->vlan[1], &act2->vlan[1])) + return true; + return false; } @@ -1546,7 +1575,7 @@ static int check_conflicting_ftes(struct fs_fte *fte, const struct mlx5_flow_context *flow_context, const struct mlx5_flow_act *flow_act) { - if (check_conflicting_actions(flow_act->action, fte->action.action)) { + if (check_conflicting_actions(flow_act, &fte->action)) { mlx5_core_warn(get_dev(&fte->node), "Found two FTEs with conflicting actions\n"); return -EEXIST; From fbeb8dfa8b87ef259eef0c89e39b53962a3cf604 Mon Sep 17 00:00:00 2001 From: Willem de Bruijn Date: Mon, 6 Jun 2022 09:21:07 -0400 Subject: [PATCH 0874/1842] ip_gre: test csum_start instead of transport header [ Upstream commit 8d21e9963bec1aad2280cdd034c8993033ef2948 ] GRE with TUNNEL_CSUM will apply local checksum offload on CHECKSUM_PARTIAL packets. ipgre_xmit must validate csum_start after an optional skb_pull, else lco_csum may trigger an overflow. The original check was if (csum && skb_checksum_start(skb) < skb->data) return -EINVAL; This had false positives when skb_checksum_start is undefined: when ip_summed is not CHECKSUM_PARTIAL. A discussed refinement was straightforward if (csum && skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_start(skb) < skb->data) return -EINVAL; But was eventually revised more thoroughly: - restrict the check to the only branch where needed, in an uncommon GRE path that uses header_ops and calls skb_pull. 
- test skb_transport_header, which is set along with csum_start in skb_partial_csum_set in the normal header_ops datapath. Turns out skbs can arrive in this branch without the transport header set, e.g., through BPF redirection. Revise the check back to check csum_start directly, and only if CHECKSUM_PARTIAL. Do leave the check in the updated location. Check field regardless of whether TUNNEL_CSUM is configured. Link: https://lore.kernel.org/netdev/YS+h%2FtqCJJiQei+W@shredder/ Link: https://lore.kernel.org/all/20210902193447.94039-2-willemdebruijn.kernel@gmail.com/T/#u Fixes: 8a0ed250f911 ("ip_gre: validate csum_start only on pull") Reported-by: syzbot Signed-off-by: Willem de Bruijn Reviewed-by: Eric Dumazet Reviewed-by: Alexander Duyck Link: https://lore.kernel.org/r/20220606132107.3582565-1-willemdebruijn.kernel@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/ipv4/ip_gre.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c index 2a80038575d2..a7e32be8714f 100644 --- a/net/ipv4/ip_gre.c +++ b/net/ipv4/ip_gre.c @@ -624,21 +624,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb, } if (dev->header_ops) { - const int pull_len = tunnel->hlen + sizeof(struct iphdr); - if (skb_cow_head(skb, 0)) goto free_skb; tnl_params = (const struct iphdr *)skb->data; - if (pull_len > skb_transport_offset(skb)) - goto free_skb; - /* Pull skb since ip_tunnel_xmit() needs skb->data pointing * to gre header. */ - skb_pull(skb, pull_len); + skb_pull(skb, tunnel->hlen + sizeof(struct iphdr)); skb_reset_mac_header(skb); + + if (skb->ip_summed == CHECKSUM_PARTIAL && + skb_checksum_start(skb) < skb->data) + goto free_skb; } else { if (skb_cow_head(skb, dev->needed_headroom)) goto free_skb; From 96bf5ed057df2d157274d4e2079002f9a9404bb8 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 7 Jun 2022 08:11:43 +0400 Subject: [PATCH 0875/1842] net: altera: Fix refcount leak in altera_tse_mdio_create [ Upstream commit 11ec18b1d8d92b9df307d31950dcba0b3dd7283c ] Every iteration of for_each_child_of_node() decrements the reference count of the previous node. When break from a for_each_child_of_node() loop, we need to explicitly call of_node_put() on the child node when not need anymore. Add missing of_node_put() to avoid refcount leak. 
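The two of_node_put() fixes in this series follow the same reference-counting rule; a minimal sketch of it, with a hypothetical probe_one_child() helper, might look like this:

#include <linux/of.h>

static int probe_one_child(struct device_node *np);	/* hypothetical */

static int probe_children(struct device_node *parent)
{
	struct device_node *child;
	int err;

	for_each_child_of_node(parent, child) {
		err = probe_one_child(child);
		if (err) {
			/* The iterator only drops this reference on the next
			 * iteration, which will never run, so drop it here
			 * before bailing out. */
			of_node_put(child);
			return err;
		}
	}
	/* Normal termination: every reference taken by the iterator
	 * has already been dropped. */
	return 0;
}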
Fixes: bbd2190ce96d ("Altera TSE: Add main and header file for Altera Ethernet Driver") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220607041144.7553-1-linmq006@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/altera/altera_tse_main.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c index a7d8d45e0e94..b779f3adbc56 100644 --- a/drivers/net/ethernet/altera/altera_tse_main.c +++ b/drivers/net/ethernet/altera/altera_tse_main.c @@ -163,7 +163,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id) mdio = mdiobus_alloc(); if (mdio == NULL) { netdev_err(dev, "Error allocating MDIO bus\n"); - return -ENOMEM; + ret = -ENOMEM; + goto put_node; } mdio->name = ALTERA_TSE_RESOURCE_NAME; @@ -180,6 +181,7 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id) mdio->id); goto out_free_mdio; } + of_node_put(mdio_node); if (netif_msg_drv(priv)) netdev_info(dev, "MDIO bus %s: created\n", mdio->id); @@ -189,6 +191,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id) out_free_mdio: mdiobus_free(mdio); mdio = NULL; +put_node: + of_node_put(mdio_node); return ret; } From f091e29ed872e0a87c8655e1a25385bfe9868896 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Wed, 8 Jun 2022 16:59:29 -0700 Subject: [PATCH 0876/1842] drm: imx: fix compiler warning with gcc-12 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 7aefd8b53815274f3ef398d370a3c9b27dd9f00c ] Gcc-12 correctly warned about this code using a non-NULL pointer as a truth value: drivers/gpu/drm/imx/ipuv3-crtc.c: In function ‘ipu_crtc_disable_planes’: drivers/gpu/drm/imx/ipuv3-crtc.c:72:21: error: the comparison will always evaluate as ‘true’ for the address of ‘plane’ will never be NULL [-Werror=address] 72 | if (&ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base) | ^ due to the extraneous '&' address-of operator. Philipp Zabel points out that The mistake had no adverse effect since the following condition doesn't actually dereference the NULL pointer, but the intent of the code was obviously to check for it, not to take the address of the member. Fixes: eb8c88808c83 ("drm/imx: add deferred plane disabling") Acked-by: Philipp Zabel Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- drivers/gpu/drm/imx/ipuv3-crtc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c index d412fc265395..fd9d8e51837f 100644 --- a/drivers/gpu/drm/imx/ipuv3-crtc.c +++ b/drivers/gpu/drm/imx/ipuv3-crtc.c @@ -68,7 +68,7 @@ static void ipu_crtc_disable_planes(struct ipu_crtc *ipu_crtc, drm_atomic_crtc_state_for_each_plane(plane, old_crtc_state) { if (plane == &ipu_crtc->plane[0]->base) disable_full = true; - if (&ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base) + if (ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base) disable_partial = true; } From 8caa4b7d411c429e5beed7952d9c5af8ee2ac68c Mon Sep 17 00:00:00 2001 From: Xiaoke Wang Date: Sat, 5 Mar 2022 11:14:05 +0800 Subject: [PATCH 0877/1842] iio: dummy: iio_simple_dummy: check the return value of kstrdup() [ Upstream commit ba93642188a6fed754bf7447f638bc410e05a929 ] kstrdup() is also a memory allocation-related function, it returns NULL when some memory errors happen. 
So it is better to check the return value of it so to catch the memory error in time. Besides, there should have a kfree() to clear up the allocation if we get a failure later in this function to prevent memory leak. Signed-off-by: Xiaoke Wang Link: https://lore.kernel.org/r/tencent_C920CFCC33B9CC1C63141FE1334A39FF8508@qq.com Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- drivers/iio/dummy/iio_simple_dummy.c | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/drivers/iio/dummy/iio_simple_dummy.c b/drivers/iio/dummy/iio_simple_dummy.c index c0b7ef900735..c24f609c2ade 100644 --- a/drivers/iio/dummy/iio_simple_dummy.c +++ b/drivers/iio/dummy/iio_simple_dummy.c @@ -575,10 +575,9 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) */ swd = kzalloc(sizeof(*swd), GFP_KERNEL); - if (!swd) { - ret = -ENOMEM; - goto error_kzalloc; - } + if (!swd) + return ERR_PTR(-ENOMEM); + /* * Allocate an IIO device. * @@ -590,7 +589,7 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) indio_dev = iio_device_alloc(parent, sizeof(*st)); if (!indio_dev) { ret = -ENOMEM; - goto error_ret; + goto error_free_swd; } st = iio_priv(indio_dev); @@ -616,6 +615,10 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) * indio_dev->name = spi_get_device_id(spi)->name; */ indio_dev->name = kstrdup(name, GFP_KERNEL); + if (!indio_dev->name) { + ret = -ENOMEM; + goto error_free_device; + } /* Provide description of available channels */ indio_dev->channels = iio_dummy_channels; @@ -632,7 +635,7 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) ret = iio_simple_dummy_events_register(indio_dev); if (ret < 0) - goto error_free_device; + goto error_free_name; ret = iio_simple_dummy_configure_buffer(indio_dev); if (ret < 0) @@ -649,11 +652,12 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) iio_simple_dummy_unconfigure_buffer(indio_dev); error_unregister_events: iio_simple_dummy_events_unregister(indio_dev); +error_free_name: + kfree(indio_dev->name); error_free_device: iio_device_free(indio_dev); -error_ret: +error_free_swd: kfree(swd); -error_kzalloc: return ERR_PTR(ret); } From 5a89a92efc342dd7c44b6056da87debc598f9c73 Mon Sep 17 00:00:00 2001 From: Xiaoke Wang Date: Tue, 5 Apr 2022 12:43:07 +0800 Subject: [PATCH 0878/1842] staging: rtl8712: fix a potential memory leak in r871xu_drv_init() [ Upstream commit 7288ff561de650d4139fab80e9cb0da9b5b32434 ] In r871xu_drv_init(), if r8712_init_drv_sw() fails, then the memory allocated by r8712_alloc_io_queue() in r8712_usb_dvobj_init() is not properly released as there is no action will be performed by r8712_usb_dvobj_deinit(). To properly release it, we should call r8712_free_io_queue() in r8712_usb_dvobj_deinit(). Besides, in r871xu_dev_remove(), r8712_usb_dvobj_deinit() will be called by r871x_dev_unload() under condition `padapter->bup` and r8712_free_io_queue() is called by r8712_free_drv_sw(). However, r8712_usb_dvobj_deinit() does not rely on `padapter->bup` and calling r8712_free_io_queue() in r8712_free_drv_sw() is negative for better understading the code. So I move r8712_usb_dvobj_deinit() into r871xu_dev_remove(), and remove r8712_free_io_queue() from r8712_free_drv_sw(). 
Reviewed-by: Dan Carpenter Signed-off-by: Xiaoke Wang Link: https://lore.kernel.org/r/tencent_B8048C592777830380A23A7C4409F9DF1305@qq.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/staging/rtl8712/os_intfs.c | 1 - drivers/staging/rtl8712/usb_intf.c | 6 +++--- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/staging/rtl8712/os_intfs.c b/drivers/staging/rtl8712/os_intfs.c index 2214aca09730..daa3180dfde3 100644 --- a/drivers/staging/rtl8712/os_intfs.c +++ b/drivers/staging/rtl8712/os_intfs.c @@ -332,7 +332,6 @@ void r8712_free_drv_sw(struct _adapter *padapter) r8712_free_evt_priv(&padapter->evtpriv); r8712_DeInitSwLeds(padapter); r8712_free_mlme_priv(&padapter->mlmepriv); - r8712_free_io_queue(padapter); _free_xmit_priv(&padapter->xmitpriv); _r8712_free_sta_priv(&padapter->stapriv); _r8712_free_recv_priv(&padapter->recvpriv); diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c index fed96d4251bf..77f090bdd36e 100644 --- a/drivers/staging/rtl8712/usb_intf.c +++ b/drivers/staging/rtl8712/usb_intf.c @@ -266,6 +266,7 @@ static uint r8712_usb_dvobj_init(struct _adapter *padapter) static void r8712_usb_dvobj_deinit(struct _adapter *padapter) { + r8712_free_io_queue(padapter); } void rtl871x_intf_stop(struct _adapter *padapter) @@ -303,9 +304,6 @@ void r871x_dev_unload(struct _adapter *padapter) rtl8712_hal_deinit(padapter); } - /*s6.*/ - if (padapter->dvobj_deinit) - padapter->dvobj_deinit(padapter); padapter->bup = false; } } @@ -610,6 +608,8 @@ static void r871xu_dev_remove(struct usb_interface *pusb_intf) /* Stop driver mlme relation timer */ r8712_stop_drv_timers(padapter); r871x_dev_unload(padapter); + if (padapter->dvobj_deinit) + padapter->dvobj_deinit(padapter); r8712_free_drv_sw(padapter); free_netdev(pnetdev); From 7821d743abb301a916731a4294eba4927482e148 Mon Sep 17 00:00:00 2001 From: Miquel Raynal Date: Mon, 7 Feb 2022 15:38:33 +0100 Subject: [PATCH 0879/1842] iio: st_sensors: Add a local lock for protecting odr [ Upstream commit 474010127e2505fc463236470908e1ff5ddb3578 ] Right now the (framework) mlock lock is (ab)used for multiple purposes: 1- protecting concurrent accesses over the odr local cache 2- avoid changing samplig frequency whilst buffer is running Let's start by handling situation #1 with a local lock. 
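A rough sketch of the idea, a driver-private mutex guarding only the cached rate instead of borrowing the IIO core's mlock, could look like the following; the struct and function names are invented for illustration.

#include <linux/mutex.h>

struct my_sensor_data {
	struct mutex odr_lock;	/* protects the cached odr value */
	unsigned int odr;	/* cached output data rate, in Hz */
};

static int my_write_odr_reg(struct my_sensor_data *sdata, unsigned int odr);	/* hypothetical */

static int my_set_odr(struct my_sensor_data *sdata, unsigned int odr)
{
	int err;

	mutex_lock(&sdata->odr_lock);
	err = my_write_odr_reg(sdata, odr);
	if (!err)
		sdata->odr = odr;	/* cache only what the hardware accepted */
	mutex_unlock(&sdata->odr_lock);

	return err;
}

/* In probe, before the device is exposed to users:
 *	mutex_init(&sdata->odr_lock);
 */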
Suggested-by: Jonathan Cameron Cc: Denis Ciocca Signed-off-by: Miquel Raynal Link: https://lore.kernel.org/r/20220207143840.707510-7-miquel.raynal@bootlin.com Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- .../iio/common/st_sensors/st_sensors_core.c | 24 ++++++++++++++----- include/linux/iio/common/st_sensors.h | 3 +++ 2 files changed, 21 insertions(+), 6 deletions(-) diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c index 7a69c1be7393..56206fdbceb9 100644 --- a/drivers/iio/common/st_sensors/st_sensors_core.c +++ b/drivers/iio/common/st_sensors/st_sensors_core.c @@ -70,16 +70,18 @@ static int st_sensors_match_odr(struct st_sensor_settings *sensor_settings, int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr) { - int err; + int err = 0; struct st_sensor_odr_avl odr_out = {0, 0}; struct st_sensor_data *sdata = iio_priv(indio_dev); + mutex_lock(&sdata->odr_lock); + if (!sdata->sensor_settings->odr.mask) - return 0; + goto unlock_mutex; err = st_sensors_match_odr(sdata->sensor_settings, odr, &odr_out); if (err < 0) - goto st_sensors_match_odr_error; + goto unlock_mutex; if ((sdata->sensor_settings->odr.addr == sdata->sensor_settings->pw.addr) && @@ -102,7 +104,9 @@ int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr) if (err >= 0) sdata->odr = odr_out.hz; -st_sensors_match_odr_error: +unlock_mutex: + mutex_unlock(&sdata->odr_lock); + return err; } EXPORT_SYMBOL(st_sensors_set_odr); @@ -364,6 +368,8 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev, struct st_sensors_platform_data *of_pdata; int err = 0; + mutex_init(&sdata->odr_lock); + /* If OF/DT pdata exists, it will take precedence of anything else */ of_pdata = st_sensors_dev_probe(indio_dev->dev.parent, pdata); if (IS_ERR(of_pdata)) @@ -557,18 +563,24 @@ int st_sensors_read_info_raw(struct iio_dev *indio_dev, err = -EBUSY; goto out; } else { + mutex_lock(&sdata->odr_lock); err = st_sensors_set_enable(indio_dev, true); - if (err < 0) + if (err < 0) { + mutex_unlock(&sdata->odr_lock); goto out; + } msleep((sdata->sensor_settings->bootime * 1000) / sdata->odr); err = st_sensors_read_axis_data(indio_dev, ch, val); - if (err < 0) + if (err < 0) { + mutex_unlock(&sdata->odr_lock); goto out; + } *val = *val >> ch->scan_type.shift; err = st_sensors_set_enable(indio_dev, false); + mutex_unlock(&sdata->odr_lock); } out: mutex_unlock(&indio_dev->mlock); diff --git a/include/linux/iio/common/st_sensors.h b/include/linux/iio/common/st_sensors.h index 33e939977444..c16a9dda3ad5 100644 --- a/include/linux/iio/common/st_sensors.h +++ b/include/linux/iio/common/st_sensors.h @@ -228,6 +228,7 @@ struct st_sensor_settings { * @hw_irq_trigger: if we're using the hardware interrupt on the sensor. * @hw_timestamp: Latest timestamp from the interrupt handler, when in use. * @buffer_data: Data used by buffer part. 
+ * @odr_lock: Local lock for preventing concurrent ODR accesses/changes */ struct st_sensor_data { struct device *dev; @@ -253,6 +254,8 @@ struct st_sensor_data { s64 hw_timestamp; char buffer_data[ST_SENSORS_MAX_BUFFER_SIZE] ____cacheline_aligned; + + struct mutex odr_lock; }; #ifdef CONFIG_IIO_BUFFER From 61ca1b97adb98569b90c916ebecb8fc8f1283d7c Mon Sep 17 00:00:00 2001 From: Kees Cook Date: Wed, 16 Feb 2022 12:15:03 -0800 Subject: [PATCH 0880/1842] lkdtm/usercopy: Expand size of "out of frame" object [ Upstream commit f387e86d3a74407bdd9c5815820ac9d060962840 ] To be sufficiently out of range for the usercopy test to see the lifetime mismatch, expand the size of the "bad" buffer, which will let it be beyond current_stack_pointer regardless of stack growth direction. Paired with the recent addition of stack depth checking under CONFIG_HARDENED_USERCOPY=y, this will correctly start tripping again. Reported-by: Muhammad Usama Anjum Cc: Arnd Bergmann Cc: Greg Kroah-Hartman Reviewed-by: Muhammad Usama Anjum Link: https://lore.kernel.org/lkml/762faf1b-0443-5ddf-4430-44a20cf2ec4d@collabora.com/ Signed-off-by: Kees Cook Signed-off-by: Sasha Levin --- drivers/misc/lkdtm/usercopy.c | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c index 109e8d4302c1..cde2655487ff 100644 --- a/drivers/misc/lkdtm/usercopy.c +++ b/drivers/misc/lkdtm/usercopy.c @@ -30,12 +30,12 @@ static const unsigned char test_text[] = "This is a test.\n"; */ static noinline unsigned char *trick_compiler(unsigned char *stack) { - return stack + 0; + return stack + unconst; } static noinline unsigned char *do_usercopy_stack_callee(int value) { - unsigned char buf[32]; + unsigned char buf[128]; int i; /* Exercise stack to avoid everything living in registers. */ @@ -43,7 +43,12 @@ static noinline unsigned char *do_usercopy_stack_callee(int value) buf[i] = value & 0xff; } - return trick_compiler(buf); + /* + * Put the target buffer in the middle of stack allocation + * so that we don't step on future stack users regardless + * of stack growth direction. 
+ */ + return trick_compiler(&buf[(128/2)-32]); } static noinline void do_usercopy_stack(bool to_user, bool bad_frame) @@ -66,6 +71,12 @@ static noinline void do_usercopy_stack(bool to_user, bool bad_frame) bad_stack -= sizeof(unsigned long); } +#ifdef ARCH_HAS_CURRENT_STACK_POINTER + pr_info("stack : %px\n", (void *)current_stack_pointer); +#endif + pr_info("good_stack: %px-%px\n", good_stack, good_stack + sizeof(good_stack)); + pr_info("bad_stack : %px-%px\n", bad_stack, bad_stack + sizeof(good_stack)); + user_addr = vm_mmap(NULL, 0, PAGE_SIZE, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_ANONYMOUS | MAP_PRIVATE, 0); From d68d5e68b7f64de7170f8e04dd9b995c36b2c71c Mon Sep 17 00:00:00 2001 From: Zheyu Ma Date: Sun, 10 Apr 2022 19:48:14 +0800 Subject: [PATCH 0881/1842] tty: synclink_gt: Fix null-pointer-dereference in slgt_clean() [ Upstream commit 689ca31c542687709ba21ec2195c1fbce34fd029 ] When the driver fails at alloc_hdlcdev(), and then we remove the driver module, we will get the following splat: [ 25.065966] general protection fault, probably for non-canonical address 0xdffffc0000000182: 0000 [#1] PREEMPT SMP KASAN PTI [ 25.066914] KASAN: null-ptr-deref in range [0x0000000000000c10-0x0000000000000c17] [ 25.069262] RIP: 0010:detach_hdlc_protocol+0x2a/0x3e0 [ 25.077709] Call Trace: [ 25.077924] [ 25.078108] unregister_hdlc_device+0x16/0x30 [ 25.078481] slgt_cleanup+0x157/0x9f0 [synclink_gt] Fix this by checking whether the 'info->netdev' is a null pointer first. Reviewed-by: Jiri Slaby Signed-off-by: Zheyu Ma Link: https://lore.kernel.org/r/20220410114814.3920474-1-zheyuma97@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/synclink_gt.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/tty/synclink_gt.c b/drivers/tty/synclink_gt.c index 1a0c7beec101..0569d5949133 100644 --- a/drivers/tty/synclink_gt.c +++ b/drivers/tty/synclink_gt.c @@ -1749,6 +1749,8 @@ static int hdlcdev_init(struct slgt_info *info) */ static void hdlcdev_exit(struct slgt_info *info) { + if (!info->netdev) + return; unregister_hdlc_device(info->netdev); free_netdev(info->netdev); info->netdev = NULL; From cb7147afd328c07edeeee287710d8d96ac0459f5 Mon Sep 17 00:00:00 2001 From: Huang Guobin Date: Thu, 31 Mar 2022 17:10:05 +0800 Subject: [PATCH 0882/1842] tty: Fix a possible resource leak in icom_probe [ Upstream commit ee157a79e7c82b01ae4c25de0ac75899801f322c ] When pci_read_config_dword failed, call pci_release_regions() and pci_disable_device() to recycle the resource previously allocated. 
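The shape of the fix is the usual probe-time unwind pattern: every failure after a successful setup step jumps to a label that undoes that step. A self-contained sketch (not the icom driver's code, names invented) is:

#include <linux/pci.h>

static int my_probe(struct pci_dev *dev, const struct pci_device_id *ent)
{
	u32 reg;
	int ret;

	ret = pci_enable_device(dev);
	if (ret)
		return ret;		/* nothing acquired yet */

	ret = pci_request_regions(dev, "my_drv");
	if (ret)
		goto err_disable;

	ret = pci_read_config_dword(dev, PCI_COMMAND, &reg);
	if (ret)
		goto err_release;	/* not a bare "return ret" */

	return 0;

err_release:
	pci_release_regions(dev);
err_disable:
	pci_disable_device(dev);
	return ret;
}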
Reviewed-by: Jiri Slaby Signed-off-by: Huang Guobin Link: https://lore.kernel.org/r/20220331091005.3290753-1-huangguobin4@huawei.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/icom.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/tty/serial/icom.c b/drivers/tty/serial/icom.c index 94c8281ddb5f..74b325c344da 100644 --- a/drivers/tty/serial/icom.c +++ b/drivers/tty/serial/icom.c @@ -1503,7 +1503,7 @@ static int icom_probe(struct pci_dev *dev, retval = pci_read_config_dword(dev, PCI_COMMAND, &command_reg); if (retval) { dev_err(&dev->dev, "PCI Config read FAILED\n"); - return retval; + goto probe_exit0; } pci_write_config_dword(dev, PCI_COMMAND, From 66f769762f65d957f688f3258755c6ec410bf710 Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Sun, 17 Apr 2022 21:54:07 +0800 Subject: [PATCH 0883/1842] drivers: staging: rtl8192u: Fix deadlock in ieee80211_beacons_stop() [ Upstream commit 806c7b53414934ba2a39449b31fd1a038e500273 ] There is a deadlock in ieee80211_beacons_stop(), which is shown below: (Thread 1) | (Thread 2) | ieee80211_send_beacon() ieee80211_beacons_stop() | mod_timer() spin_lock_irqsave() //(1) | (wait a time) ... | ieee80211_send_beacon_cb() del_timer_sync() | spin_lock_irqsave() //(2) (wait timer to stop) | ... We hold ieee->beacon_lock in position (1) of thread 1 and use del_timer_sync() to wait timer to stop, but timer handler also need ieee->beacon_lock in position (2) of thread 2. As a result, ieee80211_beacons_stop() will block forever. This patch extracts del_timer_sync() from the protection of spin_lock_irqsave(), which could let timer handler to obtain the needed lock. Signed-off-by: Duoming Zhou Link: https://lore.kernel.org/r/20220417135407.109536-1-duoming@zju.edu.cn Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c index 690b664df8fa..56a447651644 100644 --- a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c +++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c @@ -528,9 +528,9 @@ static void ieee80211_beacons_stop(struct ieee80211_device *ieee) spin_lock_irqsave(&ieee->beacon_lock, flags); ieee->beacon_txing = 0; - del_timer_sync(&ieee->beacon_timer); spin_unlock_irqrestore(&ieee->beacon_lock, flags); + del_timer_sync(&ieee->beacon_timer); } void ieee80211_stop_send_beacons(struct ieee80211_device *ieee) From 0f69d7d5e918aa43423d86bd17ddb11b1b5e8ada Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Sun, 17 Apr 2022 22:16:41 +0800 Subject: [PATCH 0884/1842] drivers: staging: rtl8192e: Fix deadlock in rtllib_beacons_stop() [ Upstream commit 9b6bdbd9337de3917945847bde262a34a87a6303 ] There is a deadlock in rtllib_beacons_stop(), which is shown below: (Thread 1) | (Thread 2) | rtllib_send_beacon() rtllib_beacons_stop() | mod_timer() spin_lock_irqsave() //(1) | (wait a time) ... | rtllib_send_beacon_cb() del_timer_sync() | spin_lock_irqsave() //(2) (wait timer to stop) | ... We hold ieee->beacon_lock in position (1) of thread 1 and use del_timer_sync() to wait timer to stop, but timer handler also need ieee->beacon_lock in position (2) of thread 2. As a result, rtllib_beacons_stop() will block forever. This patch extracts del_timer_sync() from the protection of spin_lock_irqsave(), which could let timer handler to obtain the needed lock. 
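These two beacon fixes, like the serial and USB timer fixes that follow, share one rule: never wait for a timer with del_timer_sync() while holding a lock its callback also takes. A minimal sketch of the safe ordering, with invented names, is:

#include <linux/jiffies.h>
#include <linux/spinlock.h>
#include <linux/timer.h>

struct my_dev {
	spinlock_t lock;
	struct timer_list beacon_timer;	/* timer_setup(&dev->beacon_timer, my_beacon_timeout, 0) elsewhere */
	bool txing;
};

static void my_beacon_timeout(struct timer_list *t)
{
	struct my_dev *dev = from_timer(dev, t, beacon_timer);
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);	/* the lock the stop path must not hold */
	if (dev->txing)
		mod_timer(&dev->beacon_timer, jiffies + HZ / 10);
	spin_unlock_irqrestore(&dev->lock, flags);
}

static void my_beacons_stop(struct my_dev *dev)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);
	dev->txing = false;
	spin_unlock_irqrestore(&dev->lock, flags);

	/* Only after dropping the lock: the callback can now finish,
	 * so waiting for it here cannot deadlock. */
	del_timer_sync(&dev->beacon_timer);
}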
Signed-off-by: Duoming Zhou Link: https://lore.kernel.org/r/20220417141641.124388-1-duoming@zju.edu.cn Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/staging/rtl8192e/rtllib_softmac.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c index e8e72f79ca00..aeb6f015fdda 100644 --- a/drivers/staging/rtl8192e/rtllib_softmac.c +++ b/drivers/staging/rtl8192e/rtllib_softmac.c @@ -651,9 +651,9 @@ static void rtllib_beacons_stop(struct rtllib_device *ieee) spin_lock_irqsave(&ieee->beacon_lock, flags); ieee->beacon_txing = 0; - del_timer_sync(&ieee->beacon_timer); spin_unlock_irqrestore(&ieee->beacon_lock, flags); + del_timer_sync(&ieee->beacon_timer); } From ee105039d3653444de4d3ede642383c92855dc1e Mon Sep 17 00:00:00 2001 From: Zhen Ni Date: Wed, 2 Mar 2022 11:37:16 +0800 Subject: [PATCH 0885/1842] USB: host: isp116x: check return value after calling platform_get_resource() [ Upstream commit 134a3408c2d3f7e23eb0e4556e0a2d9f36c2614e ] It will cause null-ptr-deref if platform_get_resource() returns NULL, we need check the return value. Signed-off-by: Zhen Ni Link: https://lore.kernel.org/r/20220302033716.31272-1-nizhen@uniontech.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/host/isp116x-hcd.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c index 3055d9abfec3..3e5c54742bef 100644 --- a/drivers/usb/host/isp116x-hcd.c +++ b/drivers/usb/host/isp116x-hcd.c @@ -1541,10 +1541,12 @@ static int isp116x_remove(struct platform_device *pdev) iounmap(isp116x->data_reg); res = platform_get_resource(pdev, IORESOURCE_MEM, 1); - release_mem_region(res->start, 2); + if (res) + release_mem_region(res->start, 2); iounmap(isp116x->addr_reg); res = platform_get_resource(pdev, IORESOURCE_MEM, 0); - release_mem_region(res->start, 2); + if (res) + release_mem_region(res->start, 2); usb_put_hcd(hcd); return 0; From 6e2273eefab54a521d9c59efb6e1114e742bdf41 Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Sun, 17 Apr 2022 19:16:26 +0800 Subject: [PATCH 0886/1842] drivers: tty: serial: Fix deadlock in sa1100_set_termios() [ Upstream commit 62b2caef400c1738b6d22f636c628d9f85cd4c4c ] There is a deadlock in sa1100_set_termios(), which is shown below: (Thread 1) | (Thread 2) | sa1100_enable_ms() sa1100_set_termios() | mod_timer() spin_lock_irqsave() //(1) | (wait a time) ... | sa1100_timeout() del_timer_sync() | spin_lock_irqsave() //(2) (wait timer to stop) | ... We hold sport->port.lock in position (1) of thread 1 and use del_timer_sync() to wait timer to stop, but timer handler also need sport->port.lock in position (2) of thread 2. As a result, sa1100_set_termios() will block forever. This patch moves del_timer_sync() before spin_lock_irqsave() in order to prevent the deadlock. 
Signed-off-by: Duoming Zhou Link: https://lore.kernel.org/r/20220417111626.7802-1-duoming@zju.edu.cn Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/sa1100.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/tty/serial/sa1100.c b/drivers/tty/serial/sa1100.c index f5fab1dd96bc..aa1cf2ae17a9 100644 --- a/drivers/tty/serial/sa1100.c +++ b/drivers/tty/serial/sa1100.c @@ -448,6 +448,8 @@ sa1100_set_termios(struct uart_port *port, struct ktermios *termios, baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16); quot = uart_get_divisor(port, baud); + del_timer_sync(&sport->timer); + spin_lock_irqsave(&sport->port.lock, flags); sport->port.read_status_mask &= UTSR0_TO_SM(UTSR0_TFS); @@ -478,8 +480,6 @@ sa1100_set_termios(struct uart_port *port, struct ktermios *termios, UTSR1_TO_SM(UTSR1_ROR); } - del_timer_sync(&sport->timer); - /* * Update the per-port timeout. */ From ffe9440d698274c6462d2e304562c6ddfc8c84df Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Sun, 17 Apr 2022 20:03:05 +0800 Subject: [PATCH 0887/1842] drivers: usb: host: Fix deadlock in oxu_bus_suspend() [ Upstream commit 4d378f2ae58138d4c55684e1d274e7dd94aa6524 ] There is a deadlock in oxu_bus_suspend(), which is shown below: (Thread 1) | (Thread 2) | timer_action() oxu_bus_suspend() | mod_timer() spin_lock_irq() //(1) | (wait a time) ... | oxu_watchdog() del_timer_sync() | spin_lock_irq() //(2) (wait timer to stop) | ... We hold oxu->lock in position (1) of thread 1, and use del_timer_sync() to wait timer to stop, but timer handler also need oxu->lock in position (2) of thread 2. As a result, oxu_bus_suspend() will block forever. This patch extracts del_timer_sync() from the protection of spin_lock_irq(), which could let timer handler to obtain the needed lock. Signed-off-by: Duoming Zhou Link: https://lore.kernel.org/r/20220417120305.64577-1-duoming@zju.edu.cn Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/host/oxu210hp-hcd.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c index e832909a924f..6df2881cd7b9 100644 --- a/drivers/usb/host/oxu210hp-hcd.c +++ b/drivers/usb/host/oxu210hp-hcd.c @@ -3908,8 +3908,10 @@ static int oxu_bus_suspend(struct usb_hcd *hcd) } } + spin_unlock_irq(&oxu->lock); /* turn off now-idle HC */ del_timer_sync(&oxu->watchdog); + spin_lock_irq(&oxu->lock); ehci_halt(oxu); hcd->state = HC_STATE_SUSPENDED; From f4cb24706ca4b8e031c750e487a65129473caea0 Mon Sep 17 00:00:00 2001 From: Evan Green Date: Thu, 21 Apr 2022 10:39:27 -0700 Subject: [PATCH 0888/1842] USB: hcd-pci: Fully suspend across freeze/thaw cycle [ Upstream commit 63acaa8e9c65dc34dc249440216f8e977f5d2748 ] The documentation for the freeze() method says that it "should quiesce the device so that it doesn't generate IRQs or DMA". The unspoken consequence of not doing this is that MSIs aimed at non-boot CPUs may get fully lost if they're sent during the period where the target CPU is offline. The current callbacks for USB HCD do not fully quiesce interrupts, specifically on XHCI. Change to use the full suspend/resume flow for freeze/thaw to ensure interrupts are fully quiesced. This fixes issues where USB devices fail to thaw during hibernation because XHCI misses its interrupt and cannot recover. 
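A sketch of the resulting dev_pm_ops wiring, with invented handler names and only the callbacks relevant to this change, is shown below; it simply points the hibernation freeze/thaw entries at the full suspend/resume handlers so the device is quiesced before non-boot CPUs go offline.

#include <linux/device.h>
#include <linux/pm.h>

static int my_suspend(struct device *dev)
{
	/* fully quiesce the controller: no further IRQs or DMA */
	return 0;
}

static int my_resume(struct device *dev)
{
	/* bring the controller back up */
	return 0;
}

static const struct dev_pm_ops my_pm_ops = {
	.suspend = my_suspend,
	.resume  = my_resume,
	/* hibernation: freeze/thaw reuse the same full quiesce/restore */
	.freeze  = my_suspend,
	.thaw    = my_resume,
};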
Acked-by: Alan Stern Signed-off-by: Evan Green Link: https://lore.kernel.org/r/20220421103751.v3.2.I8226c7fdae88329ef70957b96a39b346c69a914e@changeid Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/core/hcd-pci.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c index ec0d6c50610c..eee78cbfaa72 100644 --- a/drivers/usb/core/hcd-pci.c +++ b/drivers/usb/core/hcd-pci.c @@ -614,10 +614,10 @@ const struct dev_pm_ops usb_hcd_pci_pm_ops = { .suspend_noirq = hcd_pci_suspend_noirq, .resume_noirq = hcd_pci_resume_noirq, .resume = hcd_pci_resume, - .freeze = check_root_hub_suspended, + .freeze = hcd_pci_suspend, .freeze_noirq = check_root_hub_suspended, .thaw_noirq = NULL, - .thaw = NULL, + .thaw = hcd_pci_resume, .poweroff = hcd_pci_suspend, .poweroff_noirq = hcd_pci_suspend_noirq, .restore_noirq = hcd_pci_resume_noirq, From 468fe959eab35f3cc50016f34f21116f0cc3aec3 Mon Sep 17 00:00:00 2001 From: Changbin Du Date: Mon, 17 Jan 2022 23:43:00 +0800 Subject: [PATCH 0889/1842] sysrq: do not omit current cpu when showing backtrace of all active CPUs [ Upstream commit 5390e7f46b9d5546d45a83e6463bc656678b1d0e ] The backtrace of current CPU also should be printed as it is active. This change add stack trace for current CPU and print a hint for idle CPU for the generic workqueue based printing. (x86 already does this) Now it looks like below: [ 279.401567] sysrq: Show backtrace of all active CPUs [ 279.407234] sysrq: CPU5: [ 279.407505] Call Trace: [ 279.408789] [] dump_backtrace+0x2c/0x3a [ 279.411698] [] show_stack+0x32/0x3e [ 279.411809] [] sysrq_handle_showallcpus+0x4c/0xc6 [ 279.411929] [] __handle_sysrq+0x106/0x26c [ 279.412034] [] write_sysrq_trigger+0x64/0x74 [ 279.412139] [] proc_reg_write+0x8e/0xe2 [ 279.412252] [] vfs_write+0x90/0x2be [ 279.412362] [] ksys_write+0xa6/0xce [ 279.412467] [] sys_write+0x2a/0x38 [ 279.412689] [] ret_from_syscall+0x0/0x2 [ 279.417173] sysrq: CPU6: backtrace skipped as idling [ 279.417185] sysrq: CPU4: backtrace skipped as idling [ 279.417187] sysrq: CPU0: backtrace skipped as idling [ 279.417181] sysrq: CPU7: backtrace skipped as idling [ 279.417190] sysrq: CPU1: backtrace skipped as idling [ 279.417193] sysrq: CPU3: backtrace skipped as idling [ 279.417219] sysrq: CPU2: [ 279.419179] Call Trace: [ 279.419440] [] dump_backtrace+0x2c/0x3a [ 279.419782] [] show_stack+0x32/0x3e [ 279.420015] [] showacpu+0x5c/0x96 [ 279.420317] [] flush_smp_call_function_queue+0xd6/0x218 [ 279.420569] [] generic_smp_call_function_single_interrupt+0x14/0x1c [ 279.420798] [] handle_IPI+0xaa/0x13a [ 279.421024] [] riscv_intc_irq+0x56/0x70 [ 279.421274] [] generic_handle_arch_irq+0x6a/0xfa [ 279.421518] [] ret_from_exception+0x0/0x10 [ 279.421750] [] rcu_idle_enter+0x16/0x1e Signed-off-by: Changbin Du Link: https://lore.kernel.org/r/20220117154300.2808-1-changbin.du@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/sysrq.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c index 959f9e121cc6..7ca209d4e088 100644 --- a/drivers/tty/sysrq.c +++ b/drivers/tty/sysrq.c @@ -231,8 +231,10 @@ static void showacpu(void *dummy) unsigned long flags; /* Idle CPUs have no interesting backtrace. 
*/ - if (idle_cpu(smp_processor_id())) + if (idle_cpu(smp_processor_id())) { + pr_info("CPU%d: backtrace skipped as idling\n", smp_processor_id()); return; + } raw_spin_lock_irqsave(&show_lock, flags); pr_info("CPU%d:\n", smp_processor_id()); @@ -259,10 +261,13 @@ static void sysrq_handle_showallcpus(int key) if (in_irq()) regs = get_irq_regs(); - if (regs) { - pr_info("CPU%d:\n", smp_processor_id()); + + pr_info("CPU%d:\n", smp_processor_id()); + if (regs) show_regs(regs); - } + else + show_stack(NULL, NULL, KERN_INFO); + schedule_work(&sysrq_showallcpus); } } From 5b0c0298f7c3b57417f1729ec4071f76864b72dd Mon Sep 17 00:00:00 2001 From: Marek Szyprowski Date: Thu, 5 May 2022 12:46:18 +0200 Subject: [PATCH 0890/1842] usb: dwc2: gadget: don't reset gadget's driver->bus [ Upstream commit 3120aac6d0ecd9accf56894aeac0e265f74d3d5a ] UDC driver should not touch gadget's driver internals, especially it should not reset driver->bus. This wasn't harmful so far, but since commit fc274c1e9973 ("USB: gadget: Add a new bus for gadgets") gadget subsystem got it's own bus and messing with ->bus triggers the following NULL pointer dereference: dwc2 12480000.hsotg: bound driver g_ether 8<--- cut here --- Unable to handle kernel NULL pointer dereference at virtual address 00000000 [00000000] *pgd=00000000 Internal error: Oops: 5 [#1] SMP ARM Modules linked in: ... CPU: 0 PID: 620 Comm: modprobe Not tainted 5.18.0-rc5-next-20220504 #11862 Hardware name: Samsung Exynos (Flattened Device Tree) PC is at module_add_driver+0x44/0xe8 LR is at sysfs_do_create_link_sd+0x84/0xe0 ... Process modprobe (pid: 620, stack limit = 0x(ptrval)) ... module_add_driver from bus_add_driver+0xf4/0x1e4 bus_add_driver from driver_register+0x78/0x10c driver_register from usb_gadget_register_driver_owner+0x40/0xb4 usb_gadget_register_driver_owner from do_one_initcall+0x44/0x1e0 do_one_initcall from do_init_module+0x44/0x1c8 do_init_module from load_module+0x19b8/0x1b9c load_module from sys_finit_module+0xdc/0xfc sys_finit_module from ret_fast_syscall+0x0/0x54 Exception stack(0xf1771fa8 to 0xf1771ff0) ... dwc2 12480000.hsotg: new device is high-speed ---[ end trace 0000000000000000 ]--- Fix this by removing driver->bus entry reset. Signed-off-by: Marek Szyprowski Link: https://lore.kernel.org/r/20220505104618.22729-1-m.szyprowski@samsung.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/dwc2/gadget.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c index ec54971063f8..64485f82dc5b 100644 --- a/drivers/usb/dwc2/gadget.c +++ b/drivers/usb/dwc2/gadget.c @@ -4518,7 +4518,6 @@ static int dwc2_hsotg_udc_start(struct usb_gadget *gadget, WARN_ON(hsotg->driver); - driver->driver.bus = NULL; hsotg->driver = driver; hsotg->gadget.dev.of_node = hsotg->dev->of_node; hsotg->gadget.speed = USB_SPEED_UNKNOWN; From 41ec9466944f221950765b2484798f2baf387f0b Mon Sep 17 00:00:00 2001 From: Shuah Khan Date: Fri, 29 Apr 2022 15:09:13 -0600 Subject: [PATCH 0891/1842] misc: rtsx: set NULL intfdata when probe fails [ Upstream commit f861d36e021e1ac4a0a2a1f6411d623809975d63 ] rtsx_usb_probe() doesn't call usb_set_intfdata() to null out the interface pointer when probe fails. This leaves a stale pointer. Noticed the missing usb_set_intfdata() while debugging an unrelated invalid DMA mapping problem. Fix it with a call to usb_set_intfdata(..., NULL). 
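A compact sketch of the intended pairing (all names hypothetical, not the rtsx driver's code): publish the interface data only once private state exists, and clear it again on every later failure path so no stale pointer is left behind.

#include <linux/slab.h>
#include <linux/usb.h>

struct my_ctx {
	struct usb_interface *intf;
};

static int my_hw_init(struct my_ctx *ctx);	/* hypothetical */

static int my_probe(struct usb_interface *intf,
		    const struct usb_device_id *id)
{
	struct my_ctx *ctx;
	int ret;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;
	ctx->intf = intf;
	usb_set_intfdata(intf, ctx);

	ret = my_hw_init(ctx);
	if (ret) {
		/* withdraw the published pointer before freeing its target */
		usb_set_intfdata(intf, NULL);
		kfree(ctx);
		return ret;
	}
	return 0;
}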
Signed-off-by: Shuah Khan Link: https://lore.kernel.org/r/20220429210913.46804-1-skhan@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/misc/cardreader/rtsx_usb.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c index 59eda55d92a3..1ef9b61077c4 100644 --- a/drivers/misc/cardreader/rtsx_usb.c +++ b/drivers/misc/cardreader/rtsx_usb.c @@ -667,6 +667,7 @@ static int rtsx_usb_probe(struct usb_interface *intf, return 0; out_init_fail: + usb_set_intfdata(ucr->pusb_intf, NULL); usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf, ucr->iobuf_dma); return ret; From abf3b222614f49f98e606fccdd269161c0d70204 Mon Sep 17 00:00:00 2001 From: bumwoo lee Date: Wed, 27 Apr 2022 12:00:05 +0900 Subject: [PATCH 0892/1842] extcon: Modify extcon device to be created after driver data is set [ Upstream commit 5dcc2afe716d69f5112ce035cb14f007461ff189 ] Currently, someone can invoke the sysfs such as state_show() intermittently before dev_set_drvdata() is done. And it can be a cause of kernel Oops because of edev is Null at that time. So modified the driver registration to after setting drviver data. - Oops's backtrace. Backtrace: [] (state_show) from [] (dev_attr_show) [] (dev_attr_show) from [] (sysfs_kf_seq_show) [] (sysfs_kf_seq_show) from [] (kernfs_seq_show) [] (kernfs_seq_show) from [] (seq_read) [] (seq_read) from [] (kernfs_fop_read) [] (kernfs_fop_read) from [] (__vfs_read) [] (__vfs_read) from [] (vfs_read) [] (vfs_read) from [] (ksys_read) [] (ksys_read) from [] (sys_read) [] (sys_read) from [] (__sys_trace_return) Signed-off-by: bumwoo lee Signed-off-by: Chanwoo Choi Signed-off-by: Sasha Levin --- drivers/extcon/extcon.c | 29 +++++++++++++++++------------ 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c index e7a9561a826d..356610404bb4 100644 --- a/drivers/extcon/extcon.c +++ b/drivers/extcon/extcon.c @@ -1230,19 +1230,14 @@ int extcon_dev_register(struct extcon_dev *edev) edev->dev.type = &edev->extcon_dev_type; } - ret = device_register(&edev->dev); - if (ret) { - put_device(&edev->dev); - goto err_dev; - } - spin_lock_init(&edev->lock); - edev->nh = devm_kcalloc(&edev->dev, edev->max_supported, - sizeof(*edev->nh), GFP_KERNEL); - if (!edev->nh) { - ret = -ENOMEM; - device_unregister(&edev->dev); - goto err_dev; + if (edev->max_supported) { + edev->nh = kcalloc(edev->max_supported, sizeof(*edev->nh), + GFP_KERNEL); + if (!edev->nh) { + ret = -ENOMEM; + goto err_alloc_nh; + } } for (index = 0; index < edev->max_supported; index++) @@ -1253,6 +1248,12 @@ int extcon_dev_register(struct extcon_dev *edev) dev_set_drvdata(&edev->dev, edev); edev->state = 0; + ret = device_register(&edev->dev); + if (ret) { + put_device(&edev->dev); + goto err_dev; + } + mutex_lock(&extcon_dev_list_lock); list_add(&edev->entry, &extcon_dev_list); mutex_unlock(&extcon_dev_list_lock); @@ -1260,6 +1261,9 @@ int extcon_dev_register(struct extcon_dev *edev) return 0; err_dev: + if (edev->max_supported) + kfree(edev->nh); +err_alloc_nh: if (edev->max_supported) kfree(edev->extcon_dev_type.groups); err_alloc_groups: @@ -1320,6 +1324,7 @@ void extcon_dev_unregister(struct extcon_dev *edev) if (edev->max_supported) { kfree(edev->extcon_dev_type.groups); kfree(edev->cables); + kfree(edev->nh); } put_device(&edev->dev); From f3f754d72d2df4c72f5052cf3ed1979d07625ece Mon Sep 17 00:00:00 2001 From: Andre Przywara Date: Fri, 6 May 2022 17:25:22 +0100 
Subject: [PATCH 0893/1842] clocksource/drivers/sp804: Avoid error on multiple instances [ Upstream commit a98399cbc1e05f7b977419f03905501d566cf54e ] When a machine sports more than one SP804 timer instance, we only bring up the first one, since multiple timers of the same kind are not useful to Linux. As this is intentional behaviour, we should not return an error message, as we do today: =============== [ 0.000800] Failed to initialize '/bus@8000000/motherboard-bus@8000000/iofpga-bus@300000000/timer@120000': -22 =============== Replace the -EINVAL return with a debug message and return 0 instead. Also we do not reach the init function anymore if the DT node is disabled (as this is now handled by OF_DECLARE), so remove the explicit check for that case. This fixes a long standing bogus error when booting ARM's fastmodels. Signed-off-by: Andre Przywara Reviewed-by: Robin Murphy Link: https://lore.kernel.org/r/20220506162522.3675399-1-andre.przywara@arm.com Signed-off-by: Daniel Lezcano Signed-off-by: Sasha Levin --- drivers/clocksource/timer-sp804.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/clocksource/timer-sp804.c b/drivers/clocksource/timer-sp804.c index 6e8ad4a4ea3c..bedd3570474b 100644 --- a/drivers/clocksource/timer-sp804.c +++ b/drivers/clocksource/timer-sp804.c @@ -274,6 +274,11 @@ static int __init sp804_of_init(struct device_node *np, struct sp804_timer *time struct clk *clk1, *clk2; const char *name = of_get_property(np, "compatible", NULL); + if (initialized) { + pr_debug("%pOF: skipping further SP804 timer device\n", np); + return 0; + } + base = of_iomap(np, 0); if (!base) return -ENXIO; @@ -285,11 +290,6 @@ static int __init sp804_of_init(struct device_node *np, struct sp804_timer *time writel(0, timer1_base + timer->ctrl); writel(0, timer2_base + timer->ctrl); - if (initialized || !of_device_is_available(np)) { - ret = -EINVAL; - goto err; - } - clk1 = of_clk_get(np, 0); if (IS_ERR(clk1)) clk1 = NULL; From 33ef21d55418ab6a62a63fd550b2dbe297433372 Mon Sep 17 00:00:00 2001 From: Wang Cheng Date: Mon, 16 May 2022 17:22:23 +0800 Subject: [PATCH 0894/1842] staging: rtl8712: fix uninit-value in usb_read8() and friends [ Upstream commit d1b57669732d09da7e13ef86d058dab0cd57f6e0 ] When r8712_usbctrl_vendorreq() returns negative, 'data' in usb_read{8,16,32} will not be initialized. 
BUG: KMSAN: uninit-value in string_nocheck lib/vsprintf.c:643 [inline] BUG: KMSAN: uninit-value in string+0x4ec/0x6f0 lib/vsprintf.c:725 string_nocheck lib/vsprintf.c:643 [inline] string+0x4ec/0x6f0 lib/vsprintf.c:725 vsnprintf+0x2222/0x3650 lib/vsprintf.c:2806 va_format lib/vsprintf.c:1704 [inline] pointer+0x18e6/0x1f70 lib/vsprintf.c:2443 vsnprintf+0x1a9b/0x3650 lib/vsprintf.c:2810 vprintk_store+0x537/0x2150 kernel/printk/printk.c:2158 vprintk_emit+0x28b/0xab0 kernel/printk/printk.c:2256 dev_vprintk_emit+0x5ef/0x6d0 drivers/base/core.c:4604 dev_printk_emit+0x1dd/0x21f drivers/base/core.c:4615 __dev_printk+0x3be/0x440 drivers/base/core.c:4627 _dev_info+0x1ea/0x22f drivers/base/core.c:4673 r871xu_drv_init+0x1929/0x3070 drivers/staging/rtl8712/usb_intf.c:401 usb_probe_interface+0xf19/0x1600 drivers/usb/core/driver.c:396 really_probe+0x6c7/0x1350 drivers/base/dd.c:621 __driver_probe_device+0x3e9/0x530 drivers/base/dd.c:752 driver_probe_device drivers/base/dd.c:782 [inline] __device_attach_driver+0x79f/0x1120 drivers/base/dd.c:899 bus_for_each_drv+0x2d6/0x3f0 drivers/base/bus.c:427 __device_attach+0x593/0x8e0 drivers/base/dd.c:970 device_initial_probe+0x4a/0x60 drivers/base/dd.c:1017 bus_probe_device+0x17b/0x3e0 drivers/base/bus.c:487 device_add+0x1fff/0x26e0 drivers/base/core.c:3405 usb_set_configuration+0x37e9/0x3ed0 drivers/usb/core/message.c:2170 usb_generic_driver_probe+0x13c/0x300 drivers/usb/core/generic.c:238 usb_probe_device+0x309/0x570 drivers/usb/core/driver.c:293 really_probe+0x6c7/0x1350 drivers/base/dd.c:621 __driver_probe_device+0x3e9/0x530 drivers/base/dd.c:752 driver_probe_device drivers/base/dd.c:782 [inline] __device_attach_driver+0x79f/0x1120 drivers/base/dd.c:899 bus_for_each_drv+0x2d6/0x3f0 drivers/base/bus.c:427 __device_attach+0x593/0x8e0 drivers/base/dd.c:970 device_initial_probe+0x4a/0x60 drivers/base/dd.c:1017 bus_probe_device+0x17b/0x3e0 drivers/base/bus.c:487 device_add+0x1fff/0x26e0 drivers/base/core.c:3405 usb_new_device+0x1b91/0x2950 drivers/usb/core/hub.c:2566 hub_port_connect drivers/usb/core/hub.c:5363 [inline] hub_port_connect_change drivers/usb/core/hub.c:5507 [inline] port_event drivers/usb/core/hub.c:5665 [inline] hub_event+0x58e3/0x89e0 drivers/usb/core/hub.c:5747 process_one_work+0xdb6/0x1820 kernel/workqueue.c:2289 worker_thread+0x10d0/0x2240 kernel/workqueue.c:2436 kthread+0x3c7/0x500 kernel/kthread.c:376 ret_from_fork+0x1f/0x30 Local variable data created at: usb_read8+0x5d/0x130 drivers/staging/rtl8712/usb_ops.c:33 r8712_read8+0xa5/0xd0 drivers/staging/rtl8712/rtl8712_io.c:29 KMSAN: uninit-value in r871xu_drv_init https://syzkaller.appspot.com/bug?id=3cd92b1d85428b128503bfa7a250294c9ae00bd8 Reported-by: Tested-by: Reviewed-by: Dan Carpenter Signed-off-by: Wang Cheng Link: https://lore.kernel.org/r/b9b7a6ee02c02aa28054f5cf16129977775f3cd9.1652618244.git.wanngchenng@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/staging/rtl8712/usb_ops.c | 27 ++++++++++++++++++--------- 1 file changed, 18 insertions(+), 9 deletions(-) diff --git a/drivers/staging/rtl8712/usb_ops.c b/drivers/staging/rtl8712/usb_ops.c index e64845e6adf3..af9966d03979 100644 --- a/drivers/staging/rtl8712/usb_ops.c +++ b/drivers/staging/rtl8712/usb_ops.c @@ -29,7 +29,8 @@ static u8 usb_read8(struct intf_hdl *intfhdl, u32 addr) u16 wvalue; u16 index; u16 len; - __le32 data; + int status; + __le32 data = 0; struct intf_priv *intfpriv = intfhdl->pintfpriv; request = 0x05; @@ -37,8 +38,10 @@ static u8 usb_read8(struct intf_hdl *intfhdl, u32 addr) index = 
0; wvalue = (u16)(addr & 0x0000ffff); len = 1; - r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len, - requesttype); + status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, + &data, len, requesttype); + if (status < 0) + return 0; return (u8)(le32_to_cpu(data) & 0x0ff); } @@ -49,7 +52,8 @@ static u16 usb_read16(struct intf_hdl *intfhdl, u32 addr) u16 wvalue; u16 index; u16 len; - __le32 data; + int status; + __le32 data = 0; struct intf_priv *intfpriv = intfhdl->pintfpriv; request = 0x05; @@ -57,8 +61,10 @@ static u16 usb_read16(struct intf_hdl *intfhdl, u32 addr) index = 0; wvalue = (u16)(addr & 0x0000ffff); len = 2; - r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len, - requesttype); + status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, + &data, len, requesttype); + if (status < 0) + return 0; return (u16)(le32_to_cpu(data) & 0xffff); } @@ -69,7 +75,8 @@ static u32 usb_read32(struct intf_hdl *intfhdl, u32 addr) u16 wvalue; u16 index; u16 len; - __le32 data; + int status; + __le32 data = 0; struct intf_priv *intfpriv = intfhdl->pintfpriv; request = 0x05; @@ -77,8 +84,10 @@ static u32 usb_read32(struct intf_hdl *intfhdl, u32 addr) index = 0; wvalue = (u16)(addr & 0x0000ffff); len = 4; - r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len, - requesttype); + status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, + &data, len, requesttype); + if (status < 0) + return 0; return le32_to_cpu(data); } From ff727ab0b7d7a56b5ef281f12abd00c4b85894e9 Mon Sep 17 00:00:00 2001 From: Wang Cheng Date: Mon, 16 May 2022 17:22:41 +0800 Subject: [PATCH 0895/1842] staging: rtl8712: fix uninit-value in r871xu_drv_init() [ Upstream commit 0458e5428e5e959d201a40ffe71d762a79ecedc4 ] When 'tmpU1b' returns from r8712_read8(padapter, EE_9346CR) is 0, 'mac[6]' will not be initialized. 
BUG: KMSAN: uninit-value in r871xu_drv_init+0x2d54/0x3070 drivers/staging/rtl8712/usb_intf.c:541 r871xu_drv_init+0x2d54/0x3070 drivers/staging/rtl8712/usb_intf.c:541 usb_probe_interface+0xf19/0x1600 drivers/usb/core/driver.c:396 really_probe+0x653/0x14b0 drivers/base/dd.c:596 __driver_probe_device+0x3e9/0x530 drivers/base/dd.c:752 driver_probe_device drivers/base/dd.c:782 [inline] __device_attach_driver+0x79f/0x1120 drivers/base/dd.c:899 bus_for_each_drv+0x2d6/0x3f0 drivers/base/bus.c:427 __device_attach+0x593/0x8e0 drivers/base/dd.c:970 device_initial_probe+0x4a/0x60 drivers/base/dd.c:1017 bus_probe_device+0x17b/0x3e0 drivers/base/bus.c:487 device_add+0x1fff/0x26e0 drivers/base/core.c:3405 usb_set_configuration+0x37e9/0x3ed0 drivers/usb/core/message.c:2170 usb_generic_driver_probe+0x13c/0x300 drivers/usb/core/generic.c:238 usb_probe_device+0x309/0x570 drivers/usb/core/driver.c:293 really_probe+0x653/0x14b0 drivers/base/dd.c:596 __driver_probe_device+0x3e9/0x530 drivers/base/dd.c:752 driver_probe_device drivers/base/dd.c:782 [inline] __device_attach_driver+0x79f/0x1120 drivers/base/dd.c:899 bus_for_each_drv+0x2d6/0x3f0 drivers/base/bus.c:427 __device_attach+0x593/0x8e0 drivers/base/dd.c:970 device_initial_probe+0x4a/0x60 drivers/base/dd.c:1017 bus_probe_device+0x17b/0x3e0 drivers/base/bus.c:487 device_add+0x1fff/0x26e0 drivers/base/core.c:3405 usb_new_device+0x1b8e/0x2950 drivers/usb/core/hub.c:2566 hub_port_connect drivers/usb/core/hub.c:5358 [inline] hub_port_connect_change drivers/usb/core/hub.c:5502 [inline] port_event drivers/usb/core/hub.c:5660 [inline] hub_event+0x58e3/0x89e0 drivers/usb/core/hub.c:5742 process_one_work+0xdb6/0x1820 kernel/workqueue.c:2307 worker_thread+0x10b3/0x21e0 kernel/workqueue.c:2454 kthread+0x3c7/0x500 kernel/kthread.c:377 ret_from_fork+0x1f/0x30 Local variable mac created at: r871xu_drv_init+0x1771/0x3070 drivers/staging/rtl8712/usb_intf.c:394 usb_probe_interface+0xf19/0x1600 drivers/usb/core/driver.c:396 KMSAN: uninit-value in r871xu_drv_init https://syzkaller.appspot.com/bug?id=3cd92b1d85428b128503bfa7a250294c9ae00bd8 Reported-by: Tested-by: Reviewed-by: Dan Carpenter Signed-off-by: Wang Cheng Link: https://lore.kernel.org/r/14c3886173dfa4597f0704547c414cfdbcd11d16.1652618244.git.wanngchenng@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/staging/rtl8712/usb_intf.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c index 77f090bdd36e..68d66c3ce2c8 100644 --- a/drivers/staging/rtl8712/usb_intf.c +++ b/drivers/staging/rtl8712/usb_intf.c @@ -539,13 +539,13 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf, } else { AutoloadFail = false; } - if (((mac[0] == 0xff) && (mac[1] == 0xff) && + if ((!AutoloadFail) || + ((mac[0] == 0xff) && (mac[1] == 0xff) && (mac[2] == 0xff) && (mac[3] == 0xff) && (mac[4] == 0xff) && (mac[5] == 0xff)) || ((mac[0] == 0x00) && (mac[1] == 0x00) && (mac[2] == 0x00) && (mac[3] == 0x00) && - (mac[4] == 0x00) && (mac[5] == 0x00)) || - (!AutoloadFail)) { + (mac[4] == 0x00) && (mac[5] == 0x00))) { mac[0] = 0x00; mac[1] = 0xe0; mac[2] = 0x4c; From 1e3b3a5762a9f6cd167828d705354b083cf4709f Mon Sep 17 00:00:00 2001 From: John Ogness Date: Fri, 6 May 2022 23:39:24 +0206 Subject: [PATCH 0896/1842] serial: msm_serial: disable interrupts in __msm_console_write() [ Upstream commit aabdbb1b7a5819e18c403334a31fb0cc2c06ad41 ] __msm_console_write() assumes that interrupts are disabled, but with threaded console 
printers it is possible that the write() callback of the console is called with interrupts enabled. Explicitly disable interrupts using local_irq_save() to preserve the assumed context. Reported-by: Marek Szyprowski Reviewed-by: Petr Mladek Signed-off-by: John Ogness Link: https://lore.kernel.org/r/20220506213324.470461-1-john.ogness@linutronix.de Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/serial/msm_serial.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c index 26bcbec5422e..27023a56f3ac 100644 --- a/drivers/tty/serial/msm_serial.c +++ b/drivers/tty/serial/msm_serial.c @@ -1593,6 +1593,7 @@ static inline struct uart_port *msm_get_port_from_line(unsigned int line) static void __msm_console_write(struct uart_port *port, const char *s, unsigned int count, bool is_uartdm) { + unsigned long flags; int i; int num_newlines = 0; bool replaced = false; @@ -1610,6 +1611,8 @@ static void __msm_console_write(struct uart_port *port, const char *s, num_newlines++; count += num_newlines; + local_irq_save(flags); + if (port->sysrq) locked = 0; else if (oops_in_progress) @@ -1655,6 +1658,8 @@ static void __msm_console_write(struct uart_port *port, const char *s, if (locked) spin_unlock(&port->lock); + + local_irq_restore(flags); } static void msm_console_write(struct console *co, const char *s, From e20bc8b5a2920f0382b2be13a86571db35a575bf Mon Sep 17 00:00:00 2001 From: Hao Luo Date: Mon, 16 May 2022 12:09:51 -0700 Subject: [PATCH 0897/1842] kernfs: Separate kernfs_pr_cont_buf and rename_lock. [ Upstream commit 1a702dc88e150487c9c173a249b3d236498b9183 ] Previously the protection of kernfs_pr_cont_buf was piggy backed by rename_lock, which means that pr_cont() needs to be protected under rename_lock. This can cause potential circular lock dependencies. If there is an OOM, we have the following call hierarchy: -> cpuset_print_current_mems_allowed() -> pr_cont_cgroup_name() -> pr_cont_kernfs_name() pr_cont_kernfs_name() will grab rename_lock and call printk. So we have the following lock dependencies: kernfs_rename_lock -> console_sem Sometimes, printk does a wakeup before releasing console_sem, which has the dependence chain: console_sem -> p->pi_lock -> rq->lock Now, imagine one wants to read cgroup_name under rq->lock, for example, printing cgroup_name in a tracepoint in the scheduler code. They will be holding rq->lock and take rename_lock: rq->lock -> kernfs_rename_lock Now they will deadlock. A prevention to this circular lock dependency is to separate the protection of pr_cont_buf from rename_lock. In principle, rename_lock is to protect the integrity of cgroup name when copying to buf. Once pr_cont_buf has got its content, rename_lock can be dropped. So it's safe to drop rename_lock after kernfs_name_locked (and kernfs_path_from_node_locked) and rely on a dedicated pr_cont_lock to protect pr_cont_buf. 
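A minimal user-space sketch of the resulting locking shape (pthread mutexes stand in for the kernel spinlocks and simplified helpers replace the real kernfs_* functions; this is an illustration, not the kernfs code): the print buffer gets its own lock, and the name lock is only taken, internally and briefly, while the name is copied, so the print path never holds it across the "console" call.

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static pthread_mutex_t rename_lock = PTHREAD_MUTEX_INITIALIZER;   /* protects node names */
    static pthread_mutex_t pr_cont_lock = PTHREAD_MUTEX_INITIALIZER;  /* protects pr_cont_buf only */
    static char pr_cont_buf[256];

    struct node { char name[64]; };

    /* copies the name under rename_lock internally, like kernfs_name() */
    static void node_name(struct node *n, char *buf, size_t len)
    {
            pthread_mutex_lock(&rename_lock);
            strncpy(buf, n->name, len - 1);
            buf[len - 1] = '\0';
            pthread_mutex_unlock(&rename_lock);
    }

    static void pr_cont_node_name(struct node *n)
    {
            pthread_mutex_lock(&pr_cont_lock);
            node_name(n, pr_cont_buf, sizeof(pr_cont_buf));
            /* rename_lock is already dropped here, so this "console" call can
             * wake things up without creating a rename_lock -> console chain */
            printf("%s", pr_cont_buf);
            pthread_mutex_unlock(&pr_cont_lock);
    }

    int main(void)
    {
            struct node n = { .name = "cgroup-a" };

            pr_cont_node_name(&n);
            printf("\n");
            return 0;
    }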
Acked-by: Tejun Heo Signed-off-by: Hao Luo Link: https://lore.kernel.org/r/20220516190951.3144144-1-haoluo@google.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- fs/kernfs/dir.c | 31 +++++++++++++++++++------------ 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c index 9aec80b9d7c6..afb39e1bbe3b 100644 --- a/fs/kernfs/dir.c +++ b/fs/kernfs/dir.c @@ -19,7 +19,15 @@ DEFINE_MUTEX(kernfs_mutex); static DEFINE_SPINLOCK(kernfs_rename_lock); /* kn->parent and ->name */ -static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by rename_lock */ +/* + * Don't use rename_lock to piggy back on pr_cont_buf. We don't want to + * call pr_cont() while holding rename_lock. Because sometimes pr_cont() + * will perform wakeups when releasing console_sem. Holding rename_lock + * will introduce deadlock if the scheduler reads the kernfs_name in the + * wakeup path. + */ +static DEFINE_SPINLOCK(kernfs_pr_cont_lock); +static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by pr_cont_lock */ static DEFINE_SPINLOCK(kernfs_idr_lock); /* root->ino_idr */ #define rb_to_kn(X) rb_entry((X), struct kernfs_node, rb) @@ -230,12 +238,12 @@ void pr_cont_kernfs_name(struct kernfs_node *kn) { unsigned long flags; - spin_lock_irqsave(&kernfs_rename_lock, flags); + spin_lock_irqsave(&kernfs_pr_cont_lock, flags); - kernfs_name_locked(kn, kernfs_pr_cont_buf, sizeof(kernfs_pr_cont_buf)); + kernfs_name(kn, kernfs_pr_cont_buf, sizeof(kernfs_pr_cont_buf)); pr_cont("%s", kernfs_pr_cont_buf); - spin_unlock_irqrestore(&kernfs_rename_lock, flags); + spin_unlock_irqrestore(&kernfs_pr_cont_lock, flags); } /** @@ -249,10 +257,10 @@ void pr_cont_kernfs_path(struct kernfs_node *kn) unsigned long flags; int sz; - spin_lock_irqsave(&kernfs_rename_lock, flags); + spin_lock_irqsave(&kernfs_pr_cont_lock, flags); - sz = kernfs_path_from_node_locked(kn, NULL, kernfs_pr_cont_buf, - sizeof(kernfs_pr_cont_buf)); + sz = kernfs_path_from_node(kn, NULL, kernfs_pr_cont_buf, + sizeof(kernfs_pr_cont_buf)); if (sz < 0) { pr_cont("(error)"); goto out; @@ -266,7 +274,7 @@ void pr_cont_kernfs_path(struct kernfs_node *kn) pr_cont("%s", kernfs_pr_cont_buf); out: - spin_unlock_irqrestore(&kernfs_rename_lock, flags); + spin_unlock_irqrestore(&kernfs_pr_cont_lock, flags); } /** @@ -864,13 +872,12 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent, lockdep_assert_held(&kernfs_mutex); - /* grab kernfs_rename_lock to piggy back on kernfs_pr_cont_buf */ - spin_lock_irq(&kernfs_rename_lock); + spin_lock_irq(&kernfs_pr_cont_lock); len = strlcpy(kernfs_pr_cont_buf, path, sizeof(kernfs_pr_cont_buf)); if (len >= sizeof(kernfs_pr_cont_buf)) { - spin_unlock_irq(&kernfs_rename_lock); + spin_unlock_irq(&kernfs_pr_cont_lock); return NULL; } @@ -882,7 +889,7 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent, parent = kernfs_find_ns(parent, name, ns); } - spin_unlock_irq(&kernfs_rename_lock); + spin_unlock_irq(&kernfs_pr_cont_lock); return parent; } From 668c3f9fa2ddc93bab26e5acd7e6f43917c71e7d Mon Sep 17 00:00:00 2001 From: Liu Xinpeng Date: Tue, 26 Apr 2022 22:53:29 +0800 Subject: [PATCH 0898/1842] watchdog: wdat_wdt: Stop watchdog when rebooting the system [ Upstream commit 27fdf84510a1374748904db43f6755f912736d92 ] Executing reboot command several times on the machine "Dell PowerEdge R740", UEFI security detection stopped machine with the following prompt: UEFI0082: The system was reset due to a timeout from the watchdog timer. 
Check the System Event Log (SEL) or crash dumps from the Operating System to identify the source that triggered the watchdog timer reset. Update the firmware or driver for the identified device. iDRAC has a warning event: "The watchdog timer reset the system". This patch fixes this issue by adding the reboot notifier. Signed-off-by: Liu Xinpeng Reviewed-by: Guenter Roeck Link: https://lore.kernel.org/r/1650984810-6247-3-git-send-email-liuxp11@chinatelecom.cn Signed-off-by: Guenter Roeck Signed-off-by: Wim Van Sebroeck Signed-off-by: Sasha Levin --- drivers/watchdog/wdat_wdt.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/watchdog/wdat_wdt.c b/drivers/watchdog/wdat_wdt.c index 3065dd670a18..c60723f5ed99 100644 --- a/drivers/watchdog/wdat_wdt.c +++ b/drivers/watchdog/wdat_wdt.c @@ -462,6 +462,7 @@ static int wdat_wdt_probe(struct platform_device *pdev) return ret; watchdog_set_nowayout(&wdat->wdd, nowayout); + watchdog_stop_on_reboot(&wdat->wdd); return devm_watchdog_register_device(dev, &wdat->wdd); }
From 7eb32f286e6841ca091e737e5f8b02fee365d16c Mon Sep 17 00:00:00 2001 From: Guoqing Jiang Date: Fri, 29 Apr 2022 16:49:09 +0800 Subject: [PATCH 0899/1842] md: protect md_unregister_thread from reentrancy [ Upstream commit 1e267742283a4b5a8ca65755c44166be27e9aa0f ] Generally, md_unregister_thread is called with reconfig_mutex held, but raid_message in dm-raid doesn't hold reconfig_mutex to unregister the thread, so md_unregister_thread can be called simultaneously from two call sites in theory. Then after the previous commit, which removed the protection of reconfig_mutex for md_unregister_thread completely, the potential issue could be worse than before. Let's take pers_lock at the beginning of the function to protect against reentrancy. Reported-by: Donald Buczek Signed-off-by: Guoqing Jiang Signed-off-by: Song Liu Signed-off-by: Sasha Levin --- drivers/md/md.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/drivers/md/md.c b/drivers/md/md.c index 7a9701adee73..5bd1edbb415b 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -7970,17 +7970,22 @@ EXPORT_SYMBOL(md_register_thread); void md_unregister_thread(struct md_thread **threadp) { - struct md_thread *thread = *threadp; - if (!thread) - return; - pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk)); - /* Locking ensures that mddev_unlock does not wake_up a + struct md_thread *thread; + + /* + * Locking ensures that mddev_unlock does not wake_up a * non-existent thread */ spin_lock(&pers_lock); + thread = *threadp; + if (!thread) { + spin_unlock(&pers_lock); + return; + } *threadp = NULL; spin_unlock(&pers_lock); + pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk)); kthread_stop(thread->tsk); kfree(thread); }
From ebfe2797253f64f728855d377fdc7d136fdbcfb5 Mon Sep 17 00:00:00 2001 From: Hannes Reinecke Date: Mon, 23 May 2022 14:02:44 +0200 Subject: [PATCH 0900/1842] scsi: myrb: Fix up null pointer access on myrb_cleanup() [ Upstream commit f9f0a46141e2e39bedb4779c88380d1b5f018c14 ] When myrb_probe() fails the callback might not be set, so we need to validate the 'disable_intr' callback in myrb_cleanup() to not cause a null pointer exception. And while at it do not call myrb_cleanup() if we cannot enable the PCI device at all. Link: https://lore.kernel.org/r/20220523120244.99515-1-hare@suse.de Reported-by: Zheyu Ma Tested-by: Zheyu Ma Signed-off-by: Hannes Reinecke Signed-off-by: Martin K.
Petersen Signed-off-by: Sasha Levin --- drivers/scsi/myrb.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/myrb.c b/drivers/scsi/myrb.c index 5fa0f4ed6565..ad17c2beaaca 100644 --- a/drivers/scsi/myrb.c +++ b/drivers/scsi/myrb.c @@ -1241,7 +1241,8 @@ static void myrb_cleanup(struct myrb_hba *cb) myrb_unmap(cb); if (cb->mmio_base) { - cb->disable_intr(cb->io_base); + if (cb->disable_intr) + cb->disable_intr(cb->io_base); iounmap(cb->mmio_base); } if (cb->irq) @@ -3515,9 +3516,13 @@ static struct myrb_hba *myrb_detect(struct pci_dev *pdev, mutex_init(&cb->dcmd_mutex); mutex_init(&cb->dma_mutex); cb->pdev = pdev; + cb->host = shost; - if (pci_enable_device(pdev)) - goto failure; + if (pci_enable_device(pdev)) { + dev_err(&pdev->dev, "Failed to enable PCI device\n"); + scsi_host_put(shost); + return NULL; + } if (privdata->hw_init == DAC960_PD_hw_init || privdata->hw_init == DAC960_P_hw_init) { From 7fa8312879f78d255a63b372473a7016d4c29d67 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Mon, 23 May 2022 22:05:24 +0200 Subject: [PATCH 0901/1842] Revert "net: af_key: add check for pfkey_broadcast in function pfkey_process" [ Upstream commit 9c90c9b3e50e16d03c7f87d63e9db373974781e0 ] This reverts commit 4dc2a5a8f6754492180741facf2a8787f2c415d7. A non-zero return value from pfkey_broadcast() does not necessarily mean an error occurred as this function returns -ESRCH when no registered listener received the message. In particular, a call with BROADCAST_PROMISC_ONLY flag and null one_sk argument can never return zero so that this commit in fact prevents processing any PF_KEY message. One visible effect is that racoon daemon fails to find encryption algorithms like aes and refuses to start. Excluding -ESRCH return value would fix this but it's not obvious that we really want to bail out here and most other callers of pfkey_broadcast() also ignore the return value. Also, as pointed out by Steffen Klassert, PF_KEY is kind of deprecated and newer userspace code should use netlink instead so that we should only disturb the code for really important fixes. v2: add a comment explaining why is the return value ignored Signed-off-by: Michal Kubecek Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- net/key/af_key.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/net/key/af_key.c b/net/key/af_key.c index 6b7ed5568c09..2aa16a171285 100644 --- a/net/key/af_key.c +++ b/net/key/af_key.c @@ -2830,10 +2830,12 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb void *ext_hdrs[SADB_EXT_MAX]; int err; - err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, - BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); - if (err) - return err; + /* Non-zero return value of pfkey_broadcast() does not always signal + * an error and even on an actual error we may still want to process + * the message so rather ignore the return value. + */ + pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, + BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); memset(ext_hdrs, 0, sizeof(ext_hdrs)); err = parse_exthdrs(skb, hdr, ext_hdrs); From 3e5768683022f2ac878d38c0332bf1550e3db120 Mon Sep 17 00:00:00 2001 From: Venky Shankar Date: Thu, 10 Mar 2022 09:34:19 -0500 Subject: [PATCH 0902/1842] ceph: allow ceph.dir.rctime xattr to be updatable [ Upstream commit d7a2dc523085f8b8c60548ceedc696934aefeb0e ] `rctime' has been a pain point in cephfs due to its buggy nature - inconsistent values reported and those sorts. 
Fixing rctime is non-trivial needing an overall redesign of the entire nested statistics infrastructure. As a workaround, PR http://github.com/ceph/ceph/pull/37938 allows this extended attribute to be manually set. This allows users to "fixup" inconsistent rctime values. While this sounds messy, its probably the wisest approach allowing users/scripts to workaround buggy rctime values. The above PR enables Ceph MDS to allow manually setting rctime extended attribute with the corresponding user-land changes. We may as well allow the same to be done via kclient for parity. Signed-off-by: Venky Shankar Reviewed-by: Xiubo Li Signed-off-by: Ilya Dryomov Signed-off-by: Sasha Levin --- fs/ceph/xattr.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c index 197cb1234341..76322c0f6e5f 100644 --- a/fs/ceph/xattr.c +++ b/fs/ceph/xattr.c @@ -317,6 +317,14 @@ static ssize_t ceph_vxattrcb_snap_btime(struct ceph_inode_info *ci, char *val, } #define XATTR_RSTAT_FIELD(_type, _name) \ XATTR_NAME_CEPH(_type, _name, VXATTR_FLAG_RSTAT) +#define XATTR_RSTAT_FIELD_UPDATABLE(_type, _name) \ + { \ + .name = CEPH_XATTR_NAME(_type, _name), \ + .name_size = sizeof (CEPH_XATTR_NAME(_type, _name)), \ + .getxattr_cb = ceph_vxattrcb_ ## _type ## _ ## _name, \ + .exists_cb = NULL, \ + .flags = VXATTR_FLAG_RSTAT, \ + } #define XATTR_LAYOUT_FIELD(_type, _name, _field) \ { \ .name = CEPH_XATTR_NAME2(_type, _name, _field), \ @@ -354,7 +362,7 @@ static struct ceph_vxattr ceph_dir_vxattrs[] = { XATTR_RSTAT_FIELD(dir, rfiles), XATTR_RSTAT_FIELD(dir, rsubdirs), XATTR_RSTAT_FIELD(dir, rbytes), - XATTR_RSTAT_FIELD(dir, rctime), + XATTR_RSTAT_FIELD_UPDATABLE(dir, rctime), { .name = "ceph.dir.pin", .name_size = sizeof("ceph.dir.pin"), From fee8ae0a0bb66eb7730c22f44fbd7203f63c2eab Mon Sep 17 00:00:00 2001 From: Gong Yuanjun Date: Tue, 17 May 2022 17:57:00 +0800 Subject: [PATCH 0903/1842] drm/radeon: fix a possible null pointer dereference [ Upstream commit a2b28708b645c5632dc93669ab06e97874c8244f ] In radeon_fp_native_mode(), the return value of drm_mode_duplicate() is assigned to mode, which will lead to a NULL pointer dereference on failure of drm_mode_duplicate(). Add a check to avoid npd. The failure status of drm_cvt_mode() on the other path is checked too. Signed-off-by: Gong Yuanjun Signed-off-by: Alex Deucher Signed-off-by: Sasha Levin --- drivers/gpu/drm/radeon/radeon_connectors.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c index e30834434442..ef111d460be2 100644 --- a/drivers/gpu/drm/radeon/radeon_connectors.c +++ b/drivers/gpu/drm/radeon/radeon_connectors.c @@ -473,6 +473,8 @@ static struct drm_display_mode *radeon_fp_native_mode(struct drm_encoder *encode native_mode->vdisplay != 0 && native_mode->clock != 0) { mode = drm_mode_duplicate(dev, native_mode); + if (!mode) + return NULL; mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER; drm_mode_set_name(mode); @@ -487,6 +489,8 @@ static struct drm_display_mode *radeon_fp_native_mode(struct drm_encoder *encode * simpler. 
*/ mode = drm_cvt_mode(dev, native_mode->hdisplay, native_mode->vdisplay, 60, true, false, false); + if (!mode) + return NULL; mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER; DRM_DEBUG_KMS("Adding cvt approximation of native panel mode %s\n", mode->name); } From 82876878210ac3b0855742861d4cd0be78ff0315 Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Tue, 24 May 2022 01:46:22 +0900 Subject: [PATCH 0904/1842] modpost: fix undefined behavior of is_arm_mapping_symbol() [ Upstream commit d6b732666a1bae0df3c3ae06925043bba34502b1 ] The return value of is_arm_mapping_symbol() is unpredictable when "$" is passed in. strchr(3) says: The strchr() and strrchr() functions return a pointer to the matched character or NULL if the character is not found. The terminating null byte is considered part of the string, so that if c is specified as '\0', these functions return a pointer to the terminator. When str[1] is '\0', strchr("axtd", str[1]) is not NULL, and str[2] is referenced (i.e. buffer overrun). Test code --------- char str1[] = "abc"; char str2[] = "ab"; strcpy(str1, "$"); strcpy(str2, "$"); printf("test1: %d\n", is_arm_mapping_symbol(str1)); printf("test2: %d\n", is_arm_mapping_symbol(str2)); Result ------ test1: 0 test2: 1 Signed-off-by: Masahiro Yamada Reviewed-by: Nick Desaulniers Signed-off-by: Sasha Levin --- scripts/mod/modpost.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c index a21aa74b4948..79aef50ede17 100644 --- a/scripts/mod/modpost.c +++ b/scripts/mod/modpost.c @@ -1271,7 +1271,8 @@ static int secref_whitelist(const struct sectioncheck *mismatch, static inline int is_arm_mapping_symbol(const char *str) { - return str[0] == '$' && strchr("axtd", str[1]) + return str[0] == '$' && + (str[1] == 'a' || str[1] == 'd' || str[1] == 't' || str[1] == 'x') && (str[2] == '\0' || str[2] == '.'); } From 320acaf84a6469492f3355b75562f7472b91aaf2 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 2 May 2022 12:15:23 +0200 Subject: [PATCH 0905/1842] x86/cpu: Elide KCSAN for cpu_has() and friends [ Upstream commit a6a5eb269f6f3a2fe392f725a8d9052190c731e2 ] As x86 uses the headers, the regular forms of all bitops are instrumented with explicit calls to KASAN and KCSAN checks. As these are explicit calls, these are not suppressed by the noinstr function attribute. This can result in calls to those check functions in noinstr code, which objtool warns about: vmlinux.o: warning: objtool: enter_from_user_mode+0x24: call to __kcsan_check_access() leaves .noinstr.text section vmlinux.o: warning: objtool: syscall_enter_from_user_mode+0x28: call to __kcsan_check_access() leaves .noinstr.text section vmlinux.o: warning: objtool: syscall_enter_from_user_mode_prepare+0x24: call to __kcsan_check_access() leaves .noinstr.text section vmlinux.o: warning: objtool: irqentry_enter_from_user_mode+0x24: call to __kcsan_check_access() leaves .noinstr.text section Prevent this by using the arch_*() bitops, which are the underlying bitops without explciit instrumentation. 
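As a rough model of why this one-token change matters (a sketch with stand-in names, not the actual asm-generic instrumented-bitops headers): the regular bitop is essentially the arch_ variant plus an explicit out-of-line check call, and it is that extra call which objtool sees escaping the noinstr section.

    #include <stdbool.h>
    #include <stdio.h>

    static int instrumentation_calls;       /* models the __kcsan_check_access() call */

    void kcsan_check(const volatile void *ptr, unsigned long size)
    {
            (void)ptr;
            (void)size;
            instrumentation_calls++;        /* in the kernel this is an out-of-line call */
    }

    /* uninstrumented arch helper: safe to use from noinstr-like code */
    bool arch_test_bit(unsigned int nr, const volatile unsigned long *addr)
    {
            return (addr[nr / (8 * sizeof(long))] >> (nr % (8 * sizeof(long)))) & 1UL;
    }

    /* regular helper: adds the explicit check call that objtool flags */
    bool test_bit(unsigned int nr, const volatile unsigned long *addr)
    {
            kcsan_check(addr, sizeof(long));
            return arch_test_bit(nr, addr);
    }

    int main(void)
    {
            unsigned long caps[1] = { 1UL << 4 };
            bool a, b;

            a = arch_test_bit(4, caps);
            printf("arch_test_bit(4)=%d, instrumentation calls so far: %d\n",
                   a, instrumentation_calls);
            b = test_bit(4, caps);
            printf("test_bit(4)=%d, instrumentation calls so far: %d\n",
                   b, instrumentation_calls);
            return 0;
    }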
[null: Changelog] Reported-by: kernel test robot Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220502111216.290518605@infradead.org Signed-off-by: Sasha Levin --- arch/x86/include/asm/cpufeature.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h index 59bf91c57aa8..619c1f80a2ab 100644 --- a/arch/x86/include/asm/cpufeature.h +++ b/arch/x86/include/asm/cpufeature.h @@ -49,7 +49,7 @@ extern const char * const x86_power_flags[32]; extern const char * const x86_bug_flags[NBUGINTS*32]; #define test_cpu_cap(c, bit) \ - test_bit(bit, (unsigned long *)((c)->x86_capability)) + arch_test_bit(bit, (unsigned long *)((c)->x86_capability)) /* * There are 32 bits/features in each mask word. The high bits From cb8da20d71f983410a9bd3f4f0a7c09cedff169b Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 2 May 2022 12:30:20 +0200 Subject: [PATCH 0906/1842] jump_label,noinstr: Avoid instrumentation for JUMP_LABEL=n builds [ Upstream commit 656d054e0a15ec327bd82801ccd58201e59f6896 ] When building x86_64 with JUMP_LABEL=n it's possible for instrumentation to sneak into noinstr: vmlinux.o: warning: objtool: exit_to_user_mode+0x14: call to static_key_count.constprop.0() leaves .noinstr.text section vmlinux.o: warning: objtool: syscall_exit_to_user_mode+0x2d: call to static_key_count.constprop.0() leaves .noinstr.text section vmlinux.o: warning: objtool: irqentry_exit_to_user_mode+0x1b: call to static_key_count.constprop.0() leaves .noinstr.text section Switch to arch_ prefixed atomic to avoid the explicit instrumentation. Reported-by: kernel test robot Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Sasha Levin --- include/linux/jump_label.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h index 32809624d422..e67ee4d7318f 100644 --- a/include/linux/jump_label.h +++ b/include/linux/jump_label.h @@ -249,9 +249,9 @@ extern void static_key_disable_cpuslocked(struct static_key *key); #include #include -static inline int static_key_count(struct static_key *key) +static __always_inline int static_key_count(struct static_key *key) { - return atomic_read(&key->enabled); + return arch_atomic_read(&key->enabled); } static __always_inline void jump_label_init(void) From c0868f6e728c3c28bef0e8bee89d2daf86a8bbca Mon Sep 17 00:00:00 2001 From: Yu Kuai Date: Sat, 21 May 2022 15:37:44 +0800 Subject: [PATCH 0907/1842] nbd: call genl_unregister_family() first in nbd_cleanup() [ Upstream commit 06c4da89c24e7023ea448cadf8e9daf06a0aae6e ] Otherwise there may be race between module removal and the handling of netlink command, which can lead to the oops as shown below: BUG: kernel NULL pointer dereference, address: 0000000000000098 Oops: 0002 [#1] SMP PTI CPU: 1 PID: 31299 Comm: nbd-client Tainted: G E 5.14.0-rc4 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996) RIP: 0010:down_write+0x1a/0x50 Call Trace: start_creating+0x89/0x130 debugfs_create_dir+0x1b/0x130 nbd_start_device+0x13d/0x390 [nbd] nbd_genl_connect+0x42f/0x748 [nbd] genl_family_rcv_msg_doit.isra.0+0xec/0x150 genl_rcv_msg+0xe5/0x1e0 netlink_rcv_skb+0x55/0x100 genl_rcv+0x29/0x40 netlink_unicast+0x1a8/0x250 netlink_sendmsg+0x21b/0x430 ____sys_sendmsg+0x2a4/0x2d0 ___sys_sendmsg+0x81/0xc0 __sys_sendmsg+0x62/0xb0 __x64_sys_sendmsg+0x1f/0x30 do_syscall_64+0x3b/0xc0 entry_SYSCALL_64_after_hwframe+0x44/0xae Modules linked in: nbd(E-) Signed-off-by: Hou Tao Signed-off-by: Yu Kuai 
Reviewed-by: Josef Bacik Link: https://lore.kernel.org/r/20220521073749.3146892-2-yukuai3@huawei.com Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- drivers/block/nbd.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index ecde800ba210..1ca326c66521 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -2461,6 +2461,12 @@ static void __exit nbd_cleanup(void) struct nbd_device *nbd; LIST_HEAD(del_list); + /* + * Unregister netlink interface prior to waiting + * for the completion of netlink commands. + */ + genl_unregister_family(&nbd_genl_family); + nbd_dbg_close(); mutex_lock(&nbd_index_mutex); @@ -2476,7 +2482,6 @@ static void __exit nbd_cleanup(void) } idr_destroy(&nbd_index_idr); - genl_unregister_family(&nbd_genl_family); unregister_blkdev(NBD_MAJOR, "nbd"); }
From 122e4adaff2439f1cc18cc7e931980fa7560df5c Mon Sep 17 00:00:00 2001 From: Yu Kuai Date: Sat, 21 May 2022 15:37:45 +0800 Subject: [PATCH 0908/1842] nbd: fix race between nbd_alloc_config() and module removal [ Upstream commit c55b2b983b0fa012942c3eb16384b2b722caa810 ] When the nbd module is being removed, nbd_alloc_config() may be called concurrently by nbd_genl_connect(); although try_module_get() will return false, nbd_alloc_config() doesn't handle it. The race may lead to the leak of nbd_config and its related resources (e.g., recv_workq) and an oops in nbd_read_stat() due to the unload of the nbd module as shown below: BUG: kernel NULL pointer dereference, address: 0000000000000040 Oops: 0000 [#1] SMP PTI CPU: 5 PID: 13840 Comm: kworker/u17:33 Not tainted 5.14.0+ #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996) Workqueue: knbd16-recv recv_work [nbd] RIP: 0010:nbd_read_stat.cold+0x130/0x1a4 [nbd] Call Trace: recv_work+0x3b/0xb0 [nbd] process_one_work+0x1ed/0x390 worker_thread+0x4a/0x3d0 kthread+0x12a/0x150 ret_from_fork+0x22/0x30 Fix it by checking the return value of try_module_get() in nbd_alloc_config(). As nbd_alloc_config() may return ERR_PTR(-ENODEV), assign nbd->config only when nbd_alloc_config() succeeds to ensure the value of nbd->config is binary (valid or NULL). Also add a debug message to check the reference counter of nbd_config during module removal.
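The fix follows a common kernel pattern; a hedged user-space sketch of its shape (ERR_PTR/IS_ERR reimplemented locally and try_module_get() mocked, so this is not the nbd code itself): take the module reference first, turn failure into ERR_PTR(-ENODEV), and drop the reference again on any later error path so nothing is leaked.

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define ERR_PTR(err)  ((void *)(long)(err))
    #define IS_ERR(ptr)   ((unsigned long)(ptr) >= (unsigned long)-4095)
    #define PTR_ERR(ptr)  ((long)(ptr))

    static bool module_alive = true;          /* stand-in for the module being loaded */

    static bool try_module_get(void) { return module_alive; }
    static void module_put(void) { }

    struct config { int refs; };

    static struct config *alloc_config(void)
    {
            struct config *cfg;

            if (!try_module_get())            /* module going away: report -ENODEV */
                    return ERR_PTR(-ENODEV);

            cfg = calloc(1, sizeof(*cfg));
            if (!cfg) {
                    module_put();             /* don't leak the module reference */
                    return ERR_PTR(-ENOMEM);
            }
            return cfg;
    }

    int main(void)
    {
            struct config *cfg = alloc_config();

            if (IS_ERR(cfg)) {
                    printf("alloc failed: %ld\n", PTR_ERR(cfg));
                    return 1;
            }
            printf("config allocated\n");
            free(cfg);
            module_put();                     /* mirror of dropping the config later */
            return 0;
    }

The caller-side half of the fix is the same idea: only assign the shared pointer once the allocation has definitely succeeded, so it is always either valid or NULL.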
Signed-off-by: Hou Tao Signed-off-by: Yu Kuai Reviewed-by: Josef Bacik Link: https://lore.kernel.org/r/20220521073749.3146892-3-yukuai3@huawei.com Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- drivers/block/nbd.c | 28 +++++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 1ca326c66521..74afa50c7864 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -1472,15 +1472,20 @@ static struct nbd_config *nbd_alloc_config(void) { struct nbd_config *config; + if (!try_module_get(THIS_MODULE)) + return ERR_PTR(-ENODEV); + config = kzalloc(sizeof(struct nbd_config), GFP_NOFS); - if (!config) - return NULL; + if (!config) { + module_put(THIS_MODULE); + return ERR_PTR(-ENOMEM); + } + atomic_set(&config->recv_threads, 0); init_waitqueue_head(&config->recv_wq); init_waitqueue_head(&config->conn_wait); config->blksize = NBD_DEF_BLKSIZE; atomic_set(&config->live_connections, 0); - try_module_get(THIS_MODULE); return config; } @@ -1507,12 +1512,13 @@ static int nbd_open(struct block_device *bdev, fmode_t mode) mutex_unlock(&nbd->config_lock); goto out; } - config = nbd->config = nbd_alloc_config(); - if (!config) { - ret = -ENOMEM; + config = nbd_alloc_config(); + if (IS_ERR(config)) { + ret = PTR_ERR(config); mutex_unlock(&nbd->config_lock); goto out; } + nbd->config = config; refcount_set(&nbd->config_refs, 1); refcount_inc(&nbd->refs); mutex_unlock(&nbd->config_lock); @@ -1934,13 +1940,14 @@ static int nbd_genl_connect(struct sk_buff *skb, struct genl_info *info) nbd_put(nbd); return -EINVAL; } - config = nbd->config = nbd_alloc_config(); - if (!nbd->config) { + config = nbd_alloc_config(); + if (IS_ERR(config)) { mutex_unlock(&nbd->config_lock); nbd_put(nbd); printk(KERN_ERR "nbd: couldn't allocate config\n"); - return -ENOMEM; + return PTR_ERR(config); } + nbd->config = config; refcount_set(&nbd->config_refs, 1); set_bit(NBD_RT_BOUND, &config->runtime_flags); @@ -2476,6 +2483,9 @@ static void __exit nbd_cleanup(void) while (!list_empty(&del_list)) { nbd = list_first_entry(&del_list, struct nbd_device, list); list_del_init(&nbd->list); + if (refcount_read(&nbd->config_refs)) + printk(KERN_ERR "nbd: possibly leaking nbd_config (ref %d)\n", + refcount_read(&nbd->config_refs)); if (refcount_read(&nbd->refs) != 1) printk(KERN_ERR "nbd: possibly leaking a device\n"); nbd_put(nbd); From f72df77600a43e59b3189e53b47f8685739867d3 Mon Sep 17 00:00:00 2001 From: Yu Kuai Date: Sat, 21 May 2022 15:37:47 +0800 Subject: [PATCH 0909/1842] nbd: fix io hung while disconnecting device [ Upstream commit 09dadb5985023e27d4740ebd17e6fea4640110e5 ] In our tests, "qemu-nbd" triggers a io hung: INFO: task qemu-nbd:11445 blocked for more than 368 seconds. Not tainted 5.18.0-rc3-next-20220422-00003-g2176915513ca #884 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:qemu-nbd state:D stack: 0 pid:11445 ppid: 1 flags:0x00000000 Call Trace: __schedule+0x480/0x1050 ? _raw_spin_lock_irqsave+0x3e/0xb0 schedule+0x9c/0x1b0 blk_mq_freeze_queue_wait+0x9d/0xf0 ? 
ipi_rseq+0x70/0x70 blk_mq_freeze_queue+0x2b/0x40 nbd_add_socket+0x6b/0x270 [nbd] nbd_ioctl+0x383/0x510 [nbd] blkdev_ioctl+0x18e/0x3e0 __x64_sys_ioctl+0xac/0x120 do_syscall_64+0x35/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae RIP: 0033:0x7fd8ff706577 RSP: 002b:00007fd8fcdfebf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 0000000040000000 RCX: 00007fd8ff706577 RDX: 000000000000000d RSI: 000000000000ab00 RDI: 000000000000000f RBP: 000000000000000f R08: 000000000000fbe8 R09: 000055fe497c62b0 R10: 00000002aff20000 R11: 0000000000000246 R12: 000000000000006d R13: 0000000000000000 R14: 00007ffe82dc5e70 R15: 00007fd8fcdff9c0 "qemu-ndb -d" will call ioctl 'NBD_DISCONNECT' first, however, following message was found: block nbd0: Send disconnect failed -32 Which indicate that something is wrong with the server. Then, "qemu-nbd -d" will call ioctl 'NBD_CLEAR_SOCK', however ioctl can't clear requests after commit 2516ab1543fd("nbd: only clear the queue on device teardown"). And in the meantime, request can't complete through timeout because nbd_xmit_timeout() will always return 'BLK_EH_RESET_TIMER', which means such request will never be completed in this situation. Now that the flag 'NBD_CMD_INFLIGHT' can make sure requests won't complete multiple times, switch back to call nbd_clear_sock() in nbd_clear_sock_ioctl(), so that inflight requests can be cleared. Signed-off-by: Yu Kuai Reviewed-by: Josef Bacik Link: https://lore.kernel.org/r/20220521073749.3146892-5-yukuai3@huawei.com Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- drivers/block/nbd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 74afa50c7864..4a6b82d434ee 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -1359,7 +1359,7 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b static void nbd_clear_sock_ioctl(struct nbd_device *nbd, struct block_device *bdev) { - sock_shutdown(nbd); + nbd_clear_sock(nbd); __invalidate_device(bdev, true); nbd_bdev_reset(bdev); if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF, From 01137d898039cd1ca44effc8f35b35e39309d078 Mon Sep 17 00:00:00 2001 From: Christian Borntraeger Date: Mon, 30 May 2022 11:27:05 +0200 Subject: [PATCH 0910/1842] s390/gmap: voluntarily schedule during key setting [ Upstream commit 6d5946274df1fff539a7eece458a43be733d1db8 ] With large and many guest with storage keys it is possible to create large latencies or stalls during initial key setting: rcu: INFO: rcu_sched self-detected stall on CPU rcu: 18-....: (2099 ticks this GP) idle=54e/1/0x4000000000000002 softirq=35598716/35598716 fqs=998 (t=2100 jiffies g=155867385 q=20879) Task dump for CPU 18: CPU 1/KVM R running task 0 1030947 256019 0x06000004 Call Trace: sched_show_task rcu_dump_cpu_stacks rcu_sched_clock_irq update_process_times tick_sched_handle tick_sched_timer __hrtimer_run_queues hrtimer_interrupt do_IRQ ext_int_handler ptep_zap_key The mmap lock is held during the page walking but since this is a semaphore scheduling is still possible. Same for the kvm srcu. To minimize overhead do this on every segment table entry or large page. 
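The pacing itself is simple; a user-space sketch of the idea (sched_yield() standing in for cond_resched(), and the 256-page batch chosen only to mirror the comment in the patch): do the per-page work in a long walk, but offer the scheduler a chance to run at regular intervals so a huge range cannot starve other tasks.

    #include <sched.h>
    #include <stdio.h>

    #define PAGES_PER_BATCH 256   /* one reschedule point per batch of pages */

    static void set_storage_key(unsigned long pfn)
    {
            (void)pfn;            /* per-page work goes here */
    }

    static void walk_range(unsigned long start_pfn, unsigned long nr_pages)
    {
            for (unsigned long i = 0; i < nr_pages; i++) {
                    set_storage_key(start_pfn + i);
                    if ((i + 1) % PAGES_PER_BATCH == 0)
                            sched_yield();   /* stands in for cond_resched() */
            }
    }

    int main(void)
    {
            walk_range(0, 1UL << 20);         /* a million pages, with periodic yields */
            printf("done\n");
            return 0;
    }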
Signed-off-by: Christian Borntraeger Reviewed-by: Alexander Gordeev Reviewed-by: Claudio Imbrenda Link: https://lore.kernel.org/r/20220530092706.11637-2-borntraeger@linux.ibm.com Signed-off-by: Christian Borntraeger Signed-off-by: Heiko Carstens Signed-off-by: Sasha Levin --- arch/s390/mm/gmap.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index f2d19d40272c..2db097c14cec 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -2596,6 +2596,18 @@ static int __s390_enable_skey_pte(pte_t *pte, unsigned long addr, return 0; } +/* + * Give a chance to schedule after setting a key to 256 pages. + * We only hold the mm lock, which is a rwsem and the kvm srcu. + * Both can sleep. + */ +static int __s390_enable_skey_pmd(pmd_t *pmd, unsigned long addr, + unsigned long next, struct mm_walk *walk) +{ + cond_resched(); + return 0; +} + static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr, unsigned long hmask, unsigned long next, struct mm_walk *walk) @@ -2618,12 +2630,14 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr, end = start + HPAGE_SIZE - 1; __storage_key_init_range(start, end); set_bit(PG_arch_1, &page->flags); + cond_resched(); return 0; } static const struct mm_walk_ops enable_skey_walk_ops = { .hugetlb_entry = __s390_enable_skey_hugetlb, .pte_entry = __s390_enable_skey_pte, + .pmd_entry = __s390_enable_skey_pmd, }; int s390_enable_skey(void) From a262e1255b91dd88af483ca8f1b7951917338b88 Mon Sep 17 00:00:00 2001 From: Steve French Date: Wed, 1 Jun 2022 22:08:46 -0500 Subject: [PATCH 0911/1842] cifs: version operations for smb20 unneeded when legacy support disabled [ Upstream commit 7ef93ffccd55fb0ba000ed16ef6a81cd7dee07b5 ] We should not be including unused smb20 specific code when legacy support is disabled (CONFIG_CIFS_ALLOW_INSECURE_LEGACY turned off). For example smb2_operations and smb2_values aren't used in that case. 
Over time we can move more and more SMB1/CIFS and SMB2.0 code into the insecure legacy ifdefs Reviewed-by: Ronnie Sahlberg Signed-off-by: Steve French Signed-off-by: Sasha Levin --- fs/cifs/cifsglob.h | 4 +++- fs/cifs/smb2ops.c | 7 ++++++- 2 files changed, 9 insertions(+), 2 deletions(-) diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h index 6599069be690..196285b0fe46 100644 --- a/fs/cifs/cifsglob.h +++ b/fs/cifs/cifsglob.h @@ -1982,11 +1982,13 @@ extern mempool_t *cifs_mid_poolp; /* Operations for different SMB versions */ #define SMB1_VERSION_STRING "1.0" +#define SMB20_VERSION_STRING "2.0" +#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY extern struct smb_version_operations smb1_operations; extern struct smb_version_values smb1_values; -#define SMB20_VERSION_STRING "2.0" extern struct smb_version_operations smb20_operations; extern struct smb_version_values smb20_values; +#endif /* CIFS_ALLOW_INSECURE_LEGACY */ #define SMB21_VERSION_STRING "2.1" extern struct smb_version_operations smb21_operations; extern struct smb_version_values smb21_values; diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index 7fea94ebda57..b855abfaaf87 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -4032,11 +4032,13 @@ smb3_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock, } } +#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY static bool smb2_is_read_op(__u32 oplock) { return oplock == SMB2_OPLOCK_LEVEL_II; } +#endif /* CIFS_ALLOW_INSECURE_LEGACY */ static bool smb21_is_read_op(__u32 oplock) @@ -5122,7 +5124,7 @@ smb2_make_node(unsigned int xid, struct inode *inode, return rc; } - +#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY struct smb_version_operations smb20_operations = { .compare_fids = smb2_compare_fids, .setup_request = smb2_setup_request, @@ -5220,6 +5222,7 @@ struct smb_version_operations smb20_operations = { .llseek = smb3_llseek, .is_status_io_timeout = smb2_is_status_io_timeout, }; +#endif /* CIFS_ALLOW_INSECURE_LEGACY */ struct smb_version_operations smb21_operations = { .compare_fids = smb2_compare_fids, @@ -5548,6 +5551,7 @@ struct smb_version_operations smb311_operations = { .is_status_io_timeout = smb2_is_status_io_timeout, }; +#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY struct smb_version_values smb20_values = { .version_string = SMB20_VERSION_STRING, .protocol_id = SMB20_PROT_ID, @@ -5568,6 +5572,7 @@ struct smb_version_values smb20_values = { .signing_required = SMB2_NEGOTIATE_SIGNING_REQUIRED, .create_lease_size = sizeof(struct create_lease), }; +#endif /* ALLOW_INSECURE_LEGACY */ struct smb_version_values smb21_values = { .version_string = SMB21_VERSION_STRING, From 362e3b3a5953f272b5456daef90efd2a2a9cbff9 Mon Sep 17 00:00:00 2001 From: Kees Cook Date: Wed, 18 May 2022 13:52:23 -0700 Subject: [PATCH 0912/1842] nodemask: Fix return values to be unsigned MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 0dfe54071d7c828a02917b595456bfde1afdddc9 ] The nodemask routines had mixed return values that provided potentially signed return values that could never happen. This was leading to the compiler getting confusing about the range of possible return values (it was thinking things could be negative where they could not be). Fix all the nodemask routines that should be returning unsigned (or bool) values. 
Silences: mm/swapfile.c: In function ‘setup_swap_info’: mm/swapfile.c:2291:47: error: array subscript -1 is below array bounds of ‘struct plist_node[]’ [-Werror=array-bounds] 2291 | p->avail_lists[i].prio = 1; | ~~~~~~~~~~~~~~^~~ In file included from mm/swapfile.c:16: ./include/linux/swap.h:292:27: note: while referencing ‘avail_lists’ 292 | struct plist_node avail_lists[]; /* | ^~~~~~~~~~~ Reported-by: Christophe de Dinechin Link: https://lore.kernel.org/lkml/20220414150855.2407137-3-dinechin@redhat.com/ Cc: Alexey Dobriyan Cc: Yury Norov Cc: Andy Shevchenko Cc: Rasmus Villemoes Cc: Andrew Morton Cc: Zhen Lei Signed-off-by: Kees Cook Signed-off-by: Yury Norov Signed-off-by: Sasha Levin --- include/linux/nodemask.h | 38 +++++++++++++++++++------------------- lib/nodemask.c | 4 ++-- 2 files changed, 21 insertions(+), 21 deletions(-) diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h index 843678bfc364..2a63ef05a6cc 100644 --- a/include/linux/nodemask.h +++ b/include/linux/nodemask.h @@ -42,11 +42,11 @@ * void nodes_shift_right(dst, src, n) Shift right * void nodes_shift_left(dst, src, n) Shift left * - * int first_node(mask) Number lowest set bit, or MAX_NUMNODES - * int next_node(node, mask) Next node past 'node', or MAX_NUMNODES - * int next_node_in(node, mask) Next node past 'node', or wrap to first, + * unsigned int first_node(mask) Number lowest set bit, or MAX_NUMNODES + * unsigend int next_node(node, mask) Next node past 'node', or MAX_NUMNODES + * unsigned int next_node_in(node, mask) Next node past 'node', or wrap to first, * or MAX_NUMNODES - * int first_unset_node(mask) First node not set in mask, or + * unsigned int first_unset_node(mask) First node not set in mask, or * MAX_NUMNODES * * nodemask_t nodemask_of_node(node) Return nodemask with bit 'node' set @@ -153,7 +153,7 @@ static inline void __nodes_clear(nodemask_t *dstp, unsigned int nbits) #define node_test_and_set(node, nodemask) \ __node_test_and_set((node), &(nodemask)) -static inline int __node_test_and_set(int node, nodemask_t *addr) +static inline bool __node_test_and_set(int node, nodemask_t *addr) { return test_and_set_bit(node, addr->bits); } @@ -200,7 +200,7 @@ static inline void __nodes_complement(nodemask_t *dstp, #define nodes_equal(src1, src2) \ __nodes_equal(&(src1), &(src2), MAX_NUMNODES) -static inline int __nodes_equal(const nodemask_t *src1p, +static inline bool __nodes_equal(const nodemask_t *src1p, const nodemask_t *src2p, unsigned int nbits) { return bitmap_equal(src1p->bits, src2p->bits, nbits); @@ -208,7 +208,7 @@ static inline int __nodes_equal(const nodemask_t *src1p, #define nodes_intersects(src1, src2) \ __nodes_intersects(&(src1), &(src2), MAX_NUMNODES) -static inline int __nodes_intersects(const nodemask_t *src1p, +static inline bool __nodes_intersects(const nodemask_t *src1p, const nodemask_t *src2p, unsigned int nbits) { return bitmap_intersects(src1p->bits, src2p->bits, nbits); @@ -216,20 +216,20 @@ static inline int __nodes_intersects(const nodemask_t *src1p, #define nodes_subset(src1, src2) \ __nodes_subset(&(src1), &(src2), MAX_NUMNODES) -static inline int __nodes_subset(const nodemask_t *src1p, +static inline bool __nodes_subset(const nodemask_t *src1p, const nodemask_t *src2p, unsigned int nbits) { return bitmap_subset(src1p->bits, src2p->bits, nbits); } #define nodes_empty(src) __nodes_empty(&(src), MAX_NUMNODES) -static inline int __nodes_empty(const nodemask_t *srcp, unsigned int nbits) +static inline bool __nodes_empty(const nodemask_t *srcp, unsigned int nbits) { 
return bitmap_empty(srcp->bits, nbits); } #define nodes_full(nodemask) __nodes_full(&(nodemask), MAX_NUMNODES) -static inline int __nodes_full(const nodemask_t *srcp, unsigned int nbits) +static inline bool __nodes_full(const nodemask_t *srcp, unsigned int nbits) { return bitmap_full(srcp->bits, nbits); } @@ -260,15 +260,15 @@ static inline void __nodes_shift_left(nodemask_t *dstp, > MAX_NUMNODES, then the silly min_ts could be dropped. */ #define first_node(src) __first_node(&(src)) -static inline int __first_node(const nodemask_t *srcp) +static inline unsigned int __first_node(const nodemask_t *srcp) { - return min_t(int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES)); + return min_t(unsigned int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES)); } #define next_node(n, src) __next_node((n), &(src)) -static inline int __next_node(int n, const nodemask_t *srcp) +static inline unsigned int __next_node(int n, const nodemask_t *srcp) { - return min_t(int,MAX_NUMNODES,find_next_bit(srcp->bits, MAX_NUMNODES, n+1)); + return min_t(unsigned int, MAX_NUMNODES, find_next_bit(srcp->bits, MAX_NUMNODES, n+1)); } /* @@ -276,7 +276,7 @@ static inline int __next_node(int n, const nodemask_t *srcp) * the first node in src if needed. Returns MAX_NUMNODES if src is empty. */ #define next_node_in(n, src) __next_node_in((n), &(src)) -int __next_node_in(int node, const nodemask_t *srcp); +unsigned int __next_node_in(int node, const nodemask_t *srcp); static inline void init_nodemask_of_node(nodemask_t *mask, int node) { @@ -296,9 +296,9 @@ static inline void init_nodemask_of_node(nodemask_t *mask, int node) }) #define first_unset_node(mask) __first_unset_node(&(mask)) -static inline int __first_unset_node(const nodemask_t *maskp) +static inline unsigned int __first_unset_node(const nodemask_t *maskp) { - return min_t(int,MAX_NUMNODES, + return min_t(unsigned int, MAX_NUMNODES, find_first_zero_bit(maskp->bits, MAX_NUMNODES)); } @@ -435,11 +435,11 @@ static inline int num_node_state(enum node_states state) #define first_online_node first_node(node_states[N_ONLINE]) #define first_memory_node first_node(node_states[N_MEMORY]) -static inline int next_online_node(int nid) +static inline unsigned int next_online_node(int nid) { return next_node(nid, node_states[N_ONLINE]); } -static inline int next_memory_node(int nid) +static inline unsigned int next_memory_node(int nid) { return next_node(nid, node_states[N_MEMORY]); } diff --git a/lib/nodemask.c b/lib/nodemask.c index 3aa454c54c0d..e22647f5181b 100644 --- a/lib/nodemask.c +++ b/lib/nodemask.c @@ -3,9 +3,9 @@ #include #include -int __next_node_in(int node, const nodemask_t *srcp) +unsigned int __next_node_in(int node, const nodemask_t *srcp) { - int ret = __next_node(node, srcp); + unsigned int ret = __next_node(node, srcp); if (ret == MAX_NUMNODES) ret = __first_node(srcp); From b6ea26873edbd9ca3c0c338c9de856de7f1fcede Mon Sep 17 00:00:00 2001 From: Xie Yongji Date: Thu, 5 May 2022 18:09:10 +0800 Subject: [PATCH 0913/1842] vringh: Fix loop descriptors check in the indirect cases [ Upstream commit dbd29e0752286af74243cf891accf472b2f3edd8 ] We should use size of descriptor chain to test loop condition in the indirect case. And another statistical count is also introduced for indirect descriptors to avoid conflict with the statistical count of direct descriptors. 
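A simplified stand-alone sketch of the corrected bound check (not the vringh code; the descriptor layout is condensed to a single flag): descriptors taken directly from the ring count against the ring size, while descriptors inside an indirect table count against that table's own length, so neither walk can spin forever on a malicious or corrupted chain.

    #include <stdbool.h>
    #include <stdio.h>

    struct desc { unsigned int next; bool in_indirect; };

    static int walk_chain(const struct desc *descs, unsigned int ring_num,
                          unsigned int indirect_max)
    {
            unsigned int count = 0, indirect_count = 0;
            unsigned int i = 0;

            for (;;) {
                    const struct desc *d = &descs[i];

                    if (d->in_indirect)
                            indirect_count++;  /* bounded by the indirect table size */
                    else
                            count++;           /* bounded by the ring size */

                    if (count > ring_num || indirect_count > indirect_max)
                            return -1;         /* descriptor loop detected */

                    if (d->next == (unsigned int)-1)
                            return 0;          /* end of chain */
                    i = d->next;
            }
    }

    int main(void)
    {
            /* a two-entry chain that loops back on itself */
            struct desc loop[2] = {
                    { .next = 1, .in_indirect = false },
                    { .next = 0, .in_indirect = false },
            };

            printf("loop detected: %d\n", walk_chain(loop, 2, 8));
            return 0;
    }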
Fixes: f87d0fbb5798 ("vringh: host-side implementation of virtio rings.") Signed-off-by: Xie Yongji Signed-off-by: Fam Zheng Message-Id: <20220505100910.137-1-xieyongji@bytedance.com> Signed-off-by: Michael S. Tsirkin Acked-by: Jason Wang Signed-off-by: Sasha Levin --- drivers/vhost/vringh.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c index 0bd7e64331f0..5a0340c85dc6 100644 --- a/drivers/vhost/vringh.c +++ b/drivers/vhost/vringh.c @@ -274,7 +274,7 @@ __vringh_iov(struct vringh *vrh, u16 i, int (*copy)(const struct vringh *vrh, void *dst, const void *src, size_t len)) { - int err, count = 0, up_next, desc_max; + int err, count = 0, indirect_count = 0, up_next, desc_max; struct vring_desc desc, *descs; struct vringh_range range = { -1ULL, 0 }, slowrange; bool slow = false; @@ -331,7 +331,12 @@ __vringh_iov(struct vringh *vrh, u16 i, continue; } - if (count++ == vrh->vring.num) { + if (up_next == -1) + count++; + else + indirect_count++; + + if (count > vrh->vring.num || indirect_count > desc_max) { vringh_bad("Descriptor loop in %p", descs); err = -ELOOP; goto fail; @@ -393,6 +398,7 @@ __vringh_iov(struct vringh *vrh, u16 i, i = return_from_indirect(vrh, &up_next, &descs, &desc_max); slow = false; + indirect_count = 0; } else break; } From 13639c970fdb18ebfd621df71861a9d8b8491d53 Mon Sep 17 00:00:00 2001 From: Kuan-Ying Lee Date: Fri, 10 Jun 2022 15:14:57 +0800 Subject: [PATCH 0914/1842] scripts/gdb: change kernel config dumping method [ Upstream commit 1f7a6cf6b07c74a17343c2559cd5f5018a245961 ] MAGIC_START("IKCFG_ST") and MAGIC_END("IKCFG_ED") are moved out from the kernel_config_data variable. Thus, we parse kernel_config_data directly instead of considering offset of MAGIC_START and MAGIC_END. Fixes: 13610aa908dc ("kernel/configs: use .incbin directive to embed config_data.gz") Signed-off-by: Kuan-Ying Lee Signed-off-by: Masahiro Yamada Signed-off-by: Sasha Levin --- scripts/gdb/linux/config.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/scripts/gdb/linux/config.py b/scripts/gdb/linux/config.py index 90e1565b1967..8843ab3cbadd 100644 --- a/scripts/gdb/linux/config.py +++ b/scripts/gdb/linux/config.py @@ -24,9 +24,9 @@ class LxConfigDump(gdb.Command): filename = arg try: - py_config_ptr = gdb.parse_and_eval("kernel_config_data + 8") - py_config_size = gdb.parse_and_eval( - "sizeof(kernel_config_data) - 1 - 8 * 2") + py_config_ptr = gdb.parse_and_eval("&kernel_config_data") + py_config_ptr_end = gdb.parse_and_eval("&kernel_config_data_end") + py_config_size = py_config_ptr_end - py_config_ptr except gdb.error as e: raise gdb.GdbError("Can't find config, enable CONFIG_IKCONFIG?") From 94bd216d171800acbfea0b7c6ec49e9944e236de Mon Sep 17 00:00:00 2001 From: huangwenhui Date: Tue, 7 Jun 2022 14:56:31 +0800 Subject: [PATCH 0915/1842] ALSA: hda/conexant - Fix loopback issue with CX20632 commit d5ea7544c32ba27c2c5826248e4ff58bd50a2518 upstream. On a machine with CX20632, Alsamixer doesn't have 'Loopback Mixing' and 'Line'. 
Signed-off-by: huangwenhui Cc: Link: https://lore.kernel.org/r/20220607065631.10708-1-huangwenhuia@uniontech.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_conexant.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index 8098088b0056..0dd6d37db966 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -1045,6 +1045,13 @@ static int patch_conexant_auto(struct hda_codec *codec) snd_hda_pick_fixup(codec, cxt5051_fixup_models, cxt5051_fixups, cxt_fixups); break; + case 0x14f15098: + codec->pin_amp_workaround = 1; + spec->gen.mixer_nid = 0x22; + spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO; + snd_hda_pick_fixup(codec, cxt5066_fixup_models, + cxt5066_fixups, cxt_fixups); + break; case 0x14f150f2: codec->power_save_node = 1; fallthrough; From b423cd2a81e85b3c526747efa49af235ca97bcb4 Mon Sep 17 00:00:00 2001 From: Cameron Berkenpas Date: Sun, 5 Jun 2022 17:23:30 -0700 Subject: [PATCH 0916/1842] ALSA: hda/realtek: Fix for quirk to enable speaker output on the Lenovo Yoga DuetITL 2021 commit 85743a847caeab696dafc4ce1a7e1e2b7e29a0f6 upstream. Enables the ALC287_FIXUP_YOGA7_14ITL_SPEAKERS quirk for the Lenovo Yoga DuetITL 2021 laptop to fix speaker output. [ re-sorted in the SSID order by tiwai ] BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=208555 Signed-off-by: Cameron Berkenpas Co-authored-by: Songine Cc: stable@vger.kernel.org> Link: https://lore.kernel.org/r/20220606002329.215330-1-cam@neo-zeon.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 71a9462e8f6e..cf3b1133b785 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -8977,6 +8977,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x17aa, 0x3176, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340), + SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS), SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME), SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS), From 9023ecfd33788294c83e63b59c2623d4e2ec54be Mon Sep 17 00:00:00 2001 From: Shyam Prasad N Date: Tue, 31 May 2022 12:31:05 +0000 Subject: [PATCH 0917/1842] cifs: return errors during session setup during reconnects commit 8ea21823aa584b55ba4b861307093b78054b0c1b upstream. During reconnects, we check the return value from cifs_negotiate_protocol, and have handlers for both success and failures. But if that passes, and cifs_setup_session returns any errors other than -EACCES, we do not handle that. This fix adds a handler for that, so that we don't go ahead and try a tree_connect on a failed session. 
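The control flow of the fix can be sketched in isolation (stand-in functions and a pthread mutex in place of the real session_mutex; a simplified model, not fs/cifs code): any error from session setup other than -EACCES now releases the mutex and returns, instead of falling through to the tree connect on a session that was never established.

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t session_mutex = PTHREAD_MUTEX_INITIALIZER;

    static int negotiate_protocol(void) { return 0; }
    static int setup_session(void)      { return -EIO; /* simulate a non-EACCES failure */ }
    static int tree_connect(void)       { printf("tree_connect\n"); return 0; }

    static int reconnect(void)
    {
            int rc;

            pthread_mutex_lock(&session_mutex);
            rc = negotiate_protocol();
            if (rc) {
                    pthread_mutex_unlock(&session_mutex);
                    return rc;
            }

            rc = setup_session();
            if (rc == -EACCES) {
                    /* existing handling, e.g. drop the binding channel and retry later */
                    pthread_mutex_unlock(&session_mutex);
                    return rc;
            } else if (rc) {
                    /* the added branch: any other error must not fall through */
                    pthread_mutex_unlock(&session_mutex);
                    return rc;
            }
            pthread_mutex_unlock(&session_mutex);

            return tree_connect();      /* only reached on a fully set-up session */
    }

    int main(void)
    {
            printf("reconnect rc=%d\n", reconnect());
            return 0;
    }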
Signed-off-by: Shyam Prasad N Reviewed-by: Enzo Matsumiya Cc: stable@vger.kernel.org Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman --- fs/cifs/smb2pdu.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c index 88554b640b0d..24dd711fa9b9 100644 --- a/fs/cifs/smb2pdu.c +++ b/fs/cifs/smb2pdu.c @@ -281,6 +281,9 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon, ses->binding_chan = NULL; mutex_unlock(&tcon->ses->session_mutex); goto failed; + } else if (rc) { + mutex_unlock(&ses->session_mutex); + goto out; } } /* From 471a41320170be3404822ce98d63b53d6e136d6f Mon Sep 17 00:00:00 2001 From: Paulo Alcantara Date: Sun, 5 Jun 2022 19:54:26 -0300 Subject: [PATCH 0918/1842] cifs: fix reconnect on smb3 mount types commit c36ee7dab7749f7be21f7a72392744490b2a9a2b upstream. cifs.ko defines two file system types: cifs & smb3, and __cifs_get_super() was not including smb3 file system type when looking up superblocks, therefore failing to reconnect tcons in cifs_tree_connect(). Fix this by calling iterate_supers_type() on both file system types. Link: https://lore.kernel.org/r/CAFrh3J9soC36+BVuwHB=g9z_KB5Og2+p2_W+BBoBOZveErz14w@mail.gmail.com Cc: stable@vger.kernel.org Tested-by: Satadru Pramanik Reported-by: Satadru Pramanik Signed-off-by: Paulo Alcantara (SUSE) Signed-off-by: Steve French Signed-off-by: Greg Kroah-Hartman --- fs/cifs/cifsfs.c | 2 +- fs/cifs/cifsfs.h | 2 +- fs/cifs/misc.c | 27 ++++++++++++++++----------- 3 files changed, 18 insertions(+), 13 deletions(-) diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c index 370188b2a55d..bc957e6ca48b 100644 --- a/fs/cifs/cifsfs.c +++ b/fs/cifs/cifsfs.c @@ -1033,7 +1033,7 @@ struct file_system_type cifs_fs_type = { }; MODULE_ALIAS_FS("cifs"); -static struct file_system_type smb3_fs_type = { +struct file_system_type smb3_fs_type = { .owner = THIS_MODULE, .name = "smb3", .mount = smb3_do_mount, diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h index 905d03863721..e996f0bef414 100644 --- a/fs/cifs/cifsfs.h +++ b/fs/cifs/cifsfs.h @@ -51,7 +51,7 @@ static inline unsigned long cifs_get_time(struct dentry *dentry) return (unsigned long) dentry->d_fsdata; } -extern struct file_system_type cifs_fs_type; +extern struct file_system_type cifs_fs_type, smb3_fs_type; extern const struct address_space_operations cifs_addr_ops; extern const struct address_space_operations cifs_addr_ops_smallbuf; diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c index 1c14cf01dbef..9d740916a8ee 100644 --- a/fs/cifs/misc.c +++ b/fs/cifs/misc.c @@ -1053,18 +1053,23 @@ static struct super_block *__cifs_get_super(void (*f)(struct super_block *, void .data = data, .sb = NULL, }; + struct file_system_type **fs_type = (struct file_system_type *[]) { + &cifs_fs_type, &smb3_fs_type, NULL, + }; - iterate_supers_type(&cifs_fs_type, f, &sd); - - if (!sd.sb) - return ERR_PTR(-EINVAL); - /* - * Grab an active reference in order to prevent automounts (DFS links) - * of expiring and then freeing up our cifs superblock pointer while - * we're doing failover. - */ - cifs_sb_active(sd.sb); - return sd.sb; + for (; *fs_type; fs_type++) { + iterate_supers_type(*fs_type, f, &sd); + if (sd.sb) { + /* + * Grab an active reference in order to prevent automounts (DFS links) + * of expiring and then freeing up our cifs superblock pointer while + * we're doing failover. 
+ */ + cifs_sb_active(sd.sb); + return sd.sb; + } + } + return ERR_PTR(-EINVAL); } static void __cifs_put_super(struct super_block *sb) From 0248a8c844a497f6b129d9b11adbfaaeee117c8a Mon Sep 17 00:00:00 2001 From: Sergey Shtylyov Date: Wed, 8 Jun 2022 22:51:07 +0300 Subject: [PATCH 0919/1842] ata: libata-transport: fix {dma|pio|xfer}_mode sysfs files commit 72aad489f992871e908ff6d9055b26c6366fb864 upstream. The {dma|pio}_mode sysfs files are incorrectly documented as having a list of the supported DMA/PIO transfer modes, while the corresponding fields of the *struct* ata_device hold the transfer mode IDs, not masks. To match these docs, the {dma|pio}_mode (and even xfer_mode!) sysfs files are handled by the ata_bitfield_name_match() macro which leads to reading such kind of nonsense from them: $ cat /sys/class/ata_device/dev3.0/pio_mode XFER_UDMA_7, XFER_UDMA_6, XFER_UDMA_5, XFER_UDMA_4, XFER_MW_DMA_4, XFER_PIO_6, XFER_PIO_5, XFER_PIO_4, XFER_PIO_3, XFER_PIO_2, XFER_PIO_1, XFER_PIO_0 Using the correct ata_bitfield_name_search() macro fixes that: $ cat /sys/class/ata_device/dev3.0/pio_mode XFER_PIO_4 While fixing the file documentation, somewhat reword the {dma|pio}_mode file doc and add a note about being mostly useful for PATA devices to the xfer_mode file doc... Fixes: d9027470b886 ("[libata] Add ATA transport class") Signed-off-by: Sergey Shtylyov Cc: stable@vger.kernel.org Signed-off-by: Damien Le Moal Signed-off-by: Greg Kroah-Hartman --- Documentation/ABI/testing/sysfs-ata | 11 ++++++----- drivers/ata/libata-transport.c | 2 +- 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/Documentation/ABI/testing/sysfs-ata b/Documentation/ABI/testing/sysfs-ata index 9ab0ef1dd1c7..299e0d1dc161 100644 --- a/Documentation/ABI/testing/sysfs-ata +++ b/Documentation/ABI/testing/sysfs-ata @@ -107,13 +107,14 @@ Description: described in ATA8 7.16 and 7.17. Only valid if the device is not a PM. - pio_mode: (RO) Transfer modes supported by the device when - in PIO mode. Mostly used by PATA device. + pio_mode: (RO) PIO transfer mode used by the device. + Mostly used by PATA devices. - xfer_mode: (RO) Current transfer mode + xfer_mode: (RO) Current transfer mode. Mostly used by + PATA devices. - dma_mode: (RO) Transfer modes supported by the device when - in DMA mode. Mostly used by PATA device. + dma_mode: (RO) DMA transfer mode used by the device. + Mostly used by PATA devices. class: (RO) Device class. Can be "ata" for disk, "atapi" for packet device, "pmp" for PM, or diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c index 6a40e3c6cf49..b33772df9bc6 100644 --- a/drivers/ata/libata-transport.c +++ b/drivers/ata/libata-transport.c @@ -196,7 +196,7 @@ static struct { { XFER_PIO_0, "XFER_PIO_0" }, { XFER_PIO_SLOW, "XFER_PIO_SLOW" } }; -ata_bitfield_name_match(xfer,ata_xfer_names) +ata_bitfield_name_search(xfer, ata_xfer_names) /* * ATA Port attributes From 133c9870cd6b40b3f94b9a228ffc9a2cc0546195 Mon Sep 17 00:00:00 2001 From: Adrian Hunter Date: Tue, 31 May 2022 20:19:22 +0300 Subject: [PATCH 0920/1842] mmc: block: Fix CQE recovery reset success commit a051246b786af7e4a9d9219cc7038a6e8a411531 upstream. The intention of the use of mmc_blk_reset_success() in mmc_blk_cqe_recovery() was to prevent repeated resets when retrying and getting the same error. However, that may not be the case - any amount of time and I/O may pass before another recovery is needed, in which case there would be no reason to deny it the opportunity to recover via a reset if necessary. 
CQE recovery is expected seldom and failure to recover (if the clear tasks command fails), even more seldom, so it is better to allow the reset always, which can be done by calling mmc_blk_reset_success() always. Fixes: 1e8e55b67030c6 ("mmc: block: Add CQE support") Cc: stable@vger.kernel.org Signed-off-by: Adrian Hunter Link: https://lore.kernel.org/r/20220531171922.76080-1-adrian.hunter@intel.com Signed-off-by: Ulf Hansson Signed-off-by: Greg Kroah-Hartman --- drivers/mmc/core/block.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 99b981a05b6c..70eb3d03937f 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -1442,8 +1442,7 @@ void mmc_blk_cqe_recovery(struct mmc_queue *mq) err = mmc_cqe_recovery(host); if (err) mmc_blk_reset(mq->blkdata, host, MMC_BLK_CQE_RECOVERY); - else - mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY); + mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY); pr_debug("%s: CQE recovery done\n", mmc_hostname(host)); } From c4e4c07d86db2e51d6d463f789d5d6545ed66ab5 Mon Sep 17 00:00:00 2001 From: Tan Tee Min Date: Thu, 26 May 2022 17:03:47 +0800 Subject: [PATCH 0921/1842] net: phy: dp83867: retrigger SGMII AN when link change commit c76acfb7e19dcc3a0964e0563770b1d11b8d4540 upstream. There is a limitation in TI DP83867 PHY device where SGMII AN is only triggered once after the device is booted up. Even after the PHY TPI is down and up again, SGMII AN is not triggered and hence no new in-band message from PHY to MAC side SGMII. This could cause an issue during power up, when PHY is up prior to MAC. At this condition, once MAC side SGMII is up, MAC side SGMII wouldn`t receive new in-band message from TI PHY with correct link status, speed and duplex info. As suggested by TI, implemented a SW solution here to retrigger SGMII Auto-Neg whenever there is a link change. v2: Add Fixes tag in commit message. Fixes: 2a10154abcb7 ("net: phy: dp83867: Add TI dp83867 phy") Cc: # 5.4.x Signed-off-by: Sit, Michael Wei Hong Reviewed-by: Voon Weifeng Signed-off-by: Tan Tee Min Reviewed-by: Andrew Lunn Link: https://lore.kernel.org/r/20220526090347.128742-1-tee.min.tan@linux.intel.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/phy/dp83867.c | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c index c716074fdef0..f86acad0aad4 100644 --- a/drivers/net/phy/dp83867.c +++ b/drivers/net/phy/dp83867.c @@ -137,6 +137,7 @@ #define DP83867_DOWNSHIFT_2_COUNT 2 #define DP83867_DOWNSHIFT_4_COUNT 4 #define DP83867_DOWNSHIFT_8_COUNT 8 +#define DP83867_SGMII_AUTONEG_EN BIT(7) /* CFG3 bits */ #define DP83867_CFG3_INT_OE BIT(7) @@ -802,6 +803,32 @@ static int dp83867_phy_reset(struct phy_device *phydev) DP83867_PHYCR_FORCE_LINK_GOOD, 0); } +static void dp83867_link_change_notify(struct phy_device *phydev) +{ + /* There is a limitation in DP83867 PHY device where SGMII AN is + * only triggered once after the device is booted up. Even after the + * PHY TPI is down and up again, SGMII AN is not triggered and + * hence no new in-band message from PHY to MAC side SGMII. + * This could cause an issue during power up, when PHY is up prior + * to MAC. At this condition, once MAC side SGMII is up, MAC side + * SGMII wouldn`t receive new in-band message from TI PHY with + * correct link status, speed and duplex info. 
+ * Thus, implemented a SW solution here to retrigger SGMII Auto-Neg + * whenever there is a link change. + */ + if (phydev->interface == PHY_INTERFACE_MODE_SGMII) { + int val = 0; + + val = phy_clear_bits(phydev, DP83867_CFG2, + DP83867_SGMII_AUTONEG_EN); + if (val < 0) + return; + + phy_set_bits(phydev, DP83867_CFG2, + DP83867_SGMII_AUTONEG_EN); + } +} + static struct phy_driver dp83867_driver[] = { { .phy_id = DP83867_PHY_ID, @@ -826,6 +853,8 @@ static struct phy_driver dp83867_driver[] = { .suspend = genphy_suspend, .resume = genphy_resume, + + .link_change_notify = dp83867_link_change_notify, }, }; module_phy_driver(dp83867_driver); From 4f0a2c46f588e09b37b831f78f9015463a4b21b6 Mon Sep 17 00:00:00 2001 From: Martin Faltesek Date: Mon, 6 Jun 2022 21:57:27 -0500 Subject: [PATCH 0922/1842] nfc: st21nfca: fix incorrect validating logic in EVT_TRANSACTION commit 77e5fe8f176a525523ae091d6fd0fbb8834c156d upstream. The first validation check for EVT_TRANSACTION has two different checks tied together with logical AND. One is a check for minimum packet length, and the other is for a valid aid_tag. If either condition is true (fails), then an error should be triggered. The fix is to change && to ||. Fixes: 26fc6c7f02cb ("NFC: st21nfca: Add HCI transaction event support") Cc: stable@vger.kernel.org Signed-off-by: Martin Faltesek Reviewed-by: Guenter Roeck Reviewed-by: Krzysztof Kozlowski Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/nfc/st21nfca/se.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c index 0194e80193d9..fafa6d003c2c 100644 --- a/drivers/nfc/st21nfca/se.c +++ b/drivers/nfc/st21nfca/se.c @@ -319,7 +319,7 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host, * AID 81 5 to 16 * PARAMETERS 82 0 to 255 */ - if (skb->len < NFC_MIN_AID_LENGTH + 2 && + if (skb->len < NFC_MIN_AID_LENGTH + 2 || skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG) return -EPROTO; From 54423649bc0ed464b75807a7cf2857a5871f738f Mon Sep 17 00:00:00 2001 From: Martin Faltesek Date: Mon, 6 Jun 2022 21:57:28 -0500 Subject: [PATCH 0923/1842] nfc: st21nfca: fix memory leaks in EVT_TRANSACTION handling commit 996419e0594abb311fb958553809f24f38e7abbe upstream. Error paths do not free previously allocated memory. Add devm_kfree() to those failure paths. 
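As background, the leak pattern is the usual devm one: a devm_* allocation is only reclaimed automatically when the owning device goes away, so returning early from an event handler without devm_kfree() pins the buffer for the device's lifetime. A minimal, hypothetical illustration (buf_is_invalid() is made up for the example, not a function in this driver):

        buf = devm_kzalloc(dev, len, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;
        if (buf_is_invalid(buf))        /* hypothetical validity check */
                return -EINVAL;         /* leak: buf stays allocated until the
                                         * device itself is released */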
Fixes: 26fc6c7f02cb ("NFC: st21nfca: Add HCI transaction event support") Fixes: 4fbcc1a4cb20 ("nfc: st21nfca: Fix potential buffer overflows in EVT_TRANSACTION") Cc: stable@vger.kernel.org Signed-off-by: Martin Faltesek Reviewed-by: Guenter Roeck Reviewed-by: Krzysztof Kozlowski Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/nfc/st21nfca/se.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c index fafa6d003c2c..ecd5566999c5 100644 --- a/drivers/nfc/st21nfca/se.c +++ b/drivers/nfc/st21nfca/se.c @@ -330,22 +330,29 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host, transaction->aid_len = skb->data[1]; /* Checking if the length of the AID is valid */ - if (transaction->aid_len > sizeof(transaction->aid)) + if (transaction->aid_len > sizeof(transaction->aid)) { + devm_kfree(dev, transaction); return -EINVAL; + } memcpy(transaction->aid, &skb->data[2], transaction->aid_len); /* Check next byte is PARAMETERS tag (82) */ if (skb->data[transaction->aid_len + 2] != - NFC_EVT_TRANSACTION_PARAMS_TAG) + NFC_EVT_TRANSACTION_PARAMS_TAG) { + devm_kfree(dev, transaction); return -EPROTO; + } transaction->params_len = skb->data[transaction->aid_len + 3]; /* Total size is allocated (skb->len - 2) minus fixed array members */ - if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction))) + if (transaction->params_len > ((skb->len - 2) - + sizeof(struct nfc_evt_transaction))) { + devm_kfree(dev, transaction); return -EINVAL; + } memcpy(transaction->params, skb->data + transaction->aid_len + 4, transaction->params_len); From cdd9227373f29bd68a1729dc33105026ff3202f6 Mon Sep 17 00:00:00 2001 From: Martin Faltesek Date: Mon, 6 Jun 2022 21:57:29 -0500 Subject: [PATCH 0924/1842] nfc: st21nfca: fix incorrect sizing calculations in EVT_TRANSACTION commit f2e19b36593caed4c977c2f55aeba7408aeb2132 upstream. The transaction buffer is allocated by using the size of the packet buf, and subtracting two which seem intended to remove the two tags which are not present in the target structure. This calculation leads to under counting memory because of differences between the packet contents and the target structure. The aid_len field is a u8 in the packet, but a u32 in the structure, resulting in at least 3 bytes always being under counted. Further, the aid data is a variable length field in the packet, but fixed in the structure, so if this field is less than the max, the difference is added to the under counting. The last validation check for transaction->params_len is also incorrect since it employs the same accounting error. To fix, perform validation checks progressively to safely reach the next field, to determine the size of both buffers and verify both tags. Once all validation checks pass, allocate the buffer and copy the data. This eliminates freeing memory on the error path, as those checks are moved ahead of memory allocation. 
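Condensed from the diff below, the reworked order of operations is: derive and validate both lengths directly from the packet first, and only then allocate and copy, so no error path has anything to free:

        aid_len = skb->data[1];
        if (skb->len < aid_len + 4 || aid_len > sizeof(transaction->aid))
                return -EPROTO;

        params_len = skb->data[aid_len + 3];
        if (skb->data[aid_len + 2] != NFC_EVT_TRANSACTION_PARAMS_TAG ||
            skb->len < aid_len + 4 + params_len)
                return -EPROTO;

        transaction = devm_kzalloc(dev, sizeof(*transaction) + params_len,
                                   GFP_KERNEL);
        if (!transaction)
                return -ENOMEM;

        memcpy(transaction->aid, &skb->data[2], aid_len);
        memcpy(transaction->params, &skb->data[aid_len + 4], params_len);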
Fixes: 26fc6c7f02cb ("NFC: st21nfca: Add HCI transaction event support") Fixes: 4fbcc1a4cb20 ("nfc: st21nfca: Fix potential buffer overflows in EVT_TRANSACTION") Cc: stable@vger.kernel.org Signed-off-by: Martin Faltesek Reviewed-by: Guenter Roeck Reviewed-by: Krzysztof Kozlowski Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/nfc/st21nfca/se.c | 66 +++++++++++++++++++-------------------- 1 file changed, 33 insertions(+), 33 deletions(-) diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c index ecd5566999c5..d41636504246 100644 --- a/drivers/nfc/st21nfca/se.c +++ b/drivers/nfc/st21nfca/se.c @@ -304,6 +304,8 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host, int r = 0; struct device *dev = &hdev->ndev->dev; struct nfc_evt_transaction *transaction; + u32 aid_len; + u8 params_len; pr_debug("connectivity gate event: %x\n", event); @@ -312,50 +314,48 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host, r = nfc_se_connectivity(hdev->ndev, host); break; case ST21NFCA_EVT_TRANSACTION: - /* - * According to specification etsi 102 622 + /* According to specification etsi 102 622 * 11.2.2.4 EVT_TRANSACTION Table 52 * Description Tag Length * AID 81 5 to 16 * PARAMETERS 82 0 to 255 + * + * The key differences are aid storage length is variably sized + * in the packet, but fixed in nfc_evt_transaction, and that the aid_len + * is u8 in the packet, but u32 in the structure, and the tags in + * the packet are not included in nfc_evt_transaction. + * + * size in bytes: 1 1 5-16 1 1 0-255 + * offset: 0 1 2 aid_len + 2 aid_len + 3 aid_len + 4 + * member name: aid_tag(M) aid_len aid params_tag(M) params_len params + * example: 0x81 5-16 X 0x82 0-255 X */ - if (skb->len < NFC_MIN_AID_LENGTH + 2 || - skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG) + if (skb->len < 2 || skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG) return -EPROTO; - transaction = devm_kzalloc(dev, skb->len - 2, GFP_KERNEL); + aid_len = skb->data[1]; + + if (skb->len < aid_len + 4 || aid_len > sizeof(transaction->aid)) + return -EPROTO; + + params_len = skb->data[aid_len + 3]; + + /* Verify PARAMETERS tag is (82), and final check that there is enough + * space in the packet to read everything. 
+ */ + if ((skb->data[aid_len + 2] != NFC_EVT_TRANSACTION_PARAMS_TAG) || + (skb->len < aid_len + 4 + params_len)) + return -EPROTO; + + transaction = devm_kzalloc(dev, sizeof(*transaction) + params_len, GFP_KERNEL); if (!transaction) return -ENOMEM; - transaction->aid_len = skb->data[1]; + transaction->aid_len = aid_len; + transaction->params_len = params_len; - /* Checking if the length of the AID is valid */ - if (transaction->aid_len > sizeof(transaction->aid)) { - devm_kfree(dev, transaction); - return -EINVAL; - } - - memcpy(transaction->aid, &skb->data[2], - transaction->aid_len); - - /* Check next byte is PARAMETERS tag (82) */ - if (skb->data[transaction->aid_len + 2] != - NFC_EVT_TRANSACTION_PARAMS_TAG) { - devm_kfree(dev, transaction); - return -EPROTO; - } - - transaction->params_len = skb->data[transaction->aid_len + 3]; - - /* Total size is allocated (skb->len - 2) minus fixed array members */ - if (transaction->params_len > ((skb->len - 2) - - sizeof(struct nfc_evt_transaction))) { - devm_kfree(dev, transaction); - return -EINVAL; - } - - memcpy(transaction->params, skb->data + - transaction->aid_len + 4, transaction->params_len); + memcpy(transaction->aid, &skb->data[2], aid_len); + memcpy(transaction->params, &skb->data[aid_len + 4], params_len); r = nfc_se_transaction(hdev->ndev, host, transaction); break; From 91620cded92dda652ce5a79aa06d8e118d93e769 Mon Sep 17 00:00:00 2001 From: Olivier Matz Date: Wed, 6 Apr 2022 11:52:51 +0200 Subject: [PATCH 0925/1842] ixgbe: fix bcast packets Rx on VF after promisc removal commit 803e9895ea2b0fe80bc85980ae2d7a7e44037914 upstream. After a VF requested to remove the promiscuous flag on an interface, the broadcast packets are not received anymore. This breaks some protocols like ARP. In ixgbe_update_vf_xcast_mode(), we should keep the IXGBE_VMOLR_BAM bit (Broadcast Accept) on promiscuous removal. This flag is already set by default in ixgbe_set_vmolr() on VF reset. Fixes: 8443c1a4b192 ("ixgbe, ixgbevf: Add new mbox API xcast mode") Cc: stable@vger.kernel.org Cc: Nicolas Dichtel Signed-off-by: Olivier Matz Tested-by: Konrad Jankowski Signed-off-by: Tony Nguyen Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c index 214a38de3f41..8f3352586f0c 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c @@ -1157,9 +1157,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter, switch (xcast_mode) { case IXGBEVF_XCAST_MODE_NONE: - disable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE | + disable = IXGBE_VMOLR_ROMPE | IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE; - enable = 0; + enable = IXGBE_VMOLR_BAM; break; case IXGBEVF_XCAST_MODE_MULTI: disable = IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE; From 2dba96d19d254cde21c145bce4f2714a9535ae29 Mon Sep 17 00:00:00 2001 From: Olivier Matz Date: Wed, 6 Apr 2022 11:52:52 +0200 Subject: [PATCH 0926/1842] ixgbe: fix unexpected VLAN Rx in promisc mode on VF MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 7bb0fb7c63df95d6027dc50d6af3bc3bbbc25483 upstream. When the promiscuous mode is enabled on a VF, the IXGBE_VMOLR_VPE bit (VLAN Promiscuous Enable) is set. This means that the VF will receive packets whose VLAN is not the same than the VLAN of the VF. 
For instance, in this situation: ┌────────┐ ┌────────┐ ┌────────┐ │ │ │ │ │ │ │ │ │ │ │ │ │ VF0├────┤VF1 VF2├────┤VF3 │ │ │ │ │ │ │ └────────┘ └────────┘ └────────┘ VM1 VM2 VM3 vf 0: vlan 1000 vf 1: vlan 1000 vf 2: vlan 1001 vf 3: vlan 1001 If we tcpdump on VF3, we see all the packets, even those transmitted on vlan 1000. This behavior prevents to bridge VF1 and VF2 in VM2, because it will create a loop: packets transmitted on VF1 will be received by VF2 and vice-versa, and bridged again through the software bridge. This patch remove the activation of VLAN Promiscuous when a VF enables the promiscuous mode. However, the IXGBE_VMOLR_UPE bit (Unicast Promiscuous) is kept, so that a VF receives all packets that has the same VLAN, whatever the destination MAC address. Fixes: 8443c1a4b192 ("ixgbe, ixgbevf: Add new mbox API xcast mode") Cc: stable@vger.kernel.org Cc: Nicolas Dichtel Signed-off-by: Olivier Matz Tested-by: Konrad Jankowski Signed-off-by: Tony Nguyen Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c index 8f3352586f0c..aaebdae8b5ff 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c @@ -1181,9 +1181,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter, return -EPERM; } - disable = 0; + disable = IXGBE_VMOLR_VPE; enable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE | - IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE; + IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE; break; default: return -EOPNOTSUPP; From 61297ee0c329c84669d3c4aa700eea2fdb2a8974 Mon Sep 17 00:00:00 2001 From: Mathias Nyman Date: Tue, 7 Jun 2022 12:11:33 -0700 Subject: [PATCH 0927/1842] Input: bcm5974 - set missing URB_NO_TRANSFER_DMA_MAP urb flag commit c42e65664390be7c1ef3838cd84956d3a2739d60 upstream. The bcm5974 driver does the allocation and dma mapping of the usb urb data buffer, but driver does not set the URB_NO_TRANSFER_DMA_MAP flag to let usb core know the buffer is already mapped. 
usb core tries to map the already mapped buffer, causing a warning: "xhci_hcd 0000:00:14.0: rejecting DMA map of vmalloc memory" Fix this by setting the URB_NO_TRANSFER_DMA_MAP, letting usb core know buffer is already mapped by bcm5974 driver Signed-off-by: Mathias Nyman Cc: stable@vger.kernel.org Link: https://bugzilla.kernel.org/show_bug.cgi?id=215890 Link: https://lore.kernel.org/r/20220606113636.588955-1-mathias.nyman@linux.intel.com Signed-off-by: Dmitry Torokhov Signed-off-by: Greg Kroah-Hartman --- drivers/input/mouse/bcm5974.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/input/mouse/bcm5974.c b/drivers/input/mouse/bcm5974.c index 59a14505b9cd..ca150618d32f 100644 --- a/drivers/input/mouse/bcm5974.c +++ b/drivers/input/mouse/bcm5974.c @@ -942,17 +942,22 @@ static int bcm5974_probe(struct usb_interface *iface, if (!dev->tp_data) goto err_free_bt_buffer; - if (dev->bt_urb) + if (dev->bt_urb) { usb_fill_int_urb(dev->bt_urb, udev, usb_rcvintpipe(udev, cfg->bt_ep), dev->bt_data, dev->cfg.bt_datalen, bcm5974_irq_button, dev, 1); + dev->bt_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; + } + usb_fill_int_urb(dev->tp_urb, udev, usb_rcvintpipe(udev, cfg->tp_ep), dev->tp_data, dev->cfg.tp_datalen, bcm5974_irq_trackpad, dev, 1); + dev->tp_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; + /* create bcm5974 device */ usb_make_path(udev, dev->phys, sizeof(dev->phys)); strlcat(dev->phys, "/input0", sizeof(dev->phys)); From dbe04e874d4fbd56be64fdfcb29410241b6ad08a Mon Sep 17 00:00:00 2001 From: Brian Norris Date: Mon, 28 Feb 2022 12:25:31 -0800 Subject: [PATCH 0928/1842] drm/bridge: analogix_dp: Support PSR-exit to disable transition commit ca871659ec1606d33b1e76de8d4cf924cf627e34 upstream. Most eDP panel functions only work correctly when the panel is not in self-refresh. In particular, analogix_dp_bridge_disable() tends to hit AUX channel errors if the panel is in self-refresh. Given the above, it appears that so far, this driver assumes that we are never in self-refresh when it comes time to fully disable the bridge. Prior to commit 846c7dfc1193 ("drm/atomic: Try to preserve the crtc enabled state in drm_atomic_remove_fb, v2."), this tended to be true, because we would automatically disable the pipe when framebuffers were removed, and so we'd typically disable the bridge shortly after the last display activity. However, that is not guaranteed: an idle (self-refresh) display pipe may be disabled, e.g., when switching CRTCs. We need to exit PSR first. Stable notes: this is definitely a bugfix, and the bug has likely existed in some form for quite a while. It may predate the "PSR helpers" refactor, but the code looked very different before that, and it's probably not worth rewriting the fix. 
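In short, the teardown order this change establishes (condensed from the diff below) is: look up the CRTC being left behind, exit PSR if that CRTC was in self-refresh, and only then disable the bridge:

        old_crtc = analogix_dp_get_old_crtc(dp, old_state);
        if (old_crtc) {
                old_crtc_state = drm_atomic_get_old_crtc_state(old_state,
                                                               old_crtc);

                /* When moving from PSR to fully disabled, exit PSR first. */
                if (old_crtc_state && old_crtc_state->self_refresh_active) {
                        ret = analogix_dp_disable_psr(dp);
                        if (ret)
                                DRM_ERROR("Failed to disable psr (%d)\n", ret);
                }
        }

        analogix_dp_bridge_disable(bridge);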
Cc: Fixes: 6c836d965bad ("drm/rockchip: Use the helpers for PSR") Signed-off-by: Brian Norris Reviewed-by: Sean Paul Signed-off-by: Douglas Anderson Link: https://patchwork.freedesktop.org/patch/msgid/20220228122522.v2.1.I161904be17ba14526f78536ccd78b85818449b51@changeid Signed-off-by: Greg Kroah-Hartman --- .../drm/bridge/analogix/analogix_dp_core.c | 42 +++++++++++++++++-- 1 file changed, 38 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c index 9755672caf1a..a7bcb429c02b 100644 --- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c +++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c @@ -1268,6 +1268,25 @@ static int analogix_dp_bridge_attach(struct drm_bridge *bridge, return 0; } +static +struct drm_crtc *analogix_dp_get_old_crtc(struct analogix_dp_device *dp, + struct drm_atomic_state *state) +{ + struct drm_encoder *encoder = dp->encoder; + struct drm_connector *connector; + struct drm_connector_state *conn_state; + + connector = drm_atomic_get_old_connector_for_encoder(state, encoder); + if (!connector) + return NULL; + + conn_state = drm_atomic_get_old_connector_state(state, connector); + if (!conn_state) + return NULL; + + return conn_state->crtc; +} + static struct drm_crtc *analogix_dp_get_new_crtc(struct analogix_dp_device *dp, struct drm_atomic_state *state) @@ -1448,14 +1467,16 @@ analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge, { struct drm_atomic_state *old_state = old_bridge_state->base.state; struct analogix_dp_device *dp = bridge->driver_private; - struct drm_crtc *crtc; + struct drm_crtc *old_crtc, *new_crtc; + struct drm_crtc_state *old_crtc_state = NULL; struct drm_crtc_state *new_crtc_state = NULL; + int ret; - crtc = analogix_dp_get_new_crtc(dp, old_state); - if (!crtc) + new_crtc = analogix_dp_get_new_crtc(dp, old_state); + if (!new_crtc) goto out; - new_crtc_state = drm_atomic_get_new_crtc_state(old_state, crtc); + new_crtc_state = drm_atomic_get_new_crtc_state(old_state, new_crtc); if (!new_crtc_state) goto out; @@ -1464,6 +1485,19 @@ analogix_dp_bridge_atomic_disable(struct drm_bridge *bridge, return; out: + old_crtc = analogix_dp_get_old_crtc(dp, old_state); + if (old_crtc) { + old_crtc_state = drm_atomic_get_old_crtc_state(old_state, + old_crtc); + + /* When moving from PSR to fully disabled, exit PSR first. */ + if (old_crtc_state && old_crtc_state->self_refresh_active) { + ret = analogix_dp_disable_psr(dp); + if (ret) + DRM_ERROR("Failed to disable psr (%d)\n", ret); + } + } + analogix_dp_bridge_disable(bridge); } From fa0d3d71dc081377d7c56fa96f75401679834d7e Mon Sep 17 00:00:00 2001 From: Brian Norris Date: Mon, 28 Feb 2022 12:25:32 -0800 Subject: [PATCH 0929/1842] drm/atomic: Force bridge self-refresh-exit on CRTC switch commit e54a4424925a27ed94dff046db3ce5caf4b1e748 upstream. It's possible to change which CRTC is in use for a given connector/encoder/bridge while we're in self-refresh without fully disabling the connector/encoder/bridge along the way. This can confuse the bridge encoder/bridge, because (a) it needs to track the SR state (trying to perform "active" operations while the panel is still in SR can be Bad(TM)); and (b) it tracks the SR state via the CRTC state (and after the switch, the previous SR state is lost). Thus, we need to either somehow carry the self-refresh state over to the new CRTC, or else force an encoder/bridge self-refresh transition during such a switch. 
I choose the latter, so we disable the encoder (and exit PSR) before attaching it to the new CRTC (where we can continue to assume a clean (non-self-refresh) state). This fixes PSR issues seen on Rockchip RK3399 systems with drivers/gpu/drm/bridge/analogix/analogix_dp_core.c. Change in v2: - Drop "->enable" condition; this could possibly be "->active" to reflect the intended hardware state, but it also is a little over-specific. We want to make a transition through "disabled" any time we're exiting PSR at the same time as a CRTC switch. (Thanks Liu Ying) Cc: Liu Ying Cc: Fixes: 1452c25b0e60 ("drm: Add helpers to kick off self refresh mode in drivers") Signed-off-by: Brian Norris Reviewed-by: Sean Paul Signed-off-by: Douglas Anderson Link: https://patchwork.freedesktop.org/patch/msgid/20220228122522.v2.2.Ic15a2ef69c540aee8732703103e2cff51fb9c399@changeid Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/drm_atomic_helper.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c index 8a871e5c3e26..7fc8e7000046 100644 --- a/drivers/gpu/drm/drm_atomic_helper.c +++ b/drivers/gpu/drm/drm_atomic_helper.c @@ -996,9 +996,19 @@ crtc_needs_disable(struct drm_crtc_state *old_state, return drm_atomic_crtc_effectively_active(old_state); /* - * We need to run through the crtc_funcs->disable() function if the CRTC - * is currently on, if it's transitioning to self refresh mode, or if - * it's in self refresh mode and needs to be fully disabled. + * We need to disable bridge(s) and CRTC if we're transitioning out of + * self-refresh and changing CRTCs at the same time, because the + * bridge tracks self-refresh status via CRTC state. + */ + if (old_state->self_refresh_active && + old_state->crtc != new_state->crtc) + return true; + + /* + * We also need to run through the crtc_funcs->disable() function if + * the CRTC is currently on, if it's transitioning to self refresh + * mode, or if it's in self refresh mode and needs to be fully + * disabled. */ return old_state->active || (old_state->self_refresh_active && !new_state->enable) || From 3be74fc0afbeadc2aff8dc69f3bf9716fbe66486 Mon Sep 17 00:00:00 2001 From: Michael Ellerman Date: Tue, 7 Jun 2022 00:34:56 +1000 Subject: [PATCH 0930/1842] powerpc/32: Fix overread/overwrite of thread_struct via ptrace commit 8e1278444446fc97778a5e5c99bca1ce0bbc5ec9 upstream. The ptrace PEEKUSR/POKEUSR (aka PEEKUSER/POKEUSER) API allows a process to read/write registers of another process. To get/set a register, the API takes an index into an imaginary address space called the "USER area", where the registers of the process are laid out in some fashion. The kernel then maps that index to a particular register in its own data structures and gets/sets the value. The API only allows a single machine-word to be read/written at a time. So 4 bytes on 32-bit kernels and 8 bytes on 64-bit kernels. The way floating point registers (FPRs) are addressed is somewhat complicated, because double precision float values are 64-bit even on 32-bit CPUs. That means on 32-bit kernels each FPR occupies two word-sized locations in the USER area. On 64-bit kernels each FPR occupies one word-sized location in the USER area. Internally the kernel stores the FPRs in an array of u64s, or if VSX is enabled, an array of pairs of u64s where one half of each pair stores the FPR. Which half of the pair stores the FPR depends on the kernel's endianness. 
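As a purely hypothetical userspace illustration (not part of the patch), this is roughly how a 32-bit debugger addresses the two words of FPR n through the USER area, assuming the powerpc uapi PT_FPR0 constant and that the ptrace address is the USER-area index scaled by the word size; which word holds which half of the double depends on endianness:

        #include <stdio.h>
        #include <sys/ptrace.h>
        #include <asm/ptrace.h>         /* PT_FPR0 from the powerpc uapi */

        static void peek_fpr_words(pid_t pid, int n)
        {
                /* On a 32-bit kernel, FPR n occupies two word-sized slots. */
                long w0 = ptrace(PTRACE_PEEKUSER, pid,
                                 (void *)((PT_FPR0 + 2 * n) * sizeof(long)), NULL);
                long w1 = ptrace(PTRACE_PEEKUSER, pid,
                                 (void *)((PT_FPR0 + 2 * n + 1) * sizeof(long)), NULL);

                printf("fpr%d words: %#lx %#lx\n", n, w0, w1);
        }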
To handle the different layouts of the FPRs depending on VSX/no-VSX and big/little endian, the TS_FPR() macro was introduced. Unfortunately the TS_FPR() macro does not take into account the fact that the addressing of each FPR differs between 32-bit and 64-bit kernels. It just takes the index into the "USER area" passed from userspace and indexes into the fp_state.fpr array. On 32-bit there are 64 indexes that address FPRs, but only 32 entries in the fp_state.fpr array, meaning the user can read/write 256 bytes past the end of the array. Because the fp_state sits in the middle of the thread_struct there are various fields than can be overwritten, including some pointers. As such it may be exploitable. It has also been observed to cause systems to hang or otherwise misbehave when using gdbserver, and is probably the root cause of this report which could not be easily reproduced: https://lore.kernel.org/linuxppc-dev/dc38afe9-6b78-f3f5-666b-986939e40fc6@keymile.com/ Rather than trying to make the TS_FPR() macro even more complicated to fix the bug, or add more macros, instead add a special-case for 32-bit kernels. This is more obvious and hopefully avoids a similar bug happening again in future. Note that because 32-bit kernels never have VSX enabled the code doesn't need to consider TS_FPRWIDTH/OFFSET at all. Add a BUILD_BUG_ON() to ensure that 32-bit && VSX is never enabled. Fixes: 87fec0514f61 ("powerpc: PTRACE_PEEKUSR/PTRACE_POKEUSER of FPR registers in little endian builds") Cc: stable@vger.kernel.org # v3.13+ Reported-by: Ariel Miculas Tested-by: Christophe Leroy Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220609133245.573565-1-mpe@ellerman.id.au Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/kernel/ptrace/ptrace.c | 21 +++++++++++++++++---- 1 file changed, 17 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kernel/ptrace/ptrace.c b/arch/powerpc/kernel/ptrace/ptrace.c index f6e51be47c6e..9ea9ee513ae1 100644 --- a/arch/powerpc/kernel/ptrace/ptrace.c +++ b/arch/powerpc/kernel/ptrace/ptrace.c @@ -75,8 +75,13 @@ long arch_ptrace(struct task_struct *child, long request, flush_fp_to_thread(child); if (fpidx < (PT_FPSCR - PT_FPR0)) - memcpy(&tmp, &child->thread.TS_FPR(fpidx), - sizeof(long)); + if (IS_ENABLED(CONFIG_PPC32)) { + // On 32-bit the index we are passed refers to 32-bit words + tmp = ((u32 *)child->thread.fp_state.fpr)[fpidx]; + } else { + memcpy(&tmp, &child->thread.TS_FPR(fpidx), + sizeof(long)); + } else tmp = child->thread.fp_state.fpscr; } @@ -108,8 +113,13 @@ long arch_ptrace(struct task_struct *child, long request, flush_fp_to_thread(child); if (fpidx < (PT_FPSCR - PT_FPR0)) - memcpy(&child->thread.TS_FPR(fpidx), &data, - sizeof(long)); + if (IS_ENABLED(CONFIG_PPC32)) { + // On 32-bit the index we are passed refers to 32-bit words + ((u32 *)child->thread.fp_state.fpr)[fpidx] = data; + } else { + memcpy(&child->thread.TS_FPR(fpidx), &data, + sizeof(long)); + } else child->thread.fp_state.fpscr = data; ret = 0; @@ -478,4 +488,7 @@ void __init pt_regs_check(void) * real registers. 
*/ BUILD_BUG_ON(PT_DSCR < sizeof(struct user_pt_regs) / sizeof(unsigned long)); + + // ptrace_get/put_fpr() rely on PPC32 and VSX being incompatible + BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC32) && IS_ENABLED(CONFIG_VSX)); } From fe6caf512261d2cf81cd44bfe3ec4370f6256135 Mon Sep 17 00:00:00 2001 From: Alexey Kardashevskiy Date: Tue, 21 Dec 2021 16:59:03 +1100 Subject: [PATCH 0931/1842] powerpc/mm: Switch obsolete dssall to .long commit d51f86cfd8e378d4907958db77da3074f6dce3ba upstream. The dssall ("Data Stream Stop All") instruction is obsolete altogether with other Data Cache Instructions since ISA 2.03 (year 2006). LLVM IAS does not support it but PPC970 seems to be using it. This switches dssall to .long as there is no much point in fixing LLVM. Signed-off-by: Alexey Kardashevskiy Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20211221055904.555763-6-aik@ozlabs.ru Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/include/asm/ppc-opcode.h | 2 ++ arch/powerpc/kernel/idle.c | 2 +- arch/powerpc/kernel/idle_6xx.S | 2 +- arch/powerpc/kernel/l2cr_6xx.S | 6 +++--- arch/powerpc/kernel/swsusp_32.S | 2 +- arch/powerpc/kernel/swsusp_asm64.S | 2 +- arch/powerpc/mm/mmu_context.c | 2 +- arch/powerpc/platforms/powermac/cache.S | 4 ++-- 8 files changed, 12 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h index f0c0816f5727..d6a3cd147059 100644 --- a/arch/powerpc/include/asm/ppc-opcode.h +++ b/arch/powerpc/include/asm/ppc-opcode.h @@ -212,6 +212,7 @@ #define PPC_INST_COPY 0x7c20060c #define PPC_INST_DCBA 0x7c0005ec #define PPC_INST_DCBA_MASK 0xfc0007fe +#define PPC_INST_DSSALL 0x7e00066c #define PPC_INST_ISEL 0x7c00001e #define PPC_INST_ISEL_MASK 0xfc00003e #define PPC_INST_LSWI 0x7c0004aa @@ -517,6 +518,7 @@ #define PPC_DCBZL(a, b) stringify_in_c(.long PPC_RAW_DCBZL(a, b)) #define PPC_DIVDE(t, a, b) stringify_in_c(.long PPC_RAW_DIVDE(t, a, b)) #define PPC_DIVDEU(t, a, b) stringify_in_c(.long PPC_RAW_DIVDEU(t, a, b)) +#define PPC_DSSALL stringify_in_c(.long PPC_INST_DSSALL) #define PPC_LQARX(t, a, b, eh) stringify_in_c(.long PPC_RAW_LQARX(t, a, b, eh)) #define PPC_LDARX(t, a, b, eh) stringify_in_c(.long PPC_RAW_LDARX(t, a, b, eh)) #define PPC_LWARX(t, a, b, eh) stringify_in_c(.long PPC_RAW_LWARX(t, a, b, eh)) diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c index f0271daa8f6a..77cd4c5a2d63 100644 --- a/arch/powerpc/kernel/idle.c +++ b/arch/powerpc/kernel/idle.c @@ -82,7 +82,7 @@ void power4_idle(void) return; if (cpu_has_feature(CPU_FTR_ALTIVEC)) - asm volatile("DSSALL ; sync" ::: "memory"); + asm volatile(PPC_DSSALL " ; sync" ::: "memory"); power4_idle_nap(); diff --git a/arch/powerpc/kernel/idle_6xx.S b/arch/powerpc/kernel/idle_6xx.S index 69df840f7253..315e5e2ad703 100644 --- a/arch/powerpc/kernel/idle_6xx.S +++ b/arch/powerpc/kernel/idle_6xx.S @@ -129,7 +129,7 @@ BEGIN_FTR_SECTION END_FTR_SECTION_IFCLR(CPU_FTR_NO_DPM) mtspr SPRN_HID0,r4 BEGIN_FTR_SECTION - DSSALL + PPC_DSSALL sync END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) lwz r8,TI_LOCAL_FLAGS(r2) /* set napping bit */ diff --git a/arch/powerpc/kernel/l2cr_6xx.S b/arch/powerpc/kernel/l2cr_6xx.S index 225511d73bef..f2e03ed423d0 100644 --- a/arch/powerpc/kernel/l2cr_6xx.S +++ b/arch/powerpc/kernel/l2cr_6xx.S @@ -96,7 +96,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_L2CR) /* Stop DST streams */ BEGIN_FTR_SECTION - DSSALL + PPC_DSSALL sync END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) @@ -292,7 +292,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_L3CR) isync /* Stop DST streams */ - 
DSSALL + PPC_DSSALL sync /* Get the current enable bit of the L3CR into r4 */ @@ -401,7 +401,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_L3CR) _GLOBAL(__flush_disable_L1) /* Stop pending alitvec streams and memory accesses */ BEGIN_FTR_SECTION - DSSALL + PPC_DSSALL END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) sync diff --git a/arch/powerpc/kernel/swsusp_32.S b/arch/powerpc/kernel/swsusp_32.S index f73f4d72fea4..e0cbd63007f2 100644 --- a/arch/powerpc/kernel/swsusp_32.S +++ b/arch/powerpc/kernel/swsusp_32.S @@ -181,7 +181,7 @@ _GLOBAL(swsusp_arch_resume) #ifdef CONFIG_ALTIVEC /* Stop pending alitvec streams and memory accesses */ BEGIN_FTR_SECTION - DSSALL + PPC_DSSALL END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) #endif sync diff --git a/arch/powerpc/kernel/swsusp_asm64.S b/arch/powerpc/kernel/swsusp_asm64.S index 6d3189830dd3..068a268a8013 100644 --- a/arch/powerpc/kernel/swsusp_asm64.S +++ b/arch/powerpc/kernel/swsusp_asm64.S @@ -142,7 +142,7 @@ END_FW_FTR_SECTION_IFCLR(FW_FEATURE_LPAR) _GLOBAL(swsusp_arch_resume) /* Stop pending alitvec streams and memory accesses */ BEGIN_FTR_SECTION - DSSALL + PPC_DSSALL END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) sync diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c index 18f20da0d348..64290d343b55 100644 --- a/arch/powerpc/mm/mmu_context.c +++ b/arch/powerpc/mm/mmu_context.c @@ -79,7 +79,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next, * context */ if (cpu_has_feature(CPU_FTR_ALTIVEC)) - asm volatile ("dssall"); + asm volatile (PPC_DSSALL); if (new_on_cpu) radix_kvm_prefetch_workaround(next); diff --git a/arch/powerpc/platforms/powermac/cache.S b/arch/powerpc/platforms/powermac/cache.S index ced225415486..b8ae56e9f414 100644 --- a/arch/powerpc/platforms/powermac/cache.S +++ b/arch/powerpc/platforms/powermac/cache.S @@ -48,7 +48,7 @@ flush_disable_75x: /* Stop DST streams */ BEGIN_FTR_SECTION - DSSALL + PPC_DSSALL sync END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) @@ -197,7 +197,7 @@ flush_disable_745x: isync /* Stop prefetch streams */ - DSSALL + PPC_DSSALL sync /* Disable L2 prefetching */ From bcae8f8338abb2885dd1967f1be7b40c3aa23d4a Mon Sep 17 00:00:00 2001 From: Stephen Boyd Date: Wed, 8 Jun 2022 15:54:14 -0500 Subject: [PATCH 0932/1842] interconnect: qcom: sc7180: Drop IP0 interconnects commit 2f3724930eb4bba74f7d10bc3bef5bb22dd323df upstream. The IPA BCM resource ("IP0") on sc7180 was moved to the clk-rpmh driver in commit bcd63d222b60 ("clk: qcom: rpmh: Add IPA clock for SC7180") and modeled as a clk, but this interconnect driver still had it modeled as an interconnect. This was mostly OK because nobody used the interconnect definition, until the interconnect framework started dropping bandwidth requests on interconnects that aren't used via the sync_state callback in commit 7d3b0b0d8184 ("interconnect: qcom: Use icc_sync_state"). Once that patch was applied the IP0 resource was going to be controlled from two places, the clk framework and the interconnect framework. Even then, things were probably going to be OK, because commit b95b668eaaa2 ("interconnect: qcom: icc-rpmh: Add BCMs to commit list in pre_aggregate") was needed to actually drop bandwidth requests on unused interconnects, of which the IPA was one of the interconnect that wasn't getting dropped to zero. 
Combining the three commits together leads to bad behavior where the interconnect framework is disabling the IP0 resource because it has no users while the clk framework thinks the IP0 resource is on because the only user, the IPA driver, has turned it on via clk_prepare_enable(). Depending on when sync_state is called, we can get into a situation like below: IPA driver probes IPA driver gets notified modem started runtime PM get() IPA clk enabled -> IP0 resource is ON sync_state runs interconnect zeroes out the IP0 resource -> IP0 resource is off IPA driver tries to access a register and blows up The crash is an unclocked access that manifest as an SError. SError Interrupt on CPU0, code 0xbe000011 -- SError CPU: 0 PID: 3595 Comm: mmdata_mgr Not tainted 5.17.1+ #166 Hardware name: Google Lazor (rev1 - 2) with LTE (DT) pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : mutex_lock+0x4c/0x80 lr : mutex_lock+0x30/0x80 sp : ffffffc00da9b9c0 x29: ffffffc00da9b9c0 x28: 0000000000000000 x27: 0000000000000000 x26: ffffffc00da9bc90 x25: ffffff80c2024010 x24: ffffff80c2024000 x23: ffffff8083100000 x22: ffffff80831000d0 x21: ffffff80831000a8 x20: ffffff80831000a8 x19: ffffff8083100070 x18: 00000000ffff0a00 x17: 000000002f7254f1 x16: 0000000000000100 x15: 0000000000000000 x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11: 000000000001f0b8 x10: ffffffc00931f0b8 x9 : 0000000000000000 x8 : 0000000000000000 x7 : fefefefefeff2f60 x6 : 0000808080808080 x5 : 0000000000000000 x4 : 8080808080800000 x3 : ffffff80d2d4ee28 x2 : ffffff808c1d6e40 x1 : 0000000000000000 x0 : ffffff8083100070 Kernel panic - not syncing: Asynchronous SError Interrupt CPU: 0 PID: 3595 Comm: mmdata_mgr Not tainted 5.17.1+ #166 Hardware name: Google Lazor (rev1 - 2) with LTE (DT) Call trace: dump_backtrace+0xf4/0x114 show_stack+0x24/0x30 dump_stack_lvl+0x64/0x7c dump_stack+0x18/0x38 panic+0x150/0x38c nmi_panic+0x88/0xa0 arm64_serror_panic+0x74/0x80 do_serror+0x0/0x80 do_serror+0x58/0x80 el1h_64_error_handler+0x34/0x4c el1h_64_error+0x78/0x7c mutex_lock+0x4c/0x80 __gsi_channel_start+0x50/0x17c gsi_channel_start+0x54/0x90 ipa_endpoint_enable_one+0x34/0xc0 ipa_open+0x4c/0x120 Remove all IP0 resource management from the interconnect driver so that clk-rpmh is the sole owner. This fixes the issue by preventing the interconnect driver from overwriting the IP0 resource data that the clk-rpmh driver wrote. 
Cc: Alex Elder Cc: Bjorn Andersson Cc: Taniya Das Cc: Mike Tipton Cc: # 5.10.x Fixes: b95b668eaaa2 ("interconnect: qcom: icc-rpmh: Add BCMs to commit list in pre_aggregate") Fixes: bcd63d222b60 ("clk: qcom: rpmh: Add IPA clock for SC7180") Fixes: 7d3b0b0d8184 ("interconnect: qcom: Use icc_sync_state") Signed-off-by: Stephen Boyd Tested-by: Alex Elder Reviewed-by: Alex Elder Reviewed-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220412220033.1273607-2-swboyd@chromium.org Signed-off-by: Georgi Djakov Signed-off-by: Alex Elder Signed-off-by: Greg Kroah-Hartman --- drivers/interconnect/qcom/sc7180.c | 21 --------------------- 1 file changed, 21 deletions(-) diff --git a/drivers/interconnect/qcom/sc7180.c b/drivers/interconnect/qcom/sc7180.c index 8d9044ed18ab..597a7ee7a9bb 100644 --- a/drivers/interconnect/qcom/sc7180.c +++ b/drivers/interconnect/qcom/sc7180.c @@ -47,7 +47,6 @@ DEFINE_QNODE(qnm_mnoc_sf, SC7180_MASTER_MNOC_SF_MEM_NOC, 1, 32, SC7180_SLAVE_GEM DEFINE_QNODE(qnm_snoc_gc, SC7180_MASTER_SNOC_GC_MEM_NOC, 1, 8, SC7180_SLAVE_LLCC); DEFINE_QNODE(qnm_snoc_sf, SC7180_MASTER_SNOC_SF_MEM_NOC, 1, 16, SC7180_SLAVE_LLCC); DEFINE_QNODE(qxm_gpu, SC7180_MASTER_GFX3D, 2, 32, SC7180_SLAVE_GEM_NOC_SNOC, SC7180_SLAVE_LLCC); -DEFINE_QNODE(ipa_core_master, SC7180_MASTER_IPA_CORE, 1, 8, SC7180_SLAVE_IPA_CORE); DEFINE_QNODE(llcc_mc, SC7180_MASTER_LLCC, 2, 4, SC7180_SLAVE_EBI1); DEFINE_QNODE(qhm_mnoc_cfg, SC7180_MASTER_CNOC_MNOC_CFG, 1, 4, SC7180_SLAVE_SERVICE_MNOC); DEFINE_QNODE(qxm_camnoc_hf0, SC7180_MASTER_CAMNOC_HF0, 2, 32, SC7180_SLAVE_MNOC_HF_MEM_NOC); @@ -129,7 +128,6 @@ DEFINE_QNODE(qhs_mdsp_ms_mpu_cfg, SC7180_SLAVE_MSS_PROC_MS_MPU_CFG, 1, 4); DEFINE_QNODE(qns_gem_noc_snoc, SC7180_SLAVE_GEM_NOC_SNOC, 1, 8, SC7180_MASTER_GEM_NOC_SNOC); DEFINE_QNODE(qns_llcc, SC7180_SLAVE_LLCC, 1, 16, SC7180_MASTER_LLCC); DEFINE_QNODE(srvc_gemnoc, SC7180_SLAVE_SERVICE_GEM_NOC, 1, 4); -DEFINE_QNODE(ipa_core_slave, SC7180_SLAVE_IPA_CORE, 1, 8); DEFINE_QNODE(ebi, SC7180_SLAVE_EBI1, 2, 4); DEFINE_QNODE(qns_mem_noc_hf, SC7180_SLAVE_MNOC_HF_MEM_NOC, 1, 32, SC7180_MASTER_MNOC_HF_MEM_NOC); DEFINE_QNODE(qns_mem_noc_sf, SC7180_SLAVE_MNOC_SF_MEM_NOC, 1, 32, SC7180_MASTER_MNOC_SF_MEM_NOC); @@ -160,7 +158,6 @@ DEFINE_QBCM(bcm_mc0, "MC0", true, &ebi); DEFINE_QBCM(bcm_sh0, "SH0", true, &qns_llcc); DEFINE_QBCM(bcm_mm0, "MM0", false, &qns_mem_noc_hf); DEFINE_QBCM(bcm_ce0, "CE0", false, &qxm_crypto); -DEFINE_QBCM(bcm_ip0, "IP0", false, &ipa_core_slave); DEFINE_QBCM(bcm_cn0, "CN0", true, &qnm_snoc, &xm_qdss_dap, &qhs_a1_noc_cfg, &qhs_a2_noc_cfg, &qhs_ahb2phy0, &qhs_aop, &qhs_aoss, &qhs_boot_rom, &qhs_camera_cfg, &qhs_camera_nrt_throttle_cfg, &qhs_camera_rt_throttle_cfg, &qhs_clk_ctl, &qhs_cpr_cx, &qhs_cpr_mx, &qhs_crypto0_cfg, &qhs_dcc_cfg, &qhs_ddrss_cfg, &qhs_display_cfg, &qhs_display_rt_throttle_cfg, &qhs_display_throttle_cfg, &qhs_glm, &qhs_gpuss_cfg, &qhs_imem_cfg, &qhs_ipa, &qhs_mnoc_cfg, &qhs_mss_cfg, &qhs_npu_cfg, &qhs_npu_dma_throttle_cfg, &qhs_npu_dsp_throttle_cfg, &qhs_pimem_cfg, &qhs_prng, &qhs_qdss_cfg, &qhs_qm_cfg, &qhs_qm_mpu_cfg, &qhs_qup0, &qhs_qup1, &qhs_security, &qhs_snoc_cfg, &qhs_tcsr, &qhs_tlmm_1, &qhs_tlmm_2, &qhs_tlmm_3, &qhs_ufs_mem_cfg, &qhs_usb3, &qhs_venus_cfg, &qhs_venus_throttle_cfg, &qhs_vsense_ctrl_cfg, &srvc_cnoc); DEFINE_QBCM(bcm_mm1, "MM1", false, &qxm_camnoc_hf0_uncomp, &qxm_camnoc_hf1_uncomp, &qxm_camnoc_sf_uncomp, &qhm_mnoc_cfg, &qxm_mdp0, &qxm_rot, &qxm_venus0, &qxm_venus_arm9); DEFINE_QBCM(bcm_sh2, "SH2", false, &acm_sys_tcu); @@ -372,22 +369,6 @@ static struct qcom_icc_desc 
sc7180_gem_noc = { .num_bcms = ARRAY_SIZE(gem_noc_bcms), }; -static struct qcom_icc_bcm *ipa_virt_bcms[] = { - &bcm_ip0, -}; - -static struct qcom_icc_node *ipa_virt_nodes[] = { - [MASTER_IPA_CORE] = &ipa_core_master, - [SLAVE_IPA_CORE] = &ipa_core_slave, -}; - -static struct qcom_icc_desc sc7180_ipa_virt = { - .nodes = ipa_virt_nodes, - .num_nodes = ARRAY_SIZE(ipa_virt_nodes), - .bcms = ipa_virt_bcms, - .num_bcms = ARRAY_SIZE(ipa_virt_bcms), -}; - static struct qcom_icc_bcm *mc_virt_bcms[] = { &bcm_acv, &bcm_mc0, @@ -611,8 +592,6 @@ static const struct of_device_id qnoc_of_match[] = { .data = &sc7180_dc_noc}, { .compatible = "qcom,sc7180-gem-noc", .data = &sc7180_gem_noc}, - { .compatible = "qcom,sc7180-ipa-virt", - .data = &sc7180_ipa_virt}, { .compatible = "qcom,sc7180-mc-virt", .data = &sc7180_mc_virt}, { .compatible = "qcom,sc7180-mmss-noc", From 418db40cc753aba2d92ff6f1d4f04cd39da64da8 Mon Sep 17 00:00:00 2001 From: Stephen Boyd Date: Wed, 8 Jun 2022 15:54:15 -0500 Subject: [PATCH 0933/1842] interconnect: Restore sync state by ignoring ipa-virt in provider count commit 20ce30fb4750f2ffc130cdcb26232b1dd87cd0a5 upstream. Ignore compatible strings for the IPA virt drivers that were removed in commits 2fb251c26560 ("interconnect: qcom: sdx55: Drop IP0 interconnects") and 2f3724930eb4 ("interconnect: qcom: sc7180: Drop IP0 interconnects") so that the sync state logic can kick in again. Otherwise all the interconnects in the system will stay pegged at max speeds because 'providers_count' is always going to be one larger than the number of drivers that will ever probe on sc7180 or sdx55. This fixes suspend on sc7180 and sdx55 devices when you don't have a devicetree patch to remove the ipa-virt compatible node. Cc: Bjorn Andersson Cc: Doug Anderson Cc: Alex Elder Cc: Taniya Das Cc: Mike Tipton Cc: # 5.10.x Fixes: 2fb251c26560 ("interconnect: qcom: sdx55: Drop IP0 interconnects") Fixes: 2f3724930eb4 ("interconnect: qcom: sc7180: Drop IP0 interconnects") Signed-off-by: Stephen Boyd Reviewed-by: Alex Elder Reviewed-by: Douglas Anderson Link: https://lore.kernel.org/r/20220427013226.341209-1-swboyd@chromium.org Signed-off-by: Georgi Djakov Signed-off-by: Alex Elder Signed-off-by: Greg Kroah-Hartman --- drivers/interconnect/core.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c index 7887941730db..ceb6cdc20484 100644 --- a/drivers/interconnect/core.c +++ b/drivers/interconnect/core.c @@ -1084,9 +1084,14 @@ static int of_count_icc_providers(struct device_node *np) { struct device_node *child; int count = 0; + const struct of_device_id __maybe_unused ignore_list[] = { + { .compatible = "qcom,sc7180-ipa-virt" }, + {} + }; for_each_available_child_of_node(np, child) { - if (of_property_read_bool(child, "#interconnect-cells")) + if (of_property_read_bool(child, "#interconnect-cells") && + likely(!of_match_node(ignore_list, child))) count++; count += of_count_icc_providers(child); } From 63bcb9da91eb0aba2e87022200f9b9aa45c4c111 Mon Sep 17 00:00:00 2001 From: Pascal Hambourg Date: Wed, 13 Apr 2022 08:53:56 +0200 Subject: [PATCH 0934/1842] md/raid0: Ignore RAID0 layout if the second zone has only one device commit ea23994edc4169bd90d7a9b5908c6ccefd82fa40 upstream. The RAID0 layout is irrelevant if all members have the same size so the array has only one zone. It is *also* irrelevant if the array has two zones and the second zone has only one device, for example if the array has two members of different sizes. 
So in that case it makes sense to allow assembly even when the layout is undefined, like what is done when the array has only one zone. Reviewed-by: NeilBrown Signed-off-by: Pascal Hambourg Signed-off-by: Song Liu Signed-off-by: Greg Kroah-Hartman --- drivers/md/raid0.c | 31 ++++++++++++++++--------------- 1 file changed, 16 insertions(+), 15 deletions(-) diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c index 35843df15b5e..a4c0cafa6010 100644 --- a/drivers/md/raid0.c +++ b/drivers/md/raid0.c @@ -128,21 +128,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) pr_debug("md/raid0:%s: FINAL %d zones\n", mdname(mddev), conf->nr_strip_zones); - if (conf->nr_strip_zones == 1) { - conf->layout = RAID0_ORIG_LAYOUT; - } else if (mddev->layout == RAID0_ORIG_LAYOUT || - mddev->layout == RAID0_ALT_MULTIZONE_LAYOUT) { - conf->layout = mddev->layout; - } else if (default_layout == RAID0_ORIG_LAYOUT || - default_layout == RAID0_ALT_MULTIZONE_LAYOUT) { - conf->layout = default_layout; - } else { - pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n", - mdname(mddev)); - pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n"); - err = -ENOTSUPP; - goto abort; - } /* * now since we have the hard sector sizes, we can make sure * chunk size is a multiple of that sector size @@ -273,6 +258,22 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) (unsigned long long)smallest->sectors); } + if (conf->nr_strip_zones == 1 || conf->strip_zone[1].nb_dev == 1) { + conf->layout = RAID0_ORIG_LAYOUT; + } else if (mddev->layout == RAID0_ORIG_LAYOUT || + mddev->layout == RAID0_ALT_MULTIZONE_LAYOUT) { + conf->layout = mddev->layout; + } else if (default_layout == RAID0_ORIG_LAYOUT || + default_layout == RAID0_ALT_MULTIZONE_LAYOUT) { + conf->layout = default_layout; + } else { + pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n", + mdname(mddev)); + pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n"); + err = -EOPNOTSUPP; + goto abort; + } + pr_debug("md/raid0:%s: done.\n", mdname(mddev)); *private_conf = conf; From ef51997771d698e2183c17d5a39a40e107d8ce88 Mon Sep 17 00:00:00 2001 From: Johan Hovold Date: Fri, 1 Apr 2022 15:33:51 +0200 Subject: [PATCH 0935/1842] PCI: qcom: Fix pipe clock imbalance commit fdf6a2f533115ec5d4d9629178f8196331f1ac50 upstream. Fix a clock imbalance introduced by ed8cc3b1fc84 ("PCI: qcom: Add support for SDM845 PCIe controller"), which enables the pipe clock both in init() and in post_init() but only disables in post_deinit(). Note that the pipe clock was also never disabled in the init() error paths and that enabling the clock before powering up the PHY looks questionable. 
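The underlying rule, shown here only as a generic illustration of the clk API (not a quote of the driver), is that every successful clk_prepare_enable() must be balanced by exactly one clk_disable_unprepare(); enabling the pipe clock in both init() and post_init() while disabling it once in post_deinit() therefore leaves the prepare/enable count permanently raised:

        ret = clk_prepare_enable(res->pipe_clk);       /* count: 0 -> 1 */
        if (ret)
                return ret;
        /* ... use the clock ... */
        clk_disable_unprepare(res->pipe_clk);          /* count: 1 -> 0 */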
Link: https://lore.kernel.org/r/20220401133351.10113-1-johan+linaro@kernel.org Fixes: ed8cc3b1fc84 ("PCI: qcom: Add support for SDM845 PCIe controller") Signed-off-by: Johan Hovold Signed-off-by: Lorenzo Pieralisi Signed-off-by: Bjorn Helgaas Reviewed-by: Bjorn Andersson Cc: stable@vger.kernel.org # 5.6 Signed-off-by: Greg Kroah-Hartman --- drivers/pci/controller/dwc/pcie-qcom.c | 6 ------ 1 file changed, 6 deletions(-) diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c index 9c7cdc4e12f2..1b8b3c12eece 100644 --- a/drivers/pci/controller/dwc/pcie-qcom.c +++ b/drivers/pci/controller/dwc/pcie-qcom.c @@ -1192,12 +1192,6 @@ static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie) goto err_disable_clocks; } - ret = clk_prepare_enable(res->pipe_clk); - if (ret) { - dev_err(dev, "cannot prepare/enable pipe clock\n"); - goto err_disable_clocks; - } - /* configure PCIe to RC mode */ writel(DEVICE_TYPE_RC, pcie->parf + PCIE20_PARF_DEVICE_TYPE); From b8c17121f05b542eb6f315a97adbb91060dfb53a Mon Sep 17 00:00:00 2001 From: Damien Le Moal Date: Thu, 2 Jun 2022 23:16:57 +0900 Subject: [PATCH 0936/1842] zonefs: fix handling of explicit_open option on mount commit a2a513be7139b279f1b5b2cee59c6c4950c34346 upstream. Ignoring the explicit_open mount option on mount for devices that do not have a limit on the number of open zones must be done after the mount options are parsed and set in s_mount_opts. Move the check to ignore the explicit_open option after the call to zonefs_parse_options() in zonefs_fill_super(). Fixes: b5c00e975779 ("zonefs: open/close zone on file open/close") Cc: Signed-off-by: Damien Le Moal Reviewed-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn Signed-off-by: Greg Kroah-Hartman --- fs/zonefs/super.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c index 08ab5d1e3a3e..8c7d01e907a3 100644 --- a/fs/zonefs/super.c +++ b/fs/zonefs/super.c @@ -1706,11 +1706,6 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent) sbi->s_mount_opts = ZONEFS_MNTOPT_ERRORS_RO; sbi->s_max_open_zones = bdev_max_open_zones(sb->s_bdev); atomic_set(&sbi->s_open_zones, 0); - if (!sbi->s_max_open_zones && - sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) { - zonefs_info(sb, "No open zones limit. Ignoring explicit_open mount option\n"); - sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN; - } ret = zonefs_read_super(sb); if (ret) @@ -1729,6 +1724,12 @@ static int zonefs_fill_super(struct super_block *sb, void *data, int silent) zonefs_info(sb, "Mounting %u zones", blkdev_nr_zones(sb->s_bdev->bd_disk)); + if (!sbi->s_max_open_zones && + sbi->s_mount_opts & ZONEFS_MNTOPT_EXPLICIT_OPEN) { + zonefs_info(sb, "No open zones limit. Ignoring explicit_open mount option\n"); + sbi->s_mount_opts &= ~ZONEFS_MNTOPT_EXPLICIT_OPEN; + } + /* Create root directory inode */ ret = -ENOMEM; inode = new_inode(sb); From 5e34b4975669451f731ec20b8c0e8c3431d691b3 Mon Sep 17 00:00:00 2001 From: Dave Jiang Date: Tue, 26 Apr 2022 15:32:06 -0700 Subject: [PATCH 0937/1842] dmaengine: idxd: add missing callback function to support DMA_INTERRUPT commit 2112b8f4fb5cc35d1c384324763765953186b81f upstream. When setting DMA_INTERRUPT capability, a callback function dma->device_prep_dma_interrupt() is needed to support this capability. Without setting the callback, dma_async_device_register() will fail dma capability check. 
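For reference, the capability check in question lives in the dmaengine core: dma_async_device_register() rejects a device that advertises a capability without the matching prep callback, roughly along these lines (paraphrased, not a verbatim quote of the core):

        if (dma_has_cap(DMA_INTERRUPT, device->cap_mask) &&
            !device->device_prep_dma_interrupt) {
                dev_err(device->dev,
                        "claims DMA_INTERRUPT but device_prep_dma_interrupt is not set\n");
                return -EIO;
        }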
Fixes: 4e5a4eb20393 ("dmaengine: idxd: set DMA_INTERRUPT cap bit") Signed-off-by: Dave Jiang Link: https://lore.kernel.org/r/165101232637.3951447.15765792791591763119.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- drivers/dma/idxd/dma.c | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c index d53ce22b4b8f..09ad37bbd98b 100644 --- a/drivers/dma/idxd/dma.c +++ b/drivers/dma/idxd/dma.c @@ -82,6 +82,27 @@ static inline void idxd_prep_desc_common(struct idxd_wq *wq, hw->int_handle = wq->vec_ptr; } +static struct dma_async_tx_descriptor * +idxd_dma_prep_interrupt(struct dma_chan *c, unsigned long flags) +{ + struct idxd_wq *wq = to_idxd_wq(c); + u32 desc_flags; + struct idxd_desc *desc; + + if (wq->state != IDXD_WQ_ENABLED) + return NULL; + + op_flag_setup(flags, &desc_flags); + desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK); + if (IS_ERR(desc)) + return NULL; + + idxd_prep_desc_common(wq, desc->hw, DSA_OPCODE_NOOP, + 0, 0, 0, desc->compl_dma, desc_flags); + desc->txd.flags = flags; + return &desc->txd; +} + static struct dma_async_tx_descriptor * idxd_dma_submit_memcpy(struct dma_chan *c, dma_addr_t dma_dest, dma_addr_t dma_src, size_t len, unsigned long flags) @@ -193,6 +214,7 @@ int idxd_register_dma_device(struct idxd_device *idxd) dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask); dma->device_release = idxd_dma_release; + dma->device_prep_dma_interrupt = idxd_dma_prep_interrupt; if (idxd->hw.opcap.bits[0] & IDXD_OPCAP_MEMMOVE) { dma_cap_set(DMA_MEMCPY, dma->cap_mask); dma->device_prep_dma_memcpy = idxd_dma_submit_memcpy; From 9ba2b4ac35935f05ac98cff722f36ba07d62270e Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Fri, 27 May 2022 14:28:29 -0700 Subject: [PATCH 0938/1842] tcp: fix tcp_mtup_probe_success vs wrong snd_cwnd commit 11825765291a93d8e7f44230da67b9f607c777bf upstream. syzbot got a new report [1] finally pointing to a very old bug, added in initial support for MTU probing. tcp_mtu_probe() has checks about starting an MTU probe if tcp_snd_cwnd(tp) >= 11. But nothing prevents tcp_snd_cwnd(tp) to be reduced later and before the MTU probe succeeds. This bug would lead to potential zero-divides. Debugging added in commit 40570375356c ("tcp: add accessors to read/set tp->snd_cwnd") has paid off :) While we are at it, address potential overflows in this code. 
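A worked example with illustrative numbers (the MSS, MTU and probe size below are assumptions, not values from the report) shows both hazards:

        /* The probe was started while snd_cwnd >= 11, but losses later
         * shrank snd_cwnd to 1; tcp_mss_to_mtu(sk, tp->mss_cache) is ~1500
         * and icsk_mtup.probe_size is 1588.  The old 32-bit computation
         *
         *     snd_cwnd = 1 * 1500 / 1588 = 0        (integer division)
         *
         * leaves a zero cwnd that later turns up as a divisor, and a very
         * large cwnd can overflow the 32-bit product.  The fix (see the diff
         * below) widens the product to u64 and clamps the result:
         *
         *     val = (u64)tp->snd_cwnd * tcp_mss_to_mtu(sk, tp->mss_cache);
         *     do_div(val, icsk->icsk_mtup.probe_size);
         *     tp->snd_cwnd = max_t(u32, 1U, val);
         */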
[1] WARNING: CPU: 1 PID: 14132 at include/net/tcp.h:1219 tcp_mtup_probe_success+0x366/0x570 net/ipv4/tcp_input.c:2712 Modules linked in: CPU: 1 PID: 14132 Comm: syz-executor.2 Not tainted 5.18.0-syzkaller-07857-gbabf0bb978e3 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 RIP: 0010:tcp_snd_cwnd_set include/net/tcp.h:1219 [inline] RIP: 0010:tcp_mtup_probe_success+0x366/0x570 net/ipv4/tcp_input.c:2712 Code: 74 08 48 89 ef e8 da 80 17 f9 48 8b 45 00 65 48 ff 80 80 03 00 00 48 83 c4 30 5b 41 5c 41 5d 41 5e 41 5f 5d c3 e8 aa b0 c5 f8 <0f> 0b e9 16 fe ff ff 48 8b 4c 24 08 80 e1 07 38 c1 0f 8c c7 fc ff RSP: 0018:ffffc900079e70f8 EFLAGS: 00010287 RAX: ffffffff88c0f7f6 RBX: ffff8880756e7a80 RCX: 0000000000040000 RDX: ffffc9000c6c4000 RSI: 0000000000031f9e RDI: 0000000000031f9f RBP: 0000000000000000 R08: ffffffff88c0f606 R09: ffffc900079e7520 R10: ffffed101011226d R11: 1ffff1101011226c R12: 1ffff1100eadcf50 R13: ffff8880756e72c0 R14: 1ffff1100eadcf89 R15: dffffc0000000000 FS: 00007f643236e700(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f1ab3f1e2a0 CR3: 0000000064fe7000 CR4: 00000000003506e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: tcp_clean_rtx_queue+0x223a/0x2da0 net/ipv4/tcp_input.c:3356 tcp_ack+0x1962/0x3c90 net/ipv4/tcp_input.c:3861 tcp_rcv_established+0x7c8/0x1ac0 net/ipv4/tcp_input.c:5973 tcp_v6_do_rcv+0x57b/0x1210 net/ipv6/tcp_ipv6.c:1476 sk_backlog_rcv include/net/sock.h:1061 [inline] __release_sock+0x1d8/0x4c0 net/core/sock.c:2849 release_sock+0x5d/0x1c0 net/core/sock.c:3404 sk_stream_wait_memory+0x700/0xdc0 net/core/stream.c:145 tcp_sendmsg_locked+0x111d/0x3fc0 net/ipv4/tcp.c:1410 tcp_sendmsg+0x2c/0x40 net/ipv4/tcp.c:1448 sock_sendmsg_nosec net/socket.c:714 [inline] sock_sendmsg net/socket.c:734 [inline] __sys_sendto+0x439/0x5c0 net/socket.c:2119 __do_sys_sendto net/socket.c:2131 [inline] __se_sys_sendto net/socket.c:2127 [inline] __x64_sys_sendto+0xda/0xf0 net/socket.c:2127 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x46/0xb0 RIP: 0033:0x7f6431289109 Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f643236e168 EFLAGS: 00000246 ORIG_RAX: 000000000000002c RAX: ffffffffffffffda RBX: 00007f643139c100 RCX: 00007f6431289109 RDX: 00000000d0d0c2ac RSI: 0000000020000080 RDI: 000000000000000a RBP: 00007f64312e308d R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000000 R13: 00007fff372533af R14: 00007f643236e300 R15: 0000000000022000 Fixes: 5d424d5a674f ("[TCP]: MTU probing") Signed-off-by: Eric Dumazet Reported-by: syzbot Acked-by: Yuchung Cheng Acked-by: Neal Cardwell Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_input.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 2e267b2e33e5..54ed68e05b66 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -2667,12 +2667,15 @@ static void tcp_mtup_probe_success(struct sock *sk) { struct tcp_sock *tp = tcp_sk(sk); struct inet_connection_sock *icsk = inet_csk(sk); + u64 val; - /* FIXME: breaks with very large cwnd */ tp->prior_ssthresh = tcp_current_ssthresh(sk); - tp->snd_cwnd = tp->snd_cwnd * - tcp_mss_to_mtu(sk, tp->mss_cache) / - icsk->icsk_mtup.probe_size; + + val = (u64)tp->snd_cwnd * tcp_mss_to_mtu(sk, tp->mss_cache); + do_div(val, icsk->icsk_mtup.probe_size); + WARN_ON_ONCE((u32)val != val); + tp->snd_cwnd = max_t(u32, 1U, val); + tp->snd_cwnd_cnt = 0; tp->snd_cwnd_stamp = tcp_jiffies32; tp->snd_ssthresh = tcp_current_ssthresh(sk); From 5754c570a569d3f31e874abbf08dae1218a7431c Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Tue, 14 Jun 2022 18:32:47 +0200 Subject: [PATCH 0939/1842] Linux 5.10.122 Link: https://lore.kernel.org/r/20220613094850.166931805@linuxfoundation.org Tested-by: Pavel Machek (CIP) Link: https://lore.kernel.org/r/20220613181850.655683495@linuxfoundation.org Tested-by: Florian Fainelli Tested-by: Fox Chen Tested-by: Shuah Khan Tested-by: Sudip Mukherjee Tested-by: Linux Kernel Functional Testing Tested-by: Guenter Roeck Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 5233d3d9a3b5..3ed1da61a3c7 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 121 +SUBLEVEL = 122 EXTRAVERSION = NAME = Dare mighty things From f8a85334a57e7842320476ff27be3a5f151da364 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:26:07 -0700 Subject: [PATCH 0940/1842] Documentation: Add documentation for Processor MMIO Stale Data commit 4419470191386456e0b8ed4eb06a70b0021798a6 upstream Add the admin guide for Processor MMIO stale data vulnerabilities. Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- Documentation/admin-guide/hw-vuln/index.rst | 1 + .../hw-vuln/processor_mmio_stale_data.rst | 246 ++++++++++++++++++ 2 files changed, 247 insertions(+) create mode 100644 Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst index ca4dbdd9016d..2adec1e6520a 100644 --- a/Documentation/admin-guide/hw-vuln/index.rst +++ b/Documentation/admin-guide/hw-vuln/index.rst @@ -15,3 +15,4 @@ are configurable at compile, boot or run time. tsx_async_abort multihit.rst special-register-buffer-data-sampling.rst + processor_mmio_stale_data.rst diff --git a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst new file mode 100644 index 000000000000..9393c50b5afc --- /dev/null +++ b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst @@ -0,0 +1,246 @@ +========================================= +Processor MMIO Stale Data Vulnerabilities +========================================= + +Processor MMIO Stale Data Vulnerabilities are a class of memory-mapped I/O +(MMIO) vulnerabilities that can expose data. 
The sequences of operations for +exposing data range from simple to very complex. Because most of the +vulnerabilities require the attacker to have access to MMIO, many environments +are not affected. System environments using virtualization where MMIO access is +provided to untrusted guests may need mitigation. These vulnerabilities are +not transient execution attacks. However, these vulnerabilities may propagate +stale data into core fill buffers where the data can subsequently be inferred +by an unmitigated transient execution attack. Mitigation for these +vulnerabilities includes a combination of microcode update and software +changes, depending on the platform and usage model. Some of these mitigations +are similar to those used to mitigate Microarchitectural Data Sampling (MDS) or +those used to mitigate Special Register Buffer Data Sampling (SRBDS). + +Data Propagators +================ +Propagators are operations that result in stale data being copied or moved from +one microarchitectural buffer or register to another. Processor MMIO Stale Data +Vulnerabilities are operations that may result in stale data being directly +read into an architectural, software-visible state or sampled from a buffer or +register. + +Fill Buffer Stale Data Propagator (FBSDP) +----------------------------------------- +Stale data may propagate from fill buffers (FB) into the non-coherent portion +of the uncore on some non-coherent writes. Fill buffer propagation by itself +does not make stale data architecturally visible. Stale data must be propagated +to a location where it is subject to reading or sampling. + +Sideband Stale Data Propagator (SSDP) +------------------------------------- +The sideband stale data propagator (SSDP) is limited to the client (including +Intel Xeon server E3) uncore implementation. The sideband response buffer is +shared by all client cores. For non-coherent reads that go to sideband +destinations, the uncore logic returns 64 bytes of data to the core, including +both requested data and unrequested stale data, from a transaction buffer and +the sideband response buffer. As a result, stale data from the sideband +response and transaction buffers may now reside in a core fill buffer. + +Primary Stale Data Propagator (PSDP) +------------------------------------ +The primary stale data propagator (PSDP) is limited to the client (including +Intel Xeon server E3) uncore implementation. Similar to the sideband response +buffer, the primary response buffer is shared by all client cores. For some +processors, MMIO primary reads will return 64 bytes of data to the core fill +buffer including both requested data and unrequested stale data. This is +similar to the sideband stale data propagator. + +Vulnerabilities +=============== +Device Register Partial Write (DRPW) (CVE-2022-21166) +----------------------------------------------------- +Some endpoint MMIO registers incorrectly handle writes that are smaller than +the register size. Instead of aborting the write or only copying the correct +subset of bytes (for example, 2 bytes for a 2-byte write), more bytes than +specified by the write transaction may be written to the register. On +processors affected by FBSDP, this may expose stale data from the fill buffers +of the core that created the write transaction. 
+ +Shared Buffers Data Sampling (SBDS) (CVE-2022-21125) +---------------------------------------------------- +After propagators may have moved data around the uncore and copied stale data +into client core fill buffers, processors affected by MFBDS can leak data from +the fill buffer. It is limited to the client (including Intel Xeon server E3) +uncore implementation. + +Shared Buffers Data Read (SBDR) (CVE-2022-21123) +------------------------------------------------ +It is similar to Shared Buffer Data Sampling (SBDS) except that the data is +directly read into the architectural software-visible state. It is limited to +the client (including Intel Xeon server E3) uncore implementation. + +Affected Processors +=================== +Not all the CPUs are affected by all the variants. For instance, most +processors for the server market (excluding Intel Xeon E3 processors) are +impacted by only Device Register Partial Write (DRPW). + +Below is the list of affected Intel processors [#f1]_: + + =================== ============ ========= + Common name Family_Model Steppings + =================== ============ ========= + HASWELL_X 06_3FH 2,4 + SKYLAKE_L 06_4EH 3 + BROADWELL_X 06_4FH All + SKYLAKE_X 06_55H 3,4,6,7,11 + BROADWELL_D 06_56H 3,4,5 + SKYLAKE 06_5EH 3 + ICELAKE_X 06_6AH 4,5,6 + ICELAKE_D 06_6CH 1 + ICELAKE_L 06_7EH 5 + ATOM_TREMONT_D 06_86H All + LAKEFIELD 06_8AH 1 + KABYLAKE_L 06_8EH 9 to 12 + ATOM_TREMONT 06_96H 1 + ATOM_TREMONT_L 06_9CH 0 + KABYLAKE 06_9EH 9 to 13 + COMETLAKE 06_A5H 2,3,5 + COMETLAKE_L 06_A6H 0,1 + ROCKETLAKE 06_A7H 1 + =================== ============ ========= + +If a CPU is in the affected processor list, but not affected by a variant, it +is indicated by new bits in MSR IA32_ARCH_CAPABILITIES. As described in a later +section, mitigation largely remains the same for all the variants, i.e. to +clear the CPU fill buffers via VERW instruction. + +New bits in MSRs +================ +Newer processors and microcode update on existing affected processors added new +bits to IA32_ARCH_CAPABILITIES MSR. These bits can be used to enumerate +specific variants of Processor MMIO Stale Data vulnerabilities and mitigation +capability. + +MSR IA32_ARCH_CAPABILITIES +-------------------------- +Bit 13 - SBDR_SSDP_NO - When set, processor is not affected by either the + Shared Buffers Data Read (SBDR) vulnerability or the sideband stale + data propagator (SSDP). +Bit 14 - FBSDP_NO - When set, processor is not affected by the Fill Buffer + Stale Data Propagator (FBSDP). +Bit 15 - PSDP_NO - When set, processor is not affected by Primary Stale Data + Propagator (PSDP). +Bit 17 - FB_CLEAR - When set, VERW instruction will overwrite CPU fill buffer + values as part of MD_CLEAR operations. Processors that do not + enumerate MDS_NO (meaning they are affected by MDS) but that do + enumerate support for both L1D_FLUSH and MD_CLEAR implicitly enumerate + FB_CLEAR as part of their MD_CLEAR support. +Bit 18 - FB_CLEAR_CTRL - Processor supports read and write to MSR + IA32_MCU_OPT_CTRL[FB_CLEAR_DIS]. On such processors, the FB_CLEAR_DIS + bit can be set to cause the VERW instruction to not perform the + FB_CLEAR action. Not all processors that support FB_CLEAR will support + FB_CLEAR_CTRL. + +MSR IA32_MCU_OPT_CTRL +--------------------- +Bit 3 - FB_CLEAR_DIS - When set, VERW instruction does not perform the FB_CLEAR +action. 
This may be useful to reduce the performance impact of FB_CLEAR in +cases where system software deems it warranted (for example, when performance +is more critical, or the untrusted software has no MMIO access). Note that +FB_CLEAR_DIS has no impact on enumeration (for example, it does not change +FB_CLEAR or MD_CLEAR enumeration) and it may not be supported on all processors +that enumerate FB_CLEAR. + +Mitigation +========== +Like MDS, all variants of Processor MMIO Stale Data vulnerabilities have the +same mitigation strategy to force the CPU to clear the affected buffers before +an attacker can extract the secrets. + +This is achieved by using the otherwise unused and obsolete VERW instruction in +combination with a microcode update. The microcode clears the affected CPU +buffers when the VERW instruction is executed. + +Kernel reuses the MDS function to invoke the buffer clearing: + + mds_clear_cpu_buffers() + +On MDS affected CPUs, the kernel already invokes CPU buffer clear on +kernel/userspace, hypervisor/guest and C-state (idle) transitions. No +additional mitigation is needed on such CPUs. + +For CPUs not affected by MDS or TAA, mitigation is needed only for the attacker +with MMIO capability. Therefore, VERW is not required for kernel/userspace. For +virtualization case, VERW is only needed at VMENTER for a guest with MMIO +capability. + +Mitigation points +----------------- +Return to user space +^^^^^^^^^^^^^^^^^^^^ +Same mitigation as MDS when affected by MDS/TAA, otherwise no mitigation +needed. + +C-State transition +^^^^^^^^^^^^^^^^^^ +Control register writes by CPU during C-state transition can propagate data +from fill buffer to uncore buffers. Execute VERW before C-state transition to +clear CPU fill buffers. + +Guest entry point +^^^^^^^^^^^^^^^^^ +Same mitigation as MDS when processor is also affected by MDS/TAA, otherwise +execute VERW at VMENTER only for MMIO capable guests. On CPUs not affected by +MDS/TAA, guest without MMIO access cannot extract secrets using Processor MMIO +Stale Data vulnerabilities, so there is no need to execute VERW for such guests. + +Mitigation control on the kernel command line +--------------------------------------------- +The kernel command line allows to control the Processor MMIO Stale Data +mitigations at boot time with the option "mmio_stale_data=". The valid +arguments for this option are: + + ========== ================================================================= + full If the CPU is vulnerable, enable mitigation; CPU buffer clearing + on exit to userspace and when entering a VM. Idle transitions are + protected as well. It does not automatically disable SMT. + full,nosmt Same as full, with SMT disabled on vulnerable CPUs. This is the + complete mitigation. + off Disables mitigation completely. + ========== ================================================================= + +If the CPU is affected and mmio_stale_data=off is not supplied on the kernel +command line, then the kernel selects the appropriate mitigation. + +Mitigation status information +----------------------------- +The Linux kernel provides a sysfs interface to enumerate the current +vulnerability status of the system: whether the system is vulnerable, and +which mitigations are active. The relevant sysfs file is: + + /sys/devices/system/cpu/vulnerabilities/mmio_stale_data + +The possible values in this file are: + + .. 
list-table:: + + * - 'Not affected' + - The processor is not vulnerable + * - 'Vulnerable' + - The processor is vulnerable, but no mitigation enabled + * - 'Vulnerable: Clear CPU buffers attempted, no microcode' + - The processor is vulnerable, but microcode is not updated. The + mitigation is enabled on a best effort basis. + * - 'Mitigation: Clear CPU buffers' + - The processor is vulnerable and the CPU buffer clearing mitigation is + enabled. + +If the processor is vulnerable then the following information is appended to +the above information: + + ======================== =========================================== + 'SMT vulnerable' SMT is enabled + 'SMT disabled' SMT is disabled + 'SMT Host state unknown' Kernel runs in a VM, Host SMT state unknown + ======================== =========================================== + +References +---------- +.. [#f1] Affected Processors + https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html From e66310bc96b74ed3df9993e5d835ef3084d62048 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:27:08 -0700 Subject: [PATCH 0941/1842] x86/speculation/mmio: Enumerate Processor MMIO Stale Data bug commit 51802186158c74a0304f51ab963e7c2b3a2b046f upstream Processor MMIO Stale Data is a class of vulnerabilities that may expose data after an MMIO operation. For more details please refer to Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst Add the Processor MMIO Stale Data bug enumeration. A microcode update adds new bits to the MSR IA32_ARCH_CAPABILITIES, define them. Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/msr-index.h | 19 +++++++++++ arch/x86/kernel/cpu/common.c | 43 ++++++++++++++++++++++-- tools/arch/x86/include/asm/cpufeatures.h | 1 + tools/arch/x86/include/asm/msr-index.h | 19 +++++++++++ 5 files changed, 81 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 3b407f46f1a0..f6a6ac0b3bcd 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -417,5 +417,6 @@ #define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */ #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */ +#define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 972a34d93505..37a7b1ec1ac0 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -114,6 +114,25 @@ * Not susceptible to * TSX Async Abort (TAA) vulnerabilities. */ +#define ARCH_CAP_SBDR_SSDP_NO BIT(13) /* + * Not susceptible to SBDR and SSDP + * variants of Processor MMIO stale data + * vulnerabilities. + */ +#define ARCH_CAP_FBSDP_NO BIT(14) /* + * Not susceptible to FBSDP variant of + * Processor MMIO stale data + * vulnerabilities. + */ +#define ARCH_CAP_PSDP_NO BIT(15) /* + * Not susceptible to PSDP variant of + * Processor MMIO stale data + * vulnerabilities. + */ +#define ARCH_CAP_FB_CLEAR BIT(17) /* + * VERW clears CPU fill buffer + * even on MDS_NO CPUs. 
+ */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 9c8fc6f513ed..d2e11cc5bd01 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1098,18 +1098,39 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { X86_FEATURE_ANY, issues) #define SRBDS BIT(0) +/* CPU is affected by X86_BUG_MMIO_STALE_DATA */ +#define MMIO BIT(1) static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL_L, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL_G, X86_STEPPING_ANY, SRBDS), + VULNBL_INTEL_STEPPINGS(HASWELL_X, BIT(2) | BIT(4), MMIO), + VULNBL_INTEL_STEPPINGS(BROADWELL_D, X86_STEPPINGS(0x3, 0x5), MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS), + VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS), + VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO), VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS), + VULNBL_INTEL_STEPPINGS(SKYLAKE_X, BIT(3) | BIT(4) | BIT(6) | + BIT(7) | BIT(0xB), MMIO), + VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO), VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0xC), SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0xD), SRBDS), + VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0x8), SRBDS), + VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0x8), SRBDS), + VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO), + VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO), {} }; @@ -1130,6 +1151,13 @@ u64 x86_read_arch_cap_msr(void) return ia32_cap; } +static bool arch_cap_mmio_immune(u64 ia32_cap) +{ + return (ia32_cap & ARCH_CAP_FBSDP_NO && + ia32_cap & ARCH_CAP_PSDP_NO && + ia32_cap & ARCH_CAP_SBDR_SSDP_NO); +} + static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) { u64 ia32_cap = x86_read_arch_cap_msr(); @@ -1189,6 +1217,17 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) cpu_matches(cpu_vuln_blacklist, SRBDS)) setup_force_cpu_bug(X86_BUG_SRBDS); + /* + * Processor MMIO Stale Data bug enumeration + * + * Affected CPU list is generally enough to enumerate the vulnerability, + * but for virtualization case check for ARCH_CAP MSR bits also, VMM may + * not want the guest to enumerate the bug. 
+ */ + if (cpu_matches(cpu_vuln_blacklist, MMIO) && + !arch_cap_mmio_immune(ia32_cap)) + setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA); + if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return; diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h index b58730cc12e8..a7b5c5efcf3b 100644 --- a/tools/arch/x86/include/asm/cpufeatures.h +++ b/tools/arch/x86/include/asm/cpufeatures.h @@ -417,5 +417,6 @@ #define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */ #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */ +#define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h index 972a34d93505..37a7b1ec1ac0 100644 --- a/tools/arch/x86/include/asm/msr-index.h +++ b/tools/arch/x86/include/asm/msr-index.h @@ -114,6 +114,25 @@ * Not susceptible to * TSX Async Abort (TAA) vulnerabilities. */ +#define ARCH_CAP_SBDR_SSDP_NO BIT(13) /* + * Not susceptible to SBDR and SSDP + * variants of Processor MMIO stale data + * vulnerabilities. + */ +#define ARCH_CAP_FBSDP_NO BIT(14) /* + * Not susceptible to FBSDP variant of + * Processor MMIO stale data + * vulnerabilities. + */ +#define ARCH_CAP_PSDP_NO BIT(15) /* + * Not susceptible to PSDP variant of + * Processor MMIO stale data + * vulnerabilities. + */ +#define ARCH_CAP_FB_CLEAR BIT(17) /* + * VERW clears CPU fill buffer + * even on MDS_NO CPUs. + */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* From f83d4e5be4a3955a6c8af61ecec0934d0ece40c0 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:28:10 -0700 Subject: [PATCH 0942/1842] x86/speculation: Add a common function for MD_CLEAR mitigation update commit f52ea6c26953fed339aa4eae717ee5c2133c7ff2 upstream Processor MMIO Stale Data mitigation uses similar mitigation as MDS and TAA. In preparation for adding its mitigation, add a common function to update all mitigations that depend on MD_CLEAR. [ bp: Add a newline in md_clear_update_mitigation() to separate statements better. ] Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 59 +++++++++++++++++++++----------------- 1 file changed, 33 insertions(+), 26 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 78b9514a3844..37fabd29a8a7 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -41,7 +41,7 @@ static void __init spectre_v2_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); static void __init mds_select_mitigation(void); -static void __init mds_print_mitigation(void); +static void __init md_clear_update_mitigation(void); static void __init taa_select_mitigation(void); static void __init srbds_select_mitigation(void); @@ -114,10 +114,10 @@ void __init check_bugs(void) srbds_select_mitigation(); /* - * As MDS and TAA mitigations are inter-related, print MDS - * mitigation until after TAA mitigation selection is done. + * As MDS and TAA mitigations are inter-related, update and print their + * mitigation after TAA mitigation selection is done. 
*/ - mds_print_mitigation(); + md_clear_update_mitigation(); arch_smt_update(); @@ -258,14 +258,6 @@ static void __init mds_select_mitigation(void) } } -static void __init mds_print_mitigation(void) -{ - if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) - return; - - pr_info("%s\n", mds_strings[mds_mitigation]); -} - static int __init mds_cmdline(char *str) { if (!boot_cpu_has_bug(X86_BUG_MDS)) @@ -320,7 +312,7 @@ static void __init taa_select_mitigation(void) /* TSX previously disabled by tsx=off */ if (!boot_cpu_has(X86_FEATURE_RTM)) { taa_mitigation = TAA_MITIGATION_TSX_DISABLED; - goto out; + return; } if (cpu_mitigations_off()) { @@ -334,7 +326,7 @@ static void __init taa_select_mitigation(void) */ if (taa_mitigation == TAA_MITIGATION_OFF && mds_mitigation == MDS_MITIGATION_OFF) - goto out; + return; if (boot_cpu_has(X86_FEATURE_MD_CLEAR)) taa_mitigation = TAA_MITIGATION_VERW; @@ -366,18 +358,6 @@ static void __init taa_select_mitigation(void) if (taa_nosmt || cpu_mitigations_auto_nosmt()) cpu_smt_disable(false); - - /* - * Update MDS mitigation, if necessary, as the mds_user_clear is - * now enabled for TAA mitigation. - */ - if (mds_mitigation == MDS_MITIGATION_OFF && - boot_cpu_has_bug(X86_BUG_MDS)) { - mds_mitigation = MDS_MITIGATION_FULL; - mds_select_mitigation(); - } -out: - pr_info("%s\n", taa_strings[taa_mitigation]); } static int __init tsx_async_abort_parse_cmdline(char *str) @@ -401,6 +381,33 @@ static int __init tsx_async_abort_parse_cmdline(char *str) } early_param("tsx_async_abort", tsx_async_abort_parse_cmdline); +#undef pr_fmt +#define pr_fmt(fmt) "" fmt + +static void __init md_clear_update_mitigation(void) +{ + if (cpu_mitigations_off()) + return; + + if (!static_key_enabled(&mds_user_clear)) + goto out; + + /* + * mds_user_clear is now enabled. Update MDS mitigation, if + * necessary. + */ + if (mds_mitigation == MDS_MITIGATION_OFF && + boot_cpu_has_bug(X86_BUG_MDS)) { + mds_mitigation = MDS_MITIGATION_FULL; + mds_select_mitigation(); + } +out: + if (boot_cpu_has_bug(X86_BUG_MDS)) + pr_info("MDS: %s\n", mds_strings[mds_mitigation]); + if (boot_cpu_has_bug(X86_BUG_TAA)) + pr_info("TAA: %s\n", taa_strings[taa_mitigation]); +} + #undef pr_fmt #define pr_fmt(fmt) "SRBDS: " fmt From 26f6f231f6a5a79ccc274967939b22602dec76e8 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:29:11 -0700 Subject: [PATCH 0943/1842] x86/speculation/mmio: Add mitigation for Processor MMIO Stale Data commit 8cb861e9e3c9a55099ad3d08e1a3b653d29c33ca upstream Processor MMIO Stale Data is a class of vulnerabilities that may expose data after an MMIO operation. For details please refer to Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst. These vulnerabilities are broadly categorized as: Device Register Partial Write (DRPW): Some endpoint MMIO registers incorrectly handle writes that are smaller than the register size. Instead of aborting the write or only copying the correct subset of bytes (for example, 2 bytes for a 2-byte write), more bytes than specified by the write transaction may be written to the register. On some processors, this may expose stale data from the fill buffers of the core that created the write transaction. Shared Buffers Data Sampling (SBDS): After propagators may have moved data around the uncore and copied stale data into client core fill buffers, processors affected by MFBDS can leak data from the fill buffer. 
Shared Buffers Data Read (SBDR): It is similar to Shared Buffer Data Sampling (SBDS) except that the data is directly read into the architectural software-visible state. An attacker can use these vulnerabilities to extract data from CPU fill buffers using MDS and TAA methods. Mitigate it by clearing the CPU fill buffers using the VERW instruction before returning to a user or a guest. On CPUs not affected by MDS and TAA, user application cannot sample data from CPU fill buffers using MDS or TAA. A guest with MMIO access can still use DRPW or SBDR to extract data architecturally. Mitigate it with VERW instruction to clear fill buffers before VMENTER for MMIO capable guests. Add a kernel parameter mmio_stale_data={off|full|full,nosmt} to control the mitigation. Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- .../admin-guide/kernel-parameters.txt | 36 ++++++ arch/x86/include/asm/nospec-branch.h | 2 + arch/x86/kernel/cpu/bugs.c | 111 +++++++++++++++++- arch/x86/kvm/vmx/vmx.c | 3 + 4 files changed, 148 insertions(+), 4 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 5e34deec819f..ea8b704be705 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2872,6 +2872,7 @@ kvm.nx_huge_pages=off [X86] no_entry_flush [PPC] no_uaccess_flush [PPC] + mmio_stale_data=off [X86] Exceptions: This does not have any effect on @@ -2893,6 +2894,7 @@ Equivalent to: l1tf=flush,nosmt [X86] mds=full,nosmt [X86] tsx_async_abort=full,nosmt [X86] + mmio_stale_data=full,nosmt [X86] mminit_loglevel= [KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this @@ -2902,6 +2904,40 @@ log everything. Information is printed at KERN_DEBUG so loglevel=8 may also need to be specified. + mmio_stale_data= + [X86,INTEL] Control mitigation for the Processor + MMIO Stale Data vulnerabilities. + + Processor MMIO Stale Data is a class of + vulnerabilities that may expose data after an MMIO + operation. Exposed data could originate or end in + the same CPU buffers as affected by MDS and TAA. + Therefore, similar to MDS and TAA, the mitigation + is to clear the affected CPU buffers. + + This parameter controls the mitigation. The + options are: + + full - Enable mitigation on vulnerable CPUs + + full,nosmt - Enable mitigation and disable SMT on + vulnerable CPUs. + + off - Unconditionally disable mitigation + + On MDS or TAA affected machines, + mmio_stale_data=off can be prevented by an active + MDS or TAA mitigation as these vulnerabilities are + mitigated with the same mechanism so in order to + disable this mitigation, you need to specify + mds=off and tsx_async_abort=off too. + + Not specifying this option is equivalent to + mmio_stale_data=full. + + For details see: + Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst + module.sig_enforce [KNL] When CONFIG_MODULE_SIG is set, this means that modules without (valid) signatures will fail to load. 
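As a purely illustrative aside, once the sysfs reporting patch later in this series is applied, the state selected by mmio_stale_data= becomes visible in /sys/devices/system/cpu/vulnerabilities/mmio_stale_data, alongside the existing srbds file. A minimal hypothetical userspace reader (an example program, not part of any patch here) could look like:

	#include <stdio.h>

	int main(void)
	{
		static const char * const files[] = {
			"/sys/devices/system/cpu/vulnerabilities/mmio_stale_data",
			"/sys/devices/system/cpu/vulnerabilities/srbds",
		};
		char line[256];
		unsigned int i;

		for (i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
			FILE *f = fopen(files[i], "r");

			if (!f) {
				/* file absent on older kernels or unaffected arches */
				printf("%s: not available\n", files[i]);
				continue;
			}
			if (fgets(line, sizeof(line), f))
				printf("%s: %s", files[i], line);
			fclose(f);
		}
		return 0;
	}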
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 4d0f5386e637..e247151c3dcf 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -255,6 +255,8 @@ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb); DECLARE_STATIC_KEY_FALSE(mds_user_clear); DECLARE_STATIC_KEY_FALSE(mds_idle_clear); +DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear); + #include /** diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 37fabd29a8a7..284a80f48627 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -43,6 +43,7 @@ static void __init l1tf_select_mitigation(void); static void __init mds_select_mitigation(void); static void __init md_clear_update_mitigation(void); static void __init taa_select_mitigation(void); +static void __init mmio_select_mitigation(void); static void __init srbds_select_mitigation(void); /* The base value of the SPEC_CTRL MSR that always has to be preserved. */ @@ -77,6 +78,10 @@ EXPORT_SYMBOL_GPL(mds_user_clear); DEFINE_STATIC_KEY_FALSE(mds_idle_clear); EXPORT_SYMBOL_GPL(mds_idle_clear); +/* Controls CPU Fill buffer clear before KVM guest MMIO accesses */ +DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear); +EXPORT_SYMBOL_GPL(mmio_stale_data_clear); + void __init check_bugs(void) { identify_boot_cpu(); @@ -111,11 +116,13 @@ void __init check_bugs(void) l1tf_select_mitigation(); mds_select_mitigation(); taa_select_mitigation(); + mmio_select_mitigation(); srbds_select_mitigation(); /* - * As MDS and TAA mitigations are inter-related, update and print their - * mitigation after TAA mitigation selection is done. + * As MDS, TAA and MMIO Stale Data mitigations are inter-related, update + * and print their mitigation after MDS, TAA and MMIO Stale Data + * mitigation selection is done. */ md_clear_update_mitigation(); @@ -381,6 +388,90 @@ static int __init tsx_async_abort_parse_cmdline(char *str) } early_param("tsx_async_abort", tsx_async_abort_parse_cmdline); +#undef pr_fmt +#define pr_fmt(fmt) "MMIO Stale Data: " fmt + +enum mmio_mitigations { + MMIO_MITIGATION_OFF, + MMIO_MITIGATION_UCODE_NEEDED, + MMIO_MITIGATION_VERW, +}; + +/* Default mitigation for Processor MMIO Stale Data vulnerabilities */ +static enum mmio_mitigations mmio_mitigation __ro_after_init = MMIO_MITIGATION_VERW; +static bool mmio_nosmt __ro_after_init = false; + +static const char * const mmio_strings[] = { + [MMIO_MITIGATION_OFF] = "Vulnerable", + [MMIO_MITIGATION_UCODE_NEEDED] = "Vulnerable: Clear CPU buffers attempted, no microcode", + [MMIO_MITIGATION_VERW] = "Mitigation: Clear CPU buffers", +}; + +static void __init mmio_select_mitigation(void) +{ + u64 ia32_cap; + + if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) || + cpu_mitigations_off()) { + mmio_mitigation = MMIO_MITIGATION_OFF; + return; + } + + if (mmio_mitigation == MMIO_MITIGATION_OFF) + return; + + ia32_cap = x86_read_arch_cap_msr(); + + /* + * Enable CPU buffer clear mitigation for host and VMM, if also affected + * by MDS or TAA. Otherwise, enable mitigation for VMM only. + */ + if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) && + boot_cpu_has(X86_FEATURE_RTM))) + static_branch_enable(&mds_user_clear); + else + static_branch_enable(&mmio_stale_data_clear); + + /* + * Check if the system has the right microcode. + * + * CPU Fill buffer clear mitigation is enumerated by either an explicit + * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS + * affected systems. 
+ */ + if ((ia32_cap & ARCH_CAP_FB_CLEAR) || + (boot_cpu_has(X86_FEATURE_MD_CLEAR) && + boot_cpu_has(X86_FEATURE_FLUSH_L1D) && + !(ia32_cap & ARCH_CAP_MDS_NO))) + mmio_mitigation = MMIO_MITIGATION_VERW; + else + mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED; + + if (mmio_nosmt || cpu_mitigations_auto_nosmt()) + cpu_smt_disable(false); +} + +static int __init mmio_stale_data_parse_cmdline(char *str) +{ + if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) + return 0; + + if (!str) + return -EINVAL; + + if (!strcmp(str, "off")) { + mmio_mitigation = MMIO_MITIGATION_OFF; + } else if (!strcmp(str, "full")) { + mmio_mitigation = MMIO_MITIGATION_VERW; + } else if (!strcmp(str, "full,nosmt")) { + mmio_mitigation = MMIO_MITIGATION_VERW; + mmio_nosmt = true; + } + + return 0; +} +early_param("mmio_stale_data", mmio_stale_data_parse_cmdline); + #undef pr_fmt #define pr_fmt(fmt) "" fmt @@ -393,19 +484,31 @@ static void __init md_clear_update_mitigation(void) goto out; /* - * mds_user_clear is now enabled. Update MDS mitigation, if - * necessary. + * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data + * mitigation, if necessary. */ if (mds_mitigation == MDS_MITIGATION_OFF && boot_cpu_has_bug(X86_BUG_MDS)) { mds_mitigation = MDS_MITIGATION_FULL; mds_select_mitigation(); } + if (taa_mitigation == TAA_MITIGATION_OFF && + boot_cpu_has_bug(X86_BUG_TAA)) { + taa_mitigation = TAA_MITIGATION_VERW; + taa_select_mitigation(); + } + if (mmio_mitigation == MMIO_MITIGATION_OFF && + boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) { + mmio_mitigation = MMIO_MITIGATION_VERW; + mmio_select_mitigation(); + } out: if (boot_cpu_has_bug(X86_BUG_MDS)) pr_info("MDS: %s\n", mds_strings[mds_mitigation]); if (boot_cpu_has_bug(X86_BUG_TAA)) pr_info("TAA: %s\n", taa_strings[taa_mitigation]); + if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) + pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]); } #undef pr_fmt diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 94f5f2129e3b..2922690bd6ff 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6654,6 +6654,9 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu, vmx_l1d_flush(vcpu); else if (static_branch_unlikely(&mds_user_clear)) mds_clear_cpu_buffers(); + else if (static_branch_unlikely(&mmio_stale_data_clear) && + kvm_arch_has_assigned_device(vcpu->kvm)) + mds_clear_cpu_buffers(); if (vcpu->arch.cr2 != native_read_cr2()) native_write_cr2(vcpu->arch.cr2); From 56f0bca5e9c8456b7bb7089cbb6de866a9ba6da9 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:30:12 -0700 Subject: [PATCH 0944/1842] x86/bugs: Group MDS, TAA & Processor MMIO Stale Data mitigations commit e5925fb867290ee924fcf2fe3ca887b792714366 upstream MDS, TAA and Processor MMIO Stale Data mitigations rely on clearing CPU buffers. Moreover, status of these mitigations affects each other. During boot, it is important to maintain the order in which these mitigations are selected. This is especially true for md_clear_update_mitigation() that needs to be called after MDS, TAA and Processor MMIO Stale Data mitigation selection is done. Introduce md_clear_select_mitigation(), and select all these mitigations from there. This reflects relationships between these mitigations and ensures proper ordering. 
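For reference, the CPU buffer clearing that MDS, TAA and Processor MMIO Stale Data all rely on is the existing VERW helper in arch/x86/include/asm/nospec-branch.h; it is not modified by this patch, but in kernels of this series it looks roughly like the following sketch:

	static __always_inline void mds_clear_cpu_buffers(void)
	{
		static const u16 ds = __KERNEL_DS;

		/*
		 * The memory-operand form of VERW is the one documented to
		 * trigger the microcode-assisted buffer clear; "cc" is
		 * clobbered because VERW modifies ZF.
		 */
		asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
	}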
Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 26 ++++++++++++++++---------- 1 file changed, 16 insertions(+), 10 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 284a80f48627..dc3b8b434fdc 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -42,6 +42,7 @@ static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); static void __init mds_select_mitigation(void); static void __init md_clear_update_mitigation(void); +static void __init md_clear_select_mitigation(void); static void __init taa_select_mitigation(void); static void __init mmio_select_mitigation(void); static void __init srbds_select_mitigation(void); @@ -114,18 +115,9 @@ void __init check_bugs(void) spectre_v2_select_mitigation(); ssb_select_mitigation(); l1tf_select_mitigation(); - mds_select_mitigation(); - taa_select_mitigation(); - mmio_select_mitigation(); + md_clear_select_mitigation(); srbds_select_mitigation(); - /* - * As MDS, TAA and MMIO Stale Data mitigations are inter-related, update - * and print their mitigation after MDS, TAA and MMIO Stale Data - * mitigation selection is done. - */ - md_clear_update_mitigation(); - arch_smt_update(); #ifdef CONFIG_X86_32 @@ -511,6 +503,20 @@ static void __init md_clear_update_mitigation(void) pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]); } +static void __init md_clear_select_mitigation(void) +{ + mds_select_mitigation(); + taa_select_mitigation(); + mmio_select_mitigation(); + + /* + * As MDS, TAA and MMIO Stale Data mitigations are inter-related, update + * and print their mitigation after MDS, TAA and MMIO Stale Data + * mitigation selection is done. + */ + md_clear_update_mitigation(); +} + #undef pr_fmt #define pr_fmt(fmt) "SRBDS: " fmt From 3eb1180564fa0ecedc33b44029da7687c0a9fbf5 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:31:12 -0700 Subject: [PATCH 0945/1842] x86/speculation/mmio: Enable CPU Fill buffer clearing on idle commit 99a83db5a605137424e1efe29dc0573d6a5b6316 upstream When the CPU is affected by Processor MMIO Stale Data vulnerabilities, Fill Buffer Stale Data Propagator (FBSDP) can propagate stale data out of Fill buffer to uncore buffer when CPU goes idle. Stale data can then be exploited with other variants using MMIO operations. Mitigate it by clearing the Fill buffer before entering idle state. Signed-off-by: Pawan Gupta Signed-off-by: Thomas Gleixner Co-developed-by: Josh Poimboeuf Signed-off-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index dc3b8b434fdc..a39019760d9e 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -424,6 +424,14 @@ static void __init mmio_select_mitigation(void) else static_branch_enable(&mmio_stale_data_clear); + /* + * If Processor-MMIO-Stale-Data bug is present and Fill Buffer data can + * be propagated to uncore buffers, clearing the Fill buffers on idle + * is required irrespective of SMT state. + */ + if (!(ia32_cap & ARCH_CAP_FBSDP_NO)) + static_branch_enable(&mds_idle_clear); + /* * Check if the system has the right microcode. 
* @@ -1188,6 +1196,8 @@ static void update_indir_branch_cond(void) /* Update the static key controlling the MDS CPU buffer clear in idle */ static void update_mds_branch_idle(void) { + u64 ia32_cap = x86_read_arch_cap_msr(); + /* * Enable the idle clearing if SMT is active on CPUs which are * affected only by MSBDS and not any other MDS variant. @@ -1199,10 +1209,12 @@ static void update_mds_branch_idle(void) if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY)) return; - if (sched_smt_active()) + if (sched_smt_active()) { static_branch_enable(&mds_idle_clear); - else + } else if (mmio_mitigation == MMIO_MITIGATION_OFF || + (ia32_cap & ARCH_CAP_FBSDP_NO)) { static_branch_disable(&mds_idle_clear); + } } #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n" From 001415e4e626403c9ff35f2498feb0021d0c8328 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:32:13 -0700 Subject: [PATCH 0946/1842] x86/speculation/mmio: Add sysfs reporting for Processor MMIO Stale Data commit 8d50cdf8b8341770bc6367bce40c0c1bb0e1d5b3 upstream Add the sysfs reporting file for Processor MMIO Stale Data vulnerability. It exposes the vulnerability and mitigation state similar to the existing files for the other hardware vulnerabilities. Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- .../ABI/testing/sysfs-devices-system-cpu | 1 + arch/x86/kernel/cpu/bugs.c | 22 +++++++++++++++++++ drivers/base/cpu.c | 8 +++++++ include/linux/cpu.h | 3 +++ 4 files changed, 34 insertions(+) diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu index 1a04ca8162ad..44c6e5730398 100644 --- a/Documentation/ABI/testing/sysfs-devices-system-cpu +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu @@ -510,6 +510,7 @@ What: /sys/devices/system/cpu/vulnerabilities /sys/devices/system/cpu/vulnerabilities/srbds /sys/devices/system/cpu/vulnerabilities/tsx_async_abort /sys/devices/system/cpu/vulnerabilities/itlb_multihit + /sys/devices/system/cpu/vulnerabilities/mmio_stale_data Date: January 2018 Contact: Linux kernel mailing list Description: Information about CPU vulnerabilities diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index a39019760d9e..6108e5a294ea 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1832,6 +1832,20 @@ static ssize_t tsx_async_abort_show_state(char *buf) sched_smt_active() ? "vulnerable" : "disabled"); } +static ssize_t mmio_stale_data_show_state(char *buf) +{ + if (mmio_mitigation == MMIO_MITIGATION_OFF) + return sysfs_emit(buf, "%s\n", mmio_strings[mmio_mitigation]); + + if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) { + return sysfs_emit(buf, "%s; SMT Host state unknown\n", + mmio_strings[mmio_mitigation]); + } + + return sysfs_emit(buf, "%s; SMT %s\n", mmio_strings[mmio_mitigation], + sched_smt_active() ? 
"vulnerable" : "disabled"); +} + static char *stibp_state(void) { if (spectre_v2_in_eibrs_mode(spectre_v2_enabled)) @@ -1932,6 +1946,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr case X86_BUG_SRBDS: return srbds_show_state(buf); + case X86_BUG_MMIO_STALE_DATA: + return mmio_stale_data_show_state(buf); + default: break; } @@ -1983,4 +2000,9 @@ ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char * { return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS); } + +ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA); +} #endif diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c index 8f1d6569564c..8ecb9f90f467 100644 --- a/drivers/base/cpu.c +++ b/drivers/base/cpu.c @@ -566,6 +566,12 @@ ssize_t __weak cpu_show_srbds(struct device *dev, return sysfs_emit(buf, "Not affected\n"); } +ssize_t __weak cpu_show_mmio_stale_data(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return sysfs_emit(buf, "Not affected\n"); +} + static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL); @@ -575,6 +581,7 @@ static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL); static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL); static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL); static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL); +static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL); static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_meltdown.attr, @@ -586,6 +593,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_tsx_async_abort.attr, &dev_attr_itlb_multihit.attr, &dev_attr_srbds.attr, + &dev_attr_mmio_stale_data.attr, NULL }; diff --git a/include/linux/cpu.h b/include/linux/cpu.h index d6428aaf67e7..d63b8f70d123 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -65,6 +65,9 @@ extern ssize_t cpu_show_tsx_async_abort(struct device *dev, extern ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr, char *buf); extern ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf); +extern ssize_t cpu_show_mmio_stale_data(struct device *dev, + struct device_attribute *attr, + char *buf); extern __printf(4, 5) struct device *cpu_device_create(struct device *parent, void *drvdata, From cf1c01a5e4c3e269b9211ae2ef0a57f8c9474bfc Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:33:13 -0700 Subject: [PATCH 0947/1842] x86/speculation/srbds: Update SRBDS mitigation selection commit 22cac9c677c95f3ac5c9244f8ca0afdc7c8afb19 upstream Currently, Linux disables SRBDS mitigation on CPUs not affected by MDS and have the TSX feature disabled. On such CPUs, secrets cannot be extracted from CPU fill buffers using MDS or TAA. Without SRBDS mitigation, Processor MMIO Stale Data vulnerabilities can be used to extract RDRAND, RDSEED, and EGETKEY data. Do not disable SRBDS mitigation by default when CPU is also affected by Processor MMIO Stale Data vulnerabilities. 
Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 6108e5a294ea..3c3e4a466136 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -586,11 +586,13 @@ static void __init srbds_select_mitigation(void) return; /* - * Check to see if this is one of the MDS_NO systems supporting - * TSX that are only exposed to SRBDS when TSX is enabled. + * Check to see if this is one of the MDS_NO systems supporting TSX that + * are only exposed to SRBDS when TSX is enabled or when CPU is affected + * by Processor MMIO Stale Data vulnerability. */ ia32_cap = x86_read_arch_cap_msr(); - if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM)) + if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM) && + !boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) srbds_mitigation = SRBDS_MITIGATION_TSX_OFF; else if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR; From 6df693dca31218f76c63b6fd4aa7b7db3bd6e049 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:34:14 -0700 Subject: [PATCH 0948/1842] x86/speculation/mmio: Reuse SRBDS mitigation for SBDS commit a992b8a4682f119ae035a01b40d4d0665c4a2875 upstream The Shared Buffers Data Sampling (SBDS) variant of Processor MMIO Stale Data vulnerabilities may expose RDRAND, RDSEED and SGX EGETKEY data. Mitigation for this is added by a microcode update. As some of the implications of SBDS are similar to SRBDS, SRBDS mitigation infrastructure can be leveraged by SBDS. Set X86_BUG_SRBDS and use SRBDS mitigation. Mitigation is enabled by default; use srbds=off to opt-out. 
Mitigation status can be checked from below file: /sys/devices/system/cpu/vulnerabilities/srbds Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/common.c | 21 ++++++++++++++------- 1 file changed, 14 insertions(+), 7 deletions(-) diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index d2e11cc5bd01..4917c2698ac1 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1100,6 +1100,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { #define SRBDS BIT(0) /* CPU is affected by X86_BUG_MMIO_STALE_DATA */ #define MMIO BIT(1) +/* CPU is affected by Shared Buffers Data Sampling (SBDS), a variant of X86_BUG_MMIO_STALE_DATA */ +#define MMIO_SBDS BIT(2) static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS), @@ -1121,16 +1123,17 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0x8), SRBDS), VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO), VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO), + VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPINGS(0x1, 0x1), MMIO), VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO), - VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x1), MMIO), - VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_SBDS), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO), + VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO), - VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D, X86_STEPPING_ANY, MMIO), - VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO | MMIO_SBDS), {} }; @@ -1211,10 +1214,14 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) /* * SRBDS affects CPUs which support RDRAND or RDSEED and are listed * in the vulnerability blacklist. + * + * Some of the implications and mitigation of Shared Buffers Data + * Sampling (SBDS) are similar to SRBDS. Give SBDS same treatment as + * SRBDS. */ if ((cpu_has(c, X86_FEATURE_RDRAND) || cpu_has(c, X86_FEATURE_RDSEED)) && - cpu_matches(cpu_vuln_blacklist, SRBDS)) + cpu_matches(cpu_vuln_blacklist, SRBDS | MMIO_SBDS)) setup_force_cpu_bug(X86_BUG_SRBDS); /* From bde15fdcce44956278b4f50680b7363ca126ffb9 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 19 May 2022 20:35:15 -0700 Subject: [PATCH 0949/1842] KVM: x86/speculation: Disable Fill buffer clear within guests commit 027bbb884be006b05d9c577d6401686053aa789e upstream The enumeration of MD_CLEAR in CPUID(EAX=7,ECX=0).EDX{bit 10} is not an accurate indicator on all CPUs of whether the VERW instruction will overwrite fill buffers. 
FB_CLEAR enumeration in IA32_ARCH_CAPABILITIES{bit 17} covers the case of CPUs that are not vulnerable to MDS/TAA, indicating that microcode does overwrite fill buffers. Guests running in VMM environments may not be aware of all the capabilities/vulnerabilities of the host CPU. Specifically, a guest may apply MDS/TAA mitigations when a virtual CPU is enumerated as vulnerable to MDS/TAA even when the physical CPU is not. On CPUs that enumerate FB_CLEAR_CTRL the VMM may set FB_CLEAR_DIS to skip overwriting of fill buffers by the VERW instruction. This is done by setting FB_CLEAR_DIS during VMENTER and resetting on VMEXIT. For guests that enumerate FB_CLEAR (explicitly asking for fill buffer clear capability) the VMM will not use FB_CLEAR_DIS. Irrespective of guest state, host overwrites CPU buffers before VMENTER to protect itself from an MMIO capable guest, as part of mitigation for MMIO Stale Data vulnerabilities. Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/msr-index.h | 6 +++ arch/x86/kvm/vmx/vmx.c | 69 ++++++++++++++++++++++++++ arch/x86/kvm/vmx/vmx.h | 2 + arch/x86/kvm/x86.c | 3 ++ tools/arch/x86/include/asm/msr-index.h | 6 +++ 5 files changed, 86 insertions(+) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 37a7b1ec1ac0..96973d197972 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -133,6 +133,11 @@ * VERW clears CPU fill buffer * even on MDS_NO CPUs. */ +#define ARCH_CAP_FB_CLEAR_CTRL BIT(18) /* + * MSR_IA32_MCU_OPT_CTRL[FB_CLEAR_DIS] + * bit available to control VERW + * behavior. + */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* @@ -150,6 +155,7 @@ /* SRBDS support */ #define MSR_IA32_MCU_OPT_CTRL 0x00000123 #define RNGDS_MITG_DIS BIT(0) +#define FB_CLEAR_DIS BIT(3) /* CPU Fill buffer clear disable */ #define MSR_IA32_SYSENTER_CS 0x00000174 #define MSR_IA32_SYSENTER_ESP 0x00000175 diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 2922690bd6ff..e729f65c6760 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -226,6 +226,9 @@ static const struct { #define L1D_CACHE_ORDER 4 static void *vmx_l1d_flush_pages; +/* Control for disabling CPU Fill buffer clear */ +static bool __read_mostly vmx_fb_clear_ctrl_available; + static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf) { struct page *page; @@ -357,6 +360,60 @@ static int vmentry_l1d_flush_get(char *s, const struct kernel_param *kp) return sprintf(s, "%s\n", vmentry_l1d_param[l1tf_vmx_mitigation].option); } +static void vmx_setup_fb_clear_ctrl(void) +{ + u64 msr; + + if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES) && + !boot_cpu_has_bug(X86_BUG_MDS) && + !boot_cpu_has_bug(X86_BUG_TAA)) { + rdmsrl(MSR_IA32_ARCH_CAPABILITIES, msr); + if (msr & ARCH_CAP_FB_CLEAR_CTRL) + vmx_fb_clear_ctrl_available = true; + } +} + +static __always_inline void vmx_disable_fb_clear(struct vcpu_vmx *vmx) +{ + u64 msr; + + if (!vmx->disable_fb_clear) + return; + + rdmsrl(MSR_IA32_MCU_OPT_CTRL, msr); + msr |= FB_CLEAR_DIS; + wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr); + /* Cache the MSR value to avoid reading it later */ + vmx->msr_ia32_mcu_opt_ctrl = msr; +} + +static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx) +{ + if (!vmx->disable_fb_clear) + return; + + vmx->msr_ia32_mcu_opt_ctrl &= ~FB_CLEAR_DIS; + wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl); +} + +static void vmx_update_fb_clear_dis(struct 
kvm_vcpu *vcpu, struct vcpu_vmx *vmx) +{ + vmx->disable_fb_clear = vmx_fb_clear_ctrl_available; + + /* + * If guest will not execute VERW, there is no need to set FB_CLEAR_DIS + * at VMEntry. Skip the MSR read/write when a guest has no use case to + * execute VERW. + */ + if ((vcpu->arch.arch_capabilities & ARCH_CAP_FB_CLEAR) || + ((vcpu->arch.arch_capabilities & ARCH_CAP_MDS_NO) && + (vcpu->arch.arch_capabilities & ARCH_CAP_TAA_NO) && + (vcpu->arch.arch_capabilities & ARCH_CAP_PSDP_NO) && + (vcpu->arch.arch_capabilities & ARCH_CAP_FBSDP_NO) && + (vcpu->arch.arch_capabilities & ARCH_CAP_SBDR_SSDP_NO))) + vmx->disable_fb_clear = false; +} + static const struct kernel_param_ops vmentry_l1d_flush_ops = { .set = vmentry_l1d_flush_set, .get = vmentry_l1d_flush_get, @@ -2211,6 +2268,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) ret = kvm_set_msr_common(vcpu, msr_info); } + /* FB_CLEAR may have changed, also update the FB_CLEAR_DIS behavior */ + if (msr_index == MSR_IA32_ARCH_CAPABILITIES) + vmx_update_fb_clear_dis(vcpu, vmx); + return ret; } @@ -4483,6 +4544,8 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) vpid_sync_context(vmx->vpid); if (init_event) vmx_clear_hlt(vcpu); + + vmx_update_fb_clear_dis(vcpu, vmx); } static void enable_irq_window(struct kvm_vcpu *vcpu) @@ -6658,6 +6721,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu, kvm_arch_has_assigned_device(vcpu->kvm)) mds_clear_cpu_buffers(); + vmx_disable_fb_clear(vmx); + if (vcpu->arch.cr2 != native_read_cr2()) native_write_cr2(vcpu->arch.cr2); @@ -6666,6 +6731,8 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu, vcpu->arch.cr2 = native_read_cr2(); + vmx_enable_fb_clear(vmx); + /* * VMEXIT disables interrupts (host state), but tracing and lockdep * have them in state 'on' as recorded before entering guest mode. @@ -8050,6 +8117,8 @@ static int __init vmx_init(void) return r; } + vmx_setup_fb_clear_ctrl(); + for_each_possible_cpu(cpu) { INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu)); diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 5ff24537393e..31317c8915e4 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -300,6 +300,8 @@ struct vcpu_vmx { u64 msr_ia32_feature_control; u64 msr_ia32_feature_control_valid_bits; u64 ept_pointer; + u64 msr_ia32_mcu_opt_ctrl; + bool disable_fb_clear; struct pt_desc pt_desc; diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index d9cec5daa1ff..da547752580a 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1415,6 +1415,9 @@ static u64 kvm_get_arch_capabilities(void) */ } + /* Guests don't need to know "Fill buffer clear control" exists */ + data &= ~ARCH_CAP_FB_CLEAR_CTRL; + return data; } diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h index 37a7b1ec1ac0..96973d197972 100644 --- a/tools/arch/x86/include/asm/msr-index.h +++ b/tools/arch/x86/include/asm/msr-index.h @@ -133,6 +133,11 @@ * VERW clears CPU fill buffer * even on MDS_NO CPUs. */ +#define ARCH_CAP_FB_CLEAR_CTRL BIT(18) /* + * MSR_IA32_MCU_OPT_CTRL[FB_CLEAR_DIS] + * bit available to control VERW + * behavior. 
+ */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* @@ -150,6 +155,7 @@ /* SRBDS support */ #define MSR_IA32_MCU_OPT_CTRL 0x00000123 #define RNGDS_MITG_DIS BIT(0) +#define FB_CLEAR_DIS BIT(3) /* CPU Fill buffer clear disable */ #define MSR_IA32_SYSENTER_CS 0x00000174 #define MSR_IA32_SYSENTER_ESP 0x00000175 From aa238a92cc94a15812c0de4adade86ba8f22707a Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Mon, 23 May 2022 09:11:49 -0700 Subject: [PATCH 0950/1842] x86/speculation/mmio: Print SMT warning commit 1dc6ff02c8bf77d71b9b5d11cbc9df77cfb28626 upstream Similar to MDS and TAA, print a warning if SMT is enabled for the MMIO Stale Data vulnerability. Signed-off-by: Josh Poimboeuf Signed-off-by: Thomas Gleixner Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 3c3e4a466136..2a21046846b6 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1221,6 +1221,7 @@ static void update_mds_branch_idle(void) #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n" #define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n" +#define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n" void cpu_bugs_smt_update(void) { @@ -1265,6 +1266,16 @@ void cpu_bugs_smt_update(void) break; } + switch (mmio_mitigation) { + case MMIO_MITIGATION_VERW: + case MMIO_MITIGATION_UCODE_NEEDED: + if (sched_smt_active()) + pr_warn_once(MMIO_MSG_SMT); + break; + case MMIO_MITIGATION_OFF: + break; + } + mutex_unlock(&spec_ctrl_mutex); } From 2a59239b22e0a39736b47c22462b3faa2c46b729 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 16 Jun 2022 13:28:00 +0200 Subject: [PATCH 0951/1842] Linux 5.10.123 Link: https://lore.kernel.org/r/20220614183719.878453780@linuxfoundation.org Tested-by: Florian Fainelli Tested-by: Shuah Khan Tested-by: Fox Chen Tested-by: Sudip Mukherjee Tested-by: Linux Kernel Functional Testing Tested-by: Guenter Roeck Tested-by: Hulk Robot Tested-by: Tyler Hicks Tested-by: Salvatore Bonaccorso Tested-by: Jon Hunter Tested-by: Pavel Machek (CIP) Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 3ed1da61a3c7..862946040186 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 122 +SUBLEVEL = 123 EXTRAVERSION = NAME = Dare mighty things From 56a7f57da5d0bbcf9066bd61cc0ae0c9ca54e233 Mon Sep 17 00:00:00 2001 From: Al Viro Date: Sun, 31 Jan 2021 14:37:39 -0500 Subject: [PATCH 0952/1842] 9p: missing chunk of "fs/9p: Don't update file type when updating file attributes" commit b577d0cd2104fdfcf0ded3707540a12be8ddd8b0 upstream. In commit 45089142b149 Aneesh had missed one (admittedly, very unlikely to hit) case in v9fs_stat2inode_dotl(). However, the same considerations apply there as well - we have no business whatsoever to change ->i_rdev or the file type. 
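For illustration, a minimal sketch of the idea behind this fix (the helper name merge_perm_bits is made up; umode_t and S_IALLUGO are the real kernel symbols): only the permission bits of the server-supplied mode are taken over, and the locally known S_IFMT file-type bits, like the device number, are left untouched.

/* Hypothetical helper: fold in new permission bits, keep the file type. */
static inline umode_t merge_perm_bits(umode_t cur_mode, umode_t new_mode)
{
	return (new_mode & S_IALLUGO) | (cur_mode & ~S_IALLUGO);
}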
Cc: Tadeusz Struk Signed-off-by: Al Viro Signed-off-by: Greg Kroah-Hartman --- fs/9p/vfs_inode_dotl.c | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c index a13ef836fe4e..84f3a6405b55 100644 --- a/fs/9p/vfs_inode_dotl.c +++ b/fs/9p/vfs_inode_dotl.c @@ -657,14 +657,10 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode, if (stat->st_result_mask & P9_STATS_NLINK) set_nlink(inode, stat->st_nlink); if (stat->st_result_mask & P9_STATS_MODE) { - inode->i_mode = stat->st_mode; - if ((S_ISBLK(inode->i_mode)) || - (S_ISCHR(inode->i_mode))) - init_special_inode(inode, inode->i_mode, - inode->i_rdev); + mode = stat->st_mode & S_IALLUGO; + mode |= inode->i_mode & ~S_IALLUGO; + inode->i_mode = mode; } - if (stat->st_result_mask & P9_STATS_RDEV) - inode->i_rdev = new_decode_dev(stat->st_rdev); if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) && stat->st_result_mask & P9_STATS_SIZE) v9fs_i_size_write(inode, stat->st_size); From f14816f2f928c560d28ba344af689f56efcd6f55 Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Sat, 18 Dec 2021 20:38:01 -0500 Subject: [PATCH 0953/1842] nfsd: Replace use of rwsem with errseq_t commit 555dbf1a9aac6d3150c8b52fa35f768a692f4eeb upstream. The nfsd_file nf_rwsem is currently being used to separate file write and commit instances to ensure that we catch errors and apply them to the correct write/commit. We can improve scalability at the expense of a little accuracy (some extra false positives) by replacing the nf_rwsem with more careful use of the errseq_t mechanism to track errors across the different operations. Signed-off-by: Trond Myklebust Signed-off-by: Chuck Lever [ cel: rebased on zero-verifier fix ] Signed-off-by: Leah Rumancik Signed-off-by: Greg Kroah-Hartman --- fs/nfsd/filecache.c | 1 - fs/nfsd/filecache.h | 1 - fs/nfsd/nfs4proc.c | 7 ++++--- fs/nfsd/vfs.c | 40 +++++++++++++++------------------------- 4 files changed, 19 insertions(+), 30 deletions(-) diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c index acd0898e3866..e30e1ddc1ace 100644 --- a/fs/nfsd/filecache.c +++ b/fs/nfsd/filecache.c @@ -194,7 +194,6 @@ nfsd_file_alloc(struct inode *inode, unsigned int may, unsigned int hashval, __set_bit(NFSD_FILE_BREAK_READ, &nf->nf_flags); } nf->nf_mark = NULL; - init_rwsem(&nf->nf_rwsem); trace_nfsd_file_alloc(nf); } return nf; diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h index 7872df5a0fe3..435ceab27897 100644 --- a/fs/nfsd/filecache.h +++ b/fs/nfsd/filecache.h @@ -46,7 +46,6 @@ struct nfsd_file { refcount_t nf_ref; unsigned char nf_may; struct nfsd_file_mark *nf_mark; - struct rw_semaphore nf_rwsem; }; int nfsd_file_cache_init(void); diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c index 7850d141c762..735ee8a79870 100644 --- a/fs/nfsd/nfs4proc.c +++ b/fs/nfsd/nfs4proc.c @@ -1380,6 +1380,8 @@ static void nfsd4_init_copy_res(struct nfsd4_copy *copy, bool sync) static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy) { + struct file *dst = copy->nf_dst->nf_file; + struct file *src = copy->nf_src->nf_file; ssize_t bytes_copied = 0; size_t bytes_total = copy->cp_count; u64 src_pos = copy->cp_src_pos; @@ -1388,9 +1390,8 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy) do { if (kthread_should_stop()) break; - bytes_copied = nfsd_copy_file_range(copy->nf_src->nf_file, - src_pos, copy->nf_dst->nf_file, dst_pos, - bytes_total); + bytes_copied = nfsd_copy_file_range(src, src_pos, dst, dst_pos, + bytes_total); if (bytes_copied <= 0) break; 
bytes_total -= bytes_copied; diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c index 011cd570b50d..548ebc913f92 100644 --- a/fs/nfsd/vfs.c +++ b/fs/nfsd/vfs.c @@ -535,10 +535,11 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos, { struct file *src = nf_src->nf_file; struct file *dst = nf_dst->nf_file; + errseq_t since; loff_t cloned; __be32 ret = 0; - down_write(&nf_dst->nf_rwsem); + since = READ_ONCE(dst->f_wb_err); cloned = vfs_clone_file_range(src, src_pos, dst, dst_pos, count, 0); if (cloned < 0) { ret = nfserrno(cloned); @@ -552,6 +553,8 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos, loff_t dst_end = count ? dst_pos + count - 1 : LLONG_MAX; int status = vfs_fsync_range(dst, dst_pos, dst_end, 0); + if (!status) + status = filemap_check_wb_err(dst->f_mapping, since); if (!status) status = commit_inode_metadata(file_inode(src)); if (status < 0) { @@ -561,7 +564,6 @@ __be32 nfsd4_clone_file_range(struct nfsd_file *nf_src, u64 src_pos, } } out_err: - up_write(&nf_dst->nf_rwsem); return ret; } @@ -980,6 +982,7 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf, struct file *file = nf->nf_file; struct svc_export *exp; struct iov_iter iter; + errseq_t since; __be32 nfserr; int host_err; int use_wgather; @@ -1009,21 +1012,18 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf, flags |= RWF_SYNC; iov_iter_kvec(&iter, WRITE, vec, vlen, *cnt); + since = READ_ONCE(file->f_wb_err); if (flags & RWF_SYNC) { - down_write(&nf->nf_rwsem); host_err = vfs_iter_write(file, &iter, &pos, flags); if (host_err < 0) nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp), nfsd_net_id)); - up_write(&nf->nf_rwsem); } else { - down_read(&nf->nf_rwsem); if (verf) nfsd_copy_boot_verifier(verf, net_generic(SVC_NET(rqstp), nfsd_net_id)); host_err = vfs_iter_write(file, &iter, &pos, flags); - up_read(&nf->nf_rwsem); } if (host_err < 0) { nfsd_reset_boot_verifier(net_generic(SVC_NET(rqstp), @@ -1033,6 +1033,9 @@ nfsd_vfs_write(struct svc_rqst *rqstp, struct svc_fh *fhp, struct nfsd_file *nf, *cnt = host_err; nfsdstats.io_write += *cnt; fsnotify_modify(file); + host_err = filemap_check_wb_err(file->f_mapping, since); + if (host_err < 0) + goto out_nfserr; if (stable && use_wgather) { host_err = wait_for_concurrent_writes(file); @@ -1113,19 +1116,6 @@ nfsd_write(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t offset, } #ifdef CONFIG_NFSD_V3 -static int -nfsd_filemap_write_and_wait_range(struct nfsd_file *nf, loff_t offset, - loff_t end) -{ - struct address_space *mapping = nf->nf_file->f_mapping; - int ret = filemap_fdatawrite_range(mapping, offset, end); - - if (ret) - return ret; - filemap_fdatawait_range_keep_errors(mapping, offset, end); - return 0; -} - /* * Commit all pending writes to stable storage. 
* @@ -1156,25 +1146,25 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp, if (err) goto out; if (EX_ISSYNC(fhp->fh_export)) { - int err2 = nfsd_filemap_write_and_wait_range(nf, offset, end); + errseq_t since = READ_ONCE(nf->nf_file->f_wb_err); + int err2; - down_write(&nf->nf_rwsem); - if (!err2) - err2 = vfs_fsync_range(nf->nf_file, offset, end, 0); + err2 = vfs_fsync_range(nf->nf_file, offset, end, 0); switch (err2) { case 0: nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net, nfsd_net_id)); + err2 = filemap_check_wb_err(nf->nf_file->f_mapping, + since); break; case -EINVAL: err = nfserr_notsupp; break; default: - err = nfserrno(err2); nfsd_reset_boot_verifier(net_generic(nf->nf_net, nfsd_net_id)); } - up_write(&nf->nf_rwsem); + err = nfserrno(err2); } else nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net, nfsd_net_id)); From 28bbdca6a7a471921d890e5c0d70b6f7c99637a7 Mon Sep 17 00:00:00 2001 From: Yuntao Wang Date: Tue, 14 Jun 2022 22:26:22 +0800 Subject: [PATCH 0954/1842] bpf: Fix incorrect memory charge cost calculation in stack_map_alloc() commit b45043192b3e481304062938a6561da2ceea46a6 upstream. This is a backport of the original upstream patch for 5.4/5.10. The original upstream patch has been applied to 5.4/5.10 branches, which simply removed the line: cost += n_buckets * (value_size + sizeof(struct stack_map_bucket)); This is correct for upstream branch but incorrect for 5.4/5.10 branches, as the 5.4/5.10 branches do not have the commit 370868107bf6 ("bpf: Eliminate rlimit-based memory accounting for stackmap maps"), so the bpf_map_charge_init() function has not been removed. Currently the bpf_map_charge_init() function in 5.4/5.10 branches takes a wrong memory charge cost, the attr->max_entries * (sizeof(struct stack_map_bucket) + (u64)value_size)) part is missing, let's fix it. Cc: # 5.4.y Cc: # 5.10.y Signed-off-by: Yuntao Wang Signed-off-by: Greg Kroah-Hartman --- kernel/bpf/stackmap.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c index c19e669afba0..0c5bf98d5576 100644 --- a/kernel/bpf/stackmap.c +++ b/kernel/bpf/stackmap.c @@ -121,7 +121,8 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr) return ERR_PTR(-E2BIG); cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap); - err = bpf_map_charge_init(&mem, cost); + err = bpf_map_charge_init(&mem, cost + attr->max_entries * + (sizeof(struct stack_map_bucket) + (u64)value_size)); if (err) return ERR_PTR(err); From b5699bff1da69ec4109c747c7257999b6a072982 Mon Sep 17 00:00:00 2001 From: Adam Ford Date: Tue, 26 Apr 2022 15:51:43 -0500 Subject: [PATCH 0955/1842] arm64: dts: imx8mm-beacon: Enable RTS-CTS on UART3 commit 4ce01ce36d77137cf60776b320babed89de6bd4c upstream. There is a header for a DB9 serial port, but any attempts to use hardware handshaking fail. Enable RTS and CTS pin muxing and enable handshaking in the uart node. 
Signed-off-by: Adam Ford Signed-off-by: Shawn Guo Signed-off-by: Greg Kroah-Hartman --- arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi index d6b9dedd168f..5667009aae13 100644 --- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi +++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi @@ -167,6 +167,7 @@ &uart3 { pinctrl-0 = <&pinctrl_uart3>; assigned-clocks = <&clk IMX8MM_CLK_UART3>; assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_80M>; + uart-has-rtscts; status = "okay"; }; @@ -237,6 +238,8 @@ pinctrl_uart3: uart3grp { fsl,pins = < MX8MM_IOMUXC_ECSPI1_SCLK_UART3_DCE_RX 0x40 MX8MM_IOMUXC_ECSPI1_MOSI_UART3_DCE_TX 0x40 + MX8MM_IOMUXC_ECSPI1_MISO_UART3_DCE_CTS_B 0x40 + MX8MM_IOMUXC_ECSPI1_SS0_UART3_DCE_RTS_B 0x40 >; }; From 2c9548bc2650b3a2778050f62d8f683f507e6e6d Mon Sep 17 00:00:00 2001 From: He Ying Date: Thu, 20 Jan 2022 20:44:18 -0500 Subject: [PATCH 0956/1842] powerpc/kasan: Silence KASAN warnings in __get_wchan() [ Upstream commit a1b29ba2f2c171b9bea73be993bfdf0a62d37d15 ] The following KASAN warning was reported in our kernel. BUG: KASAN: stack-out-of-bounds in get_wchan+0x188/0x250 Read of size 4 at addr d216f958 by task ps/14437 CPU: 3 PID: 14437 Comm: ps Tainted: G O 5.10.0 #1 Call Trace: [daa63858] [c0654348] dump_stack+0x9c/0xe4 (unreliable) [daa63888] [c035cf0c] print_address_description.constprop.3+0x8c/0x570 [daa63908] [c035d6bc] kasan_report+0x1ac/0x218 [daa63948] [c00496e8] get_wchan+0x188/0x250 [daa63978] [c0461ec8] do_task_stat+0xce8/0xe60 [daa63b98] [c0455ac8] proc_single_show+0x98/0x170 [daa63bc8] [c03cab8c] seq_read_iter+0x1ec/0x900 [daa63c38] [c03cb47c] seq_read+0x1dc/0x290 [daa63d68] [c037fc94] vfs_read+0x164/0x510 [daa63ea8] [c03808e4] ksys_read+0x144/0x1d0 [daa63f38] [c005b1dc] ret_from_syscall+0x0/0x38 --- interrupt: c00 at 0x8fa8f4 LR = 0x8fa8cc The buggy address belongs to the page: page:98ebcdd2 refcount:0 mapcount:0 mapping:00000000 index:0x2 pfn:0x1216f flags: 0x0() raw: 00000000 00000000 01010122 00000000 00000002 00000000 ffffffff 00000000 raw: 00000000 page dumped because: kasan: bad access detected Memory state around the buggy address: d216f800: 00 00 00 00 00 f1 f1 f1 f1 00 00 00 00 00 00 00 d216f880: f2 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 >d216f900: 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 ^ d216f980: f2 f2 f2 f2 f2 f2 f2 00 00 00 00 00 00 00 00 00 d216fa00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 After looking into this issue, I find the buggy address belongs to the task stack region. It seems KASAN has something wrong. I look into the code of __get_wchan in x86 architecture and find the same issue has been resolved by the commit f7d27c35ddff ("x86/mm, kasan: Silence KASAN warnings in get_wchan()"). The solution could be applied to powerpc architecture too. As Andrey Ryabinin said, get_wchan() is racy by design, it may access volatile stack of running task, thus it may access redzone in a stack frame and cause KASAN to warn about this. Use READ_ONCE_NOCHECK() to silence these warnings. 
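For illustration, a minimal sketch of the pattern (peek_stack_word is a made-up name; READ_ONCE_NOCHECK() is the real macro): it performs the same single read as READ_ONCE() but without KASAN checking, which fits here because the walk of another task's stack is racy by design and only the false-positive report is being suppressed.

/* Hypothetical: read one word from a possibly live stack frame. */
static unsigned long peek_stack_word(unsigned long sp, int index)
{
	return READ_ONCE_NOCHECK(((unsigned long *)sp)[index]);
}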
Reported-by: Wanming Hu Signed-off-by: He Ying Signed-off-by: Chen Jingwen Reviewed-by: Kees Cook Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220121014418.155675-1-heying24@huawei.com Signed-off-by: Sasha Levin --- arch/powerpc/kernel/process.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index 3064694afea1..cfb8fd76afb4 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -2108,12 +2108,12 @@ static unsigned long __get_wchan(struct task_struct *p) return 0; do { - sp = *(unsigned long *)sp; + sp = READ_ONCE_NOCHECK(*(unsigned long *)sp); if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD) || p->state == TASK_RUNNING) return 0; if (count > 0) { - ip = ((unsigned long *)sp)[STACK_FRAME_LR_SAVE]; + ip = READ_ONCE_NOCHECK(((unsigned long *)sp)[STACK_FRAME_LR_SAVE]); if (!in_sched_functions(ip)) return ip; } From 1b54c0065763359d3e972961d4d237017b83dfbf Mon Sep 17 00:00:00 2001 From: Hui Wang Date: Mon, 30 May 2022 12:01:50 +0800 Subject: [PATCH 0957/1842] ASoC: nau8822: Add operation for internal PLL off and on [ Upstream commit aeca8a3295022bcec46697f16e098140423d8463 ] We tried to enable the audio on an imx6sx EVB with the codec nau8822, after setting the internal PLL fractional parameters, the audio still couldn't work and the there was no sdma irq at all. After checking with the section "8.1.1 Phase Locked Loop (PLL) Design Example" of "NAU88C22 Datasheet Rev 0.6", we found we need to turn off the PLL before programming fractional parameters and turn on the PLL after programming. After this change, the audio driver could record and play sound and the sdma's irq is triggered when playing or recording. Cc: David Lin Cc: John Hsu Cc: Seven Li Signed-off-by: Hui Wang Link: https://lore.kernel.org/r/20220530040151.95221-2-hui.wang@canonical.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/nau8822.c | 4 ++++ sound/soc/codecs/nau8822.h | 3 +++ 2 files changed, 7 insertions(+) diff --git a/sound/soc/codecs/nau8822.c b/sound/soc/codecs/nau8822.c index 609aeeb27818..d831959d8ff7 100644 --- a/sound/soc/codecs/nau8822.c +++ b/sound/soc/codecs/nau8822.c @@ -740,6 +740,8 @@ static int nau8822_set_pll(struct snd_soc_dai *dai, int pll_id, int source, pll_param->pll_int, pll_param->pll_frac, pll_param->mclk_scaler, pll_param->pre_factor); + snd_soc_component_update_bits(component, + NAU8822_REG_POWER_MANAGEMENT_1, NAU8822_PLL_EN_MASK, NAU8822_PLL_OFF); snd_soc_component_update_bits(component, NAU8822_REG_PLL_N, NAU8822_PLLMCLK_DIV2 | NAU8822_PLLN_MASK, (pll_param->pre_factor ? 
NAU8822_PLLMCLK_DIV2 : 0) | @@ -757,6 +759,8 @@ static int nau8822_set_pll(struct snd_soc_dai *dai, int pll_id, int source, pll_param->mclk_scaler << NAU8822_MCLKSEL_SFT); snd_soc_component_update_bits(component, NAU8822_REG_CLOCKING, NAU8822_CLKM_MASK, NAU8822_CLKM_PLL); + snd_soc_component_update_bits(component, + NAU8822_REG_POWER_MANAGEMENT_1, NAU8822_PLL_EN_MASK, NAU8822_PLL_ON); return 0; } diff --git a/sound/soc/codecs/nau8822.h b/sound/soc/codecs/nau8822.h index 489191ff187e..b45d42c15de6 100644 --- a/sound/soc/codecs/nau8822.h +++ b/sound/soc/codecs/nau8822.h @@ -90,6 +90,9 @@ #define NAU8822_REFIMP_3K 0x3 #define NAU8822_IOBUF_EN (0x1 << 2) #define NAU8822_ABIAS_EN (0x1 << 3) +#define NAU8822_PLL_EN_MASK (0x1 << 5) +#define NAU8822_PLL_ON (0x1 << 5) +#define NAU8822_PLL_OFF (0x0 << 5) /* NAU8822_REG_AUDIO_INTERFACE (0x4) */ #define NAU8822_AIFMT_MASK (0x3 << 3) From d7be05aff27278c89c5c2d5518e235a315ce447a Mon Sep 17 00:00:00 2001 From: Rob Clark Date: Wed, 1 Jun 2022 07:51:16 -0700 Subject: [PATCH 0958/1842] dma-debug: make things less spammy under memory pressure [ Upstream commit e19f8fa6ce1ca9b8b934ba7d2e8f34c95abc6e60 ] Limit the error msg to avoid flooding the console. If you have a lot of threads hitting this at once, they could have already gotten passed the dma_debug_disabled() check before they get to the point of allocation failure, resulting in quite a lot of this error message spamming the log. Use pr_err_once() to limit that. Signed-off-by: Rob Clark Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- kernel/dma/debug.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c index ee7da1f2462f..ae9fc1ee6d20 100644 --- a/kernel/dma/debug.c +++ b/kernel/dma/debug.c @@ -564,7 +564,7 @@ static void add_dma_entry(struct dma_debug_entry *entry) rc = active_cacheline_insert(entry); if (rc == -ENOMEM) { - pr_err("cacheline tracking ENOMEM, dma-debug disabled\n"); + pr_err_once("cacheline tracking ENOMEM, dma-debug disabled\n"); global_disable = true; } From cb6a0b83f1bc74b8a45324ef7838e0fb87f6f014 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 2 Jun 2022 17:21:14 +0100 Subject: [PATCH 0959/1842] ASoC: cs42l52: Fix TLV scales for mixer controls [ Upstream commit 8bf5aabf524eec61013e506f764a0b2652dc5665 ] The datasheet specifies the range of the mixer volumes as between -51.5dB and 12dB with a 0.5dB step. Update the TLVs for this. 
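A quick worked check of those numbers (example_mix_tlv is an illustrative name, not a declaration from the patch): DECLARE_TLV_DB_SCALE() from <sound/tlv.h> takes the minimum and the step in 0.01 dB units, so a range starting at -51.5 dB with 0.5 dB steps is written as -5150 and 50; going from -51.5 dB up to +12 dB is 63.5 / 0.5 = 127 steps, i.e. 128 control values.

/* Illustrative only: -51.5 dB minimum, 0.5 dB per step, no mute value. */
static DECLARE_TLV_DB_SCALE(example_mix_tlv, -5150, 50, 0);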
Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-2-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/cs42l52.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sound/soc/codecs/cs42l52.c b/sound/soc/codecs/cs42l52.c index f772628f233e..750e6c923512 100644 --- a/sound/soc/codecs/cs42l52.c +++ b/sound/soc/codecs/cs42l52.c @@ -137,7 +137,7 @@ static DECLARE_TLV_DB_SCALE(mic_tlv, 1600, 100, 0); static DECLARE_TLV_DB_SCALE(pga_tlv, -600, 50, 0); -static DECLARE_TLV_DB_SCALE(mix_tlv, -50, 50, 0); +static DECLARE_TLV_DB_SCALE(mix_tlv, -5150, 50, 0); static DECLARE_TLV_DB_SCALE(beep_tlv, -56, 200, 0); @@ -364,7 +364,7 @@ static const struct snd_kcontrol_new cs42l52_snd_controls[] = { CS42L52_ADCB_VOL, 0, 0xA0, 0x78, ipd_tlv), SOC_DOUBLE_R_SX_TLV("ADC Mixer Volume", CS42L52_ADCA_MIXER_VOL, CS42L52_ADCB_MIXER_VOL, - 0, 0x19, 0x7F, ipd_tlv), + 0, 0x19, 0x7F, mix_tlv), SOC_DOUBLE("ADC Switch", CS42L52_ADC_MISC_CTL, 0, 1, 1, 0), From 70e355867dc21bdc59d6835274077d3073ba423a Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 2 Jun 2022 17:21:15 +0100 Subject: [PATCH 0960/1842] ASoC: cs35l36: Update digital volume TLV [ Upstream commit 5005a2345825eb8346546d99bfe669f73111b5c5 ] The digital volume TLV specifies the step as 0.25dB but the actual step of the control is 0.125dB. Update the TLV to correct this. Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-3-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/cs35l36.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sound/soc/codecs/cs35l36.c b/sound/soc/codecs/cs35l36.c index e9b5f76f27a8..aa32b8c26578 100644 --- a/sound/soc/codecs/cs35l36.c +++ b/sound/soc/codecs/cs35l36.c @@ -444,7 +444,8 @@ static bool cs35l36_volatile_reg(struct device *dev, unsigned int reg) } } -static DECLARE_TLV_DB_SCALE(dig_vol_tlv, -10200, 25, 0); +static const DECLARE_TLV_DB_RANGE(dig_vol_tlv, 0, 912, + TLV_DB_MINMAX_ITEM(-10200, 1200)); static DECLARE_TLV_DB_SCALE(amp_gain_tlv, 0, 1, 1); static const char * const cs35l36_pcm_sftramp_text[] = { From b8a47bcc4d1405df0d6dc76b9037faae3936b3f3 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 2 Jun 2022 17:21:16 +0100 Subject: [PATCH 0961/1842] ASoC: cs53l30: Correct number of volume levels on SX controls [ Upstream commit 7fbd6dd68127927e844912a16741016d432a0737 ] This driver specified the maximum value rather than the number of volume levels on the SX controls, this is incorrect, so correct them. 
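As a hedged illustration of the distinction (sx_level_count and the 0xA0/0x0B codes are made up for the example, not taken from the cs53l30 datasheet): if a signed volume field runs from a lowest register code of 0xA0 (-96 as a signed 8-bit value) up to a highest code of 0x0B (+11), it has 11 - (-96) + 1 = 108 = 0x6C distinct levels, and the SX control macros expect that level count rather than the highest raw register value.

/* Hypothetical helper: number of levels between two signed 8-bit codes. */
static inline int sx_level_count(s8 lowest, s8 highest)
{
	return highest - lowest + 1;
}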
Reported-by: David Rhodes Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-4-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/cs53l30.c | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/sound/soc/codecs/cs53l30.c b/sound/soc/codecs/cs53l30.c index ed22361b35c1..a5a383b92305 100644 --- a/sound/soc/codecs/cs53l30.c +++ b/sound/soc/codecs/cs53l30.c @@ -347,22 +347,22 @@ static const struct snd_kcontrol_new cs53l30_snd_controls[] = { SOC_ENUM("ADC2 NG Delay", adc2_ng_delay_enum), SOC_SINGLE_SX_TLV("ADC1A PGA Volume", - CS53L30_ADC1A_AFE_CTL, 0, 0x34, 0x18, pga_tlv), + CS53L30_ADC1A_AFE_CTL, 0, 0x34, 0x24, pga_tlv), SOC_SINGLE_SX_TLV("ADC1B PGA Volume", - CS53L30_ADC1B_AFE_CTL, 0, 0x34, 0x18, pga_tlv), + CS53L30_ADC1B_AFE_CTL, 0, 0x34, 0x24, pga_tlv), SOC_SINGLE_SX_TLV("ADC2A PGA Volume", - CS53L30_ADC2A_AFE_CTL, 0, 0x34, 0x18, pga_tlv), + CS53L30_ADC2A_AFE_CTL, 0, 0x34, 0x24, pga_tlv), SOC_SINGLE_SX_TLV("ADC2B PGA Volume", - CS53L30_ADC2B_AFE_CTL, 0, 0x34, 0x18, pga_tlv), + CS53L30_ADC2B_AFE_CTL, 0, 0x34, 0x24, pga_tlv), SOC_SINGLE_SX_TLV("ADC1A Digital Volume", - CS53L30_ADC1A_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv), + CS53L30_ADC1A_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv), SOC_SINGLE_SX_TLV("ADC1B Digital Volume", - CS53L30_ADC1B_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv), + CS53L30_ADC1B_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv), SOC_SINGLE_SX_TLV("ADC2A Digital Volume", - CS53L30_ADC2A_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv), + CS53L30_ADC2A_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv), SOC_SINGLE_SX_TLV("ADC2B Digital Volume", - CS53L30_ADC2B_DIG_VOL, 0, 0xA0, 0x0C, dig_tlv), + CS53L30_ADC2B_DIG_VOL, 0, 0xA0, 0x6C, dig_tlv), }; static const struct snd_soc_dapm_widget cs53l30_dapm_widgets[] = { From 13e5b76d3d71e1c64fc777ac409a3e0c9d4ad1cc Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 2 Jun 2022 17:21:17 +0100 Subject: [PATCH 0962/1842] ASoC: cs42l52: Correct TLV for Bypass Volume [ Upstream commit 91e90c712fade0b69cdff7cc6512f6099bd18ae5 ] The Bypass Volume is accidentally using a -6dB minimum TLV rather than the correct -60dB minimum. Add a new TLV to correct this. 
Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-5-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/cs42l52.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/sound/soc/codecs/cs42l52.c b/sound/soc/codecs/cs42l52.c index 750e6c923512..38223641bdf6 100644 --- a/sound/soc/codecs/cs42l52.c +++ b/sound/soc/codecs/cs42l52.c @@ -137,6 +137,8 @@ static DECLARE_TLV_DB_SCALE(mic_tlv, 1600, 100, 0); static DECLARE_TLV_DB_SCALE(pga_tlv, -600, 50, 0); +static DECLARE_TLV_DB_SCALE(pass_tlv, -6000, 50, 0); + static DECLARE_TLV_DB_SCALE(mix_tlv, -5150, 50, 0); static DECLARE_TLV_DB_SCALE(beep_tlv, -56, 200, 0); @@ -351,7 +353,7 @@ static const struct snd_kcontrol_new cs42l52_snd_controls[] = { CS42L52_SPKB_VOL, 0, 0x40, 0xC0, hl_tlv), SOC_DOUBLE_R_SX_TLV("Bypass Volume", CS42L52_PASSTHRUA_VOL, - CS42L52_PASSTHRUB_VOL, 0, 0x88, 0x90, pga_tlv), + CS42L52_PASSTHRUB_VOL, 0, 0x88, 0x90, pass_tlv), SOC_DOUBLE("Bypass Mute", CS42L52_MISC_CTL, 4, 5, 1, 0), From f93d8fe3dce89fbeaaa9770982b1514c32022ee9 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 2 Jun 2022 17:21:18 +0100 Subject: [PATCH 0963/1842] ASoC: cs42l56: Correct typo in minimum level for SX volume controls [ Upstream commit a8928ada9b96944cadd8b65d191e33199fd38782 ] A couple of the SX volume controls specify 0x84 as the lowest volume value, however the correct value from the datasheet is 0x44. The datasheet don't include spaces in the value it displays as binary so this was almost certainly just a typo reading 1000100. Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-6-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/cs42l56.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sound/soc/codecs/cs42l56.c b/sound/soc/codecs/cs42l56.c index 06dcfae9dfe7..d41e03193106 100644 --- a/sound/soc/codecs/cs42l56.c +++ b/sound/soc/codecs/cs42l56.c @@ -391,9 +391,9 @@ static const struct snd_kcontrol_new cs42l56_snd_controls[] = { SOC_DOUBLE("ADC Boost Switch", CS42L56_GAIN_BIAS_CTL, 3, 2, 1, 1), SOC_DOUBLE_R_SX_TLV("Headphone Volume", CS42L56_HPA_VOLUME, - CS42L56_HPB_VOLUME, 0, 0x84, 0x48, hl_tlv), + CS42L56_HPB_VOLUME, 0, 0x44, 0x48, hl_tlv), SOC_DOUBLE_R_SX_TLV("LineOut Volume", CS42L56_LOA_VOLUME, - CS42L56_LOB_VOLUME, 0, 0x84, 0x48, hl_tlv), + CS42L56_LOB_VOLUME, 0, 0x44, 0x48, hl_tlv), SOC_SINGLE_TLV("Bass Shelving Volume", CS42L56_TONE_CTL, 0, 0x00, 1, tone_tlv), From 440b2a62da2ecbb2cf174ac84a8b7e419527a1d6 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 2 Jun 2022 17:21:19 +0100 Subject: [PATCH 0964/1842] ASoC: cs42l51: Correct minimum value for SX volume control [ Upstream commit fcb3b5a58926d16d9a338841b74af06d4c29be15 ] The minimum value for the PGA Volume is given as 0x1A, however the values from there to 0x19 are all the same volume and this is not represented in the TLV structure. The number of volumes given is correct so this leads to all the volumes being shifted. Move the minimum value up to 0x19 to fix this. 
Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220602162119.3393857-7-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/cs42l51.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sound/soc/codecs/cs42l51.c b/sound/soc/codecs/cs42l51.c index c61b17dc2af8..fc6a2bc311b4 100644 --- a/sound/soc/codecs/cs42l51.c +++ b/sound/soc/codecs/cs42l51.c @@ -146,7 +146,7 @@ static const struct snd_kcontrol_new cs42l51_snd_controls[] = { 0, 0xA0, 96, adc_att_tlv), SOC_DOUBLE_R_SX_TLV("PGA Volume", CS42L51_ALC_PGA_CTL, CS42L51_ALC_PGB_CTL, - 0, 0x1A, 30, pga_tlv), + 0, 0x19, 30, pga_tlv), SOC_SINGLE("Playback Deemphasis Switch", CS42L51_DAC_CTL, 3, 1, 0), SOC_SINGLE("Auto-Mute Switch", CS42L51_DAC_CTL, 2, 1, 0), SOC_SINGLE("Soft Ramp Switch", CS42L51_DAC_CTL, 1, 1, 0), From 36cd19e7d4e5571d77a2ed20c5b6ef50cf57734a Mon Sep 17 00:00:00 2001 From: Sergey Shtylyov Date: Sat, 21 May 2022 23:34:10 +0300 Subject: [PATCH 0965/1842] ata: libata-core: fix NULL pointer deref in ata_host_alloc_pinfo() [ Upstream commit bf476fe22aa1851bab4728e0c49025a6a0bea307 ] In an unlikely (and probably wrong?) case that the 'ppi' parameter of ata_host_alloc_pinfo() points to an array starting with a NULL pointer, there's going to be a kernel oops as the 'pi' local variable won't get reassigned from the initial value of NULL. Initialize 'pi' instead to '&ata_dummy_port_info' to fix the possible kernel oops for good... Found by Linux Verification Center (linuxtesting.org) with the SVACE static analysis tool. Signed-off-by: Sergey Shtylyov Signed-off-by: Damien Le Moal Signed-off-by: Sasha Levin --- drivers/ata/libata-core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index f963a0a7da46..2402fa4d8aa5 100644 --- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -5475,7 +5475,7 @@ struct ata_host *ata_host_alloc_pinfo(struct device *dev, const struct ata_port_info * const * ppi, int n_ports) { - const struct ata_port_info *pi; + const struct ata_port_info *pi = &ata_dummy_port_info; struct ata_host *host; int i, j; @@ -5483,7 +5483,7 @@ struct ata_host *ata_host_alloc_pinfo(struct device *dev, if (!host) return NULL; - for (i = 0, j = 0, pi = NULL; i < host->n_ports; i++) { + for (i = 0, j = 0; i < host->n_ports; i++) { struct ata_port *ap = host->ports[i]; if (ppi[j]) From 8656623bdc0d12ac0659a264cc8db6e0e770f3ce Mon Sep 17 00:00:00 2001 From: "Matthew Wilcox (Oracle)" Date: Sun, 5 Jun 2022 15:38:13 +0100 Subject: [PATCH 0966/1842] quota: Prevent memory allocation recursion while holding dq_lock [ Upstream commit 537e11cdc7a6b3ce94fa25ed41306193df9677b7 ] As described in commit 02117b8ae9c0 ("f2fs: Set GF_NOFS in read_cache_page_gfp while doing f2fs_quota_read"), we must not enter filesystem reclaim while holding the dq_lock. Prevent this more generally by using memalloc_nofs_save() while holding the lock. 
Link: https://lore.kernel.org/r/20220605143815.2330891-2-willy@infradead.org Signed-off-by: Matthew Wilcox (Oracle) Signed-off-by: Jan Kara Signed-off-by: Sasha Levin --- fs/quota/dquot.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c index 09fb8459bb5c..65f123d5809b 100644 --- a/fs/quota/dquot.c +++ b/fs/quota/dquot.c @@ -79,6 +79,7 @@ #include #include #include +#include #include "../internal.h" /* ugh */ #include @@ -427,9 +428,11 @@ EXPORT_SYMBOL(mark_info_dirty); int dquot_acquire(struct dquot *dquot) { int ret = 0, ret2 = 0; + unsigned int memalloc; struct quota_info *dqopt = sb_dqopt(dquot->dq_sb); mutex_lock(&dquot->dq_lock); + memalloc = memalloc_nofs_save(); if (!test_bit(DQ_READ_B, &dquot->dq_flags)) { ret = dqopt->ops[dquot->dq_id.type]->read_dqblk(dquot); if (ret < 0) @@ -460,6 +463,7 @@ int dquot_acquire(struct dquot *dquot) smp_mb__before_atomic(); set_bit(DQ_ACTIVE_B, &dquot->dq_flags); out_iolock: + memalloc_nofs_restore(memalloc); mutex_unlock(&dquot->dq_lock); return ret; } @@ -471,9 +475,11 @@ EXPORT_SYMBOL(dquot_acquire); int dquot_commit(struct dquot *dquot) { int ret = 0; + unsigned int memalloc; struct quota_info *dqopt = sb_dqopt(dquot->dq_sb); mutex_lock(&dquot->dq_lock); + memalloc = memalloc_nofs_save(); if (!clear_dquot_dirty(dquot)) goto out_lock; /* Inactive dquot can be only if there was error during read/init @@ -483,6 +489,7 @@ int dquot_commit(struct dquot *dquot) else ret = -EIO; out_lock: + memalloc_nofs_restore(memalloc); mutex_unlock(&dquot->dq_lock); return ret; } @@ -494,9 +501,11 @@ EXPORT_SYMBOL(dquot_commit); int dquot_release(struct dquot *dquot) { int ret = 0, ret2 = 0; + unsigned int memalloc; struct quota_info *dqopt = sb_dqopt(dquot->dq_sb); mutex_lock(&dquot->dq_lock); + memalloc = memalloc_nofs_save(); /* Check whether we are not racing with some other dqget() */ if (dquot_is_busy(dquot)) goto out_dqlock; @@ -512,6 +521,7 @@ int dquot_release(struct dquot *dquot) } clear_bit(DQ_ACTIVE_B, &dquot->dq_flags); out_dqlock: + memalloc_nofs_restore(memalloc); mutex_unlock(&dquot->dq_lock); return ret; } From c7b8c3758f13061500f7adf17a2f86c3987a7a08 Mon Sep 17 00:00:00 2001 From: Adam Ford Date: Thu, 26 May 2022 13:21:28 -0500 Subject: [PATCH 0967/1842] ASoC: wm8962: Fix suspend while playing music [ Upstream commit d1f5272c0f7d2e53c6f2480f46725442776f5f78 ] If the audio CODEC is playing sound when the system is suspended, it can be left in a state which throws the following error: wm8962 3-001a: ASoC: error at soc_component_read_no_lock on wm8962.3-001a: -16 Once this error has occurred, the audio will not work again until rebooted. Fix this by configuring SET_SYSTEM_SLEEP_PM_OPS. 
Signed-off-by: Adam Ford Acked-by: Charles Keepax Link: https://lore.kernel.org/r/20220526182129.538472-1-aford173@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/wm8962.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c index 0bd3bbc2aacf..38651022e3d5 100644 --- a/sound/soc/codecs/wm8962.c +++ b/sound/soc/codecs/wm8962.c @@ -3864,6 +3864,7 @@ static int wm8962_runtime_suspend(struct device *dev) #endif static const struct dev_pm_ops wm8962_pm = { + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) SET_RUNTIME_PM_OPS(wm8962_runtime_suspend, wm8962_runtime_resume, NULL) }; From a572c7440251fff218bcec093d047ec3a91bb106 Mon Sep 17 00:00:00 2001 From: Mark Brown Date: Fri, 3 Jun 2022 14:39:37 +0200 Subject: [PATCH 0968/1842] ASoC: es8328: Fix event generation for deemphasis control [ Upstream commit 8259610c2ec01c5cbfb61882ae176aabacac9c19 ] Currently the put() method for the deemphasis control returns 0 when a new value is written to the control even if the value changed, meaning events are not generated. Fix this, skip the work of updating the value when it is unchanged and then return 1 after having done so. Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20220603123937.4013603-1-broonie@kernel.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/es8328.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sound/soc/codecs/es8328.c b/sound/soc/codecs/es8328.c index 7e26231a596a..081b5f189632 100644 --- a/sound/soc/codecs/es8328.c +++ b/sound/soc/codecs/es8328.c @@ -161,13 +161,16 @@ static int es8328_put_deemph(struct snd_kcontrol *kcontrol, if (deemph > 1) return -EINVAL; + if (es8328->deemph == deemph) + return 0; + ret = es8328_set_deemph(component); if (ret < 0) return ret; es8328->deemph = deemph; - return 0; + return 1; } From 2e640e5e44a70feaa6f3beef0323c6bfe6361029 Mon Sep 17 00:00:00 2001 From: Mark Brown Date: Fri, 3 Jun 2022 13:50:03 +0200 Subject: [PATCH 0969/1842] ASoC: wm_adsp: Fix event generation for wm_adsp_fw_put() [ Upstream commit 2abdf9f80019e8244d3806ed0e1c9f725e50b452 ] Currently wm_adsp_fw_put() returns 0 rather than 1 when updating the value of the control, meaning that no event is generated to userspace. Fix this by setting the default return value to 1, the code already exits early with a return value of 0 if the value is unchanged. 
Signed-off-by: Mark Brown Reviewed-by: Richard Fitzgerald Link: https://lore.kernel.org/r/20220603115003.3865834-1-broonie@kernel.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/wm_adsp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c index 51d95437e0fd..10189f44af28 100644 --- a/sound/soc/codecs/wm_adsp.c +++ b/sound/soc/codecs/wm_adsp.c @@ -800,7 +800,7 @@ int wm_adsp_fw_put(struct snd_kcontrol *kcontrol, struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol); struct soc_enum *e = (struct soc_enum *)kcontrol->private_value; struct wm_adsp *dsp = snd_soc_component_get_drvdata(component); - int ret = 0; + int ret = 1; if (ucontrol->value.enumerated.item[0] == dsp[e->shift_l].fw) return 0; From 0e9994b86580178555af6ad22114687805dc51dd Mon Sep 17 00:00:00 2001 From: Marius Hoch Date: Tue, 7 Jun 2022 12:10:52 -0700 Subject: [PATCH 0970/1842] Input: soc_button_array - also add Lenovo Yoga Tablet2 1051F to dmi_use_low_level_irq [ Upstream commit 6ab2e51898cd4343bbdf8587af8ce8fbabddbcb5 ] Commit 223f61b8c5ad ("Input: soc_button_array - add Lenovo Yoga Tablet2 1051L to the dmi_use_low_level_irq list") added the 1051L to this list already, but the same problem applies to the 1051F. As there are no further 1051 variants (just the F/L), we can just DMI match 1051. Tested on a Lenovo Yoga Tablet2 1051F: Without this patch the home-button stops working after a wakeup from suspend. Signed-off-by: Marius Hoch Reviewed-by: Hans de Goede Link: https://lore.kernel.org/r/20220603120246.3065-1-mail@mariushoch.de Signed-off-by: Dmitry Torokhov Signed-off-by: Sasha Levin --- drivers/input/misc/soc_button_array.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c index cb6ec59a045d..efffcf0ebd3b 100644 --- a/drivers/input/misc/soc_button_array.c +++ b/drivers/input/misc/soc_button_array.c @@ -85,13 +85,13 @@ static const struct dmi_system_id dmi_use_low_level_irq[] = { }, { /* - * Lenovo Yoga Tab2 1051L, something messes with the home-button + * Lenovo Yoga Tab2 1051F/1051L, something messes with the home-button * IRQ settings, leading to a non working home-button. */ .matches = { DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), DMI_MATCH(DMI_PRODUCT_NAME, "60073"), - DMI_MATCH(DMI_PRODUCT_VERSION, "1051L"), + DMI_MATCH(DMI_PRODUCT_VERSION, "1051"), }, }, {} /* Terminating entry */ From f416fee125d4e59e9038943c2bfcdbc904d52866 Mon Sep 17 00:00:00 2001 From: Wentao Wang Date: Thu, 2 Jun 2022 08:57:00 +0000 Subject: [PATCH 0971/1842] scsi: vmw_pvscsi: Expand vcpuHint to 16 bits [ Upstream commit cf71d59c2eceadfcde0fb52e237990a0909880d7 ] vcpuHint has been expanded to 16 bit on host to enable routing to more CPUs. Guest side should align with the change. This change has been tested with hosts with 8-bit and 16-bit vcpuHint, on both platforms host side can get correct value. Link: https://lore.kernel.org/r/EF35F4D5-5DCC-42C5-BCC4-29DF1729B24C@vmware.com Signed-off-by: Wentao Wang Signed-off-by: Martin K. 
Petersen Signed-off-by: Sasha Levin --- drivers/scsi/vmw_pvscsi.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/vmw_pvscsi.h b/drivers/scsi/vmw_pvscsi.h index 75966d3f326e..d87c12324c03 100644 --- a/drivers/scsi/vmw_pvscsi.h +++ b/drivers/scsi/vmw_pvscsi.h @@ -333,8 +333,8 @@ struct PVSCSIRingReqDesc { u8 tag; u8 bus; u8 target; - u8 vcpuHint; - u8 unused[59]; + u16 vcpuHint; + u8 unused[58]; } __packed; /* From 916145bf9df78e79442a5bb0f66400a4f048109a Mon Sep 17 00:00:00 2001 From: James Smart Date: Fri, 3 Jun 2022 10:43:26 -0700 Subject: [PATCH 0972/1842] scsi: lpfc: Fix port stuck in bypassed state after LIP in PT2PT topology [ Upstream commit 336d63615466b4c06b9401c987813fd19bdde39b ] After issuing a LIP, a specific target vendor does not ACC the FLOGI that lpfc sends. However, it does send its own FLOGI that lpfc ACCs. The target then establishes the port IDs by sending a PLOGI. lpfc PLOGI_ACCs and starts the RPI registration for DID 0x000001. The target then sends a LOGO to the fabric DID. lpfc is currently treating the LOGO from the fabric DID as a link down and cleans up all the ndlps. The ndlp for DID 0x000001 is put back into NPR and discovery stops, leaving the port in stuck in bypassed mode. Change lpfc behavior such that if a LOGO is received for the fabric DID in PT2PT topology skip the lpfc_linkdown_port() routine and just move the fabric DID back to NPR. Link: https://lore.kernel.org/r/20220603174329.63777-7-jsmart2021@gmail.com Co-developed-by: Justin Tee Signed-off-by: Justin Tee Signed-off-by: James Smart Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin --- drivers/scsi/lpfc/lpfc_nportdisc.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c index e33f752318c1..1e22364a31fc 100644 --- a/drivers/scsi/lpfc/lpfc_nportdisc.c +++ b/drivers/scsi/lpfc/lpfc_nportdisc.c @@ -857,7 +857,8 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_nvmet_invalidate_host(phba, ndlp); if (ndlp->nlp_DID == Fabric_DID) { - if (vport->port_state <= LPFC_FDISC) + if (vport->port_state <= LPFC_FDISC || + vport->fc_flag & FC_PT2PT) goto out; lpfc_linkdown_port(vport); spin_lock_irq(shost->host_lock); From 85acc5bf0515c060493f05e64a54925b0da006c0 Mon Sep 17 00:00:00 2001 From: James Smart Date: Fri, 3 Jun 2022 10:43:28 -0700 Subject: [PATCH 0973/1842] scsi: lpfc: Allow reduced polling rate for nvme_admin_async_event cmd completion [ Upstream commit 2e7e9c0c1ec05f18d320ecc8a31eec59d2af1af9 ] NVMe Asynchronous Event Request commands have no command timeout value per specifications. Set WQE option to allow a reduced FLUSH polling rate for I/O error detection specifically for nvme_admin_async_event commands. Link: https://lore.kernel.org/r/20220603174329.63777-9-jsmart2021@gmail.com Co-developed-by: Justin Tee Signed-off-by: Justin Tee Signed-off-by: James Smart Signed-off-by: Martin K. 
Petersen Signed-off-by: Sasha Levin --- drivers/scsi/lpfc/lpfc_hw4.h | 3 +++ drivers/scsi/lpfc/lpfc_nvme.c | 11 +++++++++-- 2 files changed, 12 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h index 47e832b7f2c2..bfbc1c4fcab1 100644 --- a/drivers/scsi/lpfc/lpfc_hw4.h +++ b/drivers/scsi/lpfc/lpfc_hw4.h @@ -4281,6 +4281,9 @@ struct wqe_common { #define wqe_sup_SHIFT 6 #define wqe_sup_MASK 0x00000001 #define wqe_sup_WORD word11 +#define wqe_ffrq_SHIFT 6 +#define wqe_ffrq_MASK 0x00000001 +#define wqe_ffrq_WORD word11 #define wqe_wqec_SHIFT 7 #define wqe_wqec_MASK 0x00000001 #define wqe_wqec_WORD word11 diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c index 03c81cec6bc9..ef92e0b4b9cf 100644 --- a/drivers/scsi/lpfc/lpfc_nvme.c +++ b/drivers/scsi/lpfc/lpfc_nvme.c @@ -1315,7 +1315,8 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport, { struct lpfc_hba *phba = vport->phba; struct nvmefc_fcp_req *nCmd = lpfc_ncmd->nvmeCmd; - struct lpfc_iocbq *pwqeq = &(lpfc_ncmd->cur_iocbq); + struct nvme_common_command *sqe; + struct lpfc_iocbq *pwqeq = &lpfc_ncmd->cur_iocbq; union lpfc_wqe128 *wqe = &pwqeq->wqe; uint32_t req_len; @@ -1371,8 +1372,14 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport, cstat->control_requests++; } - if (pnode->nlp_nvme_info & NLP_NVME_NSLER) + if (pnode->nlp_nvme_info & NLP_NVME_NSLER) { bf_set(wqe_erp, &wqe->generic.wqe_com, 1); + sqe = &((struct nvme_fc_cmd_iu *) + nCmd->cmdaddr)->sqe.common; + if (sqe->opcode == nvme_admin_async_event) + bf_set(wqe_ffrq, &wqe->generic.wqe_com, 1); + } + /* * Finish initializing those WQE fields that are independent * of the nvme_cmnd request_buffer From 410b69262173c6c1947b3b032338311ef412f1f9 Mon Sep 17 00:00:00 2001 From: Chengguang Xu Date: Sun, 29 May 2022 23:34:53 +0800 Subject: [PATCH 0974/1842] scsi: ipr: Fix missing/incorrect resource cleanup in error case [ Upstream commit d64c491911322af1dcada98e5b9ee0d87e8c8fee ] Fix missing resource cleanup (when '(--i) == 0') for error case in ipr_alloc_mem() and skip incorrect resource cleanup (when '(--i) == 0') for error case in ipr_request_other_msi_irqs() because variable i started from 1. Link: https://lore.kernel.org/r/20220529153456.4183738-4-cgxu519@mykernel.net Reviewed-by: Dan Carpenter Acked-by: Brian King Signed-off-by: Chengguang Xu Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin --- drivers/scsi/ipr.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c index b0aa58d117cc..90e8a538b078 100644 --- a/drivers/scsi/ipr.c +++ b/drivers/scsi/ipr.c @@ -9792,7 +9792,7 @@ static int ipr_alloc_mem(struct ipr_ioa_cfg *ioa_cfg) GFP_KERNEL); if (!ioa_cfg->hrrq[i].host_rrq) { - while (--i > 0) + while (--i >= 0) dma_free_coherent(&pdev->dev, sizeof(u32) * ioa_cfg->hrrq[i].size, ioa_cfg->hrrq[i].host_rrq, @@ -10065,7 +10065,7 @@ static int ipr_request_other_msi_irqs(struct ipr_ioa_cfg *ioa_cfg, ioa_cfg->vectors_info[i].desc, &ioa_cfg->hrrq[i]); if (rc) { - while (--i >= 0) + while (--i > 0) free_irq(pci_irq_vector(pdev, i), &ioa_cfg->hrrq[i]); return rc; From 16dd002eb87174fef8fca452f3c3ca23f2d7af3d Mon Sep 17 00:00:00 2001 From: Chengguang Xu Date: Sun, 29 May 2022 23:34:55 +0800 Subject: [PATCH 0975/1842] scsi: pmcraid: Fix missing resource cleanup in error case [ Upstream commit ec1e8adcbdf661c57c395bca342945f4f815add7 ] Fix missing resource cleanup (when '(--i) == 0') for error case in pmcraid_register_interrupt_handler(). 
Link: https://lore.kernel.org/r/20220529153456.4183738-6-cgxu519@mykernel.net Reviewed-by: Dan Carpenter Signed-off-by: Chengguang Xu Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin --- drivers/scsi/pmcraid.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c index cbe5fab793eb..ce10d680c56c 100644 --- a/drivers/scsi/pmcraid.c +++ b/drivers/scsi/pmcraid.c @@ -4528,7 +4528,7 @@ pmcraid_register_interrupt_handler(struct pmcraid_instance *pinstance) return 0; out_unwind: - while (--i > 0) + while (--i >= 0) free_irq(pci_irq_vector(pdev, i), &pinstance->hrrq_vector[i]); pci_free_irq_vectors(pdev); return rc; From d539feb6df5ef94b171f3497583b3dbc1512a347 Mon Sep 17 00:00:00 2001 From: huangwenhui Date: Wed, 8 Jun 2022 16:23:57 +0800 Subject: [PATCH 0976/1842] ALSA: hda/realtek - Add HW8326 support [ Upstream commit 527f4643e03c298c1e3321cfa27866b1374a55e1 ] Added the support of new Huawei codec HW8326. The HW8326 is developed by Huawei with Realtek's IP Core, and it's compatible with ALC256. Signed-off-by: huangwenhui Link: https://lore.kernel.org/r/20220608082357.26898-1-huangwenhuia@uniontech.com Signed-off-by: Takashi Iwai Signed-off-by: Sasha Levin --- sound/hda/hdac_device.c | 1 + sound/pci/hda/patch_realtek.c | 14 ++++++++++++++ 2 files changed, 15 insertions(+) diff --git a/sound/hda/hdac_device.c b/sound/hda/hdac_device.c index 3e9e9ac804f6..b7e5032b61c9 100644 --- a/sound/hda/hdac_device.c +++ b/sound/hda/hdac_device.c @@ -660,6 +660,7 @@ static const struct hda_vendor_id hda_vendor_ids[] = { { 0x14f1, "Conexant" }, { 0x17e8, "Chrontel" }, { 0x1854, "LG" }, + { 0x19e5, "Huawei" }, { 0x1aec, "Wolfson Microelectronics" }, { 0x1af4, "QEMU" }, { 0x434d, "C-Media" }, diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index cf3b1133b785..83b5c2580c8f 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -439,6 +439,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec) case 0x10ec0245: case 0x10ec0255: case 0x10ec0256: + case 0x19e58326: case 0x10ec0257: case 0x10ec0282: case 0x10ec0283: @@ -576,6 +577,7 @@ static void alc_shutup_pins(struct hda_codec *codec) switch (codec->core.vendor_id) { case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: case 0x10ec0283: case 0x10ec0286: case 0x10ec0288: @@ -3252,6 +3254,7 @@ static void alc_disable_headset_jack_key(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: alc_write_coef_idx(codec, 0x48, 0x0); alc_update_coef_idx(codec, 0x49, 0x0045, 0x0); break; @@ -3280,6 +3283,7 @@ static void alc_enable_headset_jack_key(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: alc_write_coef_idx(codec, 0x48, 0xd011); alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045); break; @@ -4849,6 +4853,7 @@ static void alc_headset_mode_unplugged(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: alc_process_coef_fw(codec, coef0256); break; case 0x10ec0234: @@ -4964,6 +4969,7 @@ static void alc_headset_mode_mic_in(struct hda_codec *codec, hda_nid_t hp_pin, case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: alc_write_coef_idx(codec, 0x45, 0xc489); snd_hda_set_pin_ctl_cache(codec, hp_pin, 0); alc_process_coef_fw(codec, coef0256); @@ -5114,6 +5120,7 @@ static void alc_headset_mode_default(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: 
alc_write_coef_idx(codec, 0x1b, 0x0e4b); alc_write_coef_idx(codec, 0x45, 0xc089); msleep(50); @@ -5213,6 +5220,7 @@ static void alc_headset_mode_ctia(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: alc_process_coef_fw(codec, coef0256); break; case 0x10ec0234: @@ -5327,6 +5335,7 @@ static void alc_headset_mode_omtp(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: alc_process_coef_fw(codec, coef0256); break; case 0x10ec0234: @@ -5428,6 +5437,7 @@ static void alc_determine_headset_type(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: alc_write_coef_idx(codec, 0x1b, 0x0e4b); alc_write_coef_idx(codec, 0x06, 0x6104); alc_write_coefex_idx(codec, 0x57, 0x3, 0x09a3); @@ -5722,6 +5732,7 @@ static void alc255_set_default_jack_type(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: alc_process_coef_fw(codec, alc256fw); break; } @@ -6325,6 +6336,7 @@ static void alc_combo_jack_hp_jd_restart(struct hda_codec *codec) case 0x10ec0236: case 0x10ec0255: case 0x10ec0256: + case 0x19e58326: alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */ alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15); break; @@ -9813,6 +9825,7 @@ static int patch_alc269(struct hda_codec *codec) case 0x10ec0230: case 0x10ec0236: case 0x10ec0256: + case 0x19e58326: spec->codec_variant = ALC269_TYPE_ALC256; spec->shutup = alc256_shutup; spec->init_hook = alc256_init; @@ -11255,6 +11268,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = { HDA_CODEC_ENTRY(0x10ec0b00, "ALCS1200A", patch_alc882), HDA_CODEC_ENTRY(0x10ec1168, "ALC1220", patch_alc882), HDA_CODEC_ENTRY(0x10ec1220, "ALC1220", patch_alc882), + HDA_CODEC_ENTRY(0x19e58326, "HW8326", patch_alc269), {} /* terminator */ }; MODULE_DEVICE_TABLE(hdaudio, snd_hda_id_realtek); From 6c18f47f47d4e5df42db2300bf4200fd0a2c7a4f Mon Sep 17 00:00:00 2001 From: chengkaitao Date: Thu, 2 Jun 2022 08:55:42 +0800 Subject: [PATCH 0977/1842] virtio-mmio: fix missing put_device() when vm_cmdline_parent registration failed [ Upstream commit a58a7f97ba11391d2d0d408e0b24f38d86ae748e ] The reference must be released when device_register(&vm_cmdline_parent) failed. Add the corresponding 'put_device()' in the error handling path. Signed-off-by: chengkaitao Message-Id: <20220602005542.16489-1-chengkaitao@didiglobal.com> Signed-off-by: Michael S. Tsirkin Acked-by: Jason Wang Signed-off-by: Sasha Levin --- drivers/virtio/virtio_mmio.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index 238383ff1064..5c970e6f664c 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -689,6 +689,7 @@ static int vm_cmdline_set(const char *device, if (!vm_cmdline_parent_registered) { err = device_register(&vm_cmdline_parent); if (err) { + put_device(&vm_cmdline_parent); pr_err("Failed to register parent device!\n"); return err; } From 0eeec1a8b0cd38c47edeb042980a6aeacecf35ed Mon Sep 17 00:00:00 2001 From: Xiaohui Zhang Date: Tue, 7 Jun 2022 16:32:30 +0800 Subject: [PATCH 0978/1842] nfc: nfcmrvl: Fix memory leak in nfcmrvl_play_deferred [ Upstream commit 8a4d480702b71184fabcf379b80bf7539716752e ] Similar to the handling of play_deferred in commit 19cfe912c37b ("Bluetooth: btusb: Fix memory leak in play_deferred"), we thought a patch might be needed here as well. 
Currently usb_submit_urb is called directly to submit deferred tx urbs after unanchor them. So the usb_giveback_urb_bh would failed to unref it in usb_unanchor_urb and cause memory leak. Put those urbs in tx_anchor to avoid the leak, and also fix the error handling. Signed-off-by: Xiaohui Zhang Acked-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220607083230.6182-1-xiaohuizhang@ruc.edu.cn Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/nfc/nfcmrvl/usb.c | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/drivers/nfc/nfcmrvl/usb.c b/drivers/nfc/nfcmrvl/usb.c index 888e298f610b..f26986eb53f1 100644 --- a/drivers/nfc/nfcmrvl/usb.c +++ b/drivers/nfc/nfcmrvl/usb.c @@ -401,13 +401,25 @@ static void nfcmrvl_play_deferred(struct nfcmrvl_usb_drv_data *drv_data) int err; while ((urb = usb_get_from_anchor(&drv_data->deferred))) { + usb_anchor_urb(urb, &drv_data->tx_anchor); + err = usb_submit_urb(urb, GFP_ATOMIC); - if (err) + if (err) { + kfree(urb->setup_packet); + usb_unanchor_urb(urb); + usb_free_urb(urb); break; + } drv_data->tx_in_flight++; + usb_free_urb(urb); + } + + /* Cleanup the rest deferred urbs. */ + while ((urb = usb_get_from_anchor(&drv_data->deferred))) { + kfree(urb->setup_packet); + usb_free_urb(urb); } - usb_scuttle_anchored_urbs(&drv_data->deferred); } static int nfcmrvl_resume(struct usb_interface *intf) From b8879ca1fd7348b4d5db7db86dcb97f60c73d751 Mon Sep 17 00:00:00 2001 From: Wang Yufen Date: Tue, 7 Jun 2022 20:00:28 +0800 Subject: [PATCH 0979/1842] ipv6: Fix signed integer overflow in l2tp_ip6_sendmsg [ Upstream commit f638a84afef3dfe10554c51820c16e39a278c915 ] When len >= INT_MAX - transhdrlen, ulen = len + transhdrlen will be overflow. To fix, we can follow what udpv6 does and subtract the transhdrlen from the max. Signed-off-by: Wang Yufen Link: https://lore.kernel.org/r/20220607120028.845916-2-wangyufen@huawei.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/l2tp/l2tp_ip6.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c index 96f975777438..d54dbd01d86f 100644 --- a/net/l2tp/l2tp_ip6.c +++ b/net/l2tp/l2tp_ip6.c @@ -502,14 +502,15 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len) struct ipcm6_cookie ipc6; int addr_len = msg->msg_namelen; int transhdrlen = 4; /* zero session-id */ - int ulen = len + transhdrlen; + int ulen; int err; /* Rough check on arithmetic overflow, * better check is made in ip6_append_data(). */ - if (len > INT_MAX) + if (len > INT_MAX - transhdrlen) return -EMSGSIZE; + ulen = len + transhdrlen; /* Mirror BSD error message compatibility */ if (msg->msg_flags & MSG_OOB) From 38c519df8ecf028a4d2250bad43eea1344f2fc18 Mon Sep 17 00:00:00 2001 From: Chen Lin Date: Wed, 8 Jun 2022 20:46:53 +0800 Subject: [PATCH 0980/1842] net: ethernet: mtk_eth_soc: fix misuse of mem alloc interface netdev[napi]_alloc_frag [ Upstream commit 2f2c0d2919a14002760f89f4e02960c735a316d2 ] When rx_flag == MTK_RX_FLAGS_HWLRO, rx_data_len = MTK_MAX_LRO_RX_LENGTH(4096 * 3) > PAGE_SIZE. netdev_alloc_frag is for alloction of page fragment only. Reference to other drivers and Documentation/vm/page_frags.rst Branch to use __get_free_pages when ring->frag_size > PAGE_SIZE. 
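As a rough sketch of the resulting allocation rule (simplified kernel-context code; rx_buf_alloc() is an illustrative name, not a helper from this driver): buffers that fit in a page keep coming from the page-fragment allocator, while the oversized LRO buffers are taken as whole compound pages from the page allocator.

    static void *rx_buf_alloc(unsigned int frag_size, gfp_t gfp_mask)
    {
            if (frag_size <= PAGE_SIZE)
                    return netdev_alloc_frag(frag_size);

            /* LRO buffer larger than a page: allocate a compound page
             * so it can be handed back to the page allocator as one unit */
            return (void *)__get_free_pages(gfp_mask | __GFP_COMP | __GFP_NOWARN,
                                            get_order(frag_size));
    }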
Signed-off-by: Chen Lin Link: https://lore.kernel.org/r/1654692413-2598-1-git-send-email-chen45464546@163.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/mediatek/mtk_eth_soc.c | 21 +++++++++++++++++++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index 789642647cd3..c7aff89141e1 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -806,6 +806,17 @@ static inline void mtk_rx_get_desc(struct mtk_rx_dma *rxd, rxd->rxd4 = READ_ONCE(dma_rxd->rxd4); } +static void *mtk_max_lro_buf_alloc(gfp_t gfp_mask) +{ + unsigned int size = mtk_max_frag_size(MTK_MAX_LRO_RX_LENGTH); + unsigned long data; + + data = __get_free_pages(gfp_mask | __GFP_COMP | __GFP_NOWARN, + get_order(size)); + + return (void *)data; +} + /* the qdma core needs scratch memory to be setup */ static int mtk_init_fq_dma(struct mtk_eth *eth) { @@ -1303,7 +1314,10 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, goto release_desc; /* alloc new buffer */ - new_data = napi_alloc_frag(ring->frag_size); + if (ring->frag_size <= PAGE_SIZE) + new_data = napi_alloc_frag(ring->frag_size); + else + new_data = mtk_max_lro_buf_alloc(GFP_ATOMIC); if (unlikely(!new_data)) { netdev->stats.rx_dropped++; goto release_desc; @@ -1700,7 +1714,10 @@ static int mtk_rx_alloc(struct mtk_eth *eth, int ring_no, int rx_flag) return -ENOMEM; for (i = 0; i < rx_dma_size; i++) { - ring->data[i] = netdev_alloc_frag(ring->frag_size); + if (ring->frag_size <= PAGE_SIZE) + ring->data[i] = netdev_alloc_frag(ring->frag_size); + else + ring->data[i] = mtk_max_lro_buf_alloc(GFP_KERNEL); if (!ring->data[i]) return -ENOMEM; } From 85340c06345025761f26ca2eabc2718aad41e7d7 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Thu, 9 Jun 2022 10:03:28 -0700 Subject: [PATCH 0981/1842] mellanox: mlx5: avoid uninitialized variable warning with gcc-12 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 842c3b3ddc5f4d17275edbaa09e23d712bf8b915 ] gcc-12 started warning about 'tracker' being used uninitialized: drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c: In function ‘mlx5_do_bond’: drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c:786:28: warning: ‘tracker’ is used uninitialized [-Wuninitialized] 786 | struct lag_tracker tracker; | ^~~~~~~ which seems to be because it doesn't track how the use (and initialization) is bound by the 'do_bond' flag. But admittedly that 'do_bond' usage is fairly complicated, and involves passing it around as an argument to helper functions, so it's somewhat understandable that gcc doesn't see how that all works. This function could be rewritten to make the use of that tracker variable more obviously safe, but for now I'm just adding the forced initialization of it. 
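The shape of the code gcc-12 trips over can be sketched as follows (evaluate_bond_state() and apply_bond_state() are invented stand-ins for the helpers mentioned above): the tracker is written and read only under the same do_bond condition, but since that condition is computed through helper calls the compiler cannot see through, it warns; the empty initializer zeroes the struct up front and quiets the warning without changing behaviour.

    static void do_bond_sketch(struct mlx5_lag *ldev)
    {
            struct lag_tracker tracker = {};        /* forced zero-init */
            bool do_bond;

            do_bond = evaluate_bond_state(ldev, &tracker);  /* may fill tracker */
            if (do_bond)
                    apply_bond_state(ldev, &tracker);       /* only reader */
    }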
Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/lag.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c index 11cc3ea5010a..9fb3e5ec1da6 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c @@ -274,7 +274,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev) { struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev; struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev; - struct lag_tracker tracker; + struct lag_tracker tracker = { }; bool do_bond, roce_lag; int err; From f3c8bfd6dc4f23075a404b4dfd5630ca37f8465c Mon Sep 17 00:00:00 2001 From: Yupeng Li Date: Wed, 8 Jun 2022 09:12:29 +0800 Subject: [PATCH 0982/1842] MIPS: Loongson-3: fix compile mips cpu_hwmon as module build error. MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 41e456400212803704e82691716e1d7b0865114a ] set cpu_hwmon as a module build with loongson_sysconf, loongson_chiptemp undefined error,fix cpu_hwmon compile options to be bool.Some kernel compilation error information is as follows: Checking missing-syscalls for N32 CALL scripts/checksyscalls.sh Checking missing-syscalls for O32 CALL scripts/checksyscalls.sh CALL scripts/checksyscalls.sh CHK include/generated/compile.h CC [M] drivers/platform/mips/cpu_hwmon.o Building modules, stage 2. MODPOST 200 modules ERROR: "loongson_sysconf" [drivers/platform/mips/cpu_hwmon.ko] undefined! ERROR: "loongson_chiptemp" [drivers/platform/mips/cpu_hwmon.ko] undefined! make[1]: *** [scripts/Makefile.modpost:92:__modpost] 错误 1 make: *** [Makefile:1261:modules] 错误 2 Signed-off-by: Yupeng Li Reviewed-by: Guenter Roeck Reviewed-by: Huacai Chen Signed-off-by: Thomas Bogendoerfer Signed-off-by: Sasha Levin --- drivers/platform/mips/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/platform/mips/Kconfig b/drivers/platform/mips/Kconfig index 8ac149173c64..495da331ca2d 100644 --- a/drivers/platform/mips/Kconfig +++ b/drivers/platform/mips/Kconfig @@ -17,7 +17,7 @@ menuconfig MIPS_PLATFORM_DEVICES if MIPS_PLATFORM_DEVICES config CPU_HWMON - tristate "Loongson-3 CPU HWMon Driver" + bool "Loongson-3 CPU HWMon Driver" depends on MACH_LOONGSON64 select HWMON default y From 9d667348dc331b796e4fdc12399e1df38c950073 Mon Sep 17 00:00:00 2001 From: Serge Semin Date: Fri, 10 Jun 2022 13:45:00 +0300 Subject: [PATCH 0983/1842] gpio: dwapb: Don't print error on -EPROBE_DEFER [ Upstream commit 77006f6edc0e0f58617eb25e53731f78641e820d ] Currently if the APB or Debounce clocks aren't yet ready to be requested the DW GPIO driver will correctly handle that by deferring the probe procedure, but the error is still printed to the system log. It needlessly pollutes the log since there was no real error but a request to postpone the clock request procedure since the clocks subsystem hasn't been fully initialized yet. Let's fix that by using the dev_err_probe method to print the APB/clock request error status. It will correctly handle the deferred probe situation and print the error if it actually happens. 
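For reference, the idiom being adopted behaves roughly as in this generic probe sketch (foo_probe() is illustrative, not the dwapb code): dev_err_probe() passes the error code straight through, prints the message for real failures, and stays silent when the code is -EPROBE_DEFER, which is exactly what the clock-request path wants.

    static int foo_probe(struct platform_device *pdev)
    {
            struct clk *clk = devm_clk_get(&pdev->dev, NULL);

            if (IS_ERR(clk))
                    /* quiet on -EPROBE_DEFER, dev_err() otherwise */
                    return dev_err_probe(&pdev->dev, PTR_ERR(clk),
                                         "failed to get clock\n");

            return 0;
    }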
Signed-off-by: Serge Semin Reviewed-by: Andy Shevchenko Signed-off-by: Bartosz Golaszewski Signed-off-by: Sasha Levin --- drivers/gpio/gpio-dwapb.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c index 4275c18a097a..ea2e2618b794 100644 --- a/drivers/gpio/gpio-dwapb.c +++ b/drivers/gpio/gpio-dwapb.c @@ -646,10 +646,9 @@ static int dwapb_get_clks(struct dwapb_gpio *gpio) gpio->clks[1].id = "db"; err = devm_clk_bulk_get_optional(gpio->dev, DWAPB_NR_CLOCKS, gpio->clks); - if (err) { - dev_err(gpio->dev, "Cannot get APB/Debounce clocks\n"); - return err; - } + if (err) + return dev_err_probe(gpio->dev, err, + "Cannot get APB/Debounce clocks\n"); err = clk_bulk_prepare_enable(DWAPB_NR_CLOCKS, gpio->clks); if (err) { From 4603a37f6eaefb2cedc35552f040d15d93025146 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sun, 5 Jun 2022 18:30:46 +0200 Subject: [PATCH 0984/1842] random: credit cpu and bootloader seeds by default [ Upstream commit 846bb97e131d7938847963cca00657c995b1fce1 ] This commit changes the default Kconfig values of RANDOM_TRUST_CPU and RANDOM_TRUST_BOOTLOADER to be Y by default. It does not change any existing configs or change any kernel behavior. The reason for this is several fold. As background, I recently had an email thread with the kernel maintainers of Fedora/RHEL, Debian, Ubuntu, Gentoo, Arch, NixOS, Alpine, SUSE, and Void as recipients. I noted that some distros trust RDRAND, some trust EFI, and some trust both, and I asked why or why not. There wasn't really much of a "debate" but rather an interesting discussion of what the historical reasons have been for this, and it came up that some distros just missed the introduction of the bootloader Kconfig knob, while another didn't want to enable it until there was a boot time switch to turn it off for more concerned users (which has since been added). The result of the rather uneventful discussion is that every major Linux distro enables these two options by default. While I didn't have really too strong of an opinion going into this thread -- and I mostly wanted to learn what the distros' thinking was one way or another -- ultimately I think their choice was a decent enough one for a default option (which can be disabled at boot time). I'll try to summarize the pros and cons: Pros: - The RNG machinery gets initialized super quickly, and there's no messing around with subsequent blocking behavior. - The bootloader mechanism is used by kexec in order for the prior kernel to initialize the RNG of the next kernel, which increases the entropy available to early boot daemons of the next kernel. - Previous objections related to backdoors centered around Dual_EC_DRBG-like kleptographic systems, in which observing some amount of the output stream enables an adversary holding the right key to determine the entire output stream. This used to be a partially justified concern, because RDRAND output was mixed into the output stream in varying ways, some of which may have lacked pre-image resistance (e.g. XOR or an LFSR). But this is no longer the case. Now, all usage of RDRAND and bootloader seeds go through a cryptographic hash function. This means that the CPU would have to compute a hash pre-image, which is not considered to be feasible (otherwise the hash function would be terribly broken). - More generally, if the CPU is backdoored, the RNG is probably not the realistic vector of choice for an attacker. 
- These CPU or bootloader seeds are far from being the only source of entropy. Rather, there is generally a pretty huge amount of entropy, not all of which is credited, especially on CPUs that support instructions like RDRAND. In other words, assuming RDRAND outputs all zeros, an attacker would *still* have to accurately model every single other entropy source also in use. - The RNG now reseeds itself quite rapidly during boot, starting at 2 seconds, then 4, then 8, then 16, and so forth, so that other sources of entropy get used without much delay. - Paranoid users can set random.trust_{cpu,bootloader}=no in the kernel command line, and paranoid system builders can set the Kconfig options to N, so there's no reduction or restriction of optionality. - It's a practical default. - All the distros have it set this way. Microsoft and Apple trust it too. Bandwagon. Cons: - RDRAND *could* still be backdoored with something like a fixed key or limited space serial number seed or another indexable scheme like that. (However, it's hard to imagine threat models where the CPU is backdoored like this, yet people are still okay making *any* computations with it or connecting it to networks, etc.) - RDRAND *could* be defective, rather than backdoored, and produce garbage that is in one way or another insufficient for crypto. - Suggesting a *reduction* in paranoia, as this commit effectively does, may cause some to question my personal integrity as a "security person". - Bootloader seeds and RDRAND are generally very difficult if not all together impossible to audit. Keep in mind that this doesn't actually change any behavior. This is just a change in the default Kconfig value. The distros already are shipping kernels that set things this way. Ard made an additional argument in [1]: We're at the mercy of firmware and micro-architecture anyway, given that we are also relying on it to ensure that every instruction in the kernel's executable image has been faithfully copied to memory, and that the CPU implements those instructions as documented. So I don't think firmware or ISA bugs related to RNGs deserve special treatment - if they are broken, we should quirk around them like we usually do. So enabling these by default is a step in the right direction IMHO. In [2], Phil pointed out that having this disabled masked a bug that CI otherwise would have caught: A clean 5.15.45 boots cleanly, whereas a downstream kernel shows the static key warning (but it does go on to boot). The significant difference is that our defconfigs set CONFIG_RANDOM_TRUST_BOOTLOADER=y defining that on top of multi_v7_defconfig demonstrates the issue on a clean 5.15.45. Conversely, not setting that option in a downstream kernel build avoids the warning [1] https://lore.kernel.org/lkml/CAMj1kXGi+ieviFjXv9zQBSaGyyzeGW_VpMpTLJK8PJb2QHEQ-w@mail.gmail.com/ [2] https://lore.kernel.org/lkml/c47c42e3-1d56-5859-a6ad-976a1a3381c6@raspberrypi.com/ Cc: Theodore Ts'o Reviewed-by: Ard Biesheuvel Signed-off-by: Jason A. Donenfeld Signed-off-by: Sasha Levin --- drivers/char/Kconfig | 52 +++++++++++++++++++++++++++----------------- 1 file changed, 32 insertions(+), 20 deletions(-) diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig index 3e2703a49632..b4e65d1ede26 100644 --- a/drivers/char/Kconfig +++ b/drivers/char/Kconfig @@ -471,29 +471,41 @@ config ADI and SSM (Silicon Secured Memory). Intended consumers of this driver include crash and makedumpfile. 
-endmenu - config RANDOM_TRUST_CPU - bool "Trust the CPU manufacturer to initialize Linux's CRNG" + bool "Initialize RNG using CPU RNG instructions" + default y depends on ARCH_RANDOM - default n help - Assume that CPU manufacturer (e.g., Intel or AMD for RDSEED or - RDRAND, IBM for the S390 and Power PC architectures) is trustworthy - for the purposes of initializing Linux's CRNG. Since this is not - something that can be independently audited, this amounts to trusting - that CPU manufacturer (perhaps with the insistence or mandate - of a Nation State's intelligence or law enforcement agencies) - has not installed a hidden back door to compromise the CPU's - random number generation facilities. This can also be configured - at boot with "random.trust_cpu=on/off". + Initialize the RNG using random numbers supplied by the CPU's + RNG instructions (e.g. RDRAND), if supported and available. These + random numbers are never used directly, but are rather hashed into + the main input pool, and this happens regardless of whether or not + this option is enabled. Instead, this option controls whether the + they are credited and hence can initialize the RNG. Additionally, + other sources of randomness are always used, regardless of this + setting. Enabling this implies trusting that the CPU can supply high + quality and non-backdoored random numbers. + + Say Y here unless you have reason to mistrust your CPU or believe + its RNG facilities may be faulty. This may also be configured at + boot time with "random.trust_cpu=on/off". config RANDOM_TRUST_BOOTLOADER - bool "Trust the bootloader to initialize Linux's CRNG" + bool "Initialize RNG using bootloader-supplied seed" + default y help - Some bootloaders can provide entropy to increase the kernel's initial - device randomness. Say Y here to assume the entropy provided by the - booloader is trustworthy so it will be added to the kernel's entropy - pool. Otherwise, say N here so it will be regarded as device input that - only mixes the entropy pool. This can also be configured at boot with - "random.trust_bootloader=on/off". + Initialize the RNG using a seed supplied by the bootloader or boot + environment (e.g. EFI or a bootloader-generated device tree). This + seed is not used directly, but is rather hashed into the main input + pool, and this happens regardless of whether or not this option is + enabled. Instead, this option controls whether the seed is credited + and hence can initialize the RNG. Additionally, other sources of + randomness are always used, regardless of this setting. Enabling + this implies trusting that the bootloader can supply high quality and + non-backdoored seeds. + + Say Y here unless you have reason to mistrust your bootloader or + believe its RNG facilities may be faulty. This may also be configured + at boot time with "random.trust_bootloader=on/off". + +endmenu From 03ea83324aa0c42139ca430e4b2e291e87018f39 Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Tue, 31 May 2022 11:03:06 -0400 Subject: [PATCH 0985/1842] pNFS: Don't keep retrying if the server replied NFS4ERR_LAYOUTUNAVAILABLE [ Upstream commit fe44fb23d6ccde4c914c44ef74ab8d9d9ba02bea ] If the server tells us that a pNFS layout is not available for a specific file, then we should not keep pounding it with further layoutget requests. 
Fixes: 183d9e7b112a ("pnfs: rework LAYOUTGET retry handling") Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- fs/nfs/pnfs.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c index 8c0803d98008..69bb50d0ee3f 100644 --- a/fs/nfs/pnfs.c +++ b/fs/nfs/pnfs.c @@ -2155,6 +2155,12 @@ pnfs_update_layout(struct inode *ino, case -ERECALLCONFLICT: case -EAGAIN: break; + case -ENODATA: + /* The server returned NFS4ERR_LAYOUTUNAVAILABLE */ + pnfs_layout_set_fail_bit( + lo, pnfs_iomode_to_fail_bit(iomode)); + lseg = NULL; + goto out_put_layout_hdr; default: if (!nfs_error_is_fatal(PTR_ERR(lseg))) { pnfs_layout_clear_fail_bit(lo, pnfs_iomode_to_fail_bit(iomode)); From 8acc3e228e1c90bd410f73597a4549e0409f22d6 Mon Sep 17 00:00:00 2001 From: Trond Myklebust Date: Tue, 31 May 2022 11:03:07 -0400 Subject: [PATCH 0986/1842] pNFS: Avoid a live lock condition in pnfs_update_layout() [ Upstream commit 880265c77ac415090090d1fe72a188fee71cb458 ] If we're about to send the first layoutget for an empty layout, we want to make sure that we drain out the existing pending layoutget calls first. The reason is that these layouts may have been already implicitly returned to the server by a recall to which the client gave a NFS4ERR_NOMATCHING_LAYOUT response. The problem is that wait_var_event_killable() could in principle see the plh_outstanding count go back to '1' when the first process to wake up starts sending a new layoutget. If it fails to get a layout, then this loop can continue ad infinitum... Fixes: 0b77f97a7e42 ("NFSv4/pnfs: Fix layoutget behaviour after invalidation") Signed-off-by: Trond Myklebust Signed-off-by: Anna Schumaker Signed-off-by: Sasha Levin --- fs/nfs/callback_proc.c | 1 + fs/nfs/pnfs.c | 15 +++++++++------ fs/nfs/pnfs.h | 1 + 3 files changed, 11 insertions(+), 6 deletions(-) diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c index a5209643ac36..bfdd21224073 100644 --- a/fs/nfs/callback_proc.c +++ b/fs/nfs/callback_proc.c @@ -283,6 +283,7 @@ static u32 initiate_file_draining(struct nfs_client *clp, rv = NFS4_OK; break; case -ENOENT: + set_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags); /* Embrace your forgetfulness! */ rv = NFS4ERR_NOMATCHING_LAYOUT; diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c index 69bb50d0ee3f..21436721745b 100644 --- a/fs/nfs/pnfs.c +++ b/fs/nfs/pnfs.c @@ -469,6 +469,7 @@ pnfs_mark_layout_stateid_invalid(struct pnfs_layout_hdr *lo, pnfs_clear_lseg_state(lseg, lseg_list); pnfs_clear_layoutreturn_info(lo); pnfs_free_returned_lsegs(lo, lseg_list, &range, 0); + set_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags); if (test_bit(NFS_LAYOUT_RETURN, &lo->plh_flags) && !test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags)) pnfs_clear_layoutreturn_waitbit(lo); @@ -1923,8 +1924,9 @@ static void nfs_layoutget_begin(struct pnfs_layout_hdr *lo) static void nfs_layoutget_end(struct pnfs_layout_hdr *lo) { - if (atomic_dec_and_test(&lo->plh_outstanding)) - wake_up_var(&lo->plh_outstanding); + if (atomic_dec_and_test(&lo->plh_outstanding) && + test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags)) + wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN); } static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo) @@ -2031,11 +2033,11 @@ pnfs_update_layout(struct inode *ino, * If the layout segment list is empty, but there are outstanding * layoutget calls, then they might be subject to a layoutrecall. 
*/ - if ((list_empty(&lo->plh_segs) || !pnfs_layout_is_valid(lo)) && + if (test_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags) && atomic_read(&lo->plh_outstanding) != 0) { spin_unlock(&ino->i_lock); - lseg = ERR_PTR(wait_var_event_killable(&lo->plh_outstanding, - !atomic_read(&lo->plh_outstanding))); + lseg = ERR_PTR(wait_on_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN, + TASK_KILLABLE)); if (IS_ERR(lseg)) goto out_put_layout_hdr; pnfs_put_layout_hdr(lo); @@ -2414,7 +2416,8 @@ pnfs_layout_process(struct nfs4_layoutget *lgp) goto out_forget; } - if (!pnfs_layout_is_valid(lo) && !pnfs_is_first_layoutget(lo)) + if (test_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags) && + !pnfs_is_first_layoutget(lo)) goto out_forget; if (nfs4_stateid_match_other(&lo->plh_stateid, &res->stateid)) { diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h index 11d9ed9addc0..a7cf84a6673b 100644 --- a/fs/nfs/pnfs.h +++ b/fs/nfs/pnfs.h @@ -107,6 +107,7 @@ enum { NFS_LAYOUT_FIRST_LAYOUTGET, /* Serialize first layoutget */ NFS_LAYOUT_INODE_FREEING, /* The inode is being freed */ NFS_LAYOUT_HASHED, /* The layout visible */ + NFS_LAYOUT_DRAIN, }; enum layoutdriver_policy_flags { From db965e2757d95f695e606856418cd84003dd036d Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Mon, 6 Jun 2022 14:02:38 +0900 Subject: [PATCH 0987/1842] clocksource: hyper-v: unexport __init-annotated hv_init_clocksource() [ Upstream commit 245b993d8f6c4e25f19191edfbd8080b645e12b1 ] EXPORT_SYMBOL and __init is a bad combination because the .init.text section is freed up after the initialization. Hence, modules cannot use symbols annotated __init. The access to a freed symbol may end up with kernel panic. modpost used to detect it, but it has been broken for a decade. Recently, I fixed modpost so it started to warn it again, then this showed up in linux-next builds. There are two ways to fix it: - Remove __init - Remove EXPORT_SYMBOL I chose the latter for this case because the only in-tree call-site, arch/x86/kernel/cpu/mshyperv.c is never compiled as modular. (CONFIG_HYPERVISOR_GUEST is boolean) Fixes: dd2cb348613b ("clocksource/drivers: Continue making Hyper-V clocksource ISA agnostic") Reported-by: Stephen Rothwell Signed-off-by: Masahiro Yamada Reviewed-by: Vitaly Kuznetsov Reviewed-by: Michael Kelley Link: https://lore.kernel.org/r/20220606050238.4162200-1-masahiroy@kernel.org Signed-off-by: Wei Liu Signed-off-by: Sasha Levin --- drivers/clocksource/hyperv_timer.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c index ba04cb381cd3..7c617d8dff3f 100644 --- a/drivers/clocksource/hyperv_timer.c +++ b/drivers/clocksource/hyperv_timer.c @@ -472,4 +472,3 @@ void __init hv_init_clocksource(void) hv_sched_clock_offset = hv_read_reference_counter(); hv_setup_sched_clock(read_hv_sched_clock_msr); } -EXPORT_SYMBOL_GPL(hv_init_clocksource); From ef4d73da0a5c2a48db94f7585e2c4e842aed311c Mon Sep 17 00:00:00 2001 From: Grzegorz Szczurek Date: Fri, 29 Apr 2022 14:27:08 +0200 Subject: [PATCH 0988/1842] i40e: Fix adding ADQ filter to TC0 [ Upstream commit c3238d36c3a2be0a29a9d848d6c51e1b14be6692 ] Procedure of configure tc flower filters erroneously allows to create filters on TC0 where unfiltered packets are also directed by default. Issue was caused by insufficient checks of hw_tc parameter specifying the hardware traffic class to pass matching packets to. Fix checking hw_tc parameter which blocks creation of filters on TC0. 
Fixes: 2f4b411a3d67 ("i40e: Enable cloud filters via tc-flower") Signed-off-by: Grzegorz Szczurek Signed-off-by: Jedrzej Jagielski Tested-by: Bharathi Sreenivas Signed-off-by: Tony Nguyen Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/i40e/i40e_main.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 4a18a7c7dd4c..614f3e995100 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -8163,6 +8163,11 @@ static int i40e_configure_clsflower(struct i40e_vsi *vsi, return -EOPNOTSUPP; } + if (!tc) { + dev_err(&pf->pdev->dev, "Unable to add filter because of invalid destination"); + return -EINVAL; + } + if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) || test_bit(__I40E_RESET_INTR_RECEIVED, pf->state)) return -EBUSY; From 43dfd1169cc07128c2dc3fecd89fc2f34bf35b15 Mon Sep 17 00:00:00 2001 From: Grzegorz Szczurek Date: Fri, 29 Apr 2022 14:40:23 +0200 Subject: [PATCH 0989/1842] i40e: Fix calculating the number of queue pairs [ Upstream commit 0bb050670ac90a167ecfa3f9590f92966c9a3677 ] If ADQ is enabled for a VF, then actual number of queue pair is a number of currently available traffic classes for this VF. Without this change the configuration of the Rx/Tx queues fails with error. Fixes: d29e0d233e0d ("i40e: missing input validation on VF message handling by the PF") Signed-off-by: Grzegorz Szczurek Signed-off-by: Jedrzej Jagielski Tested-by: Bharathi Sreenivas Signed-off-by: Tony Nguyen Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c index 9181e007e039..1947c5a77550 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c @@ -2228,7 +2228,7 @@ static int i40e_vc_config_queues_msg(struct i40e_vf *vf, u8 *msg) } if (vf->adq_enabled) { - for (i = 0; i < I40E_MAX_VF_VSI; i++) + for (i = 0; i < vf->num_tc; i++) num_qps_all += vf->ch[i].num_qps; if (num_qps_all != qci->num_queue_pairs) { aq_ret = I40E_ERR_PARAM; From 814092927a215f5ca6c08249ec72a205e0b473cd Mon Sep 17 00:00:00 2001 From: Aleksandr Loktionov Date: Thu, 19 May 2022 16:01:45 +0200 Subject: [PATCH 0990/1842] i40e: Fix call trace in setup_tx_descriptors [ Upstream commit fd5855e6b1358e816710afee68a1d2bc685176ca ] After PF reset and ethtool -t there was call trace in dmesg sometimes leading to panic. When there was some time, around 5 seconds, between reset and test there were no errors. Problem was that pf reset calls i40e_vsi_close in prep_for_reset and ethtool -t calls i40e_vsi_close in diag_test. If there was not enough time between those commands the second i40e_vsi_close starts before previous i40e_vsi_close was done which leads to crash. Add check to diag_test if pf is in reset and don't start offline tests if it is true. 
Add netif_info("testing failed") into unhappy path of i40e_diag_test() Fixes: e17bc411aea8 ("i40e: Disable offline diagnostics if VFs are enabled") Fixes: 510efb2682b3 ("i40e: Fix ethtool offline diagnostic with netqueues") Signed-off-by: Michal Jaron Signed-off-by: Aleksandr Loktionov Tested-by: Gurucharan (A Contingent worker at Intel) Signed-off-by: Tony Nguyen Signed-off-by: Sasha Levin --- .../net/ethernet/intel/i40e/i40e_ethtool.c | 25 +++++++++++++------ 1 file changed, 17 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c index a2bdb2906519..63054061966e 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c @@ -2582,15 +2582,16 @@ static void i40e_diag_test(struct net_device *netdev, set_bit(__I40E_TESTING, pf->state); + if (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state) || + test_bit(__I40E_RESET_INTR_RECEIVED, pf->state)) { + dev_warn(&pf->pdev->dev, + "Cannot start offline testing when PF is in reset state.\n"); + goto skip_ol_tests; + } + if (i40e_active_vfs(pf) || i40e_active_vmdqs(pf)) { dev_warn(&pf->pdev->dev, "Please take active VFs and Netqueues offline and restart the adapter before running NIC diagnostics\n"); - data[I40E_ETH_TEST_REG] = 1; - data[I40E_ETH_TEST_EEPROM] = 1; - data[I40E_ETH_TEST_INTR] = 1; - data[I40E_ETH_TEST_LINK] = 1; - eth_test->flags |= ETH_TEST_FL_FAILED; - clear_bit(__I40E_TESTING, pf->state); goto skip_ol_tests; } @@ -2637,9 +2638,17 @@ static void i40e_diag_test(struct net_device *netdev, data[I40E_ETH_TEST_INTR] = 0; } -skip_ol_tests: - netif_info(pf, drv, netdev, "testing finished\n"); + return; + +skip_ol_tests: + data[I40E_ETH_TEST_REG] = 1; + data[I40E_ETH_TEST_EEPROM] = 1; + data[I40E_ETH_TEST_INTR] = 1; + data[I40E_ETH_TEST_LINK] = 1; + eth_test->flags |= ETH_TEST_FL_FAILED; + clear_bit(__I40E_TESTING, pf->state); + netif_info(pf, drv, netdev, "testing failed\n"); } static void i40e_get_wol(struct net_device *netdev, From 5334455067d51ea043d302a1d052db9b8af4832e Mon Sep 17 00:00:00 2001 From: Saurabh Sengar Date: Thu, 9 Jun 2022 10:16:36 -0700 Subject: [PATCH 0991/1842] Drivers: hv: vmbus: Release cpu lock in error case [ Upstream commit 656c5ba50b7172a0ea25dc1b37606bd51d01fe8d ] In case of invalid sub channel, release cpu lock before returning. Fixes: a949e86c0d780 ("Drivers: hv: vmbus: Resolve race between init_vp_index() and CPU hotplug") Signed-off-by: Saurabh Sengar Reviewed-by: Michael Kelley Link: https://lore.kernel.org/r/1654794996-13244-1-git-send-email-ssengar@linux.microsoft.com Signed-off-by: Wei Liu Signed-off-by: Sasha Levin --- drivers/hv/channel_mgmt.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c index 5dbb949b1afd..10188b1a6a08 100644 --- a/drivers/hv/channel_mgmt.c +++ b/drivers/hv/channel_mgmt.c @@ -606,6 +606,7 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel) */ if (newchannel->offermsg.offer.sub_channel_index == 0) { mutex_unlock(&vmbus_connection.channel_mutex); + cpus_read_unlock(); /* * Don't call free_channel(), because newchannel->kobj * is not initialized yet. 
From 65ca4db68b6819244df9024aea4be55edf8af1ef Mon Sep 17 00:00:00 2001 From: Vincent Whitchurch Date: Thu, 9 Jun 2022 16:17:04 +0200 Subject: [PATCH 0992/1842] tty: goldfish: Fix free_irq() on remove [ Upstream commit 499e13aac6c762e1e828172b0f0f5275651d6512 ] Pass the correct dev_id to free_irq() to fix this splat when the driver is unbound: WARNING: CPU: 0 PID: 30 at kernel/irq/manage.c:1895 free_irq Trying to free already-free IRQ 65 Call Trace: warn_slowpath_fmt free_irq goldfish_tty_remove platform_remove device_remove device_release_driver_internal device_driver_detach unbind_store drv_attr_store ... Fixes: 465893e18878e119 ("tty: goldfish: support platform_device with id -1") Signed-off-by: Vincent Whitchurch Link: https://lore.kernel.org/r/20220609141704.1080024-1-vincent.whitchurch@axis.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/tty/goldfish.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c index abc84d84f638..9180ca5e4dcd 100644 --- a/drivers/tty/goldfish.c +++ b/drivers/tty/goldfish.c @@ -428,7 +428,7 @@ static int goldfish_tty_remove(struct platform_device *pdev) tty_unregister_device(goldfish_tty_driver, qtty->console.index); iounmap(qtty->base); qtty->base = NULL; - free_irq(qtty->irq, pdev); + free_irq(qtty->irq, qtty); tty_port_destroy(&qtty->port); goldfish_tty_current_line_count--; if (goldfish_tty_current_line_count == 0) From 2b2180449ae0199bcdb3d67f6d3a40e532de928f Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 1 Jun 2022 16:30:26 +0400 Subject: [PATCH 0993/1842] misc: atmel-ssc: Fix IRQ check in ssc_probe [ Upstream commit 1c245358ce0b13669f6d1625f7a4e05c41f28980 ] platform_get_irq() returns negative error number instead 0 on failure. And the doc of platform_get_irq() provides a usage example: int irq = platform_get_irq(pdev, 0); if (irq < 0) return irq; Fix the check of return value to catch errors correctly. Fixes: eb1f2930609b ("Driver for the Atmel on-chip SSC on AT32AP and AT91") Reviewed-by: Claudiu Beznea Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220601123026.7119-1-linmq006@gmail.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/misc/atmel-ssc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/misc/atmel-ssc.c b/drivers/misc/atmel-ssc.c index d6cd5537126c..69f9b0336410 100644 --- a/drivers/misc/atmel-ssc.c +++ b/drivers/misc/atmel-ssc.c @@ -232,9 +232,9 @@ static int ssc_probe(struct platform_device *pdev) clk_disable_unprepare(ssc->clk); ssc->irq = platform_get_irq(pdev, 0); - if (!ssc->irq) { + if (ssc->irq < 0) { dev_dbg(&pdev->dev, "could not get irq\n"); - return -ENXIO; + return ssc->irq; } mutex_lock(&user_lock); From 63b26fe0252f923e6aca373e3ad4b31202dcd331 Mon Sep 17 00:00:00 2001 From: Alan Previn Date: Thu, 10 Mar 2022 16:43:11 -0800 Subject: [PATCH 0994/1842] drm/i915/reset: Fix error_state_read ptr + offset use [ Upstream commit c9b576d0c7bf55aeae1a736da7974fa202c4394d ] Fix our pointer offset usage in error_state_read when there is no i915_gpu_coredump but buf offset is non-zero. This fixes a kernel page fault can happen when multiple tests are running concurrently in a loop and one is producing engine resets and consuming the i915 error_state dump while the other is forcing full GT resets. (takes a while to trigger). 
The dmesg call trace: [ 5590.803000] BUG: unable to handle page fault for address: ffffffffa0b0e000 [ 5590.803009] #PF: supervisor read access in kernel mode [ 5590.803013] #PF: error_code(0x0000) - not-present page [ 5590.803016] PGD 5814067 P4D 5814067 PUD 5815063 PMD 109de4067 PTE 0 [ 5590.803022] Oops: 0000 [#1] PREEMPT SMP NOPTI [ 5590.803026] CPU: 5 PID: 13656 Comm: i915_hangman Tainted: G U 5.17.0-rc5-ups69-guc-err-capt-rev6+ #136 [ 5590.803033] Hardware name: Intel Corporation Alder Lake Client Platform/AlderLake-M LP4x RVP, BIOS ADLPFWI1.R00. 3031.A02.2201171222 01/17/2022 [ 5590.803039] RIP: 0010:memcpy_erms+0x6/0x10 [ 5590.803045] Code: fe ff ff cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe [ 5590.803054] RSP: 0018:ffffc90003a8fdf0 EFLAGS: 00010282 [ 5590.803057] RAX: ffff888107ee9000 RBX: ffff888108cb1a00 RCX: 0000000000000f8f [ 5590.803061] RDX: 0000000000001000 RSI: ffffffffa0b0e000 RDI: ffff888107ee9071 [ 5590.803065] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000001 [ 5590.803069] R10: 0000000000000001 R11: 0000000000000002 R12: 0000000000000019 [ 5590.803073] R13: 0000000000174fff R14: 0000000000001000 R15: ffff888107ee9000 [ 5590.803077] FS: 00007f62a99bee80(0000) GS:ffff88849f880000(0000) knlGS:0000000000000000 [ 5590.803082] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 5590.803085] CR2: ffffffffa0b0e000 CR3: 000000010a1a8004 CR4: 0000000000770ee0 [ 5590.803089] PKRU: 55555554 [ 5590.803091] Call Trace: [ 5590.803093] [ 5590.803096] error_state_read+0xa1/0xd0 [i915] [ 5590.803175] kernfs_fop_read_iter+0xb2/0x1b0 [ 5590.803180] new_sync_read+0x116/0x1a0 [ 5590.803185] vfs_read+0x114/0x1b0 [ 5590.803189] ksys_read+0x63/0xe0 [ 5590.803193] do_syscall_64+0x38/0xc0 [ 5590.803197] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 5590.803201] RIP: 0033:0x7f62aaea5912 [ 5590.803204] Code: c0 e9 b2 fe ff ff 50 48 8d 3d 5a b9 0c 00 e8 05 19 02 00 0f 1f 44 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 48 83 ec 28 48 89 54 24 [ 5590.803213] RSP: 002b:00007fff5b659ae8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 [ 5590.803218] RAX: ffffffffffffffda RBX: 0000000000100000 RCX: 00007f62aaea5912 [ 5590.803221] RDX: 000000000008b000 RSI: 00007f62a8c4000f RDI: 0000000000000006 [ 5590.803225] RBP: 00007f62a8bcb00f R08: 0000000000200010 R09: 0000000000101000 [ 5590.803229] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000006 [ 5590.803233] R13: 0000000000075000 R14: 00007f62a8acb010 R15: 0000000000200000 [ 5590.803238] [ 5590.803240] Modules linked in: i915 ttm drm_buddy drm_dp_helper drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops prime_numbers nfnetlink br_netfilter overlay mei_pxp mei_hdcp x86_pkg_temp_thermal coretemp kvm_intel snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hwdep snd_hda_core snd_pcm mei_me mei fuse ip_tables x_tables crct10dif_pclmul e1000e crc32_pclmul ptp i2c_i801 ghash_clmulni_intel i2c_smbus pps_core [last unloa ded: ttm] [ 5590.803277] CR2: ffffffffa0b0e000 [ 5590.803280] ---[ end trace 0000000000000000 ]--- Fixes: 0e39037b3165 ("drm/i915: Cache the error string") Signed-off-by: Alan Previn Reviewed-by: John Harrison Signed-off-by: John Harrison Link: https://patchwork.freedesktop.org/patch/msgid/20220311004311.514198-2-alan.previn.teres.alexis@intel.com (cherry picked from commit 
3304033a1e69cd81a2044b4422f0d7e593afb4e6) Signed-off-by: Jani Nikula Signed-off-by: Sasha Levin --- drivers/gpu/drm/i915/i915_sysfs.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/i915/i915_sysfs.c b/drivers/gpu/drm/i915/i915_sysfs.c index 45d32ef42787..ac40a95374d3 100644 --- a/drivers/gpu/drm/i915/i915_sysfs.c +++ b/drivers/gpu/drm/i915/i915_sysfs.c @@ -500,7 +500,14 @@ static ssize_t error_state_read(struct file *filp, struct kobject *kobj, struct device *kdev = kobj_to_dev(kobj); struct drm_i915_private *i915 = kdev_minor_to_i915(kdev); struct i915_gpu_coredump *gpu; - ssize_t ret; + ssize_t ret = 0; + + /* + * FIXME: Concurrent clients triggering resets and reading + clearing + * dumps can cause inconsistent sysfs reads when a user calls in with a + * non-zero offset to complete a prior partial read but the + * gpu_coredump has been cleared or replaced. + */ gpu = i915_first_error_state(i915); if (IS_ERR(gpu)) { @@ -512,8 +519,10 @@ static ssize_t error_state_read(struct file *filp, struct kobject *kobj, const char *str = "No error state collected\n"; size_t len = strlen(str); - ret = min_t(size_t, count, len - off); - memcpy(buf, str + off, ret); + if (off < len) { + ret = min_t(size_t, count, len - off); + memcpy(buf, str + off, ret); + } } return ret; From 42f7cbe2c2c98c05f29c3a64983ed1d216e3cf84 Mon Sep 17 00:00:00 2001 From: Daniel Wagner Date: Thu, 1 Apr 2021 11:54:10 +0200 Subject: [PATCH 0995/1842] nvme: use sysfs_emit instead of sprintf [ Upstream commit bff4bcf3cfc1595e0ef2aeb774b2403c88de1486 ] sysfs_emit is the recommended API to use for formatting strings to be returned to user space. It is equivalent to scnprintf and aware of the PAGE_SIZE buffer size. Suggested-by: Chaitanya Kulkarni Signed-off-by: Daniel Wagner Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 40 +++++++++++++++++------------------ drivers/nvme/host/multipath.c | 8 +++---- 2 files changed, 24 insertions(+), 24 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index d301f0280ff6..c8c8c567a000 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -2813,8 +2813,8 @@ static ssize_t subsys_##field##_show(struct device *dev, \ { \ struct nvme_subsystem *subsys = \ container_of(dev, struct nvme_subsystem, dev); \ - return sprintf(buf, "%.*s\n", \ - (int)sizeof(subsys->field), subsys->field); \ + return sysfs_emit(buf, "%.*s\n", \ + (int)sizeof(subsys->field), subsys->field); \ } \ static SUBSYS_ATTR_RO(field, S_IRUGO, subsys_##field##_show); @@ -3335,13 +3335,13 @@ static ssize_t wwid_show(struct device *dev, struct device_attribute *attr, int model_len = sizeof(subsys->model); if (!uuid_is_null(&ids->uuid)) - return sprintf(buf, "uuid.%pU\n", &ids->uuid); + return sysfs_emit(buf, "uuid.%pU\n", &ids->uuid); if (memchr_inv(ids->nguid, 0, sizeof(ids->nguid))) - return sprintf(buf, "eui.%16phN\n", ids->nguid); + return sysfs_emit(buf, "eui.%16phN\n", ids->nguid); if (memchr_inv(ids->eui64, 0, sizeof(ids->eui64))) - return sprintf(buf, "eui.%8phN\n", ids->eui64); + return sysfs_emit(buf, "eui.%8phN\n", ids->eui64); while (serial_len > 0 && (subsys->serial[serial_len - 1] == ' ' || subsys->serial[serial_len - 1] == '\0')) @@ -3350,7 +3350,7 @@ static ssize_t wwid_show(struct device *dev, struct device_attribute *attr, subsys->model[model_len - 1] == '\0')) model_len--; - return sprintf(buf, "nvme.%04x-%*phN-%*phN-%08x\n", subsys->vendor_id, + return sysfs_emit(buf, 
"nvme.%04x-%*phN-%*phN-%08x\n", subsys->vendor_id, serial_len, subsys->serial, model_len, subsys->model, head->ns_id); } @@ -3359,7 +3359,7 @@ static DEVICE_ATTR_RO(wwid); static ssize_t nguid_show(struct device *dev, struct device_attribute *attr, char *buf) { - return sprintf(buf, "%pU\n", dev_to_ns_head(dev)->ids.nguid); + return sysfs_emit(buf, "%pU\n", dev_to_ns_head(dev)->ids.nguid); } static DEVICE_ATTR_RO(nguid); @@ -3374,23 +3374,23 @@ static ssize_t uuid_show(struct device *dev, struct device_attribute *attr, if (uuid_is_null(&ids->uuid)) { printk_ratelimited(KERN_WARNING "No UUID available providing old NGUID\n"); - return sprintf(buf, "%pU\n", ids->nguid); + return sysfs_emit(buf, "%pU\n", ids->nguid); } - return sprintf(buf, "%pU\n", &ids->uuid); + return sysfs_emit(buf, "%pU\n", &ids->uuid); } static DEVICE_ATTR_RO(uuid); static ssize_t eui_show(struct device *dev, struct device_attribute *attr, char *buf) { - return sprintf(buf, "%8ph\n", dev_to_ns_head(dev)->ids.eui64); + return sysfs_emit(buf, "%8ph\n", dev_to_ns_head(dev)->ids.eui64); } static DEVICE_ATTR_RO(eui); static ssize_t nsid_show(struct device *dev, struct device_attribute *attr, char *buf) { - return sprintf(buf, "%d\n", dev_to_ns_head(dev)->ns_id); + return sysfs_emit(buf, "%d\n", dev_to_ns_head(dev)->ns_id); } static DEVICE_ATTR_RO(nsid); @@ -3455,7 +3455,7 @@ static ssize_t field##_show(struct device *dev, \ struct device_attribute *attr, char *buf) \ { \ struct nvme_ctrl *ctrl = dev_get_drvdata(dev); \ - return sprintf(buf, "%.*s\n", \ + return sysfs_emit(buf, "%.*s\n", \ (int)sizeof(ctrl->subsys->field), ctrl->subsys->field); \ } \ static DEVICE_ATTR(field, S_IRUGO, field##_show, NULL); @@ -3469,7 +3469,7 @@ static ssize_t field##_show(struct device *dev, \ struct device_attribute *attr, char *buf) \ { \ struct nvme_ctrl *ctrl = dev_get_drvdata(dev); \ - return sprintf(buf, "%d\n", ctrl->field); \ + return sysfs_emit(buf, "%d\n", ctrl->field); \ } \ static DEVICE_ATTR(field, S_IRUGO, field##_show, NULL); @@ -3517,9 +3517,9 @@ static ssize_t nvme_sysfs_show_state(struct device *dev, if ((unsigned)ctrl->state < ARRAY_SIZE(state_name) && state_name[ctrl->state]) - return sprintf(buf, "%s\n", state_name[ctrl->state]); + return sysfs_emit(buf, "%s\n", state_name[ctrl->state]); - return sprintf(buf, "unknown state\n"); + return sysfs_emit(buf, "unknown state\n"); } static DEVICE_ATTR(state, S_IRUGO, nvme_sysfs_show_state, NULL); @@ -3571,9 +3571,9 @@ static ssize_t nvme_ctrl_loss_tmo_show(struct device *dev, struct nvmf_ctrl_options *opts = ctrl->opts; if (ctrl->opts->max_reconnects == -1) - return sprintf(buf, "off\n"); - return sprintf(buf, "%d\n", - opts->max_reconnects * opts->reconnect_delay); + return sysfs_emit(buf, "off\n"); + return sysfs_emit(buf, "%d\n", + opts->max_reconnects * opts->reconnect_delay); } static ssize_t nvme_ctrl_loss_tmo_store(struct device *dev, @@ -3603,8 +3603,8 @@ static ssize_t nvme_ctrl_reconnect_delay_show(struct device *dev, struct nvme_ctrl *ctrl = dev_get_drvdata(dev); if (ctrl->opts->reconnect_delay == -1) - return sprintf(buf, "off\n"); - return sprintf(buf, "%d\n", ctrl->opts->reconnect_delay); + return sysfs_emit(buf, "off\n"); + return sysfs_emit(buf, "%d\n", ctrl->opts->reconnect_delay); } static ssize_t nvme_ctrl_reconnect_delay_store(struct device *dev, diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c index a9e15c8f907b..379d6818a063 100644 --- a/drivers/nvme/host/multipath.c +++ b/drivers/nvme/host/multipath.c @@ -624,8 +624,8 @@ static 
ssize_t nvme_subsys_iopolicy_show(struct device *dev, struct nvme_subsystem *subsys = container_of(dev, struct nvme_subsystem, dev); - return sprintf(buf, "%s\n", - nvme_iopolicy_names[READ_ONCE(subsys->iopolicy)]); + return sysfs_emit(buf, "%s\n", + nvme_iopolicy_names[READ_ONCE(subsys->iopolicy)]); } static ssize_t nvme_subsys_iopolicy_store(struct device *dev, @@ -650,7 +650,7 @@ SUBSYS_ATTR_RW(iopolicy, S_IRUGO | S_IWUSR, static ssize_t ana_grpid_show(struct device *dev, struct device_attribute *attr, char *buf) { - return sprintf(buf, "%d\n", nvme_get_ns_from_dev(dev)->ana_grpid); + return sysfs_emit(buf, "%d\n", nvme_get_ns_from_dev(dev)->ana_grpid); } DEVICE_ATTR_RO(ana_grpid); @@ -659,7 +659,7 @@ static ssize_t ana_state_show(struct device *dev, struct device_attribute *attr, { struct nvme_ns *ns = nvme_get_ns_from_dev(dev); - return sprintf(buf, "%s\n", nvme_ana_state_names[ns->ana_state]); + return sysfs_emit(buf, "%s\n", nvme_ana_state_names[ns->ana_state]); } DEVICE_ATTR_RO(ana_state); From b90ae84a8a9cd57a6e773dd640748904588977fe Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Thomas=20Wei=C3=9Fschuh?= Date: Tue, 7 Jun 2022 17:55:55 +0200 Subject: [PATCH 0996/1842] nvme: add device name to warning in uuid_show() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 1fc766b5c08417248e0008bca14c3572ac0f1c26 ] This provides more context to users. Old message: [ 00.000000] No UUID available providing old NGUID New message: [ 00.000000] block nvme0n1: No UUID available providing old NGUID Fixes: d934f9848a77 ("nvme: provide UUID value to userspace") Signed-off-by: Thomas Weißschuh Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index c8c8c567a000..0aa68da51ed7 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -3372,8 +3372,8 @@ static ssize_t uuid_show(struct device *dev, struct device_attribute *attr, * we have no UUID set */ if (uuid_is_null(&ids->uuid)) { - printk_ratelimited(KERN_WARNING - "No UUID available providing old NGUID\n"); + dev_warn_ratelimited(dev, + "No UUID available providing old NGUID\n"); return sysfs_emit(buf, "%pU\n", ids->nguid); } return sysfs_emit(buf, "%pU\n", &ids->uuid); From 984793f255732857861571de12e38dd8ba025f48 Mon Sep 17 00:00:00 2001 From: Petr Machata Date: Mon, 13 Jun 2022 15:50:17 +0300 Subject: [PATCH 0997/1842] mlxsw: spectrum_cnt: Reorder counter pools [ Upstream commit 4b7a632ac4e7101ceefee8484d5c2ca505d347b3 ] Both RIF and ACL flow counters use a 24-bit SW-managed counter address to communicate which counter they want to bind. In a number of Spectrum FW releases, binding a RIF counter is broken and slices the counter index to 16 bits. As a result, on Spectrum-2 and above, no more than about 410 RIF counters can be effectively used. This translates to 205 netdevices for which L3 HW stats can be enabled. (This does not happen on Spectrum-1, because there are fewer counters available overall and the counter index never exceeds 16 bits.) Binding counters to ACLs does not have this issue. Therefore reorder the counter allocation scheme so that RIF counters come first and therefore get lower indices that are below the 16-bit barrier. 
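Why a one-line reorder of the enum is enough can be sketched generically (this is an illustration of the allocation scheme, not the mlxsw code): sub-pool base indices are carved out of one shared counter index space in enum order, so the pool listed first owns the lowest indices, and putting the RIF pool first keeps its counter indices below the 16-bit boundary the affected firmware mishandles.

    enum sub_pool_id {      /* RIF listed first -> gets the low indices */
            SUB_POOL_RIF,
            SUB_POOL_FLOW,
            SUB_POOL_COUNT,
    };

    static void assign_pool_bases(unsigned int base[SUB_POOL_COUNT],
                                  const unsigned int size[SUB_POOL_COUNT])
    {
            unsigned int next = 0;
            int i;

            for (i = 0; i < SUB_POOL_COUNT; i++) {  /* enum order == carve order */
                    base[i] = next;
                    next += size[i];
            }
    }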
Fixes: 98e60dce4da1 ("Merge branch 'mlxsw-Introduce-initial-Spectrum-2-support'") Reported-by: Maksym Yaremchuk Signed-off-by: Petr Machata Signed-off-by: Ido Schimmel Link: https://lore.kernel.org/r/20220613125017.2018162-1-idosch@nvidia.com Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h index a68d931090dd..15c8d4de8350 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.h @@ -8,8 +8,8 @@ #include "spectrum.h" enum mlxsw_sp_counter_sub_pool_id { - MLXSW_SP_COUNTER_SUB_POOL_FLOW, MLXSW_SP_COUNTER_SUB_POOL_RIF, + MLXSW_SP_COUNTER_SUB_POOL_FLOW, }; int mlxsw_sp_counter_alloc(struct mlxsw_sp *mlxsw_sp, From 28069e026e649683930f887b2d8f4a981b93bc41 Mon Sep 17 00:00:00 2001 From: Christophe JAILLET Date: Mon, 13 Jun 2022 22:53:50 +0200 Subject: [PATCH 0998/1842] net: bgmac: Fix an erroneous kfree() in bgmac_remove() [ Upstream commit d7dd6eccfbc95ac47a12396f84e7e1b361db654b ] 'bgmac' is part of a managed resource allocated with bgmac_alloc(). It should not be freed explicitly. Remove the erroneous kfree() from the .remove() function. Fixes: 34a5102c3235 ("net: bgmac: allocate struct bgmac just once & don't copy it") Signed-off-by: Christophe JAILLET Reviewed-by: Florian Fainelli Link: https://lore.kernel.org/r/a026153108dd21239036a032b95c25b5cece253b.1655153616.git.christophe.jaillet@wanadoo.fr Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/broadcom/bgmac-bcma.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/ethernet/broadcom/bgmac-bcma.c b/drivers/net/ethernet/broadcom/bgmac-bcma.c index a5fd161ab5ee..26746197515f 100644 --- a/drivers/net/ethernet/broadcom/bgmac-bcma.c +++ b/drivers/net/ethernet/broadcom/bgmac-bcma.c @@ -323,7 +323,6 @@ static void bgmac_remove(struct bcma_device *core) bcma_mdio_mii_unregister(bgmac->mii_bus); bgmac_enet_remove(bgmac); bcma_set_drvdata(core, NULL); - kfree(bgmac); } static struct bcma_driver bgmac_bcma_driver = { From 64072389beb87aca444e8feeb68cfe914851d5fb Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Tue, 14 Jun 2022 17:25:57 +0800 Subject: [PATCH 0999/1842] net: ax25: Fix deadlock caused by skb_recv_datagram in ax25_recvmsg [ Upstream commit 219b51a6f040fa5367adadd7d58c4dda0896a01d ] The skb_recv_datagram() in ax25_recvmsg() will hold lock_sock and block until it receives a packet from the remote. If the client doesn`t connect to server and calls read() directly, it will not receive any packets forever. As a result, the deadlock will happen. The fail log caused by deadlock is shown below: [ 369.606973] INFO: task ax25_deadlock:157 blocked for more than 245 seconds. [ 369.608919] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 369.613058] Call Trace: [ 369.613315] [ 369.614072] __schedule+0x2f9/0xb20 [ 369.615029] schedule+0x49/0xb0 [ 369.615734] __lock_sock+0x92/0x100 [ 369.616763] ? destroy_sched_domains_rcu+0x20/0x20 [ 369.617941] lock_sock_nested+0x6e/0x70 [ 369.618809] ax25_bind+0xaa/0x210 [ 369.619736] __sys_bind+0xca/0xf0 [ 369.620039] ? do_futex+0xae/0x1b0 [ 369.620387] ? __x64_sys_futex+0x7c/0x1c0 [ 369.620601] ? 
fpregs_assert_state_consistent+0x19/0x40 [ 369.620613] __x64_sys_bind+0x11/0x20 [ 369.621791] do_syscall_64+0x3b/0x90 [ 369.622423] entry_SYSCALL_64_after_hwframe+0x46/0xb0 [ 369.623319] RIP: 0033:0x7f43c8aa8af7 [ 369.624301] RSP: 002b:00007f43c8197ef8 EFLAGS: 00000246 ORIG_RAX: 0000000000000031 [ 369.625756] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f43c8aa8af7 [ 369.626724] RDX: 0000000000000010 RSI: 000055768e2021d0 RDI: 0000000000000005 [ 369.628569] RBP: 00007f43c8197f00 R08: 0000000000000011 R09: 00007f43c8198700 [ 369.630208] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff845e6afe [ 369.632240] R13: 00007fff845e6aff R14: 00007f43c8197fc0 R15: 00007f43c8198700 This patch replaces skb_recv_datagram() with an open-coded variant of it releasing the socket lock before the __skb_wait_for_more_packets() call and re-acquiring it after such call in order that other functions that need socket lock could be executed. what's more, the socket lock will be released only when recvmsg() will block and that should produce nicer overall behavior. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Suggested-by: Thomas Osterried Signed-off-by: Duoming Zhou Reported-by: Thomas Habets Acked-by: Paolo Abeni Reviewed-by: Eric Dumazet Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ax25/af_ax25.c | 34 ++++++++++++++++++++++++++++------ 1 file changed, 28 insertions(+), 6 deletions(-) diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c index 5fff027f25fa..a1f4cb836fcf 100644 --- a/net/ax25/af_ax25.c +++ b/net/ax25/af_ax25.c @@ -1653,9 +1653,12 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, int flags) { struct sock *sk = sock->sk; - struct sk_buff *skb; + struct sk_buff *skb, *last; + struct sk_buff_head *sk_queue; int copied; int err = 0; + int off = 0; + long timeo; lock_sock(sk); /* @@ -1667,11 +1670,29 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, goto out; } - /* Now we can treat all alike */ - skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, - flags & MSG_DONTWAIT, &err); - if (skb == NULL) - goto out; + /* We need support for non-blocking reads. */ + sk_queue = &sk->sk_receive_queue; + skb = __skb_try_recv_datagram(sk, sk_queue, flags, &off, &err, &last); + /* If no packet is available, release_sock(sk) and try again. */ + if (!skb) { + if (err != -EAGAIN) + goto out; + release_sock(sk); + timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); + while (timeo && !__skb_wait_for_more_packets(sk, sk_queue, &err, + &timeo, last)) { + skb = __skb_try_recv_datagram(sk, sk_queue, flags, &off, + &err, &last); + if (skb) + break; + + if (err != -EAGAIN) + goto done; + } + if (!skb) + goto done; + lock_sock(sk); + } if (!sk_to_ax25(sk)->pidincl) skb_pull(skb, 1); /* Remove PID */ @@ -1718,6 +1739,7 @@ static int ax25_recvmsg(struct socket *sock, struct msghdr *msg, size_t size, out: release_sock(sk); +done: return err; } From e177f17fe46b8fde8542bd3a2e44a0b47c2271d0 Mon Sep 17 00:00:00 2001 From: Mark Rutland Date: Tue, 14 Jun 2022 09:09:42 +0100 Subject: [PATCH 1000/1842] arm64: ftrace: fix branch range checks [ Upstream commit 3eefdf9d1e406f3da47470b2854347009ffcb6fa ] The branch range checks in ftrace_make_call() and ftrace_make_nop() are incorrect, erroneously permitting a forwards branch of 128M and erroneously rejecting a backwards branch of 128M. 
This is because both functions calculate the offset backwards, calculating the offset *from* the target *to* the branch, rather than the other way around as the later comparisons expect. If an out-of-range branch were erroneously permitted, this would later be rejected by aarch64_insn_gen_branch_imm() as branch_imm_common() checks the bounds correctly, resulting in warnings and the placement of a BRK instruction. Note that this can only happen for a forwards branch of exactly 128M, and so the caller would need to be exactly 128M bytes below the relevant ftrace trampoline. If an in-range branch were erroneously rejected, then: * For modules when CONFIG_ARM64_MODULE_PLTS=y, this would result in the use of a PLT entry, which is benign. Note that this is the common case, as this is selected by CONFIG_RANDOMIZE_BASE (and therefore RANDOMIZE_MODULE_REGION_FULL), which distributions typically select. This is also selected by CONFIG_ARM64_ERRATUM_843419. * For modules when CONFIG_ARM64_MODULE_PLTS=n, this would result in internal ftrace failures. * For core kernel text, this would result in internal ftrace failures. Note that for this to happen, the kernel text would need to be at least 128M bytes in size, and typical configurations are smaller than this. Fix this by calculating the offset *from* the branch *to* the target in both functions. Fixes: f8af0b364e24 ("arm64: ftrace: don't validate branch via PLT in ftrace_make_nop()") Fixes: e71a4e1bebaf ("arm64: ftrace: add support for far branches to dynamic ftrace") Signed-off-by: Mark Rutland Cc: Ard Biesheuvel Cc: Will Deacon Tested-by: "Ivan T. Ivanov" Reviewed-by: Chengming Zhou Reviewed-by: Ard Biesheuvel Link: https://lore.kernel.org/r/20220614080944.1349146-2-mark.rutland@arm.com Signed-off-by: Catalin Marinas Signed-off-by: Sasha Levin --- arch/arm64/kernel/ftrace.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index 86a5cf9bc19a..e21a01b99999 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -83,7 +83,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) { unsigned long pc = rec->ip; u32 old, new; - long offset = (long)pc - (long)addr; + long offset = (long)addr - (long)pc; if (offset < -SZ_128M || offset >= SZ_128M) { struct module *mod; @@ -182,7 +182,7 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long pc = rec->ip; bool validate = true; u32 old = 0, new; - long offset = (long)pc - (long)addr; + long offset = (long)addr - (long)pc; if (offset < -SZ_128M || offset >= SZ_128M) { u32 replaced; From bc28fde90937a920f7714ec4408269cac744f796 Mon Sep 17 00:00:00 2001 From: Mark Rutland Date: Tue, 14 Jun 2022 09:09:43 +0100 Subject: [PATCH 1001/1842] arm64: ftrace: consistently handle PLTs. [ Upstream commit a6253579977e4c6f7818eeb05bf2bc65678a7187 ] Sometimes it is necessary to use a PLT entry to call an ftrace trampoline. This is handled by ftrace_make_call() and ftrace_make_nop(), with each having *almost* identical logic, but this is not handled by ftrace_modify_call() since its introduction in commit: 3b23e4991fb66f6d ("arm64: implement ftrace with regs") Due to this, if we ever were to call ftrace_modify_call() for a callsite which requires a PLT entry for a trampoline, then either: a) If the old addr requires a trampoline, ftrace_modify_call() will use an out-of-range address to generate the 'old' branch instruction.
This will result in warnings from aarch64_insn_gen_branch_imm() and ftrace_modify_code(), and no instructions will be modified. As ftrace_modify_call() will return an error, this will result in subsequent internal ftrace errors. b) If the old addr does not require a trampoline, but the new addr does, ftrace_modify_call() will use an out-of-range address to generate the 'new' branch instruction. This will result in warnings from aarch64_insn_gen_branch_imm(), and ftrace_modify_code() will replace the 'old' branch with a BRK. This will result in a kernel panic when this BRK is later executed. Practically speaking, case (a) is vastly more likely than case (b), and typically this will result in internal ftrace errors that don't necessarily affect the rest of the system. This can be demonstrated with an out-of-tree test module which triggers ftrace_modify_call(), e.g. | # insmod test_ftrace.ko | test_ftrace: Function test_function raw=0xffffb3749399201c, callsite=0xffffb37493992024 | branch_imm_common: offset out of range | branch_imm_common: offset out of range | ------------[ ftrace bug ]------------ | ftrace failed to modify | [] test_function+0x8/0x38 [test_ftrace] | actual: 1d:00:00:94 | Updating ftrace call site to call a different ftrace function | ftrace record flags: e0000002 | (2) R | expected tramp: ffffb374ae42ed54 | ------------[ cut here ]------------ | WARNING: CPU: 0 PID: 165 at kernel/trace/ftrace.c:2085 ftrace_bug+0x280/0x2b0 | Modules linked in: test_ftrace(+) | CPU: 0 PID: 165 Comm: insmod Not tainted 5.19.0-rc2-00002-g4d9ead8b45ce #13 | Hardware name: linux,dummy-virt (DT) | pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) | pc : ftrace_bug+0x280/0x2b0 | lr : ftrace_bug+0x280/0x2b0 | sp : ffff80000839ba00 | x29: ffff80000839ba00 x28: 0000000000000000 x27: ffff80000839bcf0 | x26: ffffb37493994180 x25: ffffb374b0991c28 x24: ffffb374b0d70000 | x23: 00000000ffffffea x22: ffffb374afcc33b0 x21: ffffb374b08f9cc8 | x20: ffff572b8462c000 x19: ffffb374b08f9000 x18: ffffffffffffffff | x17: 6c6c6163202c6331 x16: ffffb374ae5ad110 x15: ffffb374b0d51ee4 | x14: 0000000000000000 x13: 3435646532346561 x12: 3437336266666666 | x11: 203a706d61727420 x10: 6465746365707865 x9 : ffffb374ae5149e8 | x8 : 336266666666203a x7 : 706d617274206465 x6 : 00000000fffff167 | x5 : ffff572bffbc4a08 x4 : 00000000fffff167 x3 : 0000000000000000 | x2 : 0000000000000000 x1 : ffff572b84461e00 x0 : 0000000000000022 | Call trace: | ftrace_bug+0x280/0x2b0 | ftrace_replace_code+0x98/0xa0 | ftrace_modify_all_code+0xe0/0x144 | arch_ftrace_update_code+0x14/0x20 | ftrace_startup+0xf8/0x1b0 | register_ftrace_function+0x38/0x90 | test_ftrace_init+0xd0/0x1000 [test_ftrace] | do_one_initcall+0x50/0x2b0 | do_init_module+0x50/0x1f0 | load_module+0x17c8/0x1d64 | __do_sys_finit_module+0xa8/0x100 | __arm64_sys_finit_module+0x2c/0x3c | invoke_syscall+0x50/0x120 | el0_svc_common.constprop.0+0xdc/0x100 | do_el0_svc+0x3c/0xd0 | el0_svc+0x34/0xb0 | el0t_64_sync_handler+0xbc/0x140 | el0t_64_sync+0x18c/0x190 | ---[ end trace 0000000000000000 ]--- We can solve this by consistently determining whether to use a PLT entry for an address. Note that since (the earlier) commit: f1a54ae9af0da4d7 ("arm64: module/ftrace: intialize PLT at load time") ... we can consistently determine the PLT address that a given callsite will use, and therefore ftrace_make_nop() does not need to skip validation when a PLT is in use. 
This patch factors the existing logic out of ftrace_make_call() and ftrace_make_nop() into a common ftrace_find_callable_addr() helper function, which is used by ftrace_make_call(), ftrace_make_nop(), and ftrace_modify_call(). In ftrace_make_nop() the patching is consistently validated by ftrace_modify_code() as we can always determine what the old instruction should have been. Fixes: 3b23e4991fb6 ("arm64: implement ftrace with regs") Signed-off-by: Mark Rutland Cc: Ard Biesheuvel Cc: Will Deacon Tested-by: "Ivan T. Ivanov" Reviewed-by: Chengming Zhou Reviewed-by: Ard Biesheuvel Link: https://lore.kernel.org/r/20220614080944.1349146-3-mark.rutland@arm.com Signed-off-by: Catalin Marinas Signed-off-by: Sasha Levin --- arch/arm64/kernel/ftrace.c | 147 ++++++++++++++++++------------------- 1 file changed, 71 insertions(+), 76 deletions(-) diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c index e21a01b99999..3724bab278b2 100644 --- a/arch/arm64/kernel/ftrace.c +++ b/arch/arm64/kernel/ftrace.c @@ -76,6 +76,66 @@ static struct plt_entry *get_ftrace_plt(struct module *mod, unsigned long addr) return NULL; } +/* + * Find the address the callsite must branch to in order to reach '*addr'. + * + * Due to the limited range of 'BL' instructions, modules may be placed too far + * away to branch directly and must use a PLT. + * + * Returns true when '*addr' contains a reachable target address, or has been + * modified to contain a PLT address. Returns false otherwise. + */ +static bool ftrace_find_callable_addr(struct dyn_ftrace *rec, + struct module *mod, + unsigned long *addr) +{ + unsigned long pc = rec->ip; + long offset = (long)*addr - (long)pc; + struct plt_entry *plt; + + /* + * When the target is within range of the 'BL' instruction, use 'addr' + * as-is and branch to that directly. + */ + if (offset >= -SZ_128M && offset < SZ_128M) + return true; + + /* + * When the target is outside of the range of a 'BL' instruction, we + * must use a PLT to reach it. We can only place PLTs for modules, and + * only when module PLT support is built-in. + */ + if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS)) + return false; + + /* + * 'mod' is only set at module load time, but if we end up + * dealing with an out-of-range condition, we can assume it + * is due to a module being loaded far away from the kernel. + * + * NOTE: __module_text_address() must be called with preemption + * disabled, but we can rely on ftrace_lock to ensure that 'mod' + * retains its validity throughout the remainder of this code. + */ + if (!mod) { + preempt_disable(); + mod = __module_text_address(pc); + preempt_enable(); + } + + if (WARN_ON(!mod)) + return false; + + plt = get_ftrace_plt(mod, *addr); + if (!plt) { + pr_err("ftrace: no module PLT for %ps\n", (void *)*addr); + return false; + } + + *addr = (unsigned long)plt; + return true; +} + /* * Turn on the call to ftrace_caller() in instrumented function */ @@ -83,40 +143,9 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) { unsigned long pc = rec->ip; u32 old, new; - long offset = (long)addr - (long)pc; - if (offset < -SZ_128M || offset >= SZ_128M) { - struct module *mod; - struct plt_entry *plt; - - if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS)) - return -EINVAL; - - /* - * On kernels that support module PLTs, the offset between the - * branch instruction and its target may legally exceed the - * range of an ordinary relative 'bl' opcode. In this case, we - * need to branch via a trampoline in the module. 
- * - * NOTE: __module_text_address() must be called with preemption - * disabled, but we can rely on ftrace_lock to ensure that 'mod' - * retains its validity throughout the remainder of this code. - */ - preempt_disable(); - mod = __module_text_address(pc); - preempt_enable(); - - if (WARN_ON(!mod)) - return -EINVAL; - - plt = get_ftrace_plt(mod, addr); - if (!plt) { - pr_err("ftrace: no module PLT for %ps\n", (void *)addr); - return -EINVAL; - } - - addr = (unsigned long)plt; - } + if (!ftrace_find_callable_addr(rec, NULL, &addr)) + return -EINVAL; old = aarch64_insn_gen_nop(); new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); @@ -131,6 +160,11 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned long pc = rec->ip; u32 old, new; + if (!ftrace_find_callable_addr(rec, NULL, &old_addr)) + return -EINVAL; + if (!ftrace_find_callable_addr(rec, NULL, &addr)) + return -EINVAL; + old = aarch64_insn_gen_branch_imm(pc, old_addr, AARCH64_INSN_BRANCH_LINK); new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); @@ -180,54 +214,15 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr) { unsigned long pc = rec->ip; - bool validate = true; u32 old = 0, new; - long offset = (long)addr - (long)pc; - if (offset < -SZ_128M || offset >= SZ_128M) { - u32 replaced; - - if (!IS_ENABLED(CONFIG_ARM64_MODULE_PLTS)) - return -EINVAL; - - /* - * 'mod' is only set at module load time, but if we end up - * dealing with an out-of-range condition, we can assume it - * is due to a module being loaded far away from the kernel. - */ - if (!mod) { - preempt_disable(); - mod = __module_text_address(pc); - preempt_enable(); - - if (WARN_ON(!mod)) - return -EINVAL; - } - - /* - * The instruction we are about to patch may be a branch and - * link instruction that was redirected via a PLT entry. In - * this case, the normal validation will fail, but we can at - * least check that we are dealing with a branch and link - * instruction that points into the right module. 
- */ - if (aarch64_insn_read((void *)pc, &replaced)) - return -EFAULT; - - if (!aarch64_insn_is_bl(replaced) || - !within_module(pc + aarch64_get_branch_offset(replaced), - mod)) - return -EINVAL; - - validate = false; - } else { - old = aarch64_insn_gen_branch_imm(pc, addr, - AARCH64_INSN_BRANCH_LINK); - } + if (!ftrace_find_callable_addr(rec, mod, &addr)) + return -EINVAL; + old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); new = aarch64_insn_gen_nop(); - return ftrace_modify_code(pc, old, new, validate); + return ftrace_modify_code(pc, old, new, true); } void arch_ftrace_update_code(int command) From 2d825fb53b9a7e6fb2711a0eca2c6fa03f8f8a73 Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Sun, 12 Jun 2022 02:22:30 +0900 Subject: [PATCH 1002/1842] certs/blacklist_hashes.c: fix const confusion in certs blacklist MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 6a1c3767d82ed8233de1263aa7da81595e176087 ] This file fails to compile as follows: CC certs/blacklist_hashes.o certs/blacklist_hashes.c:4:1: error: ignoring attribute ‘section (".init.data")’ because it conflicts with previous ‘section (".init.rodata")’ [-Werror=attributes] 4 | const char __initdata *const blacklist_hashes[] = { | ^~~~~ In file included from certs/blacklist_hashes.c:2: certs/blacklist.h:5:38: note: previous declaration here 5 | extern const char __initconst *const blacklist_hashes[]; | ^~~~~~~~~~~~~~~~ Apply the same fix as commit 2be04df5668d ("certs/blacklist_nohashes.c: fix const confusion in certs blacklist"). Fixes: 734114f8782f ("KEYS: Add a system blacklist keyring") Signed-off-by: Masahiro Yamada Reviewed-by: Jarkko Sakkinen Reviewed-by: Mickaël Salaün Signed-off-by: Jarkko Sakkinen Signed-off-by: Sasha Levin --- certs/blacklist_hashes.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/certs/blacklist_hashes.c b/certs/blacklist_hashes.c index 344892337be0..d5961aa3d338 100644 --- a/certs/blacklist_hashes.c +++ b/certs/blacklist_hashes.c @@ -1,7 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #include "blacklist.h" -const char __initdata *const blacklist_hashes[] = { +const char __initconst *const blacklist_hashes[] = { #include CONFIG_SYSTEM_BLACKLIST_HASH_LIST , NULL }; From 7fa28a7c3d74933a4fc22d341b60927952f31c19 Mon Sep 17 00:00:00 2001 From: Bart Van Assche Date: Wed, 15 Jun 2022 14:00:04 -0700 Subject: [PATCH 1003/1842] block: Fix handling of offline queues in blk_mq_alloc_request_hctx() [ Upstream commit 14dc7a18abbe4176f5626c13c333670da8e06aa1 ] This patch prevents that test nvme/004 triggers the following: UBSAN: array-index-out-of-bounds in block/blk-mq.h:135:9 index 512 is out of range for type 'long unsigned int [512]' Call Trace: show_stack+0x52/0x58 dump_stack_lvl+0x49/0x5e dump_stack+0x10/0x12 ubsan_epilogue+0x9/0x3b __ubsan_handle_out_of_bounds.cold+0x44/0x49 blk_mq_alloc_request_hctx+0x304/0x310 __nvme_submit_sync_cmd+0x70/0x200 [nvme_core] nvmf_connect_io_queue+0x23e/0x2a0 [nvme_fabrics] nvme_loop_connect_io_queues+0x8d/0xb0 [nvme_loop] nvme_loop_create_ctrl+0x58e/0x7d0 [nvme_loop] nvmf_create_ctrl+0x1d7/0x4d0 [nvme_fabrics] nvmf_dev_write+0xae/0x111 [nvme_fabrics] vfs_write+0x144/0x560 ksys_write+0xb7/0x140 __x64_sys_write+0x42/0x50 do_syscall_64+0x35/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae Cc: Christoph Hellwig Cc: Ming Lei Fixes: 20e4d8139319 ("blk-mq: simplify queue mapping & schedule with each possisble CPU") Signed-off-by: Bart Van Assche Reviewed-by: Christoph Hellwig Reviewed-by: 
Ming Lei Link: https://lore.kernel.org/r/20220615210004.1031820-1-bvanassche@acm.org Signed-off-by: Jens Axboe Signed-off-by: Sasha Levin --- block/blk-mq.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block/blk-mq.c b/block/blk-mq.c index 15a11a217cd0..c5d82b21a1cc 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -466,6 +466,8 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q, if (!blk_mq_hw_queue_mapped(data.hctx)) goto out_queue_exit; cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask); + if (cpu >= nr_cpu_ids) + goto out_queue_exit; data.ctx = __blk_mq_get_ctx(q, cpu); if (!q->elevator) From b559ef9dfc8f21d5940beedc9e88a1e912c76b18 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Wed, 1 Jun 2022 17:42:22 -0700 Subject: [PATCH 1004/1842] faddr2line: Fix overlapping text section failures, the sequel [ Upstream commit dcea997beed694cbd8705100ca1a6eb0d886de69 ] If a function lives in a section other than .text, but .text also exists in the object, faddr2line may wrongly assume .text. This can result in comically wrong output. For example: $ scripts/faddr2line vmlinux.o enter_from_user_mode+0x1c enter_from_user_mode+0x1c/0x30: find_next_bit at /home/jpoimboe/git/linux/./include/linux/find.h:40 (inlined by) perf_clear_dirty_counters at /home/jpoimboe/git/linux/arch/x86/events/core.c:2504 Fix it by passing the section name to addr2line, unless the object file is vmlinux, in which case the symbol table uses absolute addresses. Fixes: 1d1a0e7c5100 ("scripts/faddr2line: Fix overlapping text section failures") Reported-by: Peter Zijlstra Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/7d25bc1408bd3a750ac26e60d2f2815a5f4a8363.1654130536.git.jpoimboe@kernel.org Signed-off-by: Sasha Levin --- scripts/faddr2line | 45 ++++++++++++++++++++++++++++++++++----------- 1 file changed, 34 insertions(+), 11 deletions(-) diff --git a/scripts/faddr2line b/scripts/faddr2line index 0e6268d59883..94ed98dd899f 100755 --- a/scripts/faddr2line +++ b/scripts/faddr2line @@ -95,17 +95,25 @@ __faddr2line() { local print_warnings=$4 local sym_name=${func_addr%+*} - local offset=${func_addr#*+} - offset=${offset%/*} + local func_offset=${func_addr#*+} + func_offset=${func_offset%/*} local user_size= + local file_type + local is_vmlinux=0 [[ $func_addr =~ "/" ]] && user_size=${func_addr#*/} - if [[ -z $sym_name ]] || [[ -z $offset ]] || [[ $sym_name = $func_addr ]]; then + if [[ -z $sym_name ]] || [[ -z $func_offset ]] || [[ $sym_name = $func_addr ]]; then warn "bad func+offset $func_addr" DONE=1 return fi + # vmlinux uses absolute addresses in the section table rather than + # section offsets. + local file_type=$(${READELF} --file-header $objfile | + ${AWK} '$1 == "Type:" { print $2; exit }') + [[ $file_type = "EXEC" ]] && is_vmlinux=1 + # Go through each of the object's symbols which match the func name. # In rare cases there might be duplicates, in which case we print all # matches. 
@@ -114,9 +122,11 @@ __faddr2line() { local sym_addr=0x${fields[1]} local sym_elf_size=${fields[2]} local sym_sec=${fields[6]} + local sec_size + local sec_name # Get the section size: - local sec_size=$(${READELF} --section-headers --wide $objfile | + sec_size=$(${READELF} --section-headers --wide $objfile | sed 's/\[ /\[/' | ${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print "0x" $6; exit }') @@ -126,6 +136,17 @@ __faddr2line() { return fi + # Get the section name: + sec_name=$(${READELF} --section-headers --wide $objfile | + sed 's/\[ /\[/' | + ${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print $2; exit }') + + if [[ -z $sec_name ]]; then + warn "bad section name: section: $sym_sec" + DONE=1 + return + fi + # Calculate the symbol size. # # Unfortunately we can't use the ELF size, because kallsyms @@ -174,10 +195,10 @@ __faddr2line() { sym_size=0x$(printf %x $sym_size) - # Calculate the section address from user-supplied offset: - local addr=$(($sym_addr + $offset)) + # Calculate the address from user-supplied offset: + local addr=$(($sym_addr + $func_offset)) if [[ -z $addr ]] || [[ $addr = 0 ]]; then - warn "bad address: $sym_addr + $offset" + warn "bad address: $sym_addr + $func_offset" DONE=1 return fi @@ -191,9 +212,9 @@ __faddr2line() { fi # Make sure the provided offset is within the symbol's range: - if [[ $offset -gt $sym_size ]]; then + if [[ $func_offset -gt $sym_size ]]; then [[ $print_warnings = 1 ]] && - echo "skipping $sym_name address at $addr due to size mismatch ($offset > $sym_size)" + echo "skipping $sym_name address at $addr due to size mismatch ($func_offset > $sym_size)" continue fi @@ -202,11 +223,13 @@ __faddr2line() { [[ $FIRST = 0 ]] && echo FIRST=0 - echo "$sym_name+$offset/$sym_size:" + echo "$sym_name+$func_offset/$sym_size:" # Pass section address to addr2line and strip absolute paths # from the output: - local output=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;") + local args="--functions --pretty-print --inlines --exe=$objfile" + [[ $is_vmlinux = 0 ]] && args="$args --section=$sec_name" + local output=$(${ADDR2LINE} $args $addr | sed "s; $dir_prefix\(\./\)*; ;") [[ -z $output ]] && continue # Default output (non --list): From 716587a57a284ab1813838fb1c563fa3ed2bf902 Mon Sep 17 00:00:00 2001 From: Jiasheng Jiang Date: Thu, 26 May 2022 17:41:00 +0800 Subject: [PATCH 1005/1842] i2c: npcm7xx: Add check for platform_driver_register [ Upstream commit 6ba12b56b9b844b83ed54fb7ed59fb0eb41e4045 ] As platform_driver_register() could fail, it should be better to deal with the return value in order to maintain the code consisitency. 
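As a hedged sketch of the corrected registration pattern (the driver name and empty probe are placeholders, not the npcm7xx driver itself):

    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
        return 0;
    }

    static struct platform_driver example_driver = {
        .probe = example_probe,
        .driver = {
            .name = "example-i2c",    /* placeholder name */
        },
    };

    static int __init example_init(void)
    {
        /* propagate registration failures instead of returning 0 unconditionally */
        return platform_driver_register(&example_driver);
    }
    module_init(example_init);

    static void __exit example_exit(void)
    {
        platform_driver_unregister(&example_driver);
    }
    module_exit(example_exit);

    MODULE_LICENSE("GPL");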
Fixes: 56a1485b102e ("i2c: npcm7xx: Add Nuvoton NPCM I2C controller driver") Signed-off-by: Jiasheng Jiang Acked-by: Tali Perry Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-npcm7xx.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c index 20a2f903b7f6..d9ac62c1ac25 100644 --- a/drivers/i2c/busses/i2c-npcm7xx.c +++ b/drivers/i2c/busses/i2c-npcm7xx.c @@ -2369,8 +2369,7 @@ static struct platform_driver npcm_i2c_bus_driver = { static int __init npcm_i2c_init(void) { npcm_i2c_debugfs_dir = debugfs_create_dir("npcm_i2c", NULL); - platform_driver_register(&npcm_i2c_bus_driver); - return 0; + return platform_driver_register(&npcm_i2c_bus_driver); } module_init(npcm_i2c_init); From e52a58b79f11755ea7e877015c4a1704303fa55f Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 1 Jun 2022 12:09:25 +0400 Subject: [PATCH 1006/1842] irqchip/gic/realview: Fix refcount leak in realview_gic_of_init [ Upstream commit f4b98e314888cc51486421bcf6d52852452ea48b ] of_find_matching_node_and_match() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. Fixes: 82b0a434b436 ("irqchip/gic/realview: Support more RealView DCC variants") Signed-off-by: Miaoqian Lin Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20220601080930.31005-2-linmq006@gmail.com Signed-off-by: Sasha Levin --- drivers/irqchip/irq-gic-realview.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/irqchip/irq-gic-realview.c b/drivers/irqchip/irq-gic-realview.c index b4c1924f0255..38fab02ffe9d 100644 --- a/drivers/irqchip/irq-gic-realview.c +++ b/drivers/irqchip/irq-gic-realview.c @@ -57,6 +57,7 @@ realview_gic_of_init(struct device_node *node, struct device_node *parent) /* The PB11MPCore GIC needs to be configured in the syscon */ map = syscon_node_to_regmap(np); + of_node_put(np); if (!IS_ERR(map)) { /* new irq mode with no DCC */ regmap_write(map, REALVIEW_SYS_LOCK_OFFSET, From 7c9dd9d23f26dabcfb14148b9acdfba540418b19 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 1 Jun 2022 12:09:28 +0400 Subject: [PATCH 1007/1842] irqchip/gic-v3: Fix error handling in gic_populate_ppi_partitions [ Upstream commit ec8401a429ffee34ccf38cebf3443f8d5ae6cb0d ] of_get_child_by_name() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. When kcalloc fails, it missing of_node_put() and results in refcount leak. Fix this by goto out_put_node label. 
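The shape of the fix, shown as a minimal sketch with invented identifiers (this is not the vgic code itself):

    #include <linux/of.h>
    #include <linux/slab.h>

    static void example_parse_partitions(struct device_node *parent)
    {
        struct device_node *parts;
        void **descs;

        parts = of_get_child_by_name(parent, "ppi-partitions");
        if (!parts)
            return;

        descs = kcalloc(16, sizeof(*descs), GFP_KERNEL);
        if (!descs)
            goto out_put_node;    /* allocation failed, but the reference must still be dropped */

        /* ... walk the child nodes and fill descs ... */

        kfree(descs);
    out_put_node:
        of_node_put(parts);    /* balances of_get_child_by_name() */
    }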
Fixes: 52085d3f2028 ("irqchip/gic-v3: Dynamically allocate PPI partition descriptors") Signed-off-by: Miaoqian Lin Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20220601080930.31005-5-linmq006@gmail.com Signed-off-by: Sasha Levin --- drivers/irqchip/irq-gic-v3.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c index e5e3fd6b9554..8d62028a0e04 100644 --- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -1831,7 +1831,7 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node) gic_data.ppi_descs = kcalloc(gic_data.ppi_nr, sizeof(*gic_data.ppi_descs), GFP_KERNEL); if (!gic_data.ppi_descs) - return; + goto out_put_node; nr_parts = of_get_child_count(parts_node); From 506a88a5bf261d76a5214c0338a320f2214c67ac Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 1 Jun 2022 12:09:29 +0400 Subject: [PATCH 1008/1842] irqchip/gic-v3: Fix refcount leak in gic_populate_ppi_partitions [ Upstream commit fa1ad9d4cc47ca2470cd904ad4519f05d7e43a2b ] of_find_node_by_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. Fixes: e3825ba1af3a ("irqchip/gic-v3: Add support for partitioned PPIs") Signed-off-by: Miaoqian Lin Signed-off-by: Marc Zyngier Link: https://lore.kernel.org/r/20220601080930.31005-6-linmq006@gmail.com Signed-off-by: Sasha Levin --- drivers/irqchip/irq-gic-v3.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c index 8d62028a0e04..4c8f18f0cecf 100644 --- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -1872,12 +1872,15 @@ static void __init gic_populate_ppi_partitions(struct device_node *gic_node) continue; cpu = of_cpu_node_to_id(cpu_node); - if (WARN_ON(cpu < 0)) + if (WARN_ON(cpu < 0)) { + of_node_put(cpu_node); continue; + } pr_cont("%pOF[%d] ", cpu_node, cpu); cpumask_set_cpu(cpu, &part->mask); + of_node_put(cpu_node); } pr_cont("}\n"); From 9ea9c92275b3606bbf125af2533746595de03dab Mon Sep 17 00:00:00 2001 From: Serge Semin Date: Fri, 10 Jun 2022 10:42:33 +0300 Subject: [PATCH 1009/1842] i2c: designware: Use standard optional ref clock implementation [ Upstream commit 27071b5cbca59d8e8f8750c199a6cbf8c9799963 ] Even though the DW I2C controller reference clock source is requested by the method devm_clk_get() with non-optional clock requirement the way the clock handler is used afterwards has a pure optional clock semantic (though in some circumstances we can get a warning about the clock missing printed in the system console). There is no point in reimplementing that functionality seeing the kernel clock framework already supports the optional interface from scratch. Thus let's convert the platform driver to using it. Note by providing this commit we get to fix two problems. The first one was introduced in commit c62ebb3d5f0d ("i2c: designware: Add support for an interface clock"). It causes not having the interface clock (pclk) enabled/disabled in case if the reference clock isn't provided. The second problem was first introduced in commit b33af11de236 ("i2c: designware: Do not require clock when SSCN and FFCN are provided"). Since that modification the deferred probe procedure has been unsupported in case if the interface clock isn't ready. 
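A minimal probe() sketch of the optional-clock idiom the patch adopts (generic skeleton under stated assumptions, not the designware driver itself):

    #include <linux/clk.h>
    #include <linux/err.h>
    #include <linux/platform_device.h>

    static int example_probe(struct platform_device *pdev)
    {
        struct clk *clk;
        int ret;

        /* returns NULL (not an error) when no clock is described for the device */
        clk = devm_clk_get_optional(&pdev->dev, NULL);
        if (IS_ERR(clk))
            return PTR_ERR(clk);    /* real errors, including -EPROBE_DEFER */

        ret = clk_prepare_enable(clk);    /* a NULL clk is accepted as a no-op */
        if (ret)
            return ret;

        if (clk) {
            /* the reference clock really exists, so its rate may be queried */
        }

        return 0;
    }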
Fixes: c62ebb3d5f0d ("i2c: designware: Add support for an interface clock") Fixes: b33af11de236 ("i2c: designware: Do not require clock when SSCN and FFCN are provided") Signed-off-by: Serge Semin Reviewed-by: Andy Shevchenko Acked-by: Jarkko Nikula Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-designware-common.c | 3 --- drivers/i2c/busses/i2c-designware-platdrv.c | 13 +++++++++++-- 2 files changed, 11 insertions(+), 5 deletions(-) diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c index 3c19aada4b30..9468c6c89b3f 100644 --- a/drivers/i2c/busses/i2c-designware-common.c +++ b/drivers/i2c/busses/i2c-designware-common.c @@ -474,9 +474,6 @@ int i2c_dw_prepare_clk(struct dw_i2c_dev *dev, bool prepare) { int ret; - if (IS_ERR(dev->clk)) - return PTR_ERR(dev->clk); - if (prepare) { /* Optional interface clock */ ret = clk_prepare_enable(dev->pclk); diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c index 0dfeb2d11603..ad91c7c0faa5 100644 --- a/drivers/i2c/busses/i2c-designware-platdrv.c +++ b/drivers/i2c/busses/i2c-designware-platdrv.c @@ -266,8 +266,17 @@ static int dw_i2c_plat_probe(struct platform_device *pdev) goto exit_reset; } - dev->clk = devm_clk_get(&pdev->dev, NULL); - if (!i2c_dw_prepare_clk(dev, true)) { + dev->clk = devm_clk_get_optional(&pdev->dev, NULL); + if (IS_ERR(dev->clk)) { + ret = PTR_ERR(dev->clk); + goto exit_reset; + } + + ret = i2c_dw_prepare_clk(dev, true); + if (ret) + goto exit_reset; + + if (dev->clk) { u64 clk_khz; dev->get_clk_rate_khz = i2c_dw_get_clk_rate_khz; From 9308be3d9a74d29d41c39f52d588ba009c8f9a75 Mon Sep 17 00:00:00 2001 From: Alexander Usyskin Date: Mon, 6 Jun 2022 17:42:25 +0300 Subject: [PATCH 1010/1842] mei: me: add raptor lake point S DID commit 3ed8c7d39cfef831fe508fc1308f146912fa72e6 upstream. Add Raptor (Point) Lake S device id. Cc: Signed-off-by: Alexander Usyskin Signed-off-by: Tomas Winkler Link: https://lore.kernel.org/r/20220606144225.282375-3-tomas.winkler@intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/misc/mei/hw-me-regs.h | 2 ++ drivers/misc/mei/pci-me.c | 2 ++ 2 files changed, 4 insertions(+) diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h index d81d75a20b8f..afb2e78df4d6 100644 --- a/drivers/misc/mei/hw-me-regs.h +++ b/drivers/misc/mei/hw-me-regs.h @@ -109,6 +109,8 @@ #define MEI_DEV_ID_ADP_P 0x51E0 /* Alder Lake Point P */ #define MEI_DEV_ID_ADP_N 0x54E0 /* Alder Lake Point N */ +#define MEI_DEV_ID_RPL_S 0x7A68 /* Raptor Lake Point S */ + /* * MEI HW Section */ diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c index a738253dbd05..5324b65d0d29 100644 --- a/drivers/misc/mei/pci-me.c +++ b/drivers/misc/mei/pci-me.c @@ -115,6 +115,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = { {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)}, {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)}, + {MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)}, + /* required last entry */ {0, } }; From 308b8f31c069d5545c06f8e6bab6bd5bd139bfed Mon Sep 17 00:00:00 2001 From: Ian Abbott Date: Tue, 7 Jun 2022 18:18:19 +0100 Subject: [PATCH 1011/1842] comedi: vmk80xx: fix expression for tx buffer size commit 242439f7e279d86b3f73b5de724bc67b2f8aeb07 upstream. The expression for setting the size of the allocated bulk TX buffer (`devpriv->usb_tx_buf`) is calling `usb_endpoint_maxp(devpriv->ep_rx)`, which is using the wrong endpoint (should be `devpriv->ep_tx`). Fix it. 
Fixes: a23461c47482 ("comedi: vmk80xx: fix transfer-buffer overflow") Cc: Johan Hovold Cc: stable@vger.kernel.org # 4.9+ Reviewed-by: Johan Hovold Signed-off-by: Ian Abbott Link: https://lore.kernel.org/r/20220607171819.4121-1-abbotti@mev.co.uk Signed-off-by: Greg Kroah-Hartman --- drivers/staging/comedi/drivers/vmk80xx.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/staging/comedi/drivers/vmk80xx.c b/drivers/staging/comedi/drivers/vmk80xx.c index 7769eadfaf61..ccc65cfc519f 100644 --- a/drivers/staging/comedi/drivers/vmk80xx.c +++ b/drivers/staging/comedi/drivers/vmk80xx.c @@ -685,7 +685,7 @@ static int vmk80xx_alloc_usb_buffers(struct comedi_device *dev) if (!devpriv->usb_rx_buf) return -ENOMEM; - size = max(usb_endpoint_maxp(devpriv->ep_rx), MIN_BUF_SIZE); + size = max(usb_endpoint_maxp(devpriv->ep_tx), MIN_BUF_SIZE); devpriv->usb_tx_buf = kzalloc(size, GFP_KERNEL); if (!devpriv->usb_tx_buf) return -ENOMEM; From d721986e967b3c5fef7495e3840362ba71d1968f Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 28 May 2022 12:24:29 +0200 Subject: [PATCH 1012/1842] crypto: memneq - move into lib/ commit abfed87e2a12bd246047d78c01d81eb9529f1d06 upstream. This is used by code that doesn't need CONFIG_CRYPTO, so move this into lib/ with a Kconfig option so that it can be selected by whatever needs it. This fixes a linker error Zheng pointed out when CRYPTO_MANAGER_DISABLE_TESTS!=y and CRYPTO=m: lib/crypto/curve25519-selftest.o: In function `curve25519_selftest': curve25519-selftest.c:(.init.text+0x60): undefined reference to `__crypto_memneq' curve25519-selftest.c:(.init.text+0xec): undefined reference to `__crypto_memneq' curve25519-selftest.c:(.init.text+0x114): undefined reference to `__crypto_memneq' curve25519-selftest.c:(.init.text+0x154): undefined reference to `__crypto_memneq' Reported-by: Zheng Bin Cc: Eric Biggers Cc: stable@vger.kernel.org Fixes: aa127963f1ca ("crypto: lib/curve25519 - re-add selftests") Signed-off-by: Jason A. Donenfeld Reviewed-by: Eric Biggers Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman --- crypto/Kconfig | 1 + crypto/Makefile | 2 +- lib/Kconfig | 3 +++ lib/Makefile | 1 + lib/crypto/Kconfig | 1 + {crypto => lib}/memneq.c | 0 6 files changed, 7 insertions(+), 1 deletion(-) rename {crypto => lib}/memneq.c (100%) diff --git a/crypto/Kconfig b/crypto/Kconfig index c15bfc0e3723..4a53cb98f3df 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -15,6 +15,7 @@ source "crypto/async_tx/Kconfig" # menuconfig CRYPTO tristate "Cryptographic API" + select LIB_MEMNEQ help This option provides the core Cryptographic API. 
diff --git a/crypto/Makefile b/crypto/Makefile index b279483fba50..3d53cc1d8a86 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -4,7 +4,7 @@ # obj-$(CONFIG_CRYPTO) += crypto.o -crypto-y := api.o cipher.o compress.o memneq.o +crypto-y := api.o cipher.o compress.o obj-$(CONFIG_CRYPTO_ENGINE) += crypto_engine.o obj-$(CONFIG_CRYPTO_FIPS) += fips.o diff --git a/lib/Kconfig b/lib/Kconfig index 258e1ec7d592..36326864249d 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -103,6 +103,9 @@ config INDIRECT_PIO source "lib/crypto/Kconfig" +config LIB_MEMNEQ + bool + config CRC_CCITT tristate "CRC-CCITT functions" help diff --git a/lib/Makefile b/lib/Makefile index 69b8217652ed..a803e1527c4b 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -248,6 +248,7 @@ obj-$(CONFIG_DIMLIB) += dim/ obj-$(CONFIG_SIGNATURE) += digsig.o lib-$(CONFIG_CLZ_TAB) += clz_tab.o +lib-$(CONFIG_LIB_MEMNEQ) += memneq.o obj-$(CONFIG_GENERIC_STRNCPY_FROM_USER) += strncpy_from_user.o obj-$(CONFIG_GENERIC_STRNLEN_USER) += strnlen_user.o diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 9856e291f414..2082af43d51f 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -71,6 +71,7 @@ config CRYPTO_LIB_CURVE25519 tristate "Curve25519 scalar multiplication library" depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519 select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n + select LIB_MEMNEQ help Enable the Curve25519 library interface. This interface may be fulfilled by either the generic implementation or an arch-specific diff --git a/crypto/memneq.c b/lib/memneq.c similarity index 100% rename from crypto/memneq.c rename to lib/memneq.c From 0e13274bc6423faae648de59a8174f0ee8457007 Mon Sep 17 00:00:00 2001 From: Slark Xiao Date: Wed, 1 Jun 2022 11:47:40 +0800 Subject: [PATCH 1013/1842] USB: serial: option: add support for Cinterion MV31 with new baseline commit 158f7585bfcea4aae0ad4128d032a80fec550df1 upstream. Adding support for Cinterion device MV31 with Qualcomm new baseline. Use different PIDs to separate it from previous base line products. All interfaces settings keep same as previous. Below is test evidence: T: Bus=03 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 6 Spd=480 MxCh= 0 D: Ver= 2.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=1e2d ProdID=00b8 Rev=04.14 S: Manufacturer=Cinterion S: Product=Cinterion PID 0x00B8 USB Mobile Broadband S: SerialNumber=90418e79 C: #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=500mA I: If#=0x0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=0e Prot=00 Driver=cdc_mbim I: If#=0x1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=02 Driver=cdc_mbim I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=40 Driver=option I: If#=0x3 Alt= 0 #EPs= 1 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none) I: If#=0x4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=60 Driver=option I: If#=0x5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=30 Driver=option T: Bus=03 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 7 Spd=480 MxCh= 0 D: Ver= 2.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=1e2d ProdID=00b9 Rev=04.14 S: Manufacturer=Cinterion S: Product=Cinterion PID 0x00B9 USB Mobile Broadband S: SerialNumber=90418e79 C: #Ifs= 4 Cfg#= 1 Atr=a0 MxPwr=500mA I: If#=0x0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=50 Driver=qmi_wwan I: If#=0x1 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=40 Driver=option I: If#=0x2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=60 Driver=option I: If#=0x3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=30 Driver=option For PID 00b8, interface 3 is GNSS port which don't use serial driver. 
Signed-off-by: Slark Xiao Link: https://lore.kernel.org/r/20220601034740.5438-1-slark_xiao@163.com [ johan: rename defines using a "2" infix ] Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman --- drivers/usb/serial/option.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index a40c0f3b85c2..3744cde5146f 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -432,6 +432,8 @@ static void option_instat_callback(struct urb *urb); #define CINTERION_PRODUCT_CLS8 0x00b0 #define CINTERION_PRODUCT_MV31_MBIM 0x00b3 #define CINTERION_PRODUCT_MV31_RMNET 0x00b7 +#define CINTERION_PRODUCT_MV31_2_MBIM 0x00b8 +#define CINTERION_PRODUCT_MV31_2_RMNET 0x00b9 #define CINTERION_PRODUCT_MV32_WA 0x00f1 #define CINTERION_PRODUCT_MV32_WB 0x00f2 @@ -1979,6 +1981,10 @@ static const struct usb_device_id option_ids[] = { .driver_info = RSVD(3)}, { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff), .driver_info = RSVD(0)}, + { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_2_MBIM, 0xff), + .driver_info = RSVD(3)}, + { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_2_RMNET, 0xff), + .driver_info = RSVD(0)}, { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WA, 0xff), .driver_info = RSVD(3)}, { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV32_WB, 0xff), From 791da3e6c8830cf7c63151d767b87db039345fd5 Mon Sep 17 00:00:00 2001 From: Robert Eckelmann Date: Sat, 21 May 2022 23:08:08 +0900 Subject: [PATCH 1014/1842] USB: serial: io_ti: add Agilent E5805A support commit 908e698f2149c3d6a67d9ae15c75545a3f392559 upstream. Add support for Agilent E5805A (rebranded ION Edgeport/4) to io_ti. 
Signed-off-by: Robert Eckelmann Link: https://lore.kernel.org/r/20220521230808.30931eca@octoberrain Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman --- drivers/usb/serial/io_ti.c | 2 ++ drivers/usb/serial/io_usbvend.h | 1 + 2 files changed, 3 insertions(+) diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c index c327d4cf7928..03bcab3b9bd0 100644 --- a/drivers/usb/serial/io_ti.c +++ b/drivers/usb/serial/io_ti.c @@ -168,6 +168,7 @@ static const struct usb_device_id edgeport_2port_id_table[] = { { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_8S) }, { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416) }, { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416B) }, + { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_E5805A) }, { } }; @@ -206,6 +207,7 @@ static const struct usb_device_id id_table_combined[] = { { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_8S) }, { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416) }, { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_TI_EDGEPORT_416B) }, + { USB_DEVICE(USB_VENDOR_ID_ION, ION_DEVICE_ID_E5805A) }, { } }; diff --git a/drivers/usb/serial/io_usbvend.h b/drivers/usb/serial/io_usbvend.h index 52cbc353051f..9a6f742ad3ab 100644 --- a/drivers/usb/serial/io_usbvend.h +++ b/drivers/usb/serial/io_usbvend.h @@ -212,6 +212,7 @@ // // Definitions for other product IDs #define ION_DEVICE_ID_MT4X56USB 0x1403 // OEM device +#define ION_DEVICE_ID_E5805A 0x1A01 // OEM device (rebranded Edgeport/4) #define GENERATION_ID_FROM_USB_PRODUCT_ID(ProductId) \ From a44a8a762f7fe9ad3c065813d058e835a6180cb2 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 30 May 2022 12:54:12 +0400 Subject: [PATCH 1015/1842] usb: dwc2: Fix memory leak in dwc2_hcd_init commit 3755278f078460b021cd0384562977bf2039a57a upstream. usb_create_hcd will alloc memory for hcd, and we should call usb_put_hcd to free it when platform_get_resource() fails to prevent memory leak. goto error2 label instead error1 to fix this. Fixes: 856e6e8e0f93 ("usb: dwc2: check return value after calling platform_get_resource()") Cc: stable Acked-by: Minas Harutyunyan Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220530085413.44068-1-linmq006@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/dwc2/hcd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c index 30919f741b7f..9279d3d3698c 100644 --- a/drivers/usb/dwc2/hcd.c +++ b/drivers/usb/dwc2/hcd.c @@ -5076,7 +5076,7 @@ int dwc2_hcd_init(struct dwc2_hsotg *hsotg) res = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (!res) { retval = -EINVAL; - goto error1; + goto error2; } hcd->rsrc_start = res->start; hcd->rsrc_len = resource_size(res); From 57901c658f77d9ea2e772f35cb38e47efb54c558 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Fri, 3 Jun 2022 18:02:44 +0400 Subject: [PATCH 1016/1842] usb: gadget: lpc32xx_udc: Fix refcount leak in lpc32xx_udc_probe commit 4757c9ade34178b351580133771f510b5ffcf9c8 upstream. of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. of_node_put() will check NULL pointer. 
Fixes: 24a28e428351 ("USB: gadget driver for LPC32xx") Cc: stable Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220603140246.64529-1-linmq006@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/gadget/udc/lpc32xx_udc.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c index 3f1c62adce4b..314cb5ea061a 100644 --- a/drivers/usb/gadget/udc/lpc32xx_udc.c +++ b/drivers/usb/gadget/udc/lpc32xx_udc.c @@ -3015,6 +3015,7 @@ static int lpc32xx_udc_probe(struct platform_device *pdev) } udc->isp1301_i2c_client = isp1301_get_client(isp1301_node); + of_node_put(isp1301_node); if (!udc->isp1301_i2c_client) { return -EPROBE_DEFER; } From 33ba36351eecc111d955cffc746d5d735ef52ae0 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Fri, 20 May 2022 13:35:41 +0300 Subject: [PATCH 1017/1842] serial: 8250: Store to lsr_save_flags after lsr read MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit be03b0651ffd8bab69dfd574c6818b446c0753ce upstream. Not all LSR register flags are preserved across reads. Therefore, LSR readers must store the non-preserved bits into lsr_save_flags. This fix was initially mixed into feature commit f6f586102add ("serial: 8250: Handle UART without interrupt on TEMT using em485"). However, that feature change had a flaw and it was reverted to make room for simpler approach providing the same feature. The embedded fix got reverted with the feature change. Re-add the lsr_save_flags fix and properly mark it's a fix. Link: https://lore.kernel.org/all/1d6c31d-d194-9e6a-ddf9-5f29af829f3@linux.intel.com/T/#m1737eef986bd20cf19593e344cebd7b0244945fc Fixes: e490c9144cfa ("tty: Add software emulated RS485 support for 8250") Cc: stable Acked-by: Uwe Kleine-König Signed-off-by: Uwe Kleine-König Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/f4d774be-1437-a550-8334-19d8722ab98c@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/8250/8250_port.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c index e0fa24f0f732..9cf5177815a8 100644 --- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c @@ -1532,6 +1532,8 @@ static inline void __stop_tx(struct uart_8250_port *p) if (em485) { unsigned char lsr = serial_in(p, UART_LSR); + p->lsr_saved_flags |= lsr & LSR_SAVE_FLAGS; + /* * To provide required timeing and allow FIFO transfer, * __stop_tx_rs485() must be called only when both FIFO and From ba751f0d25f07aa21ce9b85372a3792bf7969d13 Mon Sep 17 00:00:00 2001 From: Mikulas Patocka Date: Thu, 16 Jun 2022 13:28:57 -0400 Subject: [PATCH 1018/1842] dm mirror log: round up region bitmap size to BITS_PER_LONG commit 85e123c27d5cbc22cfdc01de1e2ca1d9003a02d0 upstream. The code in dm-log rounds up bitset_size to 32 bits. It then uses find_next_zero_bit_le on the allocated region. find_next_zero_bit_le accesses the bitmap using unsigned long pointers. So, on 64-bit architectures, it may access 4 bytes beyond the allocated size. Fix this bug by rounding up bitset_size to BITS_PER_LONG. This bug was found by running the lvm2 testsuite with kasan. 
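The sizing arithmetic can be checked in isolation with a short userspace program (the region count of 96 is an arbitrary example):

    #include <stdio.h>

    #define BITS_PER_LONG (8 * (int)sizeof(long))

    static unsigned int round_up_bits(unsigned int nbits, unsigned int align)
    {
        return (nbits + align - 1) / align * align;
    }

    int main(void)
    {
        unsigned int region_count = 96;    /* arbitrary example */
        unsigned int old_bytes = round_up_bits(region_count, 32) / 8;
        unsigned int new_bytes = round_up_bits(region_count, BITS_PER_LONG) / 8;

        printf("allocated with 32-bit rounding:      %u bytes\n", old_bytes);
        printf("walked by a long-based bit search:   %u bytes\n", new_bytes);
        return 0;
    }

On a 64-bit host this prints 12 and 16: the long-based search touches 4 bytes past the old allocation, which is exactly the overread the patch closes.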
Fixes: 29121bd0b00e ("[PATCH] dm mirror log: bitset_size fix") Cc: stable@vger.kernel.org Signed-off-by: Mikulas Patocka Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/dm-log.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c index 33e71ea6cc14..8b15f53cbdd9 100644 --- a/drivers/md/dm-log.c +++ b/drivers/md/dm-log.c @@ -415,8 +415,7 @@ static int create_log_context(struct dm_dirty_log *log, struct dm_target *ti, /* * Work out how many "unsigned long"s we need to hold the bitset. */ - bitset_size = dm_round_up(region_count, - sizeof(*lc->clean_bits) << BYTE_SHIFT); + bitset_size = dm_round_up(region_count, BITS_PER_LONG); bitset_size >>= BYTE_SHIFT; lc->bitset_uint32_count = bitset_size / sizeof(*lc->clean_bits); From e27430c1f1ed957ece93efbe7907263f996179be Mon Sep 17 00:00:00 2001 From: Roman Li Date: Thu, 19 May 2022 14:41:16 -0400 Subject: [PATCH 1019/1842] drm/amd/display: Cap OLED brightness per max frame-average luminance commit 4fd17f2ac0aa4e48823ac2ede5b050fb70300bf4 upstream. [Why] For OLED eDP the Display Manager uses max_cll value as a limit for brightness control. max_cll defines the content light luminance for individual pixel. Whereas max_fall defines frame-average level luminance. The user may not observe the difference in brightness in between max_fall and max_cll. That negatively impacts the user experience. [How] Use max_fall value instead of max_cll as a limit for brightness control. Reviewed-by: Rodrigo Siqueira Acked-by: Hamza Mahfooz Signed-off-by: Roman Li Tested-by: Daniel Wheeler Signed-off-by: Alex Deucher Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index 7bb151283f44..f069d0faba64 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -2141,7 +2141,7 @@ static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = { static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector) { - u32 max_cll, min_cll, max, min, q, r; + u32 max_avg, min_cll, max, min, q, r; struct amdgpu_dm_backlight_caps *caps; struct amdgpu_display_manager *dm; struct drm_connector *conn_base; @@ -2164,7 +2164,7 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector) caps = &dm->backlight_caps; caps->ext_caps = &aconnector->dc_link->dpcd_sink_ext_caps; caps->aux_support = false; - max_cll = conn_base->hdr_sink_metadata.hdmi_type1.max_cll; + max_avg = conn_base->hdr_sink_metadata.hdmi_type1.max_fall; min_cll = conn_base->hdr_sink_metadata.hdmi_type1.min_cll; if (caps->ext_caps->bits.oled == 1 /*|| @@ -2192,8 +2192,8 @@ static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector) * The results of the above expressions can be verified at * pre_computed_values. */ - q = max_cll >> 5; - r = max_cll % 32; + q = max_avg >> 5; + r = max_avg % 32; max = (1 << q) * pre_computed_values[r]; // min luminance: maxLum * (CV/255)^2 / 100 From 6fdaf31ad5f3d3afab744dfd9a8b0d9142aa881f Mon Sep 17 00:00:00 2001 From: Baokun Li Date: Sat, 28 May 2022 19:00:15 +0800 Subject: [PATCH 1020/1842] ext4: fix bug_on ext4_mb_use_inode_pa commit a08f789d2ab5242c07e716baf9a835725046be89 upstream. 
Hulk Robot reported a BUG_ON: ================================================================== kernel BUG at fs/ext4/mballoc.c:3211! [...] RIP: 0010:ext4_mb_mark_diskspace_used.cold+0x85/0x136f [...] Call Trace: ext4_mb_new_blocks+0x9df/0x5d30 ext4_ext_map_blocks+0x1803/0x4d80 ext4_map_blocks+0x3a4/0x1a10 ext4_writepages+0x126d/0x2c30 do_writepages+0x7f/0x1b0 __filemap_fdatawrite_range+0x285/0x3b0 file_write_and_wait_range+0xb1/0x140 ext4_sync_file+0x1aa/0xca0 vfs_fsync_range+0xfb/0x260 do_fsync+0x48/0xa0 [...] ================================================================== Above issue may happen as follows: ------------------------------------- do_fsync vfs_fsync_range ext4_sync_file file_write_and_wait_range __filemap_fdatawrite_range do_writepages ext4_writepages mpage_map_and_submit_extent mpage_map_one_extent ext4_map_blocks ext4_mb_new_blocks ext4_mb_normalize_request >>> start + size <= ac->ac_o_ex.fe_logical ext4_mb_regular_allocator ext4_mb_simple_scan_group ext4_mb_use_best_found ext4_mb_new_preallocation ext4_mb_new_inode_pa ext4_mb_use_inode_pa >>> set ac->ac_b_ex.fe_len <= 0 ext4_mb_mark_diskspace_used >>> BUG_ON(ac->ac_b_ex.fe_len <= 0); we can easily reproduce this problem with the following commands: `fallocate -l100M disk` `mkfs.ext4 -b 1024 -g 256 disk` `mount disk /mnt` `fsstress -d /mnt -l 0 -n 1000 -p 1` The size must be smaller than or equal to EXT4_BLOCKS_PER_GROUP. Therefore, "start + size <= ac->ac_o_ex.fe_logical" may occur when the size is truncated. So start should be the start position of the group where ac_o_ex.fe_logical is located after alignment. In addition, when the value of fe_logical or EXT4_BLOCKS_PER_GROUP is very large, the value calculated by start_off is more accurate. Cc: stable@kernel.org Fixes: cd648b8a8fd5 ("ext4: trim allocation requests to group size") Reported-by: Hulk Robot Signed-off-by: Baokun Li Reviewed-by: Ritesh Harjani Link: https://lore.kernel.org/r/20220528110017.354175-2-libaokun1@huawei.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman --- fs/ext4/mballoc.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 15223b5a3af9..c32d0895c3a3 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -3520,6 +3520,15 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac, size = size >> bsbits; start = start_off >> bsbits; + /* + * For tiny groups (smaller than 8MB) the chosen allocation + * alignment may be larger than group size. Make sure the + * alignment does not move allocation to a different group which + * makes mballoc fail assertions later. + */ + start = max(start, rounddown(ac->ac_o_ex.fe_logical, + (ext4_lblk_t)EXT4_BLOCKS_PER_GROUP(ac->ac_sb))); + /* don't cover already allocated blocks in selected range */ if (ar->pleft && start <= ar->lleft) { size -= ar->lleft + 1 - start; From 0ca74dacfd478a9c03361b7512be48ea6b22e8a5 Mon Sep 17 00:00:00 2001 From: Ding Xiang Date: Mon, 30 May 2022 18:00:47 +0800 Subject: [PATCH 1021/1842] ext4: make variable "count" signed commit bc75a6eb856cb1507fa907bf6c1eda91b3fef52f upstream. Since dx_make_map() may return -EFSCORRUPTED now, so change "count" to be a signed integer so we can correctly check for an error code returned by dx_make_map(). 
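The type problem can be demonstrated outside the kernel; the helper below is a stand-in, not the ext4 function:

    #include <stdio.h>

    #define EFSCORRUPTED 117    /* same numeric value as the kernel errno */

    static int fake_dx_make_map(void)
    {
        return -EFSCORRUPTED;    /* corrupted directory block detected */
    }

    int main(void)
    {
        unsigned int ucount = fake_dx_make_map();    /* wraps to a huge positive value */
        int scount = fake_dx_make_map();

        printf("unsigned count = %u -> a \"count < 0\" check can never fire\n", ucount);
        printf("signed count   = %d -> error detected: %s\n",
               scount, scount < 0 ? "yes" : "no");
        return 0;
    }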
Fixes: 46c116b920eb ("ext4: verify dir block before splitting it") Cc: stable@kernel.org Signed-off-by: Ding Xiang Link: https://lore.kernel.org/r/20220530100047.537598-1-dingxiang@cmss.chinamobile.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman --- fs/ext4/namei.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index feae39f1db37..2c9ae72a1f5c 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -1841,7 +1841,8 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir, struct dx_hash_info *hinfo) { unsigned blocksize = dir->i_sb->s_blocksize; - unsigned count, continued; + unsigned continued; + int count; struct buffer_head *bh2; ext4_lblk_t newblock; u32 hash2; From bfd004a1d3a062aac300523d406ac1f3e5f1a82c Mon Sep 17 00:00:00 2001 From: Zhang Yi Date: Wed, 1 Jun 2022 17:27:17 +0800 Subject: [PATCH 1022/1842] ext4: add reserved GDT blocks check commit b55c3cd102a6f48b90e61c44f7f3dda8c290c694 upstream. We capture a NULL pointer issue when resizing a corrupt ext4 image which is freshly clear resize_inode feature (not run e2fsck). It could be simply reproduced by following steps. The problem is because of the resize_inode feature was cleared, and it will convert the filesystem to meta_bg mode in ext4_resize_fs(), but the es->s_reserved_gdt_blocks was not reduced to zero, so could we mistakenly call reserve_backup_gdb() and passing an uninitialized resize_inode to it when adding new group descriptors. mkfs.ext4 /dev/sda 3G tune2fs -O ^resize_inode /dev/sda #forget to run requested e2fsck mount /dev/sda /mnt resize2fs /dev/sda 8G ======== BUG: kernel NULL pointer dereference, address: 0000000000000028 CPU: 19 PID: 3243 Comm: resize2fs Not tainted 5.18.0-rc7-00001-gfde086c5ebfd #748 ... RIP: 0010:ext4_flex_group_add+0xe08/0x2570 ... Call Trace: ext4_resize_fs+0xbec/0x1660 __ext4_ioctl+0x1749/0x24e0 ext4_ioctl+0x12/0x20 __x64_sys_ioctl+0xa6/0x110 do_syscall_64+0x3b/0x90 entry_SYSCALL_64_after_hwframe+0x44/0xae RIP: 0033:0x7f2dd739617b ======== The fix is simple, add a check in ext4_resize_begin() to make sure that the es->s_reserved_gdt_blocks is zero when the resize_inode feature is disabled. Cc: stable@kernel.org Signed-off-by: Zhang Yi Reviewed-by: Ritesh Harjani Reviewed-by: Jan Kara Link: https://lore.kernel.org/r/20220601092717.763694-1-yi.zhang@huawei.com Signed-off-by: Theodore Ts'o Signed-off-by: Greg Kroah-Hartman --- fs/ext4/resize.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c index 6513079c728b..015028302305 100644 --- a/fs/ext4/resize.c +++ b/fs/ext4/resize.c @@ -52,6 +52,16 @@ int ext4_resize_begin(struct super_block *sb) if (!capable(CAP_SYS_RESOURCE)) return -EPERM; + /* + * If the reserved GDT blocks is non-zero, the resize_inode feature + * should always be set. + */ + if (EXT4_SB(sb)->s_es->s_reserved_gdt_blocks && + !ext4_has_feature_resize_inode(sb)) { + ext4_error(sb, "resize_inode disabled but reserved GDT blocks non-zero"); + return -EFSCORRUPTED; + } + /* * If we are not using the primary superblock/GDT copy don't resize, * because the user tools have no way of handling this. Probably a From d74d7865e2a81ddae805565f1261eae90a5055c3 Mon Sep 17 00:00:00 2001 From: Marc Zyngier Date: Tue, 7 Jun 2022 14:14:25 +0100 Subject: [PATCH 1023/1842] KVM: arm64: Don't read a HW interrupt pending state in user context commit 2cdea19a34c2340b3aa69508804efe4e3750fcec upstream. 
Since 5bfa685e62e9 ("KVM: arm64: vgic: Read HW interrupt pending state from the HW"), we're able to source the pending bit for an interrupt that is stored either on the physical distributor or on a device. However, this state is only available when the vcpu is loaded, and is not intended to be accessed from userspace. Unfortunately, the GICv2 emulation doesn't provide specific userspace accessors, and we fallback with the ones that are intended for the guest, with fatal consequences. Add a new vgic_uaccess_read_pending() accessor for userspace to use, build on top of the existing vgic_mmio_read_pending(). Reported-by: Eric Auger Reviewed-by: Eric Auger Tested-by: Eric Auger Signed-off-by: Marc Zyngier Fixes: 5bfa685e62e9 ("KVM: arm64: vgic: Read HW interrupt pending state from the HW") Link: https://lore.kernel.org/r/20220607131427.1164881-2-maz@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman --- arch/arm64/kvm/vgic/vgic-mmio-v2.c | 4 ++-- arch/arm64/kvm/vgic/vgic-mmio.c | 19 ++++++++++++++++--- arch/arm64/kvm/vgic/vgic-mmio.h | 3 +++ 3 files changed, 21 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v2.c b/arch/arm64/kvm/vgic/vgic-mmio-v2.c index a016f07adc28..b3cc51795650 100644 --- a/arch/arm64/kvm/vgic/vgic-mmio-v2.c +++ b/arch/arm64/kvm/vgic/vgic-mmio-v2.c @@ -418,11 +418,11 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = { VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_SET, vgic_mmio_read_pending, vgic_mmio_write_spending, - NULL, vgic_uaccess_write_spending, 1, + vgic_uaccess_read_pending, vgic_uaccess_write_spending, 1, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_CLEAR, vgic_mmio_read_pending, vgic_mmio_write_cpending, - NULL, vgic_uaccess_write_cpending, 1, + vgic_uaccess_read_pending, vgic_uaccess_write_cpending, 1, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_SET, vgic_mmio_read_active, vgic_mmio_write_sactive, diff --git a/arch/arm64/kvm/vgic/vgic-mmio.c b/arch/arm64/kvm/vgic/vgic-mmio.c index 9e1459534ce5..5b441777937b 100644 --- a/arch/arm64/kvm/vgic/vgic-mmio.c +++ b/arch/arm64/kvm/vgic/vgic-mmio.c @@ -226,8 +226,9 @@ int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu, return 0; } -unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu, - gpa_t addr, unsigned int len) +static unsigned long __read_pending(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len, + bool is_user) { u32 intid = VGIC_ADDR_TO_INTID(addr, 1); u32 value = 0; @@ -248,7 +249,7 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu, IRQCHIP_STATE_PENDING, &val); WARN_RATELIMIT(err, "IRQ %d", irq->host_irq); - } else if (vgic_irq_is_mapped_level(irq)) { + } else if (!is_user && vgic_irq_is_mapped_level(irq)) { val = vgic_get_phys_line_level(irq); } else { val = irq_is_pending(irq); @@ -263,6 +264,18 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu, return value; } +unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len) +{ + return __read_pending(vcpu, addr, len, false); +} + +unsigned long vgic_uaccess_read_pending(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len) +{ + return __read_pending(vcpu, addr, len, true); +} + static bool is_vgic_v2_sgi(struct kvm_vcpu *vcpu, struct vgic_irq *irq) { return (vgic_irq_is_sgi(irq->intid) && diff --git a/arch/arm64/kvm/vgic/vgic-mmio.h b/arch/arm64/kvm/vgic/vgic-mmio.h index fefcca2b14dc..dcea44015985 100644 --- a/arch/arm64/kvm/vgic/vgic-mmio.h +++ 
b/arch/arm64/kvm/vgic/vgic-mmio.h @@ -149,6 +149,9 @@ int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu, unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len); +unsigned long vgic_uaccess_read_pending(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len); + void vgic_mmio_write_spending(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len, unsigned long val); From d6be031a2f5e27f27f3648bac98d2a35874eaddc Mon Sep 17 00:00:00 2001 From: Sean Christopherson Date: Tue, 30 Mar 2021 19:30:25 -0700 Subject: [PATCH 1024/1842] KVM: x86: Account a variety of miscellaneous allocations commit eba04b20e4861d9bdbd8470a13c0c6e824521a36 upstream. Switch to GFP_KERNEL_ACCOUNT for a handful of allocations that are clearly associated with a single task/VM. Note, there are several SEV allocations that aren't accounted, but those can (hopefully) be fixed by using the local stack for memory. Signed-off-by: Sean Christopherson Message-Id: <20210331023025.2485960-3-seanjc@google.com> Signed-off-by: Paolo Bonzini [sudip: adjust context] Signed-off-by: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/svm/nested.c | 4 ++-- arch/x86/kvm/svm/sev.c | 2 +- arch/x86/kvm/vmx/vmx.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c index 23910e6a3f01..e7feaa7910ab 100644 --- a/arch/x86/kvm/svm/nested.c +++ b/arch/x86/kvm/svm/nested.c @@ -1198,8 +1198,8 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu, return -EINVAL; ret = -ENOMEM; - ctl = kzalloc(sizeof(*ctl), GFP_KERNEL); - save = kzalloc(sizeof(*save), GFP_KERNEL); + ctl = kzalloc(sizeof(*ctl), GFP_KERNEL_ACCOUNT); + save = kzalloc(sizeof(*save), GFP_KERNEL_ACCOUNT); if (!ctl || !save) goto out_free; diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 6c82ef22985d..a0c4da5f7d7f 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -537,7 +537,7 @@ static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp) } ret = -ENOMEM; - blob = kmalloc(params.len, GFP_KERNEL); + blob = kmalloc(params.len, GFP_KERNEL_ACCOUNT); if (!blob) goto e_free; diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index e729f65c6760..cc647dcc228b 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -619,7 +619,7 @@ static int hv_enable_direct_tlbflush(struct kvm_vcpu *vcpu) * evmcs in singe VM shares same assist page. */ if (!*p_hv_pa_pg) - *p_hv_pa_pg = kzalloc(PAGE_SIZE, GFP_KERNEL); + *p_hv_pa_pg = kzalloc(PAGE_SIZE, GFP_KERNEL_ACCOUNT); if (!*p_hv_pa_pg) return -ENOMEM; From 401bef1f95de92c3a8c6eece46e02fa88d7285ee Mon Sep 17 00:00:00 2001 From: Ashish Kalra Date: Mon, 16 May 2022 15:43:10 +0000 Subject: [PATCH 1025/1842] KVM: SVM: Use kzalloc for sev ioctl interfaces to prevent kernel data leak commit d22d2474e3953996f03528b84b7f52cc26a39403 upstream. For some sev ioctl interfaces, the length parameter that is passed may be less than or equal to SEV_FW_BLOB_MAX_SIZE, but larger than the data that PSP firmware returns. In this case, kmalloc will allocate memory that is the size of the input rather than the size of the data. Since PSP firmware doesn't fully overwrite the allocated buffer, these sev ioctl interfaces may return uninitialized kernel slab memory. 
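As an illustration of the pattern the fix applies (a minimal, hypothetical sketch, not the actual KVM/SEV code), zero-initializing the buffer that is sized by the user-supplied length means any tail the firmware does not overwrite is copied back as zeroes instead of stale slab contents:

	/* Hypothetical helper showing the kmalloc() -> kzalloc() change. */
	static int copy_fw_blob_to_user(void __user *uaddr, size_t user_len)
	{
		void *blob;
		int ret = 0;

		blob = kzalloc(user_len, GFP_KERNEL_ACCOUNT);	/* was kmalloc() */
		if (!blob)
			return -ENOMEM;

		/* PSP firmware may fill only part of blob (fewer than user_len bytes) */

		if (copy_to_user(uaddr, blob, user_len))	/* unwritten tail is zeroes */
			ret = -EFAULT;

		kfree(blob);
		return ret;
	}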
Reported-by: Andy Nguyen Suggested-by: David Rientjes Suggested-by: Peter Gonda Cc: kvm@vger.kernel.org Cc: stable@vger.kernel.org Cc: linux-kernel@vger.kernel.org Fixes: eaf78265a4ab3 ("KVM: SVM: Move SEV code to separate file") Fixes: 2c07ded06427d ("KVM: SVM: add support for SEV attestation command") Fixes: 4cfdd47d6d95a ("KVM: SVM: Add KVM_SEV SEND_START command") Fixes: d3d1af85e2c75 ("KVM: SVM: Add KVM_SEND_UPDATE_DATA command") Fixes: eba04b20e4861 ("KVM: x86: Account a variety of miscellaneous allocations") Signed-off-by: Ashish Kalra Reviewed-by: Peter Gonda Message-Id: <20220516154310.3685678-1-Ashish.Kalra@amd.com> Signed-off-by: Paolo Bonzini [sudip: adjust context] Signed-off-by: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/svm/sev.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index a0c4da5f7d7f..7397cc449e2f 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -537,7 +537,7 @@ static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp) } ret = -ENOMEM; - blob = kmalloc(params.len, GFP_KERNEL_ACCOUNT); + blob = kzalloc(params.len, GFP_KERNEL_ACCOUNT); if (!blob) goto e_free; @@ -676,7 +676,7 @@ static int __sev_dbg_decrypt_user(struct kvm *kvm, unsigned long paddr, if (!IS_ALIGNED(dst_paddr, 16) || !IS_ALIGNED(paddr, 16) || !IS_ALIGNED(size, 16)) { - tpage = (void *)alloc_page(GFP_KERNEL); + tpage = (void *)alloc_page(GFP_KERNEL | __GFP_ZERO); if (!tpage) return -ENOMEM; From be98641034081aa84ed866aca41a77ecf27d4a2f Mon Sep 17 00:00:00 2001 From: Andy Chi Date: Fri, 13 May 2022 20:16:45 +0800 Subject: [PATCH 1026/1842] ALSA: hda/realtek: fix right sounds and mute/micmute LEDs for HP machine commit 024a7ad9eb4df626ca8c77fef4f67fd0ebd559d2 upstream. The HP EliteBook 630 is using the ALC236 codec, which uses 0x02 to control the mute LED and 0x01 to control the micmute LED. Therefore, add a quirk to make it work. Signed-off-by: Andy Chi Cc: Link: https://lore.kernel.org/r/20220513121648.28584-1-andy.chi@canonical.com Signed-off-by: Takashi Iwai [sudip: adjust context] Signed-off-by: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 83b5c2580c8f..7c720f03c134 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -8793,6 +8793,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED), + SND_PCI_QUIRK(0x103c, 0x89aa, "HP EliteBook 630 G9", ALC236_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), From aa9a001efa9ccbb66a22a528bdfce9e825709343 Mon Sep 17 00:00:00 2001 From: Murilo Opsfelder Araujo Date: Thu, 14 Apr 2022 23:30:02 -0300 Subject: [PATCH 1027/1842] virtio-pci: Remove wrong address verification in vp_del_vqs() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 7e415282b41bf0d15c6e0fe268f822d9b083f2f7 upstream. 
GCC 12 enhanced -Waddress when comparing array address to null [0], which warns: drivers/virtio/virtio_pci_common.c: In function ‘vp_del_vqs’: drivers/virtio/virtio_pci_common.c:257:29: warning: the comparison will always evaluate as ‘true’ for the pointer operand in ‘vp_dev->msix_affinity_masks + (sizetype)((long unsigned int)i * 256)’ must not be NULL [-Waddress] 257 | if (vp_dev->msix_affinity_masks[i]) | ^~~~~~ In fact, the verification is comparing the result of a pointer arithmetic, the address "msix_affinity_masks + i", which will always evaluate to true. Under the hood, free_cpumask_var() calls kfree(), which is safe to pass NULL, not requiring non-null verification. So remove the verification to make compiler happy (happy compiler, happy life). [0] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102103 Signed-off-by: Murilo Opsfelder Araujo Message-Id: <20220415023002.49805-1-muriloo@linux.ibm.com> Signed-off-by: Michael S. Tsirkin Acked-by: Christophe de Dinechin Cc: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- drivers/virtio/virtio_pci_common.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c index b35bb2d57f62..1e890ef17687 100644 --- a/drivers/virtio/virtio_pci_common.c +++ b/drivers/virtio/virtio_pci_common.c @@ -254,8 +254,7 @@ void vp_del_vqs(struct virtio_device *vdev) if (vp_dev->msix_affinity_masks) { for (i = 0; i < vp_dev->msix_vectors; i++) - if (vp_dev->msix_affinity_masks[i]) - free_cpumask_var(vp_dev->msix_affinity_masks[i]); + free_cpumask_var(vp_dev->msix_affinity_masks[i]); } if (vp_dev->msix_enabled) { From 73bc8a5e8e3a902d8cc9b2f42505f647ca48fac2 Mon Sep 17 00:00:00 2001 From: Robin Murphy Date: Fri, 20 May 2022 18:10:13 +0100 Subject: [PATCH 1028/1842] dma-direct: don't over-decrypt memory commit 4a37f3dd9a83186cb88d44808ab35b78375082c9 upstream. The original x86 sev_alloc() only called set_memory_decrypted() on memory returned by alloc_pages_node(), so the page order calculation fell out of that logic. However, the common dma-direct code has several potential allocators, not all of which are guaranteed to round up the underlying allocation to a power-of-two size, so carrying over that calculation for the encryption/decryption size was a mistake. Fix it by rounding to a *number* of pages, rather than an order. Until recently there was an even worse interaction with DMA_DIRECT_REMAP where we could have ended up decrypting part of the next adjacent vmalloc area, only averted by no architecture actually supporting both configs at once. Don't ask how I found that one out... 
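For an illustrative sense of the scale of the bug: with a 20 KiB (five page) buffer, get_order(size) is 3, so the old "1 << get_order(size)" decrypts eight pages (32 KiB), whereas PFN_UP(size) covers exactly the five pages that were allocated; with an allocator that does not round up to a power of two, the extra three pages can belong to a neighbouring allocation.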
Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code") Signed-off-by: Robin Murphy Signed-off-by: Christoph Hellwig Acked-by: David Rientjes [ backport the functional change without all the prior refactoring ] Signed-off-by: Robin Murphy Signed-off-by: Greg Kroah-Hartman --- kernel/dma/direct.c | 16 ++++++---------- 1 file changed, 6 insertions(+), 10 deletions(-) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 06c111544f61..2922250f93b4 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -188,7 +188,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, goto out_free_pages; if (force_dma_unencrypted(dev)) { err = set_memory_decrypted((unsigned long)ret, - 1 << get_order(size)); + PFN_UP(size)); if (err) goto out_free_pages; } @@ -210,7 +210,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, ret = page_address(page); if (force_dma_unencrypted(dev)) { err = set_memory_decrypted((unsigned long)ret, - 1 << get_order(size)); + PFN_UP(size)); if (err) goto out_free_pages; } @@ -231,7 +231,7 @@ void *dma_direct_alloc(struct device *dev, size_t size, out_encrypt_pages: if (force_dma_unencrypted(dev)) { err = set_memory_encrypted((unsigned long)page_address(page), - 1 << get_order(size)); + PFN_UP(size)); /* If memory cannot be re-encrypted, it must be leaked */ if (err) return NULL; @@ -244,8 +244,6 @@ void *dma_direct_alloc(struct device *dev, size_t size, void dma_direct_free(struct device *dev, size_t size, void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs) { - unsigned int page_order = get_order(size); - if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) && !force_dma_unencrypted(dev)) { /* cpu_addr is a struct page cookie, not a kernel address */ @@ -266,7 +264,7 @@ void dma_direct_free(struct device *dev, size_t size, return; if (force_dma_unencrypted(dev)) - set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order); + set_memory_encrypted((unsigned long)cpu_addr, PFN_UP(size)); if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr)) vunmap(cpu_addr); @@ -302,8 +300,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size, ret = page_address(page); if (force_dma_unencrypted(dev)) { - if (set_memory_decrypted((unsigned long)ret, - 1 << get_order(size))) + if (set_memory_decrypted((unsigned long)ret, PFN_UP(size))) goto out_free_pages; } memset(ret, 0, size); @@ -318,7 +315,6 @@ void dma_direct_free_pages(struct device *dev, size_t size, struct page *page, dma_addr_t dma_addr, enum dma_data_direction dir) { - unsigned int page_order = get_order(size); void *vaddr = page_address(page); /* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */ @@ -327,7 +323,7 @@ void dma_direct_free_pages(struct device *dev, size_t size, return; if (force_dma_unencrypted(dev)) - set_memory_encrypted((unsigned long)vaddr, 1 << page_order); + set_memory_encrypted((unsigned long)vaddr, PFN_UP(size)); dma_free_contiguous(dev, page, size); } From 09b55dc90b4db94e645854074bd98c480ce40c51 Mon Sep 17 00:00:00 2001 From: Davide Caratti Date: Thu, 10 Feb 2022 18:56:08 +0100 Subject: [PATCH 1029/1842] net/sched: act_police: more accurate MTU policing commit 4ddc844eb81da59bfb816d8d52089aba4e59e269 upstream. in current Linux, MTU policing does not take into account that packets at the TC ingress have the L2 header pulled. Thus, the same TC police action (with the same value of tcfp_mtu) behaves differently for ingress/egress. 
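As an illustration (numbers borrowed from the kselftest added below): a 1001-byte UDP payload gives, roughly, a 1029-byte IP packet carried in a 1043-byte Ethernet frame. With tcfp_mtu set to 1042, the frame is dropped at egress (1043 > 1042), but at TC ingress the 14-byte MAC header has already been pulled, so the policer measures only 1029 bytes and lets the very same frame pass.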
In addition, the full GSO size is compared to tcfp_mtu: as a consequence, the policer drops GSO packets even when individual segments have the L2 + L3 + L4 + payload length below the configured valued of tcfp_mtu. Improve the accuracy of MTU policing as follows: - account for mac_len for non-GSO packets at TC ingress. - compare MTU threshold with the segmented size for GSO packets. Also, add a kselftest that verifies the correct behavior. Signed-off-by: Davide Caratti Reviewed-by: Marcelo Ricardo Leitner Signed-off-by: David S. Miller [dcaratti: fix conflicts due to lack of the following commits: - commit 2ffe0395288a ("net/sched: act_police: add support for packet-per-second policing") - commit 53b61f29367d ("selftests: forwarding: Add tc-police tests for packets per second")] Link: https://lore.kernel.org/netdev/876d597a0ff55f6ba786f73c5a9fd9eb8d597a03.1644514748.git.dcaratti@redhat.com Signed-off-by: Greg Kroah-Hartman --- net/sched/act_police.c | 16 +++++- .../selftests/net/forwarding/tc_police.sh | 52 +++++++++++++++++++ 2 files changed, 67 insertions(+), 1 deletion(-) diff --git a/net/sched/act_police.c b/net/sched/act_police.c index 8d8452b1cdd4..380733588959 100644 --- a/net/sched/act_police.c +++ b/net/sched/act_police.c @@ -213,6 +213,20 @@ static int tcf_police_init(struct net *net, struct nlattr *nla, return err; } +static bool tcf_police_mtu_check(struct sk_buff *skb, u32 limit) +{ + u32 len; + + if (skb_is_gso(skb)) + return skb_gso_validate_mac_len(skb, limit); + + len = qdisc_pkt_len(skb); + if (skb_at_tc_ingress(skb)) + len += skb->mac_len; + + return len <= limit; +} + static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a, struct tcf_result *res) { @@ -235,7 +249,7 @@ static int tcf_police_act(struct sk_buff *skb, const struct tc_action *a, goto inc_overlimits; } - if (qdisc_pkt_len(skb) <= p->tcfp_mtu) { + if (tcf_police_mtu_check(skb, p->tcfp_mtu)) { if (!p->rate_present) { ret = p->tcfp_result; goto end; diff --git a/tools/testing/selftests/net/forwarding/tc_police.sh b/tools/testing/selftests/net/forwarding/tc_police.sh index 160f9cccdfb7..eb09acdcb3ff 100755 --- a/tools/testing/selftests/net/forwarding/tc_police.sh +++ b/tools/testing/selftests/net/forwarding/tc_police.sh @@ -35,6 +35,8 @@ ALL_TESTS=" police_shared_test police_rx_mirror_test police_tx_mirror_test + police_mtu_rx_test + police_mtu_tx_test " NUM_NETIFS=6 source tc_common.sh @@ -290,6 +292,56 @@ police_tx_mirror_test() police_mirror_common_test $rp2 egress "police tx and mirror" } +police_mtu_common_test() { + RET=0 + + local test_name=$1; shift + local dev=$1; shift + local direction=$1; shift + + tc filter add dev $dev $direction protocol ip pref 1 handle 101 flower \ + dst_ip 198.51.100.1 ip_proto udp dst_port 54321 \ + action police mtu 1042 conform-exceed drop/ok + + # to count "conform" packets + tc filter add dev $h2 ingress protocol ip pref 1 handle 101 flower \ + dst_ip 198.51.100.1 ip_proto udp dst_port 54321 \ + action drop + + mausezahn $h1 -a own -b $(mac_get $rp1) -A 192.0.2.1 -B 198.51.100.1 \ + -t udp sp=12345,dp=54321 -p 1001 -c 10 -q + + mausezahn $h1 -a own -b $(mac_get $rp1) -A 192.0.2.1 -B 198.51.100.1 \ + -t udp sp=12345,dp=54321 -p 1000 -c 3 -q + + tc_check_packets "dev $dev $direction" 101 13 + check_err $? "wrong packet counter" + + # "exceed" packets + local overlimits_t0=$(tc_rule_stats_get ${dev} 1 ${direction} .overlimits) + test ${overlimits_t0} = 10 + check_err $? 
"wrong overlimits, expected 10 got ${overlimits_t0}" + + # "conform" packets + tc_check_packets "dev $h2 ingress" 101 3 + check_err $? "forwarding error" + + tc filter del dev $h2 ingress protocol ip pref 1 handle 101 flower + tc filter del dev $dev $direction protocol ip pref 1 handle 101 flower + + log_test "$test_name" +} + +police_mtu_rx_test() +{ + police_mtu_common_test "police mtu (rx)" $rp1 ingress +} + +police_mtu_tx_test() +{ + police_mtu_common_test "police mtu (tx)" $rp2 egress +} + setup_prepare() { h1=${NETIFS[p1]} From e1513a714de67dd7d2fcd6cfe2ebd59961fe80b1 Mon Sep 17 00:00:00 2001 From: Ilya Maximets Date: Tue, 7 Jun 2022 00:11:40 +0200 Subject: [PATCH 1030/1842] net: openvswitch: fix misuse of the cached connection on tuple changes commit 2061ecfdf2350994e5b61c43e50e98a7a70e95ee upstream. If packet headers changed, the cached nfct is no longer relevant for the packet and attempt to re-use it leads to the incorrect packet classification. This issue is causing broken connectivity in OpenStack deployments with OVS/OVN due to hairpin traffic being unexpectedly dropped. The setup has datapath flows with several conntrack actions and tuple changes between them: actions:ct(commit,zone=8,mark=0/0x1,nat(src)), set(eth(src=00:00:00:00:00:01,dst=00:00:00:00:00:06)), set(ipv4(src=172.18.2.10,dst=192.168.100.6,ttl=62)), ct(zone=8),recirc(0x4) After the first ct() action the packet headers are almost fully re-written. The next ct() tries to re-use the existing nfct entry and marks the packet as invalid, so it gets dropped later in the pipeline. Clearing the cached conntrack entry whenever packet tuple is changed to avoid the issue. The flow key should not be cleared though, because we should still be able to match on the ct_state if the recirculation happens after the tuple change but before the next ct() action. Cc: stable@vger.kernel.org Fixes: 7f8a436eaa2c ("openvswitch: Add conntrack action") Reported-by: Frode Nordahl Link: https://mail.openvswitch.org/pipermail/ovs-discuss/2022-May/051829.html Link: https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1967856 Signed-off-by: Ilya Maximets Link: https://lore.kernel.org/r/20220606221140.488984-1-i.maximets@ovn.org Signed-off-by: Jakub Kicinski [Backport to 5.10: minor rebase in ovs_ct_clear function. This version also applicable to and tested on 5.4 and 4.19.] 
Signed-off-by: Greg Kroah-Hartman --- net/openvswitch/actions.c | 6 ++++++ net/openvswitch/conntrack.c | 3 ++- 2 files changed, 8 insertions(+), 1 deletion(-) diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c index 6d8d70021666..80fee9d118ee 100644 --- a/net/openvswitch/actions.c +++ b/net/openvswitch/actions.c @@ -372,6 +372,7 @@ static void set_ip_addr(struct sk_buff *skb, struct iphdr *nh, update_ip_l4_checksum(skb, nh, *addr, new_addr); csum_replace4(&nh->check, *addr, new_addr); skb_clear_hash(skb); + ovs_ct_clear(skb, NULL); *addr = new_addr; } @@ -419,6 +420,7 @@ static void set_ipv6_addr(struct sk_buff *skb, u8 l4_proto, update_ipv6_checksum(skb, l4_proto, addr, new_addr); skb_clear_hash(skb); + ovs_ct_clear(skb, NULL); memcpy(addr, new_addr, sizeof(__be32[4])); } @@ -659,6 +661,7 @@ static int set_nsh(struct sk_buff *skb, struct sw_flow_key *flow_key, static void set_tp_port(struct sk_buff *skb, __be16 *port, __be16 new_port, __sum16 *check) { + ovs_ct_clear(skb, NULL); inet_proto_csum_replace2(check, skb, *port, new_port, false); *port = new_port; } @@ -698,6 +701,7 @@ static int set_udp(struct sk_buff *skb, struct sw_flow_key *flow_key, uh->dest = dst; flow_key->tp.src = src; flow_key->tp.dst = dst; + ovs_ct_clear(skb, NULL); } skb_clear_hash(skb); @@ -760,6 +764,8 @@ static int set_sctp(struct sk_buff *skb, struct sw_flow_key *flow_key, sh->checksum = old_csum ^ old_correct_csum ^ new_csum; skb_clear_hash(skb); + ovs_ct_clear(skb, NULL); + flow_key->tp.src = sh->source; flow_key->tp.dst = sh->dest; diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c index 7ff98d39ec94..41f248895a87 100644 --- a/net/openvswitch/conntrack.c +++ b/net/openvswitch/conntrack.c @@ -1324,7 +1324,8 @@ int ovs_ct_clear(struct sk_buff *skb, struct sw_flow_key *key) if (skb_nfct(skb)) { nf_conntrack_put(skb_nfct(skb)); nf_ct_set(skb, NULL, IP_CT_UNTRACKED); - ovs_ct_fill_key(skb, key); + if (key) + ovs_ct_fill_key(skb, key); } return 0; From f0a7adff635af904cb96aef95b6468cf18d01045 Mon Sep 17 00:00:00 2001 From: Vinicius Costa Gomes Date: Mon, 26 Jul 2021 20:36:54 -0700 Subject: [PATCH 1031/1842] Revert "PCI: Make pci_enable_ptm() private" commit 1d71eb53e45187f58089d32b51e27784c791d90e upstream. Make pci_enable_ptm() accessible from the drivers. Exposing this to the driver enables the driver to use the 'ptm_enabled' field of 'pci_dev' to check if PTM is enabled or not. This reverts commit ac6c26da29c1 ("PCI: Make pci_enable_ptm() private"). 
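With the prototype back in include/linux/pci.h, a driver can opt in from its probe path roughly as follows (an illustrative sketch with a made-up probe function; the igc patch that follows does essentially this):

	/* Sketch of a PCI driver probe() opting in to PCIe PTM. */
	static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		int err;

		err = pci_enable_ptm(pdev, NULL);	/* NULL: granularity not needed */
		if (err < 0)
			dev_info(&pdev->dev, "PCIe PTM not supported by PCIe bus/controller\n");

		/* pdev->ptm_enabled can later be checked before relying on PTM timestamps */
		return 0;
	}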
Signed-off-by: Vinicius Costa Gomes Acked-by: Bjorn Helgaas Signed-off-by: Tony Nguyen Signed-off-by: Meng Tang Signed-off-by: Greg Kroah-Hartman --- drivers/pci/pci.h | 3 --- include/linux/pci.h | 7 +++++++ 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h index a96dc6f53076..4084764bf0b1 100644 --- a/drivers/pci/pci.h +++ b/drivers/pci/pci.h @@ -585,11 +585,8 @@ static inline void pcie_ecrc_get_policy(char *str) { } #ifdef CONFIG_PCIE_PTM void pci_ptm_init(struct pci_dev *dev); -int pci_enable_ptm(struct pci_dev *dev, u8 *granularity); #else static inline void pci_ptm_init(struct pci_dev *dev) { } -static inline int pci_enable_ptm(struct pci_dev *dev, u8 *granularity) -{ return -EINVAL; } #endif struct pci_dev_reset_methods { diff --git a/include/linux/pci.h b/include/linux/pci.h index bc5a1150f072..692ce678c5f1 100644 --- a/include/linux/pci.h +++ b/include/linux/pci.h @@ -1599,6 +1599,13 @@ static inline bool pci_aer_available(void) { return false; } bool pci_ats_disabled(void); +#ifdef CONFIG_PCIE_PTM +int pci_enable_ptm(struct pci_dev *dev, u8 *granularity); +#else +static inline int pci_enable_ptm(struct pci_dev *dev, u8 *granularity) +{ return -EINVAL; } +#endif + void pci_cfg_access_lock(struct pci_dev *dev); bool pci_cfg_access_trylock(struct pci_dev *dev); void pci_cfg_access_unlock(struct pci_dev *dev); From ff4443f3fc531f014a9fddaa13d036d82addfbcb Mon Sep 17 00:00:00 2001 From: Vinicius Costa Gomes Date: Mon, 26 Jul 2021 20:36:56 -0700 Subject: [PATCH 1032/1842] igc: Enable PCIe PTM commit 1b5d73fb862414106cf270a1a7300ce8ae77de83 upstream. Enables PCIe PTM (Precision Time Measurement) support in the igc driver. Notifies the PCI devices that PCIe PTM should be enabled. PCIe PTM is a protocol similar to PTP (Precision Time Protocol), running in the PCIe fabric; it allows devices to report time measurements from their internal clocks and the correlation with the PCIe root clock. The i225 NIC provides some registers that expose those time measurements, those registers will be used, in later patches, to implement the PTP_SYS_OFFSET_PRECISE ioctl(). Signed-off-by: Vinicius Costa Gomes Tested-by: Dvora Fuxbrumer Signed-off-by: Tony Nguyen Signed-off-by: Meng Tang Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/intel/igc/igc_main.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index fd9257c7059a..53e31002ce52 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include @@ -5041,6 +5042,10 @@ static int igc_probe(struct pci_dev *pdev, pci_enable_pcie_error_reporting(pdev); + err = pci_enable_ptm(pdev, NULL); + if (err < 0) + dev_info(&pdev->dev, "PCIe PTM not supported by PCIe bus/controller\n"); + pci_set_master(pdev); err = -ENOMEM; From a3e50506ea0d39b352b0c2bad026d663d4400647 Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Sat, 4 Jun 2022 17:50:50 +0900 Subject: [PATCH 1033/1842] powerpc/book3e: get rid of #include <generated/compile.h> commit 7ad4bd887d27c6b6ffbef216f19c19f8fe2b8f52 upstream. You cannot include <generated/compile.h> here because it is generated in init/Makefile but there is no guarantee that it happens before arch/powerpc/mm/nohash/kaslr_booke.c is compiled for parallel builds. 
The places where you can reliably include <generated/compile.h> are: - init/ (because init/Makefile can specify the dependency) - arch/*/boot/ (because it is compiled after vmlinux) Commit f231e4333312 ("hexagon: get rid of #include <generated/compile.h>") fixed the last breakage at that time, but powerpc re-added this. <generated/compile.h> was unneeded because 'build_str' is almost the same as 'linux_banner' defined in init/version.c. Let's copy the solution from MIPS. (get_random_boot() in arch/mips/kernel/relocate.c) Fixes: 6a38ea1d7b94 ("powerpc/fsl_booke/32: randomize the kernel image offset") Signed-off-by: Masahiro Yamada Acked-by: Scott Wood Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220604085050.4078927-1-masahiroy@kernel.org Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/mm/nohash/kaslr_booke.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/arch/powerpc/mm/nohash/kaslr_booke.c b/arch/powerpc/mm/nohash/kaslr_booke.c index 4c74e8a5482b..c555ad9fa00b 100644 --- a/arch/powerpc/mm/nohash/kaslr_booke.c +++ b/arch/powerpc/mm/nohash/kaslr_booke.c @@ -18,7 +18,6 @@ #include #include #include -#include <generated/compile.h> #include struct regions { @@ -36,10 +35,6 @@ struct regions { int reserved_mem_size_cells; }; -/* Simplified build-specific string for starting entropy. */ -static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@" - LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION; - struct regions __initdata regions; static __init void kaslr_get_cmdline(void *fdt) @@ -72,7 +67,8 @@ static unsigned long __init get_boot_seed(void *fdt) { unsigned long hash = 0; - hash = rotate_xor(hash, build_str, sizeof(build_str)); + /* build-specific string for starting entropy. */ + hash = rotate_xor(hash, linux_banner, strlen(linux_banner)); hash = rotate_xor(hash, fdt, fdt_totalsize(fdt)); return hash; From e0b6018894b8f86811f8f62e7846ed27832abb48 Mon Sep 17 00:00:00 2001 From: Peng Fan Date: Sat, 7 May 2022 20:54:30 +0800 Subject: [PATCH 1034/1842] clk: imx8mp: fix usb_root_clk parent commit cf7f3f4fa9e57b8e9f594823e77e6cbb0ce2b254 upstream. According to the reference manual, CCGR77(usb) sources from hsio_axi; fix it. 
Fixes: 9c140d9926761 ("clk: imx: Add support for i.MX8MP clock driver") Signed-off-by: Peng Fan Reviewed-by: Abel Vesa Link: https://lore.kernel.org/r/20220507125430.793287-1-peng.fan@oss.nxp.com Signed-off-by: Abel Vesa Signed-off-by: Greg Kroah-Hartman --- drivers/clk/imx/clk-imx8mp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c index 0391f5bda5e4..36e8d619e334 100644 --- a/drivers/clk/imx/clk-imx8mp.c +++ b/drivers/clk/imx/clk-imx8mp.c @@ -691,7 +691,7 @@ static int imx8mp_clocks_probe(struct platform_device *pdev) hws[IMX8MP_CLK_UART2_ROOT] = imx_clk_hw_gate4("uart2_root_clk", "uart2", ccm_base + 0x44a0, 0); hws[IMX8MP_CLK_UART3_ROOT] = imx_clk_hw_gate4("uart3_root_clk", "uart3", ccm_base + 0x44b0, 0); hws[IMX8MP_CLK_UART4_ROOT] = imx_clk_hw_gate4("uart4_root_clk", "uart4", ccm_base + 0x44c0, 0); - hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "osc_32k", ccm_base + 0x44d0, 0); + hws[IMX8MP_CLK_USB_ROOT] = imx_clk_hw_gate4("usb_root_clk", "hsio_axi", ccm_base + 0x44d0, 0); hws[IMX8MP_CLK_USB_PHY_ROOT] = imx_clk_hw_gate4("usb_phy_root_clk", "usb_phy_ref", ccm_base + 0x44f0, 0); hws[IMX8MP_CLK_USDHC1_ROOT] = imx_clk_hw_gate4("usdhc1_root_clk", "usdhc1", ccm_base + 0x4510, 0); hws[IMX8MP_CLK_USDHC2_ROOT] = imx_clk_hw_gate4("usdhc2_root_clk", "usdhc2", ccm_base + 0x4520, 0); From 4f3fee72a74c88c9039ce0405a715f6221791d06 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Wed, 22 Jun 2022 14:13:21 +0200 Subject: [PATCH 1035/1842] Linux 5.10.124 Link: https://lore.kernel.org/r/20220620124720.882450983@linuxfoundation.org Tested-by: Florian Fainelli Tested-by: Pavel Machek (CIP) Tested-by: Fox Chen Tested-by: Guenter Roeck Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Sudip Mukherjee Tested-by: Jon Hunter Tested-by: Shuah Khan Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 862946040186..9ed79a05a972 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 123 +SUBLEVEL = 124 EXTRAVERSION = NAME = Dare mighty things From 9dcde7a741066bb742d24ce4ebcf00f1acd41112 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 10:03:02 +0200 Subject: [PATCH 1036/1842] Revert "include/uapi/linux/xfrm.h: Fix XFRM_MSG_MAPPING ABI breakage" This reverts commit 633be494c3ca66a6c36e61100ffa0bd3d2923954 which is commit 844f7eaaed9267ae17d33778efe65548cc940205 upstream. It breaks the Android kernel ABI and if this really needs to be added to Android, it must come back in a format in the future that does not break the abi. 
Signed-off-by: Greg Kroah-Hartman Change-Id: I4224f0b86fdc8cfba5e18f9632ed191d18552a30 --- include/uapi/linux/xfrm.h | 6 +++--- security/selinux/nlmsgtab.c | 4 +--- 2 files changed, 4 insertions(+), 6 deletions(-) diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index 67ca6ba0c81d..61c9b236c38a 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -213,13 +213,13 @@ enum { XFRM_MSG_GETSPDINFO, #define XFRM_MSG_GETSPDINFO XFRM_MSG_GETSPDINFO - XFRM_MSG_MAPPING, -#define XFRM_MSG_MAPPING XFRM_MSG_MAPPING - XFRM_MSG_SETDEFAULT, #define XFRM_MSG_SETDEFAULT XFRM_MSG_SETDEFAULT XFRM_MSG_GETDEFAULT, #define XFRM_MSG_GETDEFAULT XFRM_MSG_GETDEFAULT + + XFRM_MSG_MAPPING, +#define XFRM_MSG_MAPPING XFRM_MSG_MAPPING __XFRM_MSG_MAX }; #define XFRM_MSG_MAX (__XFRM_MSG_MAX - 1) diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c index eb4250b6b9fe..9ad6f9b893f5 100644 --- a/security/selinux/nlmsgtab.c +++ b/security/selinux/nlmsgtab.c @@ -123,8 +123,6 @@ static const struct nlmsg_perm nlmsg_xfrm_perms[] = { XFRM_MSG_NEWSPDINFO, NETLINK_XFRM_SOCKET__NLMSG_WRITE }, { XFRM_MSG_GETSPDINFO, NETLINK_XFRM_SOCKET__NLMSG_READ }, { XFRM_MSG_MAPPING, NETLINK_XFRM_SOCKET__NLMSG_READ }, - { XFRM_MSG_SETDEFAULT, NETLINK_XFRM_SOCKET__NLMSG_WRITE }, - { XFRM_MSG_GETDEFAULT, NETLINK_XFRM_SOCKET__NLMSG_READ }, }; static const struct nlmsg_perm nlmsg_audit_perms[] = @@ -188,7 +186,7 @@ int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm) * structures at the top of this file with the new mappings * before updating the BUILD_BUG_ON() macro! */ - BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_GETDEFAULT); + BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING); err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms, sizeof(nlmsg_xfrm_perms)); break; From ece9c2a70f7cb2d22827b07074cbc988f5f13199 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 10:01:39 +0200 Subject: [PATCH 1037/1842] Revert "xfrm: fix "disable_policy" flag use when arriving from different devices" This reverts commit 47f04f95edb1aa00ac2d9a9fb0f1e50031965618 which is e6175a2ed1f18bf2f649625bf725e07adcfa6a28 commit upstream. It breaks the Android kernel ABI and if this really needs to be added to Android, it must come back in a format in the future that does not break the abi. Signed-off-by: Greg Kroah-Hartman Change-Id: Ibb0fcc031d2bf71f137d3c760d84858436acc801 --- include/net/ip.h | 1 - include/net/xfrm.h | 14 +------------- net/ipv4/route.c | 23 +++++------------------ 3 files changed, 6 insertions(+), 32 deletions(-) diff --git a/include/net/ip.h b/include/net/ip.h index 7baa3dd20fc1..db3a2eb144b3 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -57,7 +57,6 @@ struct inet_skb_parm { #define IPSKB_DOREDIRECT BIT(5) #define IPSKB_FRAG_PMTU BIT(6) #define IPSKB_L3SLAVE BIT(7) -#define IPSKB_NOPOLICY BIT(8) u16 frag_max_size; }; diff --git a/include/net/xfrm.h b/include/net/xfrm.h index da3d220670cf..f526a71e49c3 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1095,18 +1095,6 @@ static inline bool __xfrm_check_nopolicy(struct net *net, struct sk_buff *skb, return false; } -static inline bool __xfrm_check_dev_nopolicy(struct sk_buff *skb, - int dir, unsigned short family) -{ - if (dir != XFRM_POLICY_OUT && family == AF_INET) { - /* same dst may be used for traffic originating from - * devices with different policy settings. 
- */ - return IPCB(skb)->flags & IPSKB_NOPOLICY; - } - return skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY); -} - static inline int __xfrm_policy_check2(struct sock *sk, int dir, struct sk_buff *skb, unsigned int family, int reverse) @@ -1118,7 +1106,7 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir, return __xfrm_policy_check(sk, ndir, skb, family); return __xfrm_check_nopolicy(net, skb, dir) || - __xfrm_check_dev_nopolicy(skb, dir, family) || + (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || __xfrm_policy_check(sk, ndir, skb, family); } diff --git a/net/ipv4/route.c b/net/ipv4/route.c index aab8ac383d5d..9bd3cd2177f4 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -1765,7 +1765,6 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, struct in_device *in_dev = __in_dev_get_rcu(dev); unsigned int flags = RTCF_MULTICAST; struct rtable *rth; - bool no_policy; u32 itag = 0; int err; @@ -1776,12 +1775,8 @@ static int ip_route_input_mc(struct sk_buff *skb, __be32 daddr, __be32 saddr, if (our) flags |= RTCF_LOCAL; - no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY); - if (no_policy) - IPCB(skb)->flags |= IPSKB_NOPOLICY; - rth = rt_dst_alloc(dev_net(dev)->loopback_dev, flags, RTN_MULTICAST, - no_policy, false); + IN_DEV_ORCONF(in_dev, NOPOLICY), false); if (!rth) return -ENOBUFS; @@ -1840,7 +1835,7 @@ static int __mkroute_input(struct sk_buff *skb, struct rtable *rth; int err; struct in_device *out_dev; - bool do_cache, no_policy; + bool do_cache; u32 itag = 0; /* get a working reference to the output device */ @@ -1885,10 +1880,6 @@ static int __mkroute_input(struct sk_buff *skb, } } - no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY); - if (no_policy) - IPCB(skb)->flags |= IPSKB_NOPOLICY; - fnhe = find_exception(nhc, daddr); if (do_cache) { if (fnhe) @@ -1901,7 +1892,8 @@ static int __mkroute_input(struct sk_buff *skb, } } - rth = rt_dst_alloc(out_dev->dev, 0, res->type, no_policy, + rth = rt_dst_alloc(out_dev->dev, 0, res->type, + IN_DEV_ORCONF(in_dev, NOPOLICY), IN_DEV_ORCONF(out_dev, NOXFRM)); if (!rth) { err = -ENOBUFS; @@ -2153,7 +2145,6 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr, struct rtable *rth; struct flowi4 fl4; bool do_cache = true; - bool no_policy; /* IP on this device is disabled. */ @@ -2271,10 +2262,6 @@ out: return err; RT_CACHE_STAT_INC(in_brd); local_input: - no_policy = IN_DEV_ORCONF(in_dev, NOPOLICY); - if (no_policy) - IPCB(skb)->flags |= IPSKB_NOPOLICY; - do_cache &= res->fi && !itag; if (do_cache) { struct fib_nh_common *nhc = FIB_RES_NHC(*res); @@ -2289,7 +2276,7 @@ out: return err; rth = rt_dst_alloc(ip_rt_get_dev(net, res), flags | RTCF_LOCAL, res->type, - no_policy, false); + IN_DEV_ORCONF(in_dev, NOPOLICY), false); if (!rth) goto e_nobufs; From 42dadcf0a88f51945078edf194c18059f14e5ee4 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 10:01:45 +0200 Subject: [PATCH 1038/1842] Revert "xfrm: rework default policy structure" This reverts commit 0d2e9d8000efbe733ef23e6de5aba956c783014b which is b58b1f563ab78955d37e9e43e02790a85c66ac05 commit upstream. It breaks the Android kernel ABI and if this really needs to be added to Android, it must come back in a format in the future that does not break the abi. 
Signed-off-by: Greg Kroah-Hartman Change-Id: I0a8403a202b21cda7f856ea143c5b01b78346078 --- include/net/netns/xfrm.h | 6 +++++- include/net/xfrm.h | 44 +++++++++++++++++++++++++--------------- net/xfrm/xfrm_policy.c | 10 +++------ net/xfrm/xfrm_user.c | 43 +++++++++++++++++++++++---------------- 4 files changed, 61 insertions(+), 42 deletions(-) diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h index c6c779c95784..7703624b9151 100644 --- a/include/net/netns/xfrm.h +++ b/include/net/netns/xfrm.h @@ -66,7 +66,11 @@ struct netns_xfrm { int sysctl_larval_drop; u32 sysctl_acq_expires; - u8 policy_default[XFRM_POLICY_MAX]; + u8 policy_default; +#define XFRM_POL_DEFAULT_IN 1 +#define XFRM_POL_DEFAULT_OUT 2 +#define XFRM_POL_DEFAULT_FWD 4 +#define XFRM_POL_DEFAULT_MASK 7 #ifdef CONFIG_SYSCTL struct ctl_table_header *sysctl_hdr; diff --git a/include/net/xfrm.h b/include/net/xfrm.h index f526a71e49c3..9a8c94f9bd74 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1083,18 +1083,25 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, un } #ifdef CONFIG_XFRM -int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb, - unsigned short family); - -static inline bool __xfrm_check_nopolicy(struct net *net, struct sk_buff *skb, - int dir) +static inline bool +xfrm_default_allow(struct net *net, int dir) { - if (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) - return net->xfrm.policy_default[dir] == XFRM_USERPOLICY_ACCEPT; + u8 def = net->xfrm.policy_default; + switch (dir) { + case XFRM_POLICY_IN: + return def & XFRM_POL_DEFAULT_IN ? false : true; + case XFRM_POLICY_OUT: + return def & XFRM_POL_DEFAULT_OUT ? false : true; + case XFRM_POLICY_FWD: + return def & XFRM_POL_DEFAULT_FWD ? false : true; + } return false; } +int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb, + unsigned short family); + static inline int __xfrm_policy_check2(struct sock *sk, int dir, struct sk_buff *skb, unsigned int family, int reverse) @@ -1105,9 +1112,13 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir, if (sk && sk->sk_policy[XFRM_POLICY_IN]) return __xfrm_policy_check(sk, ndir, skb, family); - return __xfrm_check_nopolicy(net, skb, dir) || - (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || - __xfrm_policy_check(sk, ndir, skb, family); + if (xfrm_default_allow(net, dir)) + return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) || + (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || + __xfrm_policy_check(sk, ndir, skb, family); + else + return (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || + __xfrm_policy_check(sk, ndir, skb, family); } static inline int xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, unsigned short family) @@ -1159,12 +1170,13 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family) { struct net *net = dev_net(skb->dev); - if (!net->xfrm.policy_count[XFRM_POLICY_OUT] && - net->xfrm.policy_default[XFRM_POLICY_OUT] == XFRM_USERPOLICY_ACCEPT) - return true; - - return (skb_dst(skb)->flags & DST_NOXFRM) || - __xfrm_route_forward(skb, family); + if (xfrm_default_allow(net, XFRM_POLICY_OUT)) + return !net->xfrm.policy_count[XFRM_POLICY_OUT] || + (skb_dst(skb)->flags & DST_NOXFRM) || + __xfrm_route_forward(skb, family); + else + return (skb_dst(skb)->flags & DST_NOXFRM) || + __xfrm_route_forward(skb, family); } static inline int xfrm4_route_forward(struct sk_buff *skb) diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index 
a2fcc74c4a43..7b7f7d9504a5 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -3167,7 +3167,7 @@ struct dst_entry *xfrm_lookup_with_ifid(struct net *net, nopol: if (!(dst_orig->dev->flags & IFF_LOOPBACK) && - net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) { + !xfrm_default_allow(net, dir)) { err = -EPERM; goto error; } @@ -3618,7 +3618,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, } if (!pol) { - if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK) { + if (!xfrm_default_allow(net, dir)) { XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS); return 0; } @@ -3678,8 +3678,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, } xfrm_nr = ti; - if (net->xfrm.policy_default[dir] == XFRM_USERPOLICY_BLOCK && - !xfrm_nr) { + if (!xfrm_default_allow(net, dir) && !xfrm_nr) { XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES); goto reject; } @@ -4167,9 +4166,6 @@ static int __net_init xfrm_net_init(struct net *net) spin_lock_init(&net->xfrm.xfrm_state_lock); spin_lock_init(&net->xfrm.xfrm_policy_lock); mutex_init(&net->xfrm.xfrm_cfg_mutex); - net->xfrm.policy_default[XFRM_POLICY_IN] = XFRM_USERPOLICY_ACCEPT; - net->xfrm.policy_default[XFRM_POLICY_FWD] = XFRM_USERPOLICY_ACCEPT; - net->xfrm.policy_default[XFRM_POLICY_OUT] = XFRM_USERPOLICY_ACCEPT; rv = xfrm_statistics_init(net); if (rv < 0) diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index dae1aedb63aa..0aecea12c4eb 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1919,35 +1919,38 @@ static int xfrm_notify_userpolicy(struct net *net) } up = nlmsg_data(nlh); - up->in = net->xfrm.policy_default[XFRM_POLICY_IN]; - up->fwd = net->xfrm.policy_default[XFRM_POLICY_FWD]; - up->out = net->xfrm.policy_default[XFRM_POLICY_OUT]; + up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ? 
+ XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; nlmsg_end(skb, nlh); return xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY); } -static bool xfrm_userpolicy_is_valid(__u8 policy) -{ - return policy == XFRM_USERPOLICY_BLOCK || - policy == XFRM_USERPOLICY_ACCEPT; -} - static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, struct nlattr **attrs) { struct net *net = sock_net(skb->sk); struct xfrm_userpolicy_default *up = nlmsg_data(nlh); - if (xfrm_userpolicy_is_valid(up->in)) - net->xfrm.policy_default[XFRM_POLICY_IN] = up->in; + if (up->in == XFRM_USERPOLICY_BLOCK) + net->xfrm.policy_default |= XFRM_POL_DEFAULT_IN; + else if (up->in == XFRM_USERPOLICY_ACCEPT) + net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_IN; - if (xfrm_userpolicy_is_valid(up->fwd)) - net->xfrm.policy_default[XFRM_POLICY_FWD] = up->fwd; + if (up->fwd == XFRM_USERPOLICY_BLOCK) + net->xfrm.policy_default |= XFRM_POL_DEFAULT_FWD; + else if (up->fwd == XFRM_USERPOLICY_ACCEPT) + net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_FWD; - if (xfrm_userpolicy_is_valid(up->out)) - net->xfrm.policy_default[XFRM_POLICY_OUT] = up->out; + if (up->out == XFRM_USERPOLICY_BLOCK) + net->xfrm.policy_default |= XFRM_POL_DEFAULT_OUT; + else if (up->out == XFRM_USERPOLICY_ACCEPT) + net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_OUT; rt_genid_bump_all(net); @@ -1977,9 +1980,13 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh, } r_up = nlmsg_data(r_nlh); - r_up->in = net->xfrm.policy_default[XFRM_POLICY_IN]; - r_up->fwd = net->xfrm.policy_default[XFRM_POLICY_FWD]; - r_up->out = net->xfrm.policy_default[XFRM_POLICY_OUT]; + + r_up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + r_up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + r_up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ? + XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; nlmsg_end(r_skb, r_nlh); return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid); From 4ead88c0e8fe935c621208e7c636293604bcccbd Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 10:01:50 +0200 Subject: [PATCH 1039/1842] Revert "xfrm: fix dflt policy check when there is no policy configured" This reverts commit 57c1bbe7098b516d535295a9fa762a44c871a74c which is ec3bb890817e4398f2d46e12e2e205495b116be9 commit upstream. It breaks the Android kernel ABI and if this really needs to be added to Android, it must come back in a format in the future that does not break the abi. 
Signed-off-by: Greg Kroah-Hartman Change-Id: Iebfe2659463646982cc41c7cd29db2d51ef5e6eb --- include/net/xfrm.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/xfrm.h b/include/net/xfrm.h index 9a8c94f9bd74..5ef446431358 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1170,7 +1170,7 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family) { struct net *net = dev_net(skb->dev); - if (xfrm_default_allow(net, XFRM_POLICY_OUT)) + if (xfrm_default_allow(net, XFRM_POLICY_FWD)) return !net->xfrm.policy_count[XFRM_POLICY_OUT] || (skb_dst(skb)->flags & DST_NOXFRM) || __xfrm_route_forward(skb, family); From df0ff8d194937beedbda57de59a6f5ee023fefdc Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 10:01:55 +0200 Subject: [PATCH 1040/1842] Revert "xfrm: notify default policy on update" This reverts commit 9856c3a129dd625c26df92fc58dda739741204ff which is 88d0adb5f13b1c52fbb7d755f6f79db18c2f0c2c commit upstream. It breaks the Android kernel ABI and if this really needs to be added to Android, it must come back in a format in the future that does not break the abi. Signed-off-by: Greg Kroah-Hartman Change-Id: I4e7bf3d512309e061272648fdb5733e270ab4279 --- net/xfrm/xfrm_user.c | 31 ------------------------------- 1 file changed, 31 deletions(-) diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 0aecea12c4eb..0af4de0ae263 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1901,36 +1901,6 @@ static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb, return skb; } -static int xfrm_notify_userpolicy(struct net *net) -{ - struct xfrm_userpolicy_default *up; - int len = NLMSG_ALIGN(sizeof(*up)); - struct nlmsghdr *nlh; - struct sk_buff *skb; - - skb = nlmsg_new(len, GFP_ATOMIC); - if (skb == NULL) - return -ENOMEM; - - nlh = nlmsg_put(skb, 0, 0, XFRM_MSG_GETDEFAULT, sizeof(*up), 0); - if (nlh == NULL) { - kfree_skb(skb); - return -EMSGSIZE; - } - - up = nlmsg_data(nlh); - up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - - nlmsg_end(skb, nlh); - - return xfrm_nlmsg_multicast(net, skb, 0, XFRMNLGRP_POLICY); -} - static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, struct nlattr **attrs) { @@ -1954,7 +1924,6 @@ static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, rt_genid_bump_all(net); - xfrm_notify_userpolicy(net); return 0; } From f7160ab10306f2d89e6ee845cb35fb27bee614be Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 10:02:00 +0200 Subject: [PATCH 1041/1842] Revert "xfrm: make user policy API complete" This reverts commit 20fd28df40494400babe79c89549bd8317b7dd9f which is f8d858e607b2a36808ac6d4218f5f5203d7a7d63 commit upstream. It breaks the Android kernel ABI and if this really needs to be added to Android, it must come back in a format in the future that does not break the abi. 
Signed-off-by: Greg Kroah-Hartman Change-Id: I0597156be84f636d8196c81b2625a04bab57dc0c --- include/uapi/linux/xfrm.h | 9 +++------ net/xfrm/xfrm_user.c | 31 ++++++++++++------------------- 2 files changed, 15 insertions(+), 25 deletions(-) diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index 61c9b236c38a..c5a4d7a7b8df 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -520,12 +520,9 @@ struct xfrm_user_offload { #define XFRM_OFFLOAD_INBOUND 2 struct xfrm_userpolicy_default { -#define XFRM_USERPOLICY_UNSPEC 0 -#define XFRM_USERPOLICY_BLOCK 1 -#define XFRM_USERPOLICY_ACCEPT 2 - __u8 in; - __u8 fwd; - __u8 out; +#define XFRM_USERPOLICY_DIRMASK_MAX (sizeof(__u8) * 8) + __u8 dirmask; + __u8 action; }; #ifndef __KERNEL__ diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 0af4de0ae263..efbc48c85a8b 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1906,21 +1906,16 @@ static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, { struct net *net = sock_net(skb->sk); struct xfrm_userpolicy_default *up = nlmsg_data(nlh); + u8 dirmask; + u8 old_default = net->xfrm.policy_default; - if (up->in == XFRM_USERPOLICY_BLOCK) - net->xfrm.policy_default |= XFRM_POL_DEFAULT_IN; - else if (up->in == XFRM_USERPOLICY_ACCEPT) - net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_IN; + if (up->dirmask >= XFRM_USERPOLICY_DIRMASK_MAX) + return -EINVAL; - if (up->fwd == XFRM_USERPOLICY_BLOCK) - net->xfrm.policy_default |= XFRM_POL_DEFAULT_FWD; - else if (up->fwd == XFRM_USERPOLICY_ACCEPT) - net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_FWD; + dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK; - if (up->out == XFRM_USERPOLICY_BLOCK) - net->xfrm.policy_default |= XFRM_POL_DEFAULT_OUT; - else if (up->out == XFRM_USERPOLICY_ACCEPT) - net->xfrm.policy_default &= ~XFRM_POL_DEFAULT_OUT; + net->xfrm.policy_default = (old_default & (0xff ^ dirmask)) + | (up->action << up->dirmask); rt_genid_bump_all(net); @@ -1933,11 +1928,13 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh, struct sk_buff *r_skb; struct nlmsghdr *r_nlh; struct net *net = sock_net(skb->sk); - struct xfrm_userpolicy_default *r_up; + struct xfrm_userpolicy_default *r_up, *up; int len = NLMSG_ALIGN(sizeof(struct xfrm_userpolicy_default)); u32 portid = NETLINK_CB(skb).portid; u32 seq = nlh->nlmsg_seq; + up = nlmsg_data(nlh); + r_skb = nlmsg_new(len, GFP_ATOMIC); if (!r_skb) return -ENOMEM; @@ -1950,12 +1947,8 @@ static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh, r_up = nlmsg_data(r_nlh); - r_up->in = net->xfrm.policy_default & XFRM_POL_DEFAULT_IN ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - r_up->fwd = net->xfrm.policy_default & XFRM_POL_DEFAULT_FWD ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; - r_up->out = net->xfrm.policy_default & XFRM_POL_DEFAULT_OUT ? - XFRM_USERPOLICY_BLOCK : XFRM_USERPOLICY_ACCEPT; + r_up->action = ((net->xfrm.policy_default & (1 << up->dirmask)) >> up->dirmask); + r_up->dirmask = up->dirmask; nlmsg_end(r_skb, r_nlh); return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid); From e21944a82a4693a056ab61e78aefb806d3f7335b Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 10:02:10 +0200 Subject: [PATCH 1042/1842] Revert "net: xfrm: fix shift-out-of-bounce" This reverts commit ab610ee1d1a1d5f9f6e326c21962cb3fc8b81a5b which is 5d8dbb7fb82b8661c16d496644b931c0e2e3a12e commit upstream. 
It breaks the Android kernel ABI and if this really needs to be added to Android, it must come back in a format in the future that does not break the abi. Signed-off-by: Greg Kroah-Hartman Change-Id: I1843039a406d0e39cdbde7f1da9be30fb1cbcd6a --- include/uapi/linux/xfrm.h | 1 - net/xfrm/xfrm_user.c | 7 +------ 2 files changed, 1 insertion(+), 7 deletions(-) diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index c5a4d7a7b8df..5f19800b41c5 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -520,7 +520,6 @@ struct xfrm_user_offload { #define XFRM_OFFLOAD_INBOUND 2 struct xfrm_userpolicy_default { -#define XFRM_USERPOLICY_DIRMASK_MAX (sizeof(__u8) * 8) __u8 dirmask; __u8 action; }; diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index efbc48c85a8b..344d7d34491b 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -1906,14 +1906,9 @@ static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, { struct net *net = sock_net(skb->sk); struct xfrm_userpolicy_default *up = nlmsg_data(nlh); - u8 dirmask; + u8 dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK; u8 old_default = net->xfrm.policy_default; - if (up->dirmask >= XFRM_USERPOLICY_DIRMASK_MAX) - return -EINVAL; - - dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK; - net->xfrm.policy_default = (old_default & (0xff ^ dirmask)) | (up->action << up->dirmask); From 73c2a811f6d17d60e03a0cb5c0fa17fab7e982e2 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 10:03:05 +0200 Subject: [PATCH 1043/1842] Revert "xfrm: Add possibility to set the default to block if we have no policy" This reverts commit 5b7f84b1f9f46327360a64c529433fa0d68cc3f4 which is commit 2d151d39073aff498358543801fca0f670fea981 upstream. It breaks the Android kernel ABI and if this really needs to be added to Android, it must come back in a format in the future that does not break the abi. Signed-off-by: Greg Kroah-Hartman Change-Id: Ic222a9dfeaa3775f1173b4cd13de7e9ae959ccd9 --- include/net/netns/xfrm.h | 7 ------ include/net/xfrm.h | 36 +++++---------------------- include/uapi/linux/xfrm.h | 10 -------- net/xfrm/xfrm_policy.c | 16 ------------ net/xfrm/xfrm_user.c | 52 --------------------------------------- 5 files changed, 6 insertions(+), 115 deletions(-) diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h index 7703624b9151..93d74c68d87e 100644 --- a/include/net/netns/xfrm.h +++ b/include/net/netns/xfrm.h @@ -65,13 +65,6 @@ struct netns_xfrm { u32 sysctl_aevent_rseqth; int sysctl_larval_drop; u32 sysctl_acq_expires; - - u8 policy_default; -#define XFRM_POL_DEFAULT_IN 1 -#define XFRM_POL_DEFAULT_OUT 2 -#define XFRM_POL_DEFAULT_FWD 4 -#define XFRM_POL_DEFAULT_MASK 7 - #ifdef CONFIG_SYSCTL struct ctl_table_header *sysctl_hdr; #endif diff --git a/include/net/xfrm.h b/include/net/xfrm.h index 5ef446431358..5c022fc018a4 100644 --- a/include/net/xfrm.h +++ b/include/net/xfrm.h @@ -1083,22 +1083,6 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, un } #ifdef CONFIG_XFRM -static inline bool -xfrm_default_allow(struct net *net, int dir) -{ - u8 def = net->xfrm.policy_default; - - switch (dir) { - case XFRM_POLICY_IN: - return def & XFRM_POL_DEFAULT_IN ? false : true; - case XFRM_POLICY_OUT: - return def & XFRM_POL_DEFAULT_OUT ? false : true; - case XFRM_POLICY_FWD: - return def & XFRM_POL_DEFAULT_FWD ? 
false : true; - } - return false; -} - int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb, unsigned short family); @@ -1112,13 +1096,9 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir, if (sk && sk->sk_policy[XFRM_POLICY_IN]) return __xfrm_policy_check(sk, ndir, skb, family); - if (xfrm_default_allow(net, dir)) - return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) || - (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || - __xfrm_policy_check(sk, ndir, skb, family); - else - return (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || - __xfrm_policy_check(sk, ndir, skb, family); + return (!net->xfrm.policy_count[dir] && !secpath_exists(skb)) || + (skb_dst(skb) && (skb_dst(skb)->flags & DST_NOPOLICY)) || + __xfrm_policy_check(sk, ndir, skb, family); } static inline int xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, unsigned short family) @@ -1170,13 +1150,9 @@ static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family) { struct net *net = dev_net(skb->dev); - if (xfrm_default_allow(net, XFRM_POLICY_FWD)) - return !net->xfrm.policy_count[XFRM_POLICY_OUT] || - (skb_dst(skb)->flags & DST_NOXFRM) || - __xfrm_route_forward(skb, family); - else - return (skb_dst(skb)->flags & DST_NOXFRM) || - __xfrm_route_forward(skb, family); + return !net->xfrm.policy_count[XFRM_POLICY_OUT] || + (skb_dst(skb)->flags & DST_NOXFRM) || + __xfrm_route_forward(skb, family); } static inline int xfrm4_route_forward(struct sk_buff *skb) diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h index 5f19800b41c5..66073c082a06 100644 --- a/include/uapi/linux/xfrm.h +++ b/include/uapi/linux/xfrm.h @@ -213,11 +213,6 @@ enum { XFRM_MSG_GETSPDINFO, #define XFRM_MSG_GETSPDINFO XFRM_MSG_GETSPDINFO - XFRM_MSG_SETDEFAULT, -#define XFRM_MSG_SETDEFAULT XFRM_MSG_SETDEFAULT - XFRM_MSG_GETDEFAULT, -#define XFRM_MSG_GETDEFAULT XFRM_MSG_GETDEFAULT - XFRM_MSG_MAPPING, #define XFRM_MSG_MAPPING XFRM_MSG_MAPPING __XFRM_MSG_MAX @@ -519,11 +514,6 @@ struct xfrm_user_offload { #define XFRM_OFFLOAD_IPV6 1 #define XFRM_OFFLOAD_INBOUND 2 -struct xfrm_userpolicy_default { - __u8 dirmask; - __u8 action; -}; - #ifndef __KERNEL__ /* backwards compatibility for userspace */ #define XFRMGRP_ACQUIRE 1 diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index 7b7f7d9504a5..9c5811e4ebc4 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -3166,11 +3166,6 @@ struct dst_entry *xfrm_lookup_with_ifid(struct net *net, return dst; nopol: - if (!(dst_orig->dev->flags & IFF_LOOPBACK) && - !xfrm_default_allow(net, dir)) { - err = -EPERM; - goto error; - } if (!(flags & XFRM_LOOKUP_ICMP)) { dst = dst_orig; goto ok; @@ -3618,11 +3613,6 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, } if (!pol) { - if (!xfrm_default_allow(net, dir)) { - XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS); - return 0; - } - if (sp && secpath_has_nontransport(sp, 0, &xerr_idx)) { xfrm_secpath_reject(xerr_idx, skb, &fl); XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOPOLS); @@ -3677,12 +3667,6 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, tpp[ti++] = &pols[pi]->xfrm_vec[i]; } xfrm_nr = ti; - - if (!xfrm_default_allow(net, dir) && !xfrm_nr) { - XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES); - goto reject; - } - if (npols > 1) { xfrm_tmpl_sort(stp, tpp, xfrm_nr, family); tpp = stp; diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index 344d7d34491b..b7f95fde8ac0 100644 --- a/net/xfrm/xfrm_user.c +++ 
b/net/xfrm/xfrm_user.c @@ -1901,54 +1901,6 @@ static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb, return skb; } -static int xfrm_set_default(struct sk_buff *skb, struct nlmsghdr *nlh, - struct nlattr **attrs) -{ - struct net *net = sock_net(skb->sk); - struct xfrm_userpolicy_default *up = nlmsg_data(nlh); - u8 dirmask = (1 << up->dirmask) & XFRM_POL_DEFAULT_MASK; - u8 old_default = net->xfrm.policy_default; - - net->xfrm.policy_default = (old_default & (0xff ^ dirmask)) - | (up->action << up->dirmask); - - rt_genid_bump_all(net); - - return 0; -} - -static int xfrm_get_default(struct sk_buff *skb, struct nlmsghdr *nlh, - struct nlattr **attrs) -{ - struct sk_buff *r_skb; - struct nlmsghdr *r_nlh; - struct net *net = sock_net(skb->sk); - struct xfrm_userpolicy_default *r_up, *up; - int len = NLMSG_ALIGN(sizeof(struct xfrm_userpolicy_default)); - u32 portid = NETLINK_CB(skb).portid; - u32 seq = nlh->nlmsg_seq; - - up = nlmsg_data(nlh); - - r_skb = nlmsg_new(len, GFP_ATOMIC); - if (!r_skb) - return -ENOMEM; - - r_nlh = nlmsg_put(r_skb, portid, seq, XFRM_MSG_GETDEFAULT, sizeof(*r_up), 0); - if (!r_nlh) { - kfree_skb(r_skb); - return -EMSGSIZE; - } - - r_up = nlmsg_data(r_nlh); - - r_up->action = ((net->xfrm.policy_default & (1 << up->dirmask)) >> up->dirmask); - r_up->dirmask = up->dirmask; - nlmsg_end(r_skb, r_nlh); - - return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid); -} - static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh, struct nlattr **attrs) { @@ -2656,8 +2608,6 @@ const int xfrm_msg_min[XFRM_NR_MSGTYPES] = { [XFRM_MSG_GETSADINFO - XFRM_MSG_BASE] = sizeof(u32), [XFRM_MSG_NEWSPDINFO - XFRM_MSG_BASE] = sizeof(u32), [XFRM_MSG_GETSPDINFO - XFRM_MSG_BASE] = sizeof(u32), - [XFRM_MSG_SETDEFAULT - XFRM_MSG_BASE] = XMSGSIZE(xfrm_userpolicy_default), - [XFRM_MSG_GETDEFAULT - XFRM_MSG_BASE] = XMSGSIZE(xfrm_userpolicy_default), }; EXPORT_SYMBOL_GPL(xfrm_msg_min); @@ -2737,8 +2687,6 @@ static const struct xfrm_link { .nla_pol = xfrma_spd_policy, .nla_max = XFRMA_SPD_MAX }, [XFRM_MSG_GETSPDINFO - XFRM_MSG_BASE] = { .doit = xfrm_get_spdinfo }, - [XFRM_MSG_SETDEFAULT - XFRM_MSG_BASE] = { .doit = xfrm_set_default }, - [XFRM_MSG_GETDEFAULT - XFRM_MSG_BASE] = { .doit = xfrm_get_default }, }; static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, From ee4677b78eca67f9c89b2452e9e9d37cc2a7436d Mon Sep 17 00:00:00 2001 From: Christian Borntraeger Date: Mon, 30 May 2022 11:27:06 +0200 Subject: [PATCH 1044/1842] s390/mm: use non-quiescing sske for KVM switch to keyed guest commit 3ae11dbcfac906a8c3a480e98660a823130dc16a upstream. The switch to a keyed guest does not require a classic sske as the other guest CPUs are not accessing the key before the switch is complete. By using the NQ SSKE things are faster especially with multiple guests. 
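For context, here is an illustrative sketch of the helper whose call site the one-line diff below changes. It is reconstructed from memory of the 5.10 s390 headers, so treat the exact inline assembly as an assumption rather than a quotation; the relevant point is only that the third argument selects between the classic quiescing SSKE and the non-quiescing (NQ) form.

/* sketch only -- not part of this patch */
static inline void page_set_storage_key(unsigned long addr,
					unsigned char skey, int mapped)
{
	if (!mapped)
		/* SSKE with the NQ bit set in the M3 field */
		asm volatile(".insn rrf,0xb22b0000,%0,%1,8,0"
			     : : "d" (skey), "a" (addr));
	else
		/* classic, quiescing SSKE */
		asm volatile("sske %0,%1" : : "d" (skey), "a" (addr));
}

Passing 0 as the last argument, as ptep_zap_key() does after this change, therefore picks the non-quiescing variant described above.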
Signed-off-by: Christian Borntraeger Suggested-by: Janis Schoetterl-Glausch Reviewed-by: Claudio Imbrenda Link: https://lore.kernel.org/r/20220530092706.11637-3-borntraeger@linux.ibm.com Signed-off-by: Christian Borntraeger Signed-off-by: Heiko Carstens Signed-off-by: Greg Kroah-Hartman --- arch/s390/mm/pgtable.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c index fabaedddc90c..1c05caf68e7d 100644 --- a/arch/s390/mm/pgtable.c +++ b/arch/s390/mm/pgtable.c @@ -734,7 +734,7 @@ void ptep_zap_key(struct mm_struct *mm, unsigned long addr, pte_t *ptep) pgste_val(pgste) |= PGSTE_GR_BIT | PGSTE_GC_BIT; ptev = pte_val(*ptep); if (!(ptev & _PAGE_INVALID) && (ptev & _PAGE_WRITE)) - page_set_storage_key(ptev & PAGE_MASK, PAGE_DEFAULT_KEY, 1); + page_set_storage_key(ptev & PAGE_MASK, PAGE_DEFAULT_KEY, 0); pgste_set_unlock(ptep, pgste); preempt_enable(); } From 355be6131164c5bacf2e810763835aecb6e01fcb Mon Sep 17 00:00:00 2001 From: Damien Le Moal Date: Mon, 23 May 2022 16:29:10 +0900 Subject: [PATCH 1045/1842] zonefs: fix zonefs_iomap_begin() for reads commit c1c1204c0d0c1dccc1310b9277fb2bd8b663d8fe upstream. If a readahead is issued to a sequential zone file with an offset exactly equal to the current file size, the iomap type is set to IOMAP_UNWRITTEN, which will prevent an IO, but the iomap length is calculated as 0. This causes a WARN_ON() in iomap_iter(): [17309.548939] WARNING: CPU: 3 PID: 2137 at fs/iomap/iter.c:34 iomap_iter+0x9cf/0xe80 [...] [17309.650907] RIP: 0010:iomap_iter+0x9cf/0xe80 [...] [17309.754560] Call Trace: [17309.757078] [17309.759240] ? lock_is_held_type+0xd8/0x130 [17309.763531] iomap_readahead+0x1a8/0x870 [17309.767550] ? iomap_read_folio+0x4c0/0x4c0 [17309.771817] ? lockdep_hardirqs_on_prepare+0x400/0x400 [17309.778848] ? lock_release+0x370/0x750 [17309.784462] ? folio_add_lru+0x217/0x3f0 [17309.790220] ? reacquire_held_locks+0x4e0/0x4e0 [17309.796543] read_pages+0x17d/0xb60 [17309.801854] ? folio_add_lru+0x238/0x3f0 [17309.807573] ? readahead_expand+0x5f0/0x5f0 [17309.813554] ? policy_node+0xb5/0x140 [17309.819018] page_cache_ra_unbounded+0x27d/0x450 [17309.825439] filemap_get_pages+0x500/0x1450 [17309.831444] ? filemap_add_folio+0x140/0x140 [17309.837519] ? lock_is_held_type+0xd8/0x130 [17309.843509] filemap_read+0x28c/0x9f0 [17309.848953] ? zonefs_file_read_iter+0x1ea/0x4d0 [zonefs] [17309.856162] ? trace_contention_end+0xd6/0x130 [17309.862416] ? __mutex_lock+0x221/0x1480 [17309.868151] ? zonefs_file_read_iter+0x166/0x4d0 [zonefs] [17309.875364] ? filemap_get_pages+0x1450/0x1450 [17309.881647] ? __mutex_unlock_slowpath+0x15e/0x620 [17309.888248] ? wait_for_completion_io_timeout+0x20/0x20 [17309.895231] ? lock_is_held_type+0xd8/0x130 [17309.901115] ? lock_is_held_type+0xd8/0x130 [17309.906934] zonefs_file_read_iter+0x356/0x4d0 [zonefs] [17309.913750] new_sync_read+0x2d8/0x520 [17309.919035] ? __x64_sys_lseek+0x1d0/0x1d0 Furthermore, this causes iomap_readahead() to loop forever as iomap_readahead_iter() always returns 0, making no progress. Fix this by treating reads after the file size as access to holes, setting the iomap type to IOMAP_HOLE, the iomap addr to IOMAP_NULL_ADDR and using the length argument as is for the iomap length. To simplify the code with this change, zonefs_iomap_begin() is split into the read variant, zonefs_read_iomap_begin() and zonefs_read_iomap_ops, and the write variant, zonefs_write_iomap_begin() and zonefs_write_iomap_ops. 
Reported-by: Jorgen Hansen Fixes: 8dcc1a9d90c1 ("fs: New zonefs file system") Signed-off-by: Damien Le Moal Reviewed-by: Christoph Hellwig Reviewed-by: Johannes Thumshirn Reviewed-by: Jorgen Hansen Signed-off-by: Greg Kroah-Hartman --- fs/zonefs/super.c | 96 +++++++++++++++++++++++++++++++---------------- 1 file changed, 64 insertions(+), 32 deletions(-) diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c index 8c7d01e907a3..bf5cb6efb8c0 100644 --- a/fs/zonefs/super.c +++ b/fs/zonefs/super.c @@ -68,15 +68,49 @@ static inline void zonefs_i_size_write(struct inode *inode, loff_t isize) zi->i_flags &= ~ZONEFS_ZONE_OPEN; } -static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length, - unsigned int flags, struct iomap *iomap, - struct iomap *srcmap) +static int zonefs_read_iomap_begin(struct inode *inode, loff_t offset, + loff_t length, unsigned int flags, + struct iomap *iomap, struct iomap *srcmap) { struct zonefs_inode_info *zi = ZONEFS_I(inode); struct super_block *sb = inode->i_sb; loff_t isize; - /* All I/Os should always be within the file maximum size */ + /* + * All blocks are always mapped below EOF. If reading past EOF, + * act as if there is a hole up to the file maximum size. + */ + mutex_lock(&zi->i_truncate_mutex); + iomap->bdev = inode->i_sb->s_bdev; + iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize); + isize = i_size_read(inode); + if (iomap->offset >= isize) { + iomap->type = IOMAP_HOLE; + iomap->addr = IOMAP_NULL_ADDR; + iomap->length = length; + } else { + iomap->type = IOMAP_MAPPED; + iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset; + iomap->length = isize - iomap->offset; + } + mutex_unlock(&zi->i_truncate_mutex); + + return 0; +} + +static const struct iomap_ops zonefs_read_iomap_ops = { + .iomap_begin = zonefs_read_iomap_begin, +}; + +static int zonefs_write_iomap_begin(struct inode *inode, loff_t offset, + loff_t length, unsigned int flags, + struct iomap *iomap, struct iomap *srcmap) +{ + struct zonefs_inode_info *zi = ZONEFS_I(inode); + struct super_block *sb = inode->i_sb; + loff_t isize; + + /* All write I/Os should always be within the file maximum size */ if (WARN_ON_ONCE(offset + length > zi->i_max_size)) return -EIO; @@ -86,7 +120,7 @@ static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length, * operation. */ if (WARN_ON_ONCE(zi->i_ztype == ZONEFS_ZTYPE_SEQ && - (flags & IOMAP_WRITE) && !(flags & IOMAP_DIRECT))) + !(flags & IOMAP_DIRECT))) return -EIO; /* @@ -95,45 +129,42 @@ static int zonefs_iomap_begin(struct inode *inode, loff_t offset, loff_t length, * write pointer) and unwriten beyond. 
*/ mutex_lock(&zi->i_truncate_mutex); - isize = i_size_read(inode); - if (offset >= isize) - iomap->type = IOMAP_UNWRITTEN; - else - iomap->type = IOMAP_MAPPED; - if (flags & IOMAP_WRITE) - length = zi->i_max_size - offset; - else - length = min(length, isize - offset); - mutex_unlock(&zi->i_truncate_mutex); - - iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize); - iomap->length = ALIGN(offset + length, sb->s_blocksize) - iomap->offset; iomap->bdev = inode->i_sb->s_bdev; + iomap->offset = ALIGN_DOWN(offset, sb->s_blocksize); iomap->addr = (zi->i_zsector << SECTOR_SHIFT) + iomap->offset; + isize = i_size_read(inode); + if (iomap->offset >= isize) { + iomap->type = IOMAP_UNWRITTEN; + iomap->length = zi->i_max_size - iomap->offset; + } else { + iomap->type = IOMAP_MAPPED; + iomap->length = isize - iomap->offset; + } + mutex_unlock(&zi->i_truncate_mutex); return 0; } -static const struct iomap_ops zonefs_iomap_ops = { - .iomap_begin = zonefs_iomap_begin, +static const struct iomap_ops zonefs_write_iomap_ops = { + .iomap_begin = zonefs_write_iomap_begin, }; static int zonefs_readpage(struct file *unused, struct page *page) { - return iomap_readpage(page, &zonefs_iomap_ops); + return iomap_readpage(page, &zonefs_read_iomap_ops); } static void zonefs_readahead(struct readahead_control *rac) { - iomap_readahead(rac, &zonefs_iomap_ops); + iomap_readahead(rac, &zonefs_read_iomap_ops); } /* * Map blocks for page writeback. This is used only on conventional zone files, * which implies that the page range can only be within the fixed inode size. */ -static int zonefs_map_blocks(struct iomap_writepage_ctx *wpc, - struct inode *inode, loff_t offset) +static int zonefs_write_map_blocks(struct iomap_writepage_ctx *wpc, + struct inode *inode, loff_t offset) { struct zonefs_inode_info *zi = ZONEFS_I(inode); @@ -147,12 +178,12 @@ static int zonefs_map_blocks(struct iomap_writepage_ctx *wpc, offset < wpc->iomap.offset + wpc->iomap.length) return 0; - return zonefs_iomap_begin(inode, offset, zi->i_max_size - offset, - IOMAP_WRITE, &wpc->iomap, NULL); + return zonefs_write_iomap_begin(inode, offset, zi->i_max_size - offset, + IOMAP_WRITE, &wpc->iomap, NULL); } static const struct iomap_writeback_ops zonefs_writeback_ops = { - .map_blocks = zonefs_map_blocks, + .map_blocks = zonefs_write_map_blocks, }; static int zonefs_writepage(struct page *page, struct writeback_control *wbc) @@ -182,7 +213,8 @@ static int zonefs_swap_activate(struct swap_info_struct *sis, return -EINVAL; } - return iomap_swapfile_activate(sis, swap_file, span, &zonefs_iomap_ops); + return iomap_swapfile_activate(sis, swap_file, span, + &zonefs_read_iomap_ops); } static const struct address_space_operations zonefs_file_aops = { @@ -612,7 +644,7 @@ static vm_fault_t zonefs_filemap_page_mkwrite(struct vm_fault *vmf) /* Serialize against truncates */ down_read(&zi->i_mmap_sem); - ret = iomap_page_mkwrite(vmf, &zonefs_iomap_ops); + ret = iomap_page_mkwrite(vmf, &zonefs_write_iomap_ops); up_read(&zi->i_mmap_sem); sb_end_pagefault(inode->i_sb); @@ -869,7 +901,7 @@ static ssize_t zonefs_file_dio_write(struct kiocb *iocb, struct iov_iter *from) if (append) ret = zonefs_file_dio_append(iocb, from); else - ret = iomap_dio_rw(iocb, from, &zonefs_iomap_ops, + ret = iomap_dio_rw(iocb, from, &zonefs_write_iomap_ops, &zonefs_write_dio_ops, sync); if (zi->i_ztype == ZONEFS_ZTYPE_SEQ && (ret > 0 || ret == -EIOCBQUEUED)) { @@ -911,7 +943,7 @@ static ssize_t zonefs_file_buffered_write(struct kiocb *iocb, if (ret <= 0) goto inode_unlock; - ret = 
iomap_file_buffered_write(iocb, from, &zonefs_iomap_ops); + ret = iomap_file_buffered_write(iocb, from, &zonefs_write_iomap_ops); if (ret > 0) iocb->ki_pos += ret; else if (ret == -EIO) @@ -1004,7 +1036,7 @@ static ssize_t zonefs_file_read_iter(struct kiocb *iocb, struct iov_iter *to) goto inode_unlock; } file_accessed(iocb->ki_filp); - ret = iomap_dio_rw(iocb, to, &zonefs_iomap_ops, + ret = iomap_dio_rw(iocb, to, &zonefs_read_iomap_ops, &zonefs_read_dio_ops, is_sync_kiocb(iocb)); } else { ret = generic_file_read_iter(iocb, to); From 16b1994679a0ad7f12a3b40b928e70241303d6f4 Mon Sep 17 00:00:00 2001 From: Marian Postevca Date: Fri, 3 Jun 2022 18:34:59 +0300 Subject: [PATCH 1046/1842] usb: gadget: u_ether: fix regression in setting fixed MAC address commit b337af3a4d6147000b7ca6b3438bf5c820849b37 upstream. In systemd systems setting a fixed MAC address through the "dev_addr" module argument fails systematically. When checking the MAC address after the interface is created it always has the same but different MAC address to the one supplied as argument. This is partially caused by systemd which by default will set an internally generated permanent MAC address for interfaces that are marked as having a randomly generated address. Commit 890d5b40908bfd1a ("usb: gadget: u_ether: fix race in setting MAC address in setup phase") didn't take into account the fact that the interface must be marked as having a set MAC address when it's set as module argument. Fixed by marking the interface with NET_ADDR_SET when the "dev_addr" module argument is supplied. Fixes: 890d5b40908bfd1a ("usb: gadget: u_ether: fix race in setting MAC address in setup phase") Cc: stable@vger.kernel.org Signed-off-by: Marian Postevca Link: https://lore.kernel.org/r/20220603153459.32722-1-posteuca@mutex.one Signed-off-by: Greg Kroah-Hartman --- drivers/usb/gadget/function/u_ether.c | 11 +++++++++-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c index a40be8b448c2..64ef97ab9274 100644 --- a/drivers/usb/gadget/function/u_ether.c +++ b/drivers/usb/gadget/function/u_ether.c @@ -772,9 +772,13 @@ struct eth_dev *gether_setup_name(struct usb_gadget *g, dev->qmult = qmult; snprintf(net->name, sizeof(net->name), "%s%%d", netname); - if (get_ether_addr(dev_addr, net->dev_addr)) + if (get_ether_addr(dev_addr, net->dev_addr)) { + net->addr_assign_type = NET_ADDR_RANDOM; dev_warn(&g->dev, "using random %s ethernet address\n", "self"); + } else { + net->addr_assign_type = NET_ADDR_SET; + } if (get_ether_addr(host_addr, dev->host_mac)) dev_warn(&g->dev, "using random %s ethernet address\n", "host"); @@ -831,6 +835,9 @@ struct net_device *gether_setup_name_default(const char *netname) INIT_LIST_HEAD(&dev->tx_reqs); INIT_LIST_HEAD(&dev->rx_reqs); + /* by default we always have a random MAC address */ + net->addr_assign_type = NET_ADDR_RANDOM; + skb_queue_head_init(&dev->rx_frames); /* network device setup */ @@ -868,7 +875,6 @@ int gether_register_netdev(struct net_device *net) g = dev->gadget; memcpy(net->dev_addr, dev->dev_mac, ETH_ALEN); - net->addr_assign_type = NET_ADDR_RANDOM; status = register_netdev(net); if (status < 0) { @@ -908,6 +914,7 @@ int gether_set_dev_addr(struct net_device *net, const char *dev_addr) if (get_ether_addr(dev_addr, new_addr)) return -EINVAL; memcpy(dev->dev_mac, new_addr, ETH_ALEN); + net->addr_assign_type = NET_ADDR_SET; return 0; } EXPORT_SYMBOL_GPL(gether_set_dev_addr); From 743acb520799bd8987edb7d1282b7eeddeb8f986 Mon 
Sep 17 00:00:00 2001 From: Eric Dumazet Date: Tue, 9 Feb 2021 11:20:28 -0800 Subject: [PATCH 1047/1842] tcp: add some entropy in __inet_hash_connect() commit c579bd1b4021c42ae247108f1e6f73dd3f08600c upstream. Even when implementing RFC 6056 3.3.4 (Algorithm 4: Double-Hash Port Selection Algorithm), a patient attacker could still be able to collect enough state from an otherwise idle host. Idea of this patch is to inject some noise, in the cases __inet_hash_connect() found a candidate in the first attempt. This noise should not significantly reduce the collision avoidance, and should be zero if connection table is already well used. Note that this is not implementing RFC 6056 3.3.5 because we think Algorithm 5 could hurt typical workloads. Signed-off-by: Eric Dumazet Cc: David Dworken Cc: Willem de Bruijn Signed-off-by: David S. Miller Cc: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- net/ipv4/inet_hashtables.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 44b524136f95..8c2d8e4dabfa 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -833,6 +833,11 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, return -EADDRNOTAVAIL; ok: + /* If our first attempt found a candidate, skip next candidate + * in 1/16 of cases to add some noise. + */ + if (!i && !(prandom_u32() % 16)) + i = 2; WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2); /* Head lock still held and bh's disabled */ From dd46a868fcfdf3aac8ffb20b2321e174a0156fb2 Mon Sep 17 00:00:00 2001 From: Willy Tarreau Date: Mon, 2 May 2022 10:46:09 +0200 Subject: [PATCH 1048/1842] tcp: use different parts of the port_offset for index and offset commit 9e9b70ae923baf2b5e8a0ea4fd0c8451801ac526 upstream. Amit Klein suggests that we use different parts of port_offset for the table's index and the port offset so that there is no direct relation between them. Cc: Jason A. Donenfeld Cc: Moshe Kol Cc: Yossi Gilad Cc: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Cc: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- net/ipv4/inet_hashtables.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 8c2d8e4dabfa..bf26b7efafbd 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -777,7 +777,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, net_get_random_once(table_perturb, sizeof(table_perturb)); index = hash_32(port_offset, INET_TABLE_PERTURB_SHIFT); - offset = READ_ONCE(table_perturb[index]) + port_offset; + offset = READ_ONCE(table_perturb[index]) + (port_offset >> 32); offset %= remaining; /* In first pass we try ports of @low parity. From d28e64b1c63eced06aedadcacb0be4997c10c7c1 Mon Sep 17 00:00:00 2001 From: Willy Tarreau Date: Mon, 2 May 2022 10:46:11 +0200 Subject: [PATCH 1049/1842] tcp: add small random increments to the source port commit ca7af0402550f9a0b3316d5f1c30904e42ed257d upstream. Here we're randomly adding between 0 and 7 random increments to the selected source port in order to add some noise in the source port selection that will make the next port less predictable. With the default port range of 32768-60999 this means a worst case reuse scenario of 14116/8=1764 connections between two consecutive uses of the same port, with an average of 14116/4.5=3137. 
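As an aside that is not part of the patch, the figures above can be reconstructed as follows, assuming the default ip_local_port_range of 32768-60999 and the same-parity port walk used by __inet_hash_connect(): 60999 - 32768 + 1 = 28232 ports, of which one parity, i.e. 28232 / 2 = 14116 candidates, is scanned per connect. The increment applied below is i + 2, with i forced to at least (prandom_u32() & 7) * 2, so the step is an even value between 2 and 16 port numbers, or 1 to 8 same-parity slots. A worst-case constant step of 8 slots revisits the same port after roughly 14116 / 8 = 1764 connections, while the average step of 4.5 slots gives 14116 / 4.5 = 3137.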
This code was stressed at more than 800000 connections per second to a fixed target with all connections closed by the client using RSTs (worst condition) and only 2 connections failed among 13 billion, despite the hash being reseeded every 10 seconds, indicating a perfectly safe situation. Cc: Moshe Kol Cc: Yossi Gilad Cc: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Cc: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- net/ipv4/inet_hashtables.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index bf26b7efafbd..0eb6fc6cfa3a 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -833,11 +833,12 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, return -EADDRNOTAVAIL; ok: - /* If our first attempt found a candidate, skip next candidate - * in 1/16 of cases to add some noise. + /* Here we want to add a little bit of randomness to the next source + * port that will be chosen. We use a max() with a random here so that + * on low contention the randomness is maximal and on high contention + * it may be inexistent. */ - if (!i && !(prandom_u32() % 16)) - i = 2; + i = max_t(int, i, (prandom_u32() & 7) * 2); WRITE_ONCE(table_perturb[index], READ_ONCE(table_perturb[index]) + i + 2); /* Head lock still held and bh's disabled */ From 24b922a5da0055f1bb8b391b83e494d2e5d56508 Mon Sep 17 00:00:00 2001 From: Willy Tarreau Date: Mon, 2 May 2022 10:46:12 +0200 Subject: [PATCH 1050/1842] tcp: dynamically allocate the perturb table used by source ports commit e9261476184be1abd486c9434164b2acbe0ed6c2 upstream. We'll need to further increase the size of this table and it's likely that at some point its size will not be suitable anymore for a static table. Let's allocate it on boot from inet_hashinfo2_init(), which is called from tcp_init(). Cc: Moshe Kol Cc: Yossi Gilad Cc: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Cc: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- net/ipv4/inet_hashtables.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 0eb6fc6cfa3a..331719cdc431 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -731,7 +731,8 @@ EXPORT_SYMBOL_GPL(inet_unhash); * privacy, this only consumes 1 KB of kernel memory. 
*/ #define INET_TABLE_PERTURB_SHIFT 8 -static u32 table_perturb[1 << INET_TABLE_PERTURB_SHIFT]; +#define INET_TABLE_PERTURB_SIZE (1 << INET_TABLE_PERTURB_SHIFT) +static u32 *table_perturb; int __inet_hash_connect(struct inet_timewait_death_row *death_row, struct sock *sk, u64 port_offset, @@ -774,7 +775,8 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, if (likely(remaining > 1)) remaining &= ~1U; - net_get_random_once(table_perturb, sizeof(table_perturb)); + net_get_random_once(table_perturb, + INET_TABLE_PERTURB_SIZE * sizeof(*table_perturb)); index = hash_32(port_offset, INET_TABLE_PERTURB_SHIFT); offset = READ_ONCE(table_perturb[index]) + (port_offset >> 32); @@ -912,6 +914,12 @@ void __init inet_hashinfo2_init(struct inet_hashinfo *h, const char *name, low_limit, high_limit); init_hashinfo_lhash2(h); + + /* this one is used for source ports of outgoing connections */ + table_perturb = kmalloc_array(INET_TABLE_PERTURB_SIZE, + sizeof(*table_perturb), GFP_KERNEL); + if (!table_perturb) + panic("TCP: failed to alloc table_perturb"); } int inet_hashinfo2_init_mod(struct inet_hashinfo *h) From 9429b75bc271b6f29e50dbb0ee0751800ff87dd9 Mon Sep 17 00:00:00 2001 From: Willy Tarreau Date: Mon, 2 May 2022 10:46:13 +0200 Subject: [PATCH 1051/1842] tcp: increase source port perturb table to 2^16 commit 4c2c8f03a5ab7cb04ec64724d7d176d00bcc91e5 upstream. Moshe Kol, Amit Klein, and Yossi Gilad reported being able to accurately identify a client by forcing it to emit only 40 times more connections than there are entries in the table_perturb[] table. The previous two improvements consisting in resalting the secret every 10s and adding randomness to each port selection only slightly improved the situation, and the current value of 2^8 was too small as it's not very difficult to make a client emit 10k connections in less than 10 seconds. Thus we're increasing the perturb table from 2^8 to 2^16 so that the same precision now requires 2.6M connections, which is more difficult in this time frame and harder to hide as a background activity. The impact is that the table now uses 256 kB instead of 1 kB, which could mostly affect devices making frequent outgoing connections. However such components usually target a small set of destinations (load balancers, database clients, perf assessment tools), and in practice only a few entries will be visited, like before. A live test at 1 million connections per second showed no performance difference from the previous value. Reported-by: Moshe Kol Reported-by: Yossi Gilad Reported-by: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Cc: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- net/ipv4/inet_hashtables.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index 331719cdc431..fd058ef2aa9f 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -726,11 +726,12 @@ EXPORT_SYMBOL_GPL(inet_unhash); * Note that we use 32bit integers (vs RFC 'short integers') * because 2^16 is not a multiple of num_ephemeral and this * property might be used by clever attacker. - * RFC claims using TABLE_LENGTH=10 buckets gives an improvement, - * we use 256 instead to really give more isolation and - * privacy, this only consumes 1 KB of kernel memory. 
+ * RFC claims using TABLE_LENGTH=10 buckets gives an improvement, though + * attacks were since demonstrated, thus we use 65536 instead to really + * give more isolation and privacy, at the expense of 256kB of kernel + * memory. */ -#define INET_TABLE_PERTURB_SHIFT 8 +#define INET_TABLE_PERTURB_SHIFT 16 #define INET_TABLE_PERTURB_SIZE (1 << INET_TABLE_PERTURB_SHIFT) static u32 *table_perturb; From 7ccb026ecb997405b59d391140c25ee347891504 Mon Sep 17 00:00:00 2001 From: Willy Tarreau Date: Mon, 2 May 2022 10:46:14 +0200 Subject: [PATCH 1052/1842] tcp: drop the hash_32() part from the index calculation commit e8161345ddbb66e449abde10d2fdce93f867eba9 upstream. In commit 190cc82489f4 ("tcp: change source port randomizarion at connect() time"), the table_perturb[] array was introduced and an index was taken from the port_offset via hash_32(). But it turns out that hash_32() performs a multiplication while the input here comes from the output of SipHash in secure_seq, that is well distributed enough to avoid the need for yet another hash. Suggested-by: Amit Klein Reviewed-by: Eric Dumazet Signed-off-by: Willy Tarreau Signed-off-by: Jakub Kicinski Cc: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- net/ipv4/inet_hashtables.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c index fd058ef2aa9f..f38b71cc3edb 100644 --- a/net/ipv4/inet_hashtables.c +++ b/net/ipv4/inet_hashtables.c @@ -778,7 +778,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, net_get_random_once(table_perturb, INET_TABLE_PERTURB_SIZE * sizeof(*table_perturb)); - index = hash_32(port_offset, INET_TABLE_PERTURB_SHIFT); + index = port_offset & (INET_TABLE_PERTURB_SIZE - 1); offset = READ_ONCE(table_perturb[index]) + (port_offset >> 32); offset %= remaining; From a1508d164e58dc0857a1103a014d78f321d7481f Mon Sep 17 00:00:00 2001 From: Lukas Wunner Date: Sun, 23 Jan 2022 05:21:14 +0100 Subject: [PATCH 1053/1842] serial: core: Initialize rs485 RTS polarity already on probe commit 2dd8a74fddd21b95dcc60a2d3c9eaec993419d69 upstream. RTS polarity of rs485-enabled ports is currently initialized on uart open via: tty_port_open() tty_port_block_til_ready() tty_port_raise_dtr_rts() # if (C_BAUD(tty)) uart_dtr_rts() uart_port_dtr_rts() There's at least three problems here: First, if no baud rate is set, RTS polarity is not initialized. That's the right thing to do for rs232, but not for rs485, which requires that RTS is deasserted unconditionally. Second, if the DeviceTree property "linux,rs485-enabled-at-boot-time" is present, RTS should be deasserted as early as possible, i.e. on probe. Otherwise it may remain asserted until first open. Third, even though RTS is deasserted on open and close, it may subsequently be asserted by uart_throttle(), uart_unthrottle() or uart_set_termios() because those functions aren't rs485-aware. (Only uart_tiocmset() is.) To address these issues, move RTS initialization from uart_port_dtr_rts() to uart_configure_port(). Prevent subsequent modification of RTS polarity by moving the existing rs485 check from uart_tiocmget() to uart_update_mctrl(). That way, RTS is initialized on probe and then remains unmodified unless the uart transmits data. If rs485 is enabled at runtime (instead of at boot) through a TIOCSRS485 ioctl(), RTS is initialized by the uart driver's ->rs485_config() callback and then likewise remains unmodified. 
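As an illustration of that runtime path, and not something taken from this patch, a minimal userspace sketch of enabling rs485 through TIOCSRS485 on an already-opened tty file descriptor might look like this (error handling trimmed, flags chosen arbitrarily):

#include <sys/ioctl.h>
#include <linux/serial.h>

/* sketch only -- not part of this patch */
static int enable_rs485(int fd)
{
	struct serial_rs485 rs485 = {
		/* assert RTS while sending; match the flags to the actual wiring */
		.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND,
	};

	return ioctl(fd, TIOCSRS485, &rs485);
}

Once such an ioctl succeeds, the driver's ->rs485_config() callback has set up RTS, and the check added to uart_update_mctrl() below is what keeps the core from flipping it afterwards.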
The PL011 driver initializes RTS on uart open and prevents subsequent modification in its ->set_mctrl() callback. That code is obsoleted by the present commit, so drop it. Cc: Jan Kiszka Cc: Su Bao Cheng Signed-off-by: Lukas Wunner Link: https://lore.kernel.org/r/2d2acaf3a69e89b7bf687c912022b11fd29dfa1e.1642909284.git.lukas@wunner.de Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/serial_core.c | 34 +++++++++++--------------------- 1 file changed, 12 insertions(+), 22 deletions(-) diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c index 19f0c5db11e3..32d09d024f6c 100644 --- a/drivers/tty/serial/serial_core.c +++ b/drivers/tty/serial/serial_core.c @@ -144,6 +144,11 @@ uart_update_mctrl(struct uart_port *port, unsigned int set, unsigned int clear) unsigned long flags; unsigned int old; + if (port->rs485.flags & SER_RS485_ENABLED) { + set &= ~TIOCM_RTS; + clear &= ~TIOCM_RTS; + } + spin_lock_irqsave(&port->lock, flags); old = port->mctrl; port->mctrl = (old & ~clear) | set; @@ -157,23 +162,10 @@ uart_update_mctrl(struct uart_port *port, unsigned int set, unsigned int clear) static void uart_port_dtr_rts(struct uart_port *uport, int raise) { - int rs485_on = uport->rs485_config && - (uport->rs485.flags & SER_RS485_ENABLED); - int RTS_after_send = !!(uport->rs485.flags & SER_RS485_RTS_AFTER_SEND); - - if (raise) { - if (rs485_on && RTS_after_send) { - uart_set_mctrl(uport, TIOCM_DTR); - uart_clear_mctrl(uport, TIOCM_RTS); - } else { - uart_set_mctrl(uport, TIOCM_DTR | TIOCM_RTS); - } - } else { - unsigned int clear = TIOCM_DTR; - - clear |= (!rs485_on || RTS_after_send) ? TIOCM_RTS : 0; - uart_clear_mctrl(uport, clear); - } + if (raise) + uart_set_mctrl(uport, TIOCM_DTR | TIOCM_RTS); + else + uart_clear_mctrl(uport, TIOCM_DTR | TIOCM_RTS); } /* @@ -1116,11 +1108,6 @@ uart_tiocmset(struct tty_struct *tty, unsigned int set, unsigned int clear) goto out; if (!tty_io_error(tty)) { - if (uport->rs485.flags & SER_RS485_ENABLED) { - set &= ~TIOCM_RTS; - clear &= ~TIOCM_RTS; - } - uart_update_mctrl(uport, set, clear); ret = 0; } @@ -2429,6 +2416,9 @@ uart_configure_port(struct uart_driver *drv, struct uart_state *state, */ spin_lock_irqsave(&port->lock, flags); port->mctrl &= TIOCM_DTR; + if (port->rs485.flags & SER_RS485_ENABLED && + !(port->rs485.flags & SER_RS485_RTS_AFTER_SEND)) + port->mctrl |= TIOCM_RTS; port->ops->set_mctrl(port, port->mctrl); spin_unlock_irqrestore(&port->lock, flags); From 1a264b3a6940b2595dc6b51edf8b1d9a71963fc7 Mon Sep 17 00:00:00 2001 From: Will Deacon Date: Fri, 10 Jun 2022 16:12:27 +0100 Subject: [PATCH 1054/1842] arm64: mm: Don't invalidate FROM_DEVICE buffers at start of DMA transfer commit c50f11c6196f45c92ca48b16a5071615d4ae0572 upstream. Invalidating the buffer memory in arch_sync_dma_for_device() for FROM_DEVICE transfers When using the streaming DMA API to map a buffer prior to inbound non-coherent DMA (i.e. DMA_FROM_DEVICE), we invalidate any dirty CPU cachelines so that they will not be written back during the transfer and corrupt the buffer contents written by the DMA. This, however, poses two potential problems: (1) If the DMA transfer does not write to every byte in the buffer, then the unwritten bytes will contain stale data once the transfer has completed. (2) If the buffer has a virtual alias in userspace, then stale data may be visible via this alias during the period between performing the cache invalidation and the DMA writes landing in memory. 
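For orientation only, and not part of the patch: the caller-visible pattern that reaches arch_sync_dma_for_device() on a non-coherent arm64 system typically looks like the sketch below, with dev, buf and len standing in for a real driver's device and buffer.

#include <linux/dma-mapping.h>

/* sketch only -- not part of this patch */
static int map_inbound_buffer(struct device *dev, void *buf, size_t len,
			      dma_addr_t *handle)
{
	*handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *handle))
		return -ENOMEM;
	/* the device may now DMA into buf; unmap with dma_unmap_single(),
	 * or sync with dma_sync_single_for_cpu(), before the CPU reads it */
	return 0;
}

It is this mapping step whose cache maintenance the change below switches from invalidate to clean.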
Address both of these issues by cleaning (aka writing-back) the dirty lines in arch_sync_dma_for_device(DMA_FROM_DEVICE) instead of discarding them using invalidation. Cc: Ard Biesheuvel Cc: Christoph Hellwig Cc: Robin Murphy Cc: Russell King Cc: Link: https://lore.kernel.org/r/20220606152150.GA31568@willie-the-truck Signed-off-by: Will Deacon Reviewed-by: Ard Biesheuvel Link: https://lore.kernel.org/r/20220610151228.4562-2-will@kernel.org Signed-off-by: Catalin Marinas Signed-off-by: Greg Kroah-Hartman --- arch/arm64/mm/cache.S | 2 -- 1 file changed, 2 deletions(-) diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S index 2d881f34dd9d..7b8158ae36ec 100644 --- a/arch/arm64/mm/cache.S +++ b/arch/arm64/mm/cache.S @@ -228,8 +228,6 @@ SYM_FUNC_END_PI(__dma_flush_area) * - dir - DMA direction */ SYM_FUNC_START_PI(__dma_map_area) - cmp w2, #DMA_FROM_DEVICE - b.eq __dma_inv_area b __dma_clean_area SYM_FUNC_END_PI(__dma_map_area) From df3f3bb5059d20ef094d6b2f0256c4bf4127a859 Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Wed, 22 Jun 2022 14:05:49 -0600 Subject: [PATCH 1055/1842] io_uring: add missing item types for various requests Any read/write should grab current->nsproxy, denoted by IO_WQ_WORK_FILES as it refers to current->files as well, and connect and recv/recvmsg, send/sendmsg should grab current->fs which is denoted by IO_WQ_WORK_FS. No upstream commit exists for this issue. Reported-by: Bing-Jhong Billy Jheng Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- fs/io_uring.c | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index 871475d3fca2..40ac37beca47 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -773,7 +773,8 @@ static const struct io_op_def io_op_defs[] = { .buffer_select = 1, .needs_async_data = 1, .async_size = sizeof(struct io_async_rw), - .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG, + .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG | + IO_WQ_WORK_FILES, }, [IORING_OP_WRITEV] = { .needs_file = 1, @@ -783,7 +784,7 @@ static const struct io_op_def io_op_defs[] = { .needs_async_data = 1, .async_size = sizeof(struct io_async_rw), .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG | - IO_WQ_WORK_FSIZE, + IO_WQ_WORK_FSIZE | IO_WQ_WORK_FILES, }, [IORING_OP_FSYNC] = { .needs_file = 1, @@ -794,7 +795,8 @@ static const struct io_op_def io_op_defs[] = { .unbound_nonreg_file = 1, .pollin = 1, .async_size = sizeof(struct io_async_rw), - .work_flags = IO_WQ_WORK_BLKCG | IO_WQ_WORK_MM, + .work_flags = IO_WQ_WORK_BLKCG | IO_WQ_WORK_MM | + IO_WQ_WORK_FILES, }, [IORING_OP_WRITE_FIXED] = { .needs_file = 1, @@ -803,7 +805,7 @@ static const struct io_op_def io_op_defs[] = { .pollout = 1, .async_size = sizeof(struct io_async_rw), .work_flags = IO_WQ_WORK_BLKCG | IO_WQ_WORK_FSIZE | - IO_WQ_WORK_MM, + IO_WQ_WORK_MM | IO_WQ_WORK_FILES, }, [IORING_OP_POLL_ADD] = { .needs_file = 1, @@ -857,7 +859,7 @@ static const struct io_op_def io_op_defs[] = { .pollout = 1, .needs_async_data = 1, .async_size = sizeof(struct io_async_connect), - .work_flags = IO_WQ_WORK_MM, + .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_FS, }, [IORING_OP_FALLOCATE] = { .needs_file = 1, @@ -885,7 +887,8 @@ static const struct io_op_def io_op_defs[] = { .pollin = 1, .buffer_select = 1, .async_size = sizeof(struct io_async_rw), - .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG, + .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG | + IO_WQ_WORK_FILES, }, [IORING_OP_WRITE] = { .needs_file = 1, @@ -894,7 +897,7 @@ static const struct io_op_def io_op_defs[] = 
{ .pollout = 1, .async_size = sizeof(struct io_async_rw), .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG | - IO_WQ_WORK_FSIZE, + IO_WQ_WORK_FSIZE | IO_WQ_WORK_FILES, }, [IORING_OP_FADVISE] = { .needs_file = 1, @@ -907,14 +910,16 @@ static const struct io_op_def io_op_defs[] = { .needs_file = 1, .unbound_nonreg_file = 1, .pollout = 1, - .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG, + .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG | + IO_WQ_WORK_FS, }, [IORING_OP_RECV] = { .needs_file = 1, .unbound_nonreg_file = 1, .pollin = 1, .buffer_select = 1, - .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG, + .work_flags = IO_WQ_WORK_MM | IO_WQ_WORK_BLKCG | + IO_WQ_WORK_FS, }, [IORING_OP_OPENAT2] = { .work_flags = IO_WQ_WORK_FILES | IO_WQ_WORK_FS | From 6a7c3bcc3c2e2c2e7a42d6601306095060ea938a Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Sat, 25 Jun 2022 15:16:09 +0200 Subject: [PATCH 1056/1842] Linux 5.10.125 Link: https://lore.kernel.org/r/20220623164322.296526800@linuxfoundation.org Tested-by: Salvatore Bonaccorso Tested-by: Pavel Machek (CIP) Tested-by: Florian Fainelli Tested-by: Shuah Khan Tested-by: Hulk Robot Tested-by: Jon Hunter Tested-by: Sudip Mukherjee Tested-by: Guenter Roeck Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 9ed79a05a972..da5b28931e5c 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 124 +SUBLEVEL = 125 EXTRAVERSION = NAME = Dare mighty things From fb2fbb3c10d779c0163c9c2c7ca1aeb75ef3f7ca Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Sun, 26 Jun 2022 18:21:03 -0600 Subject: [PATCH 1057/1842] io_uring: use separate list entry for iopoll requests A previous commit ended up enabling file tracking for iopoll requests, which conflicts with both of them using the same list entry for tracking. Add a separate list entry just for iopoll requests, avoid this issue. No upstream commit exists for this issue. Reported-by: Greg Thelen Fixes: df3f3bb5059d ("io_uring: add missing item types for various requests") Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- fs/io_uring.c | 24 +++++++++++++----------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index 40ac37beca47..2e12dcbc7b0f 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -696,6 +696,8 @@ struct io_kiocb { */ struct list_head inflight_entry; + struct list_head iopoll_entry; + struct percpu_ref *fixed_file_refs; struct callback_head task_work; /* for polled requests, i.e. 
IORING_OP_POLL_ADD and async armed poll */ @@ -2350,8 +2352,8 @@ static void io_iopoll_queue(struct list_head *again) struct io_kiocb *req; do { - req = list_first_entry(again, struct io_kiocb, inflight_entry); - list_del(&req->inflight_entry); + req = list_first_entry(again, struct io_kiocb, iopoll_entry); + list_del(&req->iopoll_entry); __io_complete_rw(req, -EAGAIN, 0, NULL); } while (!list_empty(again)); } @@ -2373,14 +2375,14 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events, while (!list_empty(done)) { int cflags = 0; - req = list_first_entry(done, struct io_kiocb, inflight_entry); + req = list_first_entry(done, struct io_kiocb, iopoll_entry); if (READ_ONCE(req->result) == -EAGAIN) { req->result = 0; req->iopoll_completed = 0; - list_move_tail(&req->inflight_entry, &again); + list_move_tail(&req->iopoll_entry, &again); continue; } - list_del(&req->inflight_entry); + list_del(&req->iopoll_entry); if (req->flags & REQ_F_BUFFER_SELECTED) cflags = io_put_rw_kbuf(req); @@ -2416,7 +2418,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events, spin = !ctx->poll_multi_file && *nr_events < min; ret = 0; - list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, inflight_entry) { + list_for_each_entry_safe(req, tmp, &ctx->iopoll_list, iopoll_entry) { struct kiocb *kiocb = &req->rw.kiocb; /* @@ -2425,7 +2427,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events, * and complete those lists first, if we have entries there. */ if (READ_ONCE(req->iopoll_completed)) { - list_move_tail(&req->inflight_entry, &done); + list_move_tail(&req->iopoll_entry, &done); continue; } if (!list_empty(&done)) @@ -2437,7 +2439,7 @@ static int io_do_iopoll(struct io_ring_ctx *ctx, unsigned int *nr_events, /* iopoll may have completed current req */ if (READ_ONCE(req->iopoll_completed)) - list_move_tail(&req->inflight_entry, &done); + list_move_tail(&req->iopoll_entry, &done); if (ret && spin) spin = false; @@ -2670,7 +2672,7 @@ static void io_iopoll_req_issued(struct io_kiocb *req) struct io_kiocb *list_req; list_req = list_first_entry(&ctx->iopoll_list, struct io_kiocb, - inflight_entry); + iopoll_entry); if (list_req->file != req->file) ctx->poll_multi_file = true; } @@ -2680,9 +2682,9 @@ static void io_iopoll_req_issued(struct io_kiocb *req) * it to the front so we find it first. */ if (READ_ONCE(req->iopoll_completed)) - list_add(&req->inflight_entry, &ctx->iopoll_list); + list_add(&req->iopoll_entry, &ctx->iopoll_list); else - list_add_tail(&req->inflight_entry, &ctx->iopoll_list); + list_add_tail(&req->iopoll_entry, &ctx->iopoll_list); if ((ctx->flags & IORING_SETUP_SQPOLL) && wq_has_sleeper(&ctx->sq_data->wait)) From 9cae50bdfafa0ce87eb2693401efeae2cd30b417 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Mon, 27 Jun 2022 09:41:01 +0200 Subject: [PATCH 1058/1842] Linux 5.10.126 Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index da5b28931e5c..57434487c2b4 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 125 +SUBLEVEL = 126 EXTRAVERSION = NAME = Dare mighty things From 3acb7dc242ca25eb258493b513ef2f4b0f2a9ad1 Mon Sep 17 00:00:00 2001 From: Jiri Slaby Date: Tue, 5 Jan 2021 13:02:35 +0100 Subject: [PATCH 1059/1842] vt: drop old FONT ioctls commit ff2047fb755d4415ec3c70ac799889371151796d upstream. 
Drop support for these ioctls: * PIO_FONT, PIO_FONTX * GIO_FONT, GIO_FONTX * PIO_FONTRESET As was demonstrated by commit 90bfdeef83f1 (tty: make FONTX ioctl use the tty pointer they were actually passed), these ioctls are not used from userspace, as: 1) they used to be broken (set up font on current console, not the open one) and racy (before the commit above) 2) KDFONTOP ioctl is used for years instead Note that PIO_FONTRESET is defunct on most systems as VGA_CONSOLE is set on them for ages. That turns on BROKEN_GRAPHICS_PROGRAMS which makes PIO_FONTRESET just return an error. We are removing KD_FONT_FLAG_OLD here as it was used only by these removed ioctls. kd.h header exists both in kernel and uapi headers, so we can remove the kernel one completely. Everyone includeing kd.h will now automatically get the uapi one. There are now unused definitions of the ioctl numbers and "struct consolefontdesc" in kd.h, but as it is a uapi header, I am not touching these. Signed-off-by: Jiri Slaby Link: https://lore.kernel.org/r/20210105120239.28031-8-jslaby@suse.cz Cc: guodaxing Signed-off-by: Greg Kroah-Hartman --- drivers/tty/vt/vt.c | 39 +--------- drivers/tty/vt/vt_ioctl.c | 151 -------------------------------------- include/linux/kd.h | 8 -- 3 files changed, 3 insertions(+), 195 deletions(-) delete mode 100644 include/linux/kd.h diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c index a7ee1171eeb3..0a6336d54a65 100644 --- a/drivers/tty/vt/vt.c +++ b/drivers/tty/vt/vt.c @@ -4625,16 +4625,8 @@ static int con_font_get(struct vc_data *vc, struct console_font_op *op) if (op->data && font.charcount > op->charcount) rc = -ENOSPC; - if (!(op->flags & KD_FONT_FLAG_OLD)) { - if (font.width > op->width || font.height > op->height) - rc = -ENOSPC; - } else { - if (font.width != 8) - rc = -EIO; - else if ((op->height && font.height > op->height) || - font.height > 32) - rc = -ENOSPC; - } + if (font.width > op->width || font.height > op->height) + rc = -ENOSPC; if (rc) goto out; @@ -4662,7 +4654,7 @@ static int con_font_set(struct vc_data *vc, struct console_font_op *op) return -EINVAL; if (op->charcount > 512) return -EINVAL; - if (op->width <= 0 || op->width > 32 || op->height > 32) + if (op->width <= 0 || op->width > 32 || !op->height || op->height > 32) return -EINVAL; size = (op->width+7)/8 * 32 * op->charcount; if (size > max_font_size) @@ -4672,31 +4664,6 @@ static int con_font_set(struct vc_data *vc, struct console_font_op *op) if (IS_ERR(font.data)) return PTR_ERR(font.data); - if (!op->height) { /* Need to guess font height [compat] */ - int h, i; - u8 *charmap = font.data; - - /* - * If from KDFONTOP ioctl, don't allow things which can be done - * in userland,so that we can get rid of this soon - */ - if (!(op->flags & KD_FONT_FLAG_OLD)) { - kfree(font.data); - return -EINVAL; - } - - for (h = 32; h > 0; h--) - for (i = 0; i < op->charcount; i++) - if (charmap[32*i+h-1]) - goto nonzero; - - kfree(font.data); - return -EINVAL; - - nonzero: - op->height = h; - } - font.charcount = op->charcount; font.width = op->width; font.height = op->height; diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c index a9c6ea8986af..b10b86e2c17e 100644 --- a/drivers/tty/vt/vt_ioctl.c +++ b/drivers/tty/vt/vt_ioctl.c @@ -486,70 +486,6 @@ static int vt_k_ioctl(struct tty_struct *tty, unsigned int cmd, return 0; } -static inline int do_fontx_ioctl(struct vc_data *vc, int cmd, - struct consolefontdesc __user *user_cfd, - struct console_font_op *op) -{ - struct consolefontdesc cfdarg; - int i; - - if 
(copy_from_user(&cfdarg, user_cfd, sizeof(struct consolefontdesc))) - return -EFAULT; - - switch (cmd) { - case PIO_FONTX: - op->op = KD_FONT_OP_SET; - op->flags = KD_FONT_FLAG_OLD; - op->width = 8; - op->height = cfdarg.charheight; - op->charcount = cfdarg.charcount; - op->data = cfdarg.chardata; - return con_font_op(vc, op); - - case GIO_FONTX: - op->op = KD_FONT_OP_GET; - op->flags = KD_FONT_FLAG_OLD; - op->width = 8; - op->height = cfdarg.charheight; - op->charcount = cfdarg.charcount; - op->data = cfdarg.chardata; - i = con_font_op(vc, op); - if (i) - return i; - cfdarg.charheight = op->height; - cfdarg.charcount = op->charcount; - if (copy_to_user(user_cfd, &cfdarg, sizeof(struct consolefontdesc))) - return -EFAULT; - return 0; - } - return -EINVAL; -} - -static int vt_io_fontreset(struct vc_data *vc, struct console_font_op *op) -{ - int ret; - - if (__is_defined(BROKEN_GRAPHICS_PROGRAMS)) { - /* - * With BROKEN_GRAPHICS_PROGRAMS defined, the default font is - * not saved. - */ - return -ENOSYS; - } - - op->op = KD_FONT_OP_SET_DEFAULT; - op->data = NULL; - ret = con_font_op(vc, op); - if (ret) - return ret; - - console_lock(); - con_set_default_unimap(vc); - console_unlock(); - - return 0; -} - static inline int do_unimap_ioctl(int cmd, struct unimapdesc __user *user_ud, bool perm, struct vc_data *vc) { @@ -574,29 +510,7 @@ static inline int do_unimap_ioctl(int cmd, struct unimapdesc __user *user_ud, static int vt_io_ioctl(struct vc_data *vc, unsigned int cmd, void __user *up, bool perm) { - struct console_font_op op; /* used in multiple places here */ - switch (cmd) { - case PIO_FONT: - if (!perm) - return -EPERM; - op.op = KD_FONT_OP_SET; - op.flags = KD_FONT_FLAG_OLD | KD_FONT_FLAG_DONT_RECALC; /* Compatibility */ - op.width = 8; - op.height = 0; - op.charcount = 256; - op.data = up; - return con_font_op(vc, &op); - - case GIO_FONT: - op.op = KD_FONT_OP_GET; - op.flags = KD_FONT_FLAG_OLD; - op.width = 8; - op.height = 32; - op.charcount = 256; - op.data = up; - return con_font_op(vc, &op); - case PIO_CMAP: if (!perm) return -EPERM; @@ -605,20 +519,6 @@ static int vt_io_ioctl(struct vc_data *vc, unsigned int cmd, void __user *up, case GIO_CMAP: return con_get_cmap(up); - case PIO_FONTX: - if (!perm) - return -EPERM; - - fallthrough; - case GIO_FONTX: - return do_fontx_ioctl(vc, cmd, up, &op); - - case PIO_FONTRESET: - if (!perm) - return -EPERM; - - return vt_io_fontreset(vc, &op); - case PIO_SCRNMAP: if (!perm) return -EPERM; @@ -1099,54 +999,6 @@ void vc_SAK(struct work_struct *work) #ifdef CONFIG_COMPAT -struct compat_consolefontdesc { - unsigned short charcount; /* characters in font (256 or 512) */ - unsigned short charheight; /* scan lines per character (1-32) */ - compat_caddr_t chardata; /* font data in expanded form */ -}; - -static inline int -compat_fontx_ioctl(struct vc_data *vc, int cmd, - struct compat_consolefontdesc __user *user_cfd, - int perm, struct console_font_op *op) -{ - struct compat_consolefontdesc cfdarg; - int i; - - if (copy_from_user(&cfdarg, user_cfd, sizeof(struct compat_consolefontdesc))) - return -EFAULT; - - switch (cmd) { - case PIO_FONTX: - if (!perm) - return -EPERM; - op->op = KD_FONT_OP_SET; - op->flags = KD_FONT_FLAG_OLD; - op->width = 8; - op->height = cfdarg.charheight; - op->charcount = cfdarg.charcount; - op->data = compat_ptr(cfdarg.chardata); - return con_font_op(vc, op); - - case GIO_FONTX: - op->op = KD_FONT_OP_GET; - op->flags = KD_FONT_FLAG_OLD; - op->width = 8; - op->height = cfdarg.charheight; - op->charcount = cfdarg.charcount; - 
op->data = compat_ptr(cfdarg.chardata); - i = con_font_op(vc, op); - if (i) - return i; - cfdarg.charheight = op->height; - cfdarg.charcount = op->charcount; - if (copy_to_user(user_cfd, &cfdarg, sizeof(struct compat_consolefontdesc))) - return -EFAULT; - return 0; - } - return -EINVAL; -} - struct compat_console_font_op { compat_uint_t op; /* operation code KD_FONT_OP_* */ compat_uint_t flags; /* KD_FONT_FLAG_* */ @@ -1223,9 +1075,6 @@ long vt_compat_ioctl(struct tty_struct *tty, /* * these need special handlers for incompatible data structures */ - case PIO_FONTX: - case GIO_FONTX: - return compat_fontx_ioctl(vc, cmd, up, perm, &op); case KDFONTOP: return compat_kdfontop_ioctl(up, perm, &op, vc); diff --git a/include/linux/kd.h b/include/linux/kd.h deleted file mode 100644 index b130a18f860f..000000000000 --- a/include/linux/kd.h +++ /dev/null @@ -1,8 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _LINUX_KD_H -#define _LINUX_KD_H - -#include - -#define KD_FONT_FLAG_OLD 0x80000000 /* Invoked via old interface [compat] */ -#endif /* _LINUX_KD_H */ From 310ebbd9f5cd2f52a160dc652f750ecc41b0cd5c Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 16 Jun 2022 02:03:12 +0200 Subject: [PATCH 1060/1842] random: schedule mix_interrupt_randomness() less often commit 534d2eaf1970274150596fdd2bf552721e65d6b2 upstream. It used to be that mix_interrupt_randomness() would credit 1 bit each time it ran, and so add_interrupt_randomness() would schedule mix() to run every 64 interrupts, a fairly arbitrary number, but nonetheless considered to be a decent enough conservative estimate. Since e3e33fc2ea7f ("random: do not use input pool from hard IRQs"), mix() is now able to credit multiple bits, depending on the number of calls to add(). This was done for reasons separate from this commit, but it has the nice side effect of enabling this patch to schedule mix() less often. Currently the rules are: a) Credit 1 bit for every 64 calls to add(). b) Schedule mix() once a second that add() is called. c) Schedule mix() once every 64 calls to add(). Rules (a) and (c) no longer need to be coupled. It's still important to have _some_ value in (c), so that we don't "over-saturate" the fast pool, but the once per second we get from rule (b) is a plenty enough baseline. So, by increasing the 64 in rule (c) to something larger, we avoid calling queue_work_on() as frequently during irq storms. This commit changes that 64 in rule (c) to be 1024, which means we schedule mix() 16 times less often. And it does *not* need to change the 64 in rule (a). Fixes: 58340f8e952b ("random: defer fast pool mixing to worker") Cc: stable@vger.kernel.org Cc: Dominik Brodowski Acked-by: Sebastian Andrzej Siewior Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 5776dfd4a6fc..0dee6f6f07aa 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -1001,7 +1001,7 @@ void add_interrupt_randomness(int irq) if (new_count & MIX_INFLIGHT) return; - if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ)) + if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ)) return; if (unlikely(!fast_pool->mix.func)) From 5e80f923b8ddb3af9f9e6d77c2f108521c362c4e Mon Sep 17 00:00:00 2001 From: "Jason A. 
Donenfeld" Date: Thu, 16 Jun 2022 15:00:51 +0200 Subject: [PATCH 1061/1842] random: quiet urandom warning ratelimit suppression message commit c01d4d0a82b71857be7449380338bc53dde2da92 upstream. random.c ratelimits how much it warns about uninitialized urandom reads using __ratelimit(). When the RNG is finally initialized, it prints the number of missed messages due to ratelimiting. It has been this way since that functionality was introduced back in 2018. Recently, cc1e127bfa95 ("random: remove ratelimiting for in-kernel unseeded randomness") put a bit more stress on the urandom ratelimiting, which teased out a bug in the implementation. Specifically, when under pressure, __ratelimit() will print its own message and reset the count back to 0, making the final message at the end less useful. Secondly, it does so as a pr_warn(), which apparently is undesirable for people's CI. Fortunately, __ratelimit() has the RATELIMIT_MSG_ON_RELEASE flag exactly for this purpose, so we set the flag. Fixes: 4e00b339e264 ("random: rate limit unseeded randomness warnings") Cc: stable@vger.kernel.org Reported-by: Jon Hunter Reported-by: Ron Economos Tested-by: Ron Economos Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- include/linux/ratelimit_types.h | 12 ++++++++---- 2 files changed, 9 insertions(+), 5 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index 0dee6f6f07aa..da6d74d757e6 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -88,7 +88,7 @@ static RAW_NOTIFIER_HEAD(random_ready_chain); /* Control how we warn userspace. */ static struct ratelimit_state urandom_warning = - RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); + RATELIMIT_STATE_INIT_FLAGS("urandom_warning", HZ, 3, RATELIMIT_MSG_ON_RELEASE); static int ratelimit_disable __read_mostly = IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM); module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); diff --git a/include/linux/ratelimit_types.h b/include/linux/ratelimit_types.h index b676aa419eef..f0e535f199be 100644 --- a/include/linux/ratelimit_types.h +++ b/include/linux/ratelimit_types.h @@ -23,12 +23,16 @@ struct ratelimit_state { unsigned long flags; }; -#define RATELIMIT_STATE_INIT(name, interval_init, burst_init) { \ - .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \ - .interval = interval_init, \ - .burst = burst_init, \ +#define RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, flags_init) { \ + .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \ + .interval = interval_init, \ + .burst = burst_init, \ + .flags = flags_init, \ } +#define RATELIMIT_STATE_INIT(name, interval_init, burst_init) \ + RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, 0) + #define RATELIMIT_STATE_INIT_DISABLED \ RATELIMIT_STATE_INIT(ratelimit_state, 0, DEFAULT_RATELIMIT_BURST) From 12a6be5d11d0799c59fe6578793ffabd20b8e60b Mon Sep 17 00:00:00 2001 From: Takashi Iwai Date: Mon, 20 Jun 2022 12:40:08 +0200 Subject: [PATCH 1062/1842] ALSA: hda/via: Fix missing beep setup commit c7807b27d510e5aa53c8a120cfc02c33c24ebb5f upstream. Like the previous fix for Conexant codec, the beep_nid has to be set up before calling snd_hda_gen_parse_auto_config(); otherwise it'd miss the path setup. Fix the call order for addressing the missing beep setup. 
Fixes: 0e8f9862493a ("ALSA: hda/via - Simplify control management") Cc: Link: https://bugzilla.kernel.org/show_bug.cgi?id=216152 Link: https://lore.kernel.org/r/20220620104008.1994-2-tiwai@suse.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_via.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c index 773a136161f1..a188901a83bb 100644 --- a/sound/pci/hda/patch_via.c +++ b/sound/pci/hda/patch_via.c @@ -520,11 +520,11 @@ static int via_parse_auto_config(struct hda_codec *codec) if (err < 0) return err; - err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg); + err = auto_parse_beep(codec); if (err < 0) return err; - err = auto_parse_beep(codec); + err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg); if (err < 0) return err; From 64373290601f9f06f260590cba8018dea323d1ea Mon Sep 17 00:00:00 2001 From: Takashi Iwai Date: Mon, 20 Jun 2022 12:40:07 +0200 Subject: [PATCH 1063/1842] ALSA: hda/conexant: Fix missing beep setup commit 5faa0bc69102f3a4c605581564c367be5eb94dfa upstream. Currently the Conexant codec driver sets up the beep NID after calling snd_hda_gen_parse_auto_config(). It turned out that this results in the insufficient setup for the beep control, as the generic parser handles the fake path in snd_hda_gen_parse_auto_config() only if the beep_nid is set up beforehand. For dealing with the beep widget properly, call cx_auto_parse_beep() before snd_hda_gen_parse_auto_config() call. Fixes: 51e19ca5f755 ("ALSA: hda/conexant - Clean up beep code") Cc: Link: https://bugzilla.kernel.org/show_bug.cgi?id=216152 Link: https://lore.kernel.org/r/20220620104008.1994-1-tiwai@suse.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_conexant.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index 0dd6d37db966..53b7ea86f3f8 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -1072,11 +1072,11 @@ static int patch_conexant_auto(struct hda_codec *codec) if (err < 0) goto error; - err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg); + err = cx_auto_parse_beep(codec); if (err < 0) goto error; - err = cx_auto_parse_beep(codec); + err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg); if (err < 0) goto error; From f5ea433d56d45ea81b0a51e6cf45b1349205f6d2 Mon Sep 17 00:00:00 2001 From: Soham Sen Date: Thu, 9 Jun 2022 23:49:20 +0530 Subject: [PATCH 1064/1842] ALSA: hda/realtek: Add mute LED quirk for HP Omen laptop commit b2e6b3d9bbb0a59ba7c710cc06e44cc548301f5f upstream. The HP Omen 15 laptop needs a quirk to toggle the mute LED. It already is implemented for a different variant of the HP Omen laptop so a fixup entry is needed for this variant. 
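As a usage aside rather than part of the patch: the vendor/device pair matched by the SND_PCI_QUIRK() entry added below can normally be confirmed on the affected machine by reading /sys/class/sound/card0/device/subsystem_vendor and /sys/class/sound/card0/device/subsystem_device, assuming card0 is the HDA controller in question.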
Signed-off-by: Soham Sen Cc: Link: https://lore.kernel.org/r/20220609181919.45535-1-contact@sohamsen.me Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 7c720f03c134..d09dbe00addc 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -8768,6 +8768,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { ALC285_FIXUP_HP_GPIO_AMP_INIT), SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation", ALC285_FIXUP_HP_GPIO_AMP_INIT), + SND_PCI_QUIRK(0x103c, 0x8787, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED), SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED), SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED), SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), From 7fcbc89d4722f7b24ab12cdf6ec31a957de98e04 Mon Sep 17 00:00:00 2001 From: Kailang Yang Date: Mon, 13 Jun 2022 14:57:19 +0800 Subject: [PATCH 1065/1842] ALSA: hda/realtek - ALC897 headset MIC no sound commit fe6900bd8156467365bd5b976df64928fdebfeb0 upstream. There is not have Headset Mic verb table in BIOS default. So, it will have recording issue from headset MIC. Add the verb table value without jack detect. It will turn on Headset Mic. Signed-off-by: Kailang Yang Cc: Link: https://lore.kernel.org/r/719133a27d8844a890002cb817001dfa@realtek.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index d09dbe00addc..a8be6c5f730b 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -10447,6 +10447,7 @@ enum { ALC668_FIXUP_MIC_DET_COEF, ALC897_FIXUP_LENOVO_HEADSET_MIC, ALC897_FIXUP_HEADSET_MIC_PIN, + ALC897_FIXUP_HP_HSMIC_VERB, }; static const struct hda_fixup alc662_fixups[] = { @@ -10866,6 +10867,13 @@ static const struct hda_fixup alc662_fixups[] = { .chained = true, .chain_id = ALC897_FIXUP_LENOVO_HEADSET_MIC }, + [ALC897_FIXUP_HP_HSMIC_VERB] = { + .type = HDA_FIXUP_PINS, + .v.pins = (const struct hda_pintbl[]) { + { 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */ + { } + }, + }, }; static const struct snd_pci_quirk alc662_fixup_tbl[] = { @@ -10891,6 +10899,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = { SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), + SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB), SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2), SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2), SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE), From 80307458a1eef6ea6bfecfb3be68cd0ad33f1fdd Mon Sep 17 00:00:00 2001 From: Takashi Iwai Date: Tue, 14 Jun 2022 07:48:31 +0200 Subject: [PATCH 1066/1842] ALSA: hda/realtek: Apply fixup for Lenovo Yoga Duet 7 properly commit 56ec3e755bd1041d35bdec020a99b327697ee470 upstream. 
It turned out that Lenovo shipped two completely different products with the very same PCI SSID, where both require different quirks; namely, Lenovo C940 has already the fixup for its speaker (ALC298_FIXUP_LENOVO_SPK_VOLUME) with the PCI SSID 17aa:3818, while Yoga Duet 7 has also the very same PCI SSID but requires a different quirk, ALC287_FIXUP_YOGA7_14TIL_SPEAKERS. Fortunately, both are with different codecs (C940 with ALC298 and Duet 7 with ALC287), hence we can apply different fixes by checking the codec ID. This patch implements that special fixup function. For easier handling, the internal function for applying a specific fixup entry is exported as __snd_hda_apply_fixup(), so that it can be called from the codec driver. The rest is simply calling it with a different fixup ID depending on the codec ID. Reported-by: Hans de Goede Tested-by: nikitashvets@flyium.com Cc: Link: https://lore.kernel.org/r/5ca147d1-3a2d-60c6-c491-8aa844183222@redhat.com Link: https://lore.kernel.org/r/20220614054831.14648-1-tiwai@suse.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/hda_auto_parser.c | 7 ++++--- sound/pci/hda/hda_local.h | 1 + sound/pci/hda/patch_realtek.c | 24 +++++++++++++++++++++++- 3 files changed, 28 insertions(+), 4 deletions(-) diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c index 4dc01647753c..ec821a263036 100644 --- a/sound/pci/hda/hda_auto_parser.c +++ b/sound/pci/hda/hda_auto_parser.c @@ -823,7 +823,7 @@ static void set_pin_targets(struct hda_codec *codec, snd_hda_set_pin_ctl_cache(codec, cfg->nid, cfg->val); } -static void apply_fixup(struct hda_codec *codec, int id, int action, int depth) +void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth) { const char *modelname = codec->fixup_name; @@ -833,7 +833,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth) if (++depth > 10) break; if (fix->chained_before) - apply_fixup(codec, fix->chain_id, action, depth + 1); + __snd_hda_apply_fixup(codec, fix->chain_id, action, depth + 1); switch (fix->type) { case HDA_FIXUP_PINS: @@ -874,6 +874,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth) id = fix->chain_id; } } +EXPORT_SYMBOL_GPL(__snd_hda_apply_fixup); /** * snd_hda_apply_fixup - Apply the fixup chain with the given action @@ -883,7 +884,7 @@ static void apply_fixup(struct hda_codec *codec, int id, int action, int depth) void snd_hda_apply_fixup(struct hda_codec *codec, int action) { if (codec->fixup_list) - apply_fixup(codec, codec->fixup_id, action, 0); + __snd_hda_apply_fixup(codec, codec->fixup_id, action, 0); } EXPORT_SYMBOL_GPL(snd_hda_apply_fixup); diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h index 5beb8aa44ecd..efc0c68a5442 100644 --- a/sound/pci/hda/hda_local.h +++ b/sound/pci/hda/hda_local.h @@ -357,6 +357,7 @@ void snd_hda_apply_verbs(struct hda_codec *codec); void snd_hda_apply_pincfgs(struct hda_codec *codec, const struct hda_pintbl *cfg); void snd_hda_apply_fixup(struct hda_codec *codec, int action); +void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth); void snd_hda_pick_fixup(struct hda_codec *codec, const struct hda_model_fixup *models, const struct snd_pci_quirk *quirk, diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index a8be6c5f730b..159ab9ea508a 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -6827,6 +6827,7 @@ enum { 
ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS, ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE, ALC287_FIXUP_YOGA7_14ITL_SPEAKERS, + ALC298_FIXUP_LENOVO_C940_DUET7, ALC287_FIXUP_13S_GEN2_SPEAKERS, ALC256_FIXUP_SET_COEF_DEFAULTS, ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, @@ -6836,6 +6837,23 @@ enum { ALC285_FIXUP_LEGION_Y9000X_AUTOMUTE, }; +/* A special fixup for Lenovo C940 and Yoga Duet 7; + * both have the very same PCI SSID, and we need to apply different fixups + * depending on the codec ID + */ +static void alc298_fixup_lenovo_c940_duet7(struct hda_codec *codec, + const struct hda_fixup *fix, + int action) +{ + int id; + + if (codec->core.vendor_id == 0x10ec0298) + id = ALC298_FIXUP_LENOVO_SPK_VOLUME; /* C940 */ + else + id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* Duet 7 */ + __snd_hda_apply_fixup(codec, id, action, 0); +} + static const struct hda_fixup alc269_fixups[] = { [ALC269_FIXUP_GPIO2] = { .type = HDA_FIXUP_FUNC, @@ -8529,6 +8547,10 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC269_FIXUP_HEADSET_MODE, }, + [ALC298_FIXUP_LENOVO_C940_DUET7] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc298_fixup_lenovo_c940_duet7, + }, [ALC287_FIXUP_13S_GEN2_SPEAKERS] = { .type = HDA_FIXUP_VERBS, .v.verbs = (const struct hda_verb[]) { @@ -8993,7 +9015,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340), SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS), - SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME), + SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7), SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS), SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS), From 6e8e5031592d4f04244a00b591f3076e33b52caf Mon Sep 17 00:00:00 2001 From: Tim Crawford Date: Fri, 17 Jun 2022 07:30:28 -0600 Subject: [PATCH 1067/1842] ALSA: hda/realtek: Add quirk for Clevo PD70PNT commit d49951219b0249d3eff49e4f02e0de82357bc8a0 upstream. Fixes speaker output and headset detection on Clevo PD70PNT. 
Signed-off-by: Tim Crawford Cc: Link: https://lore.kernel.org/r/20220617133028.50568-1-tcrawford@system76.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 159ab9ea508a..ec13095f8b4c 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -2643,6 +2643,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = { SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS), SND_PCI_QUIRK(0x1558, 0x67f1, "Clevo PC70H[PRS]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), + SND_PCI_QUIRK(0x1558, 0x67f5, "Clevo PD70PN[NRT]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS), SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED), From 1508658aec4ef85d3ae7e3149ea25ef862eaef3b Mon Sep 17 00:00:00 2001 From: Tim Crawford Date: Wed, 22 Jun 2022 09:00:17 -0600 Subject: [PATCH 1068/1842] ALSA: hda/realtek: Add quirk for Clevo NS50PU commit 627ce0d68eb4b53e995b08089fa9da1e513ec5ba upstream. Fixes headset detection on Clevo NS50PU. Signed-off-by: Tim Crawford Cc: Link: https://lore.kernel.org/r/20220622150017.9897-1-tcrawford@system76.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index ec13095f8b4c..604f55ec7944 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -8933,6 +8933,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1558, 0x70f3, "Clevo NH77DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), From 49e3e449bc4e1208e3db7f19a86246467b230672 Mon Sep 17 00:00:00 2001 From: Rosemarie O'Riorden Date: Tue, 21 Jun 2022 16:48:45 -0400 Subject: [PATCH 1069/1842] net: openvswitch: fix parsing of nw_proto for IPv6 fragments commit 12378a5a75e33f34f8586706eb61cca9e6d4690c upstream. When a packet enters the OVS datapath and does not match any existing flows installed in the kernel flow cache, the packet will be sent to userspace to be parsed, and a new flow will be created. The kernel and OVS rely on each other to parse packet fields in the same way so that packets will be handled properly. As per the design document linked below, OVS expects all later IPv6 fragments to have nw_proto=44 in the flow key, so they can be correctly matched on OpenFlow rules. OpenFlow controllers create pipelines based on this design. This behavior was changed by the commit in the Fixes tag so that nw_proto equals the next_header field of the last extension header. 
However, there is no counterpart for this change in OVS userspace, meaning that this field is parsed differently between OVS and the kernel. This is a problem because OVS creates actions based on what is parsed in userspace, but the kernel-provided flow key is used as a match criteria, as described in Documentation/networking/openvswitch.rst. This leads to issues such as packets incorrectly matching on a flow and thus the wrong list of actions being applied to the packet. Such changes in packet parsing cannot be implemented without breaking the userspace. The offending commit is partially reverted to restore the expected behavior. The change technically made sense and there is a good reason that it was implemented, but it does not comply with the original design of OVS. If in the future someone wants to implement such a change, then it must be user-configurable and disabled by default to preserve backwards compatibility with existing OVS versions. Cc: stable@vger.kernel.org Fixes: fa642f08839b ("openvswitch: Derive IP protocol number for IPv6 later frags") Link: https://docs.openvswitch.org/en/latest/topics/design/#fragments Signed-off-by: Rosemarie O'Riorden Acked-by: Eelco Chaudron Link: https://lore.kernel.org/r/20220621204845.9721-1-roriorden@redhat.com Signed-off-by: Paolo Abeni Signed-off-by: Greg Kroah-Hartman --- net/openvswitch/flow.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c index b03d142ec82e..c9ba61413c98 100644 --- a/net/openvswitch/flow.c +++ b/net/openvswitch/flow.c @@ -265,7 +265,7 @@ static int parse_ipv6hdr(struct sk_buff *skb, struct sw_flow_key *key) if (flags & IP6_FH_F_FRAG) { if (frag_off) { key->ip.frag = OVS_FRAG_TYPE_LATER; - key->ip.proto = nexthdr; + key->ip.proto = NEXTHDR_FRAGMENT; return 0; } key->ip.frag = OVS_FRAG_TYPE_FIRST; From 0ae82e1ccb666a83ac1426a145bf346ea7612364 Mon Sep 17 00:00:00 2001 From: David Sterba Date: Thu, 2 Jun 2022 23:57:17 +0200 Subject: [PATCH 1070/1842] btrfs: add error messages to all unrecognized mount options commit e3a4167c880cf889f66887a152799df4d609dd21 upstream. Almost none of the errors stemming from a valid mount option but wrong value prints a descriptive message which would help to identify why mount failed. Like in the linked report: $ uname -r v4.19 $ mount -o compress=zstd /dev/sdb /mnt mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error. $ dmesg ... BTRFS error (device sdb): open_ctree failed Errors caused by memory allocation failures are left out as it's not a user error so reporting that would be confusing. 
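A minimal sketch of the reporting pattern applied throughout the hunks below (taken from the thread_pool case; the message text varies per option):

        ret = match_int(&args[0], &intarg);
        if (ret) {
                /* name the option and echo the rejected value, not a bare -EINVAL */
                btrfs_err(info, "unrecognized thread_pool value %s",
                          args[0].from);
                goto out;
        }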
Link: https://lore.kernel.org/linux-btrfs/9c3fec36-fc61-3a33-4977-a7e207c3fa4e@gmx.de/ CC: stable@vger.kernel.org # 4.9+ Reviewed-by: Qu Wenruo Reviewed-by: Nikolay Borisov Reviewed-by: Anand Jain Signed-off-by: David Sterba Signed-off-by: Greg Kroah-Hartman --- fs/btrfs/super.c | 39 ++++++++++++++++++++++++++++++++------- 1 file changed, 32 insertions(+), 7 deletions(-) diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c index 2663485c17cb..8bf8cdb62a3a 100644 --- a/fs/btrfs/super.c +++ b/fs/btrfs/super.c @@ -652,6 +652,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, compress_force = false; no_compress++; } else { + btrfs_err(info, "unrecognized compression value %s", + args[0].from); ret = -EINVAL; goto out; } @@ -710,8 +712,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, case Opt_thread_pool: ret = match_int(&args[0], &intarg); if (ret) { + btrfs_err(info, "unrecognized thread_pool value %s", + args[0].from); goto out; } else if (intarg == 0) { + btrfs_err(info, "invalid value 0 for thread_pool"); ret = -EINVAL; goto out; } @@ -772,8 +777,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, break; case Opt_ratio: ret = match_int(&args[0], &intarg); - if (ret) + if (ret) { + btrfs_err(info, "unrecognized metadata_ratio value %s", + args[0].from); goto out; + } info->metadata_ratio = intarg; btrfs_info(info, "metadata ratio %u", info->metadata_ratio); @@ -790,6 +798,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, btrfs_set_and_info(info, DISCARD_ASYNC, "turning on async discard"); } else { + btrfs_err(info, "unrecognized discard mode value %s", + args[0].from); ret = -EINVAL; goto out; } @@ -814,6 +824,8 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, btrfs_set_and_info(info, FREE_SPACE_TREE, "enabling free space tree"); } else { + btrfs_err(info, "unrecognized space_cache value %s", + args[0].from); ret = -EINVAL; goto out; } @@ -889,8 +901,12 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, break; case Opt_check_integrity_print_mask: ret = match_int(&args[0], &intarg); - if (ret) + if (ret) { + btrfs_err(info, + "unrecognized check_integrity_print_mask value %s", + args[0].from); goto out; + } info->check_integrity_print_mask = intarg; btrfs_info(info, "check_integrity_print_mask 0x%x", info->check_integrity_print_mask); @@ -905,13 +921,15 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, goto out; #endif case Opt_fatal_errors: - if (strcmp(args[0].from, "panic") == 0) + if (strcmp(args[0].from, "panic") == 0) { btrfs_set_opt(info->mount_opt, PANIC_ON_FATAL_ERROR); - else if (strcmp(args[0].from, "bug") == 0) + } else if (strcmp(args[0].from, "bug") == 0) { btrfs_clear_opt(info->mount_opt, PANIC_ON_FATAL_ERROR); - else { + } else { + btrfs_err(info, "unrecognized fatal_errors value %s", + args[0].from); ret = -EINVAL; goto out; } @@ -919,8 +937,12 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, case Opt_commit_interval: intarg = 0; ret = match_int(&args[0], &intarg); - if (ret) + if (ret) { + btrfs_err(info, "unrecognized commit_interval value %s", + args[0].from); + ret = -EINVAL; goto out; + } if (intarg == 0) { btrfs_info(info, "using default commit interval %us", @@ -934,8 +956,11 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options, break; case Opt_rescue: ret = parse_rescue_options(info, args[0].from); - if (ret < 0) + if (ret < 0) { + btrfs_err(info, "unrecognized rescue value %s", + 
args[0].from); goto out; + } break; #ifdef CONFIG_BTRFS_DEBUG case Opt_fragment_all: From 07e56884cd95769e42190aecbc78040e3068f673 Mon Sep 17 00:00:00 2001 From: Chevron Li Date: Thu, 2 Jun 2022 06:25:43 -0700 Subject: [PATCH 1071/1842] mmc: sdhci-pci-o2micro: Fix card detect by dealing with debouncing commit e591fcf6b4e39335c9b128b17738fcd2fdd278ae upstream. The result from ->get_cd() may be incorrect as the card detect debouncing isn't managed correctly. Let's fix it. Signed-off-by: Chevron Li Fixes: 7d44061704dd ("mmc: sdhci-pci-o2micro: Fix O2 Host data read/write DLL Lock phase shift issue") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20220602132543.596-1-chevron.li@bayhubtech.com [Ulf: Updated the commit message] Signed-off-by: Ulf Hansson Signed-off-by: Greg Kroah-Hartman --- drivers/mmc/host/sdhci-pci-o2micro.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c index 94e3f72f6405..8c357e3b78d7 100644 --- a/drivers/mmc/host/sdhci-pci-o2micro.c +++ b/drivers/mmc/host/sdhci-pci-o2micro.c @@ -147,6 +147,8 @@ static int sdhci_o2_get_cd(struct mmc_host *mmc) if (!(sdhci_readw(host, O2_PLL_DLL_WDT_CONTROL1) & O2_PLL_LOCK_STATUS)) sdhci_o2_enable_internal_clock(host); + else + sdhci_o2_wait_card_detect_stable(host); return !!(sdhci_readl(host, SDHCI_PRESENT_STATE) & SDHCI_CARD_PRESENT); } From 156427b3123c2c1f0987a544d0b005b188a75393 Mon Sep 17 00:00:00 2001 From: Sascha Hauer Date: Tue, 14 Jun 2022 10:31:38 +0200 Subject: [PATCH 1072/1842] mtd: rawnand: gpmi: Fix setting busy timeout setting commit 06781a5026350cde699d2d10c9914a25c1524f45 upstream. The DEVICE_BUSY_TIMEOUT value is described in the Reference Manual as: | Timeout waiting for NAND Ready/Busy or ATA IRQ. Used in WAIT_FOR_READY | mode. This value is the number of GPMI_CLK cycles multiplied by 4096. So instead of multiplying the value in cycles with 4096, we have to divide it by that value. Use DIV_ROUND_UP to make sure we are on the safe side, especially when the calculated value in cycles is smaller than 4096 as typically the case. This bug likely never triggered because any timeout != 0 usually will do. In my case the busy timeout in cycles was originally calculated as 2408, which multiplied with 4096 is 0x968000. The lower 16 bits were taken for the 16 bit wide register field, so the register value was 0x8000. With 2970bf5a32f0 ("mtd: rawnand: gpmi: fix controller timings setting") however the value in cycles became 2384, which multiplied with 4096 is 0x950000. The lower 16 bit are 0x0 now resulting in an intermediate timeout when reading from NAND. 
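A quick arithmetic check of the two encodings, using the cycle counts quoted above (illustrative only; DIV_ROUND_UP() is the usual kernel macro and DEVICE_BUSY_TIMEOUT is a 16-bit field):

        /* old: multiply by 4096, then truncate to the 16-bit field          */
        /*   2408 * 4096 = 0x968000 -> field 0x8000 (still a usable timeout) */
        /*   2384 * 4096 = 0x950000 -> field 0x0000 (the failure above)      */

        /* new: the field counts units of 4096 GPMI_CLK cycles               */
        /*   DIV_ROUND_UP(2408, 4096) = 1                                    */
        /*   DIV_ROUND_UP(2384, 4096) = 1                                    */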
Fixes: b1206122069aa ("mtd: rawnand: gpmi: use core timings instead of an empirical derivation") Cc: stable@vger.kernel.org Signed-off-by: Sascha Hauer Signed-off-by: Miquel Raynal Link: https://lore.kernel.org/linux-mtd/20220614083138.3455683-1-s.hauer@pengutronix.de Signed-off-by: Greg Kroah-Hartman --- drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c index 92e8ca56f566..8d096ca770b0 100644 --- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c +++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c @@ -683,7 +683,7 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this, hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) | BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) | BF_GPMI_TIMING0_DATA_SETUP(data_setup_cycles); - hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(busy_timeout_cycles * 4096); + hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(DIV_ROUND_UP(busy_timeout_cycles, 4096)); /* * Derive NFC ideal delay from {3}: From 273106c2df43b6a8d4dccfb012163dfbda94563a Mon Sep 17 00:00:00 2001 From: Edward Wu Date: Fri, 17 Jun 2022 11:32:20 +0800 Subject: [PATCH 1073/1842] ata: libata: add qc->flags in ata_qc_complete_template tracepoint commit 540a92bfe6dab7310b9df2e488ba247d784d0163 upstream. Add flags value to check the result of ata completion Fixes: 255c03d15a29 ("libata: Add tracepoints") Cc: stable@vger.kernel.org Signed-off-by: Edward Wu Signed-off-by: Damien Le Moal Signed-off-by: Greg Kroah-Hartman --- include/trace/events/libata.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/trace/events/libata.h b/include/trace/events/libata.h index ab69434e2329..72e785a903b6 100644 --- a/include/trace/events/libata.h +++ b/include/trace/events/libata.h @@ -249,6 +249,7 @@ DECLARE_EVENT_CLASS(ata_qc_complete_template, __entry->hob_feature = qc->result_tf.hob_feature; __entry->nsect = qc->result_tf.nsect; __entry->hob_nsect = qc->result_tf.hob_nsect; + __entry->flags = qc->flags; ), TP_printk("ata_port=%u ata_dev=%u tag=%d flags=%s status=%s " \ From 03d1874b8295384b232237418c07235753912b8f Mon Sep 17 00:00:00 2001 From: Nikos Tsironis Date: Tue, 21 Jun 2022 15:24:03 +0300 Subject: [PATCH 1074/1842] dm era: commit metadata in postsuspend after worker stops commit 9ae6e8b1c9bbf6874163d1243e393137313762b7 upstream. During postsuspend dm-era does the following: 1. Archives the current era 2. Commits the metadata, as part of the RPC call for archiving the current era 3. Stops the worker Until the worker stops, it might write to the metadata again. Moreover, these writes are not flushed to disk immediately, but are cached by the dm-bufio client, which writes them back asynchronously. As a result, the committed metadata of a suspended dm-era device might not be consistent with the in-core metadata. In some cases, this can result in the corruption of the on-disk metadata. Suppose the following sequence of events: 1. Load a new table, e.g. a snapshot-origin table, to a device with a dm-era table 2. Suspend the device 3. dm-era commits its metadata, but the worker does a few more metadata writes until it stops, as part of digesting an archived writeset 4. These writes are cached by the dm-bufio client 5. Load the dm-era table to another device. 6. The new instance of the dm-era target loads the committed, on-disk metadata, which don't include the extra writes done by the worker after the metadata commit. 7. Resume the new device 8. 
The new dm-era target instance starts using the metadata 9. Resume the original device 10. The destructor of the old dm-era target instance is called and destroys the dm-bufio client, which results in flushing the cached writes to disk 11. These writes might overwrite the writes done by the new dm-era instance, hence corrupting its metadata. Fix this by committing the metadata after the worker stops running. stop_worker uses flush_workqueue to flush the current work. However, the work item may re-queue itself and flush_workqueue doesn't wait for re-queued works to finish. This could result in the worker changing the metadata after they have been committed, or writing to the metadata concurrently with the commit in the postsuspend thread. Use drain_workqueue instead, which waits until the work and all re-queued works finish. Fixes: eec40579d8487 ("dm: add era target") Cc: stable@vger.kernel.org # v3.15+ Signed-off-by: Nikos Tsironis Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/dm-era-target.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/md/dm-era-target.c b/drivers/md/dm-era-target.c index d9ac7372108c..96bad057bea2 100644 --- a/drivers/md/dm-era-target.c +++ b/drivers/md/dm-era-target.c @@ -1396,7 +1396,7 @@ static void start_worker(struct era *era) static void stop_worker(struct era *era) { atomic_set(&era->suspended, 1); - flush_workqueue(era->wq); + drain_workqueue(era->wq); } /*---------------------------------------------------------------- @@ -1566,6 +1566,12 @@ static void era_postsuspend(struct dm_target *ti) } stop_worker(era); + + r = metadata_commit(era->md); + if (r) { + DMERR("%s: metadata_commit failed", __func__); + /* FIXME: fail mode */ + } } static int era_preresume(struct dm_target *ti) From f6a266e0dc6fc024767cd15b8f9de477f4efdfc2 Mon Sep 17 00:00:00 2001 From: Mikulas Patocka Date: Thu, 23 Jun 2022 14:53:25 -0400 Subject: [PATCH 1075/1842] dm mirror log: clear log bits up to BITS_PER_LONG boundary commit 90736eb3232d208ee048493f371075e4272e0944 upstream. Commit 85e123c27d5c ("dm mirror log: round up region bitmap size to BITS_PER_LONG") introduced a regression on 64-bit architectures in the lvm testsuite tests: lvcreate-mirror, mirror-names and vgsplit-operation. If the device is shrunk, we need to clear log bits beyond the end of the device. The code clears bits up to a 32-bit boundary and then calculates lc->sync_count by summing set bits up to a 64-bit boundary (the commit changed that; previously, this boundary was 32-bit too). So, it was using some non-zeroed bits in the calculation and this caused misbehavior. Fix this regression by clearing bits up to BITS_PER_LONG boundary. 
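As a worked example of the mismatch (assuming a 64-bit host and a hypothetical lc->region_count of 70): the old loop stops at the next 32-bit word boundary, while sync_count is now summed over whole BITS_PER_LONG words, so the tail of the last 64-bit word is never cleared:

        /* old bound: clears bits 70..95 only, stopping at the 32-bit boundary */
        for (i = lc->region_count; i % (sizeof(*lc->clean_bits) << BYTE_SHIFT); i++)
                log_clear_bit(lc, lc->clean_bits, i);

        /* new bound: clears bits 70..127, covering every word that gets summed */
        for (i = lc->region_count; i % BITS_PER_LONG; i++)
                log_clear_bit(lc, lc->clean_bits, i);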
Fixes: 85e123c27d5c ("dm mirror log: round up region bitmap size to BITS_PER_LONG") Cc: stable@vger.kernel.org Reported-by: Benjamin Marzinski Signed-off-by: Mikulas Patocka Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/dm-log.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c index 8b15f53cbdd9..fe3a9473f338 100644 --- a/drivers/md/dm-log.c +++ b/drivers/md/dm-log.c @@ -615,7 +615,7 @@ static int disk_resume(struct dm_dirty_log *log) log_clear_bit(lc, lc->clean_bits, i); /* clear any old bits -- device has shrunk */ - for (i = lc->region_count; i % (sizeof(*lc->clean_bits) << BYTE_SHIFT); i++) + for (i = lc->region_count; i % BITS_PER_LONG; i++) log_clear_bit(lc, lc->clean_bits, i); /* copy clean across to sync */ From 0b3006a862fb3cdbb3e5d6e125b60d4bf94a7ea0 Mon Sep 17 00:00:00 2001 From: Carlo Lobrano Date: Tue, 14 Jun 2022 09:56:23 +0200 Subject: [PATCH 1076/1842] USB: serial: option: add Telit LE910Cx 0x1250 composition commit 342fc0c3b345525da21112bd0478a0dc741598ea upstream. Add support for the following Telit LE910Cx composition: 0x1250: rmnet, tty, tty, tty, tty Reviewed-by: Daniele Palmas Signed-off-by: Carlo Lobrano Link: https://lore.kernel.org/r/20220614075623.2392607-1-c.lobrano@gmail.com Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman --- drivers/usb/serial/option.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 3744cde5146f..963c9683034d 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1279,6 +1279,7 @@ static const struct usb_device_id option_ids[] = { .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1231, 0xff), /* Telit LE910Cx (RNDIS) */ .driver_info = NCTRL(2) | RSVD(3) }, + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x1250, 0xff, 0x00, 0x00) }, /* Telit LE910Cx (rmnet) */ { USB_DEVICE(TELIT_VENDOR_ID, 0x1260), .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, { USB_DEVICE(TELIT_VENDOR_ID, 0x1261), From 9e6e063e548b487407316939977ee3b6e796a2c9 Mon Sep 17 00:00:00 2001 From: Yonglin Tan Date: Tue, 21 Jun 2022 20:37:53 +0800 Subject: [PATCH 1077/1842] USB: serial: option: add Quectel EM05-G modem commit 33b29dbb39bcbd0a96e440646396bbf670b914fa upstream. The EM05-G modem has 2 USB configurations that are configurable via the AT command AT+QCFG="usbnet",[ 0 | 2 ] which make the modem enumerate with the following interfaces, respectively: "RMNET" : AT + DIAG + NMEA + Modem + QMI "MBIM" : MBIM + AT + DIAG + NMEA + Modem The detailed description of the USB configuration for each mode as follows: RMNET Mode -------------- T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 21 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=2c7c ProdID=030a Rev= 3.18 S: Manufacturer=Quectel S: Product=Quectel EM05-G C:* #Ifs= 5 Cfg#= 1 Atr=a0 MxPwr=500mA I:* If#= 3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=83(I) Atr=03(Int.) MxPS= 10 Ivl=32ms E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=85(I) Atr=03(Int.) 
MxPS= 10 Ivl=32ms E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 5 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=87(I) Atr=03(Int.) MxPS= 10 Ivl=32ms E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 6 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none) E: Ad=89(I) Atr=03(Int.) MxPS= 8 Ivl=32ms E: Ad=88(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=05(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms MBIM Mode -------------- T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 16 Spd=480 MxCh= 0 D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=2c7c ProdID=030a Rev= 3.18 S: Manufacturer=Quectel S: Product=Quectel EM05-G C:* #Ifs= 6 Cfg#= 1 Atr=a0 MxPwr=500mA A: FirstIf#= 0 IfCount= 2 Cls=02(comm.) Sub=0e Prot=00 I:* If#= 3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=option E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=83(I) Atr=03(Int.) MxPS= 10 Ivl=32ms E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=85(I) Atr=03(Int.) MxPS= 10 Ivl=32ms E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 5 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=87(I) Atr=03(Int.) MxPS= 10 Ivl=32ms E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms I:* If#= 0 Alt= 0 #EPs= 1 Cls=02(comm.) Sub=0e Prot=00 Driver=cdc_mbim E: Ad=89(I) Atr=03(Int.) MxPS= 64 Ivl=32ms I: If#= 1 Alt= 0 #EPs= 0 Cls=0a(data ) Sub=00 Prot=02 Driver=cdc_mbim I:* If#= 1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=02 Driver=cdc_mbim E: Ad=88(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=05(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms Signed-off-by: Yonglin Tan Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman --- drivers/usb/serial/option.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 963c9683034d..d24cf0525d62 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -252,6 +252,7 @@ static void option_instat_callback(struct urb *urb); #define QUECTEL_PRODUCT_EG95 0x0195 #define QUECTEL_PRODUCT_BG96 0x0296 #define QUECTEL_PRODUCT_EP06 0x0306 +#define QUECTEL_PRODUCT_EM05G 0x030a #define QUECTEL_PRODUCT_EM12 0x0512 #define QUECTEL_PRODUCT_RM500Q 0x0800 #define QUECTEL_PRODUCT_EC200S_CN 0x6002 @@ -1134,6 +1135,8 @@ static const struct usb_device_id option_ids[] = { { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff), .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) }, + { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff), + .driver_info = RSVD(6) | ZLP }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff), .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) }, From 8682335375bdd98d6c0d343ae4e72ad4779c0041 Mon Sep 17 00:00:00 2001 From: Macpaul Lin Date: Thu, 23 Jun 2022 16:56:44 +0800 Subject: [PATCH 1078/1842] USB: serial: option: add Quectel RM500K module support commit 15b694e96c31807d8515aacfa687a1e8a4fbbadc 
upstream. Add usb product id of the Quectel RM500K module. RM500K provides 2 mandatory interfaces to Linux host after enumeration. - /dev/ttyUSB5: this is a serial interface for control path. User needs to write AT commands to this device node to query status, set APN, set PIN code, and enable/disable the data connection to 5G network. - ethX: this is the data path provided as a RNDIS devices. After the data connection has been established, Linux host can access 5G data network via this interface. "RNDIS": RNDIS + ADB + AT (/dev/ttyUSB5) + MODEM COMs usb-devices output for 0x7001: T: Bus=05 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 3 Spd=480 MxCh= 0 D: Ver= 2.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=2c7c ProdID=7001 Rev=00.01 S: Manufacturer=MediaTek Inc. S: Product=USB DATA CARD S: SerialNumber=869206050009672 C: #Ifs=10 Cfg#= 1 Atr=a0 MxPwr=500mA I: If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=ff Driver=rndis_host E: Ad=82(I) Atr=03(Int.) MxPS= 64 Ivl=125us I: If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=rndis_host E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=83(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 3 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 4 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=85(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=42 Prot=01 Driver=(none) E: Ad=05(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 6 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=06(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=87(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 7 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=07(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=88(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 8 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=option E: Ad=08(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=89(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 9 Alt= 0 #EPs= 2 Cls=ff(vend.) 
Sub=00 Prot=00 Driver=option E: Ad=09(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=8a(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms Co-developed-by: Ballon Shi Signed-off-by: Ballon Shi Signed-off-by: Macpaul Lin Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman --- drivers/usb/serial/option.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index d24cf0525d62..44e06b95584e 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -257,6 +257,7 @@ static void option_instat_callback(struct urb *urb); #define QUECTEL_PRODUCT_RM500Q 0x0800 #define QUECTEL_PRODUCT_EC200S_CN 0x6002 #define QUECTEL_PRODUCT_EC200T 0x6026 +#define QUECTEL_PRODUCT_RM500K 0x7001 #define CMOTECH_VENDOR_ID 0x16d8 #define CMOTECH_PRODUCT_6001 0x6001 @@ -1150,6 +1151,7 @@ static const struct usb_device_id option_ids[] = { .driver_info = ZLP }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) }, { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) }, { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) }, From 8adedb4711dcdf25bbe3098284eb7654fb0a66c3 Mon Sep 17 00:00:00 2001 From: Maximilian Luz Date: Mon, 6 Jun 2022 23:13:05 +0200 Subject: [PATCH 1079/1842] drm/msm: Fix double pm_runtime_disable() call [ Upstream commit ce0db505bc0c51ef5e9ba446c660de7e26f78f29 ] Following commit 17e822f7591f ("drm/msm: fix unbalanced pm_runtime_enable in adreno_gpu_{init, cleanup}"), any call to adreno_unbind() will disable runtime PM twice, as indicated by the call trees below: adreno_unbind() -> pm_runtime_force_suspend() -> pm_runtime_disable() adreno_unbind() -> gpu->funcs->destroy() [= aNxx_destroy()] -> adreno_gpu_cleanup() -> pm_runtime_disable() Note that pm_runtime_force_suspend() is called right before gpu->funcs->destroy() and both functions are called unconditionally. With recent addition of the eDP AUX bus code, this problem manifests itself when the eDP panel cannot be found yet and probing is deferred. On the first probe attempt, we disable runtime PM twice as described above. This then causes any later probe attempt to fail with [drm:adreno_load_gpu [msm]] *ERROR* Couldn't power up the GPU: -13 preventing the driver from loading. As there seem to be scenarios where the aNxx_destroy() functions are not called from adreno_unbind(), simply removing pm_runtime_disable() from inside adreno_unbind() does not seem to be the proper fix. This is what commit 17e822f7591f ("drm/msm: fix unbalanced pm_runtime_enable in adreno_gpu_{init, cleanup}") intended to fix. Therefore, instead check whether runtime PM is still enabled, and only disable it in that case. 
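In sketch form (the same guard as the hunk below), adreno_gpu_cleanup() only drops the runtime-PM enable when it is still in effect, so a pm_runtime_force_suspend() already done in adreno_unbind() no longer leads to a second disable:

        if (pm_runtime_enabled(&priv->gpu_pdev->dev))
                pm_runtime_disable(&priv->gpu_pdev->dev);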
Fixes: 17e822f7591f ("drm/msm: fix unbalanced pm_runtime_enable in adreno_gpu_{init, cleanup}") Signed-off-by: Maximilian Luz Tested-by: Bjorn Andersson Reviewed-by: Rob Clark Link: https://lore.kernel.org/r/20220606211305.189585-1-luzmaximilian@gmail.com Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 458b5b26d3c2..de8cc25506d6 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -960,7 +960,8 @@ void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu) for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++) release_firmware(adreno_gpu->fw[i]); - pm_runtime_disable(&priv->gpu_pdev->dev); + if (pm_runtime_enabled(&priv->gpu_pdev->dev)) + pm_runtime_disable(&priv->gpu_pdev->dev); msm_gpu_cleanup(&adreno_gpu->base); From ec9b0a8d307ed0377699fe47bb4dc640e9184f8b Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Mon, 25 Jan 2021 17:28:18 +0100 Subject: [PATCH 1080/1842] netfilter: nftables: add nft_parse_register_load() and use it MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 4f16d25c68ec844299a4df6ecbb0234eaf88a935 ] This new function combines the netlink register attribute parser and the load validation function. This update requires to replace: enum nft_registers sreg:8; in many of the expression private areas otherwise compiler complains with: error: cannot take address of bit-field ‘sreg’ when passing the register field as reference. Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- include/net/netfilter/nf_tables.h | 2 +- include/net/netfilter/nf_tables_core.h | 6 ++--- include/net/netfilter/nft_meta.h | 2 +- net/ipv4/netfilter/nft_dup_ipv4.c | 18 ++++++------- net/ipv6/netfilter/nft_dup_ipv6.c | 18 ++++++------- net/netfilter/nf_tables_api.c | 18 +++++++++++-- net/netfilter/nft_bitwise.c | 10 ++++---- net/netfilter/nft_byteorder.c | 6 ++--- net/netfilter/nft_cmp.c | 8 +++--- net/netfilter/nft_ct.c | 5 ++-- net/netfilter/nft_dup_netdev.c | 6 ++--- net/netfilter/nft_dynset.c | 12 ++++----- net/netfilter/nft_exthdr.c | 6 ++--- net/netfilter/nft_fwd_netdev.c | 18 ++++++------- net/netfilter/nft_hash.c | 10 +++++--- net/netfilter/nft_lookup.c | 6 ++--- net/netfilter/nft_masq.c | 18 ++++++------- net/netfilter/nft_meta.c | 3 +-- net/netfilter/nft_nat.c | 35 +++++++++++--------------- net/netfilter/nft_objref.c | 6 ++--- net/netfilter/nft_payload.c | 4 +-- net/netfilter/nft_queue.c | 12 ++++----- net/netfilter/nft_range.c | 6 ++--- net/netfilter/nft_redir.c | 18 ++++++------- net/netfilter/nft_tproxy.c | 14 +++++------ 25 files changed, 132 insertions(+), 135 deletions(-) diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index b7907385a02f..06e7f84a6d12 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -203,7 +203,7 @@ int nft_parse_u32_check(const struct nlattr *attr, int max, u32 *dest); unsigned int nft_parse_register(const struct nlattr *attr); int nft_dump_register(struct sk_buff *skb, unsigned int attr, unsigned int reg); -int nft_validate_register_load(enum nft_registers reg, unsigned int len); +int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len); int nft_validate_register_store(const struct nft_ctx *ctx, enum nft_registers reg, const struct nft_data *data, diff --git 
a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h index 8657e6815b07..b7aff03a3f0f 100644 --- a/include/net/netfilter/nf_tables_core.h +++ b/include/net/netfilter/nf_tables_core.h @@ -26,14 +26,14 @@ void nf_tables_core_module_exit(void); struct nft_bitwise_fast_expr { u32 mask; u32 xor; - enum nft_registers sreg:8; + u8 sreg; enum nft_registers dreg:8; }; struct nft_cmp_fast_expr { u32 data; u32 mask; - enum nft_registers sreg:8; + u8 sreg; u8 len; bool inv; }; @@ -67,7 +67,7 @@ struct nft_payload_set { enum nft_payload_bases base:8; u8 offset; u8 len; - enum nft_registers sreg:8; + u8 sreg; u8 csum_type; u8 csum_offset; u8 csum_flags; diff --git a/include/net/netfilter/nft_meta.h b/include/net/netfilter/nft_meta.h index 07e2fd507963..946fa8c83798 100644 --- a/include/net/netfilter/nft_meta.h +++ b/include/net/netfilter/nft_meta.h @@ -8,7 +8,7 @@ struct nft_meta { enum nft_meta_keys key:8; union { enum nft_registers dreg:8; - enum nft_registers sreg:8; + u8 sreg; }; }; diff --git a/net/ipv4/netfilter/nft_dup_ipv4.c b/net/ipv4/netfilter/nft_dup_ipv4.c index bcdb37f86a94..aeb631760eb9 100644 --- a/net/ipv4/netfilter/nft_dup_ipv4.c +++ b/net/ipv4/netfilter/nft_dup_ipv4.c @@ -13,8 +13,8 @@ #include struct nft_dup_ipv4 { - enum nft_registers sreg_addr:8; - enum nft_registers sreg_dev:8; + u8 sreg_addr; + u8 sreg_dev; }; static void nft_dup_ipv4_eval(const struct nft_expr *expr, @@ -40,16 +40,16 @@ static int nft_dup_ipv4_init(const struct nft_ctx *ctx, if (tb[NFTA_DUP_SREG_ADDR] == NULL) return -EINVAL; - priv->sreg_addr = nft_parse_register(tb[NFTA_DUP_SREG_ADDR]); - err = nft_validate_register_load(priv->sreg_addr, sizeof(struct in_addr)); + err = nft_parse_register_load(tb[NFTA_DUP_SREG_ADDR], &priv->sreg_addr, + sizeof(struct in_addr)); if (err < 0) return err; - if (tb[NFTA_DUP_SREG_DEV] != NULL) { - priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]); - return nft_validate_register_load(priv->sreg_dev, sizeof(int)); - } - return 0; + if (tb[NFTA_DUP_SREG_DEV]) + err = nft_parse_register_load(tb[NFTA_DUP_SREG_DEV], + &priv->sreg_dev, sizeof(int)); + + return err; } static int nft_dup_ipv4_dump(struct sk_buff *skb, const struct nft_expr *expr) diff --git a/net/ipv6/netfilter/nft_dup_ipv6.c b/net/ipv6/netfilter/nft_dup_ipv6.c index 8b5193efb1f1..3a00d95e964e 100644 --- a/net/ipv6/netfilter/nft_dup_ipv6.c +++ b/net/ipv6/netfilter/nft_dup_ipv6.c @@ -13,8 +13,8 @@ #include struct nft_dup_ipv6 { - enum nft_registers sreg_addr:8; - enum nft_registers sreg_dev:8; + u8 sreg_addr; + u8 sreg_dev; }; static void nft_dup_ipv6_eval(const struct nft_expr *expr, @@ -38,16 +38,16 @@ static int nft_dup_ipv6_init(const struct nft_ctx *ctx, if (tb[NFTA_DUP_SREG_ADDR] == NULL) return -EINVAL; - priv->sreg_addr = nft_parse_register(tb[NFTA_DUP_SREG_ADDR]); - err = nft_validate_register_load(priv->sreg_addr, sizeof(struct in6_addr)); + err = nft_parse_register_load(tb[NFTA_DUP_SREG_ADDR], &priv->sreg_addr, + sizeof(struct in6_addr)); if (err < 0) return err; - if (tb[NFTA_DUP_SREG_DEV] != NULL) { - priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]); - return nft_validate_register_load(priv->sreg_dev, sizeof(int)); - } - return 0; + if (tb[NFTA_DUP_SREG_DEV]) + err = nft_parse_register_load(tb[NFTA_DUP_SREG_DEV], + &priv->sreg_dev, sizeof(int)); + + return err; } static int nft_dup_ipv6_dump(struct sk_buff *skb, const struct nft_expr *expr) diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 0c56a90c3f08..91713ecd60e9 100644 --- 
a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -8514,7 +8514,7 @@ EXPORT_SYMBOL_GPL(nft_dump_register); * Validate that the input register is one of the general purpose * registers and that the length of the load is within the bounds. */ -int nft_validate_register_load(enum nft_registers reg, unsigned int len) +static int nft_validate_register_load(enum nft_registers reg, unsigned int len) { if (reg < NFT_REG_1 * NFT_REG_SIZE / NFT_REG32_SIZE) return -EINVAL; @@ -8525,7 +8525,21 @@ int nft_validate_register_load(enum nft_registers reg, unsigned int len) return 0; } -EXPORT_SYMBOL_GPL(nft_validate_register_load); + +int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len) +{ + u32 reg; + int err; + + reg = nft_parse_register(attr); + err = nft_validate_register_load(reg, len); + if (err < 0) + return err; + + *sreg = reg; + return 0; +} +EXPORT_SYMBOL_GPL(nft_parse_register_load); /** * nft_validate_register_store - validate an expressions' register store diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c index bbd773d74377..2157970b3cd3 100644 --- a/net/netfilter/nft_bitwise.c +++ b/net/netfilter/nft_bitwise.c @@ -16,7 +16,7 @@ #include struct nft_bitwise { - enum nft_registers sreg:8; + u8 sreg; enum nft_registers dreg:8; enum nft_bitwise_ops op:8; u8 len; @@ -169,8 +169,8 @@ static int nft_bitwise_init(const struct nft_ctx *ctx, priv->len = len; - priv->sreg = nft_parse_register(tb[NFTA_BITWISE_SREG]); - err = nft_validate_register_load(priv->sreg, priv->len); + err = nft_parse_register_load(tb[NFTA_BITWISE_SREG], &priv->sreg, + priv->len); if (err < 0) return err; @@ -315,8 +315,8 @@ static int nft_bitwise_fast_init(const struct nft_ctx *ctx, struct nft_bitwise_fast_expr *priv = nft_expr_priv(expr); int err; - priv->sreg = nft_parse_register(tb[NFTA_BITWISE_SREG]); - err = nft_validate_register_load(priv->sreg, sizeof(u32)); + err = nft_parse_register_load(tb[NFTA_BITWISE_SREG], &priv->sreg, + sizeof(u32)); if (err < 0) return err; diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c index 12bed3f7bbc6..0960563cd5a1 100644 --- a/net/netfilter/nft_byteorder.c +++ b/net/netfilter/nft_byteorder.c @@ -16,7 +16,7 @@ #include struct nft_byteorder { - enum nft_registers sreg:8; + u8 sreg; enum nft_registers dreg:8; enum nft_byteorder_ops op:8; u8 len; @@ -131,14 +131,14 @@ static int nft_byteorder_init(const struct nft_ctx *ctx, return -EINVAL; } - priv->sreg = nft_parse_register(tb[NFTA_BYTEORDER_SREG]); err = nft_parse_u32_check(tb[NFTA_BYTEORDER_LEN], U8_MAX, &len); if (err < 0) return err; priv->len = len; - err = nft_validate_register_load(priv->sreg, priv->len); + err = nft_parse_register_load(tb[NFTA_BYTEORDER_SREG], &priv->sreg, + priv->len); if (err < 0) return err; diff --git a/net/netfilter/nft_cmp.c b/net/netfilter/nft_cmp.c index 1d42d06f5b64..b529c0e86546 100644 --- a/net/netfilter/nft_cmp.c +++ b/net/netfilter/nft_cmp.c @@ -18,7 +18,7 @@ struct nft_cmp_expr { struct nft_data data; - enum nft_registers sreg:8; + u8 sreg; u8 len; enum nft_cmp_ops op:8; }; @@ -87,8 +87,7 @@ static int nft_cmp_init(const struct nft_ctx *ctx, const struct nft_expr *expr, return err; } - priv->sreg = nft_parse_register(tb[NFTA_CMP_SREG]); - err = nft_validate_register_load(priv->sreg, desc.len); + err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len); if (err < 0) return err; @@ -211,8 +210,7 @@ static int nft_cmp_fast_init(const struct nft_ctx *ctx, if (err < 0) return err; - priv->sreg = 
nft_parse_register(tb[NFTA_CMP_SREG]); - err = nft_validate_register_load(priv->sreg, desc.len); + err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len); if (err < 0) return err; diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c index 7fcb73ac2e6e..dbf54cca6083 100644 --- a/net/netfilter/nft_ct.c +++ b/net/netfilter/nft_ct.c @@ -28,7 +28,7 @@ struct nft_ct { enum ip_conntrack_dir dir:8; union { enum nft_registers dreg:8; - enum nft_registers sreg:8; + u8 sreg; }; }; @@ -608,8 +608,7 @@ static int nft_ct_set_init(const struct nft_ctx *ctx, } } - priv->sreg = nft_parse_register(tb[NFTA_CT_SREG]); - err = nft_validate_register_load(priv->sreg, len); + err = nft_parse_register_load(tb[NFTA_CT_SREG], &priv->sreg, len); if (err < 0) goto err1; diff --git a/net/netfilter/nft_dup_netdev.c b/net/netfilter/nft_dup_netdev.c index 70c457476b87..5b5c607fbf83 100644 --- a/net/netfilter/nft_dup_netdev.c +++ b/net/netfilter/nft_dup_netdev.c @@ -14,7 +14,7 @@ #include struct nft_dup_netdev { - enum nft_registers sreg_dev:8; + u8 sreg_dev; }; static void nft_dup_netdev_eval(const struct nft_expr *expr, @@ -40,8 +40,8 @@ static int nft_dup_netdev_init(const struct nft_ctx *ctx, if (tb[NFTA_DUP_SREG_DEV] == NULL) return -EINVAL; - priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]); - return nft_validate_register_load(priv->sreg_dev, sizeof(int)); + return nft_parse_register_load(tb[NFTA_DUP_SREG_DEV], &priv->sreg_dev, + sizeof(int)); } static int nft_dup_netdev_dump(struct sk_buff *skb, const struct nft_expr *expr) diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c index 58904bee1a0d..8c45e01fecdd 100644 --- a/net/netfilter/nft_dynset.c +++ b/net/netfilter/nft_dynset.c @@ -16,8 +16,8 @@ struct nft_dynset { struct nft_set *set; struct nft_set_ext_tmpl tmpl; enum nft_dynset_ops op:8; - enum nft_registers sreg_key:8; - enum nft_registers sreg_data:8; + u8 sreg_key; + u8 sreg_data; bool invert; u64 timeout; struct nft_expr *expr; @@ -154,8 +154,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx, return err; } - priv->sreg_key = nft_parse_register(tb[NFTA_DYNSET_SREG_KEY]); - err = nft_validate_register_load(priv->sreg_key, set->klen); + err = nft_parse_register_load(tb[NFTA_DYNSET_SREG_KEY], &priv->sreg_key, + set->klen); if (err < 0) return err; @@ -165,8 +165,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx, if (set->dtype == NFT_DATA_VERDICT) return -EOPNOTSUPP; - priv->sreg_data = nft_parse_register(tb[NFTA_DYNSET_SREG_DATA]); - err = nft_validate_register_load(priv->sreg_data, set->dlen); + err = nft_parse_register_load(tb[NFTA_DYNSET_SREG_DATA], + &priv->sreg_data, set->dlen); if (err < 0) return err; } else if (set->flags & NFT_SET_MAP) diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c index faa0844c01fb..f2b36b9c2b53 100644 --- a/net/netfilter/nft_exthdr.c +++ b/net/netfilter/nft_exthdr.c @@ -20,7 +20,7 @@ struct nft_exthdr { u8 len; u8 op; enum nft_registers dreg:8; - enum nft_registers sreg:8; + u8 sreg; u8 flags; }; @@ -403,11 +403,11 @@ static int nft_exthdr_tcp_set_init(const struct nft_ctx *ctx, priv->type = nla_get_u8(tb[NFTA_EXTHDR_TYPE]); priv->offset = offset; priv->len = len; - priv->sreg = nft_parse_register(tb[NFTA_EXTHDR_SREG]); priv->flags = flags; priv->op = op; - return nft_validate_register_load(priv->sreg, priv->len); + return nft_parse_register_load(tb[NFTA_EXTHDR_SREG], &priv->sreg, + priv->len); } static int nft_exthdr_ipv4_init(const struct nft_ctx *ctx, diff --git 
a/net/netfilter/nft_fwd_netdev.c b/net/netfilter/nft_fwd_netdev.c index 3b0dcd170551..7730409f6f09 100644 --- a/net/netfilter/nft_fwd_netdev.c +++ b/net/netfilter/nft_fwd_netdev.c @@ -18,7 +18,7 @@ #include struct nft_fwd_netdev { - enum nft_registers sreg_dev:8; + u8 sreg_dev; }; static void nft_fwd_netdev_eval(const struct nft_expr *expr, @@ -50,8 +50,8 @@ static int nft_fwd_netdev_init(const struct nft_ctx *ctx, if (tb[NFTA_FWD_SREG_DEV] == NULL) return -EINVAL; - priv->sreg_dev = nft_parse_register(tb[NFTA_FWD_SREG_DEV]); - return nft_validate_register_load(priv->sreg_dev, sizeof(int)); + return nft_parse_register_load(tb[NFTA_FWD_SREG_DEV], &priv->sreg_dev, + sizeof(int)); } static int nft_fwd_netdev_dump(struct sk_buff *skb, const struct nft_expr *expr) @@ -83,8 +83,8 @@ static bool nft_fwd_netdev_offload_action(const struct nft_expr *expr) } struct nft_fwd_neigh { - enum nft_registers sreg_dev:8; - enum nft_registers sreg_addr:8; + u8 sreg_dev; + u8 sreg_addr; u8 nfproto; }; @@ -162,8 +162,6 @@ static int nft_fwd_neigh_init(const struct nft_ctx *ctx, !tb[NFTA_FWD_NFPROTO]) return -EINVAL; - priv->sreg_dev = nft_parse_register(tb[NFTA_FWD_SREG_DEV]); - priv->sreg_addr = nft_parse_register(tb[NFTA_FWD_SREG_ADDR]); priv->nfproto = ntohl(nla_get_be32(tb[NFTA_FWD_NFPROTO])); switch (priv->nfproto) { @@ -177,11 +175,13 @@ static int nft_fwd_neigh_init(const struct nft_ctx *ctx, return -EOPNOTSUPP; } - err = nft_validate_register_load(priv->sreg_dev, sizeof(int)); + err = nft_parse_register_load(tb[NFTA_FWD_SREG_DEV], &priv->sreg_dev, + sizeof(int)); if (err < 0) return err; - return nft_validate_register_load(priv->sreg_addr, addr_len); + return nft_parse_register_load(tb[NFTA_FWD_SREG_ADDR], &priv->sreg_addr, + addr_len); } static int nft_fwd_neigh_dump(struct sk_buff *skb, const struct nft_expr *expr) diff --git a/net/netfilter/nft_hash.c b/net/netfilter/nft_hash.c index 96371d878e7e..7ee6c6da50ae 100644 --- a/net/netfilter/nft_hash.c +++ b/net/netfilter/nft_hash.c @@ -14,7 +14,7 @@ #include struct nft_jhash { - enum nft_registers sreg:8; + u8 sreg; enum nft_registers dreg:8; u8 len; bool autogen_seed:1; @@ -83,7 +83,6 @@ static int nft_jhash_init(const struct nft_ctx *ctx, if (tb[NFTA_HASH_OFFSET]) priv->offset = ntohl(nla_get_be32(tb[NFTA_HASH_OFFSET])); - priv->sreg = nft_parse_register(tb[NFTA_HASH_SREG]); priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]); err = nft_parse_u32_check(tb[NFTA_HASH_LEN], U8_MAX, &len); @@ -94,6 +93,10 @@ static int nft_jhash_init(const struct nft_ctx *ctx, priv->len = len; + err = nft_parse_register_load(tb[NFTA_HASH_SREG], &priv->sreg, len); + if (err < 0) + return err; + priv->modulus = ntohl(nla_get_be32(tb[NFTA_HASH_MODULUS])); if (priv->modulus < 1) return -ERANGE; @@ -108,8 +111,7 @@ static int nft_jhash_init(const struct nft_ctx *ctx, get_random_bytes(&priv->seed, sizeof(priv->seed)); } - return nft_validate_register_load(priv->sreg, len) && - nft_validate_register_store(ctx, priv->dreg, NULL, + return nft_validate_register_store(ctx, priv->dreg, NULL, NFT_DATA_VALUE, sizeof(u32)); } diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c index f1363b8aabba..9e87b6d39f51 100644 --- a/net/netfilter/nft_lookup.c +++ b/net/netfilter/nft_lookup.c @@ -17,7 +17,7 @@ struct nft_lookup { struct nft_set *set; - enum nft_registers sreg:8; + u8 sreg; enum nft_registers dreg:8; bool invert; struct nft_set_binding binding; @@ -76,8 +76,8 @@ static int nft_lookup_init(const struct nft_ctx *ctx, if (IS_ERR(set)) return PTR_ERR(set); - priv->sreg 
= nft_parse_register(tb[NFTA_LOOKUP_SREG]); - err = nft_validate_register_load(priv->sreg, set->klen); + err = nft_parse_register_load(tb[NFTA_LOOKUP_SREG], &priv->sreg, + set->klen); if (err < 0) return err; diff --git a/net/netfilter/nft_masq.c b/net/netfilter/nft_masq.c index 71390b727040..9953e8053753 100644 --- a/net/netfilter/nft_masq.c +++ b/net/netfilter/nft_masq.c @@ -15,8 +15,8 @@ struct nft_masq { u32 flags; - enum nft_registers sreg_proto_min:8; - enum nft_registers sreg_proto_max:8; + u8 sreg_proto_min; + u8 sreg_proto_max; }; static const struct nla_policy nft_masq_policy[NFTA_MASQ_MAX + 1] = { @@ -54,19 +54,15 @@ static int nft_masq_init(const struct nft_ctx *ctx, } if (tb[NFTA_MASQ_REG_PROTO_MIN]) { - priv->sreg_proto_min = - nft_parse_register(tb[NFTA_MASQ_REG_PROTO_MIN]); - - err = nft_validate_register_load(priv->sreg_proto_min, plen); + err = nft_parse_register_load(tb[NFTA_MASQ_REG_PROTO_MIN], + &priv->sreg_proto_min, plen); if (err < 0) return err; if (tb[NFTA_MASQ_REG_PROTO_MAX]) { - priv->sreg_proto_max = - nft_parse_register(tb[NFTA_MASQ_REG_PROTO_MAX]); - - err = nft_validate_register_load(priv->sreg_proto_max, - plen); + err = nft_parse_register_load(tb[NFTA_MASQ_REG_PROTO_MAX], + &priv->sreg_proto_max, + plen); if (err < 0) return err; } else { diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c index bf4b3ad5314c..65e231ec1884 100644 --- a/net/netfilter/nft_meta.c +++ b/net/netfilter/nft_meta.c @@ -661,8 +661,7 @@ int nft_meta_set_init(const struct nft_ctx *ctx, return -EOPNOTSUPP; } - priv->sreg = nft_parse_register(tb[NFTA_META_SREG]); - err = nft_validate_register_load(priv->sreg, len); + err = nft_parse_register_load(tb[NFTA_META_SREG], &priv->sreg, len); if (err < 0) return err; diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c index 6a4a5ac88db7..db8f9116eeb4 100644 --- a/net/netfilter/nft_nat.c +++ b/net/netfilter/nft_nat.c @@ -21,10 +21,10 @@ #include struct nft_nat { - enum nft_registers sreg_addr_min:8; - enum nft_registers sreg_addr_max:8; - enum nft_registers sreg_proto_min:8; - enum nft_registers sreg_proto_max:8; + u8 sreg_addr_min; + u8 sreg_addr_max; + u8 sreg_proto_min; + u8 sreg_proto_max; enum nf_nat_manip_type type:8; u8 family; u16 flags; @@ -208,18 +208,15 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr, priv->family = family; if (tb[NFTA_NAT_REG_ADDR_MIN]) { - priv->sreg_addr_min = - nft_parse_register(tb[NFTA_NAT_REG_ADDR_MIN]); - err = nft_validate_register_load(priv->sreg_addr_min, alen); + err = nft_parse_register_load(tb[NFTA_NAT_REG_ADDR_MIN], + &priv->sreg_addr_min, alen); if (err < 0) return err; if (tb[NFTA_NAT_REG_ADDR_MAX]) { - priv->sreg_addr_max = - nft_parse_register(tb[NFTA_NAT_REG_ADDR_MAX]); - - err = nft_validate_register_load(priv->sreg_addr_max, - alen); + err = nft_parse_register_load(tb[NFTA_NAT_REG_ADDR_MAX], + &priv->sreg_addr_max, + alen); if (err < 0) return err; } else { @@ -231,19 +228,15 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr, plen = sizeof_field(struct nf_nat_range, min_addr.all); if (tb[NFTA_NAT_REG_PROTO_MIN]) { - priv->sreg_proto_min = - nft_parse_register(tb[NFTA_NAT_REG_PROTO_MIN]); - - err = nft_validate_register_load(priv->sreg_proto_min, plen); + err = nft_parse_register_load(tb[NFTA_NAT_REG_PROTO_MIN], + &priv->sreg_proto_min, plen); if (err < 0) return err; if (tb[NFTA_NAT_REG_PROTO_MAX]) { - priv->sreg_proto_max = - nft_parse_register(tb[NFTA_NAT_REG_PROTO_MAX]); - - err = 
nft_validate_register_load(priv->sreg_proto_max, - plen); + err = nft_parse_register_load(tb[NFTA_NAT_REG_PROTO_MAX], + &priv->sreg_proto_max, + plen); if (err < 0) return err; } else { diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c index 5f9207a9f485..bc104d36d3bb 100644 --- a/net/netfilter/nft_objref.c +++ b/net/netfilter/nft_objref.c @@ -95,7 +95,7 @@ static const struct nft_expr_ops nft_objref_ops = { struct nft_objref_map { struct nft_set *set; - enum nft_registers sreg:8; + u8 sreg; struct nft_set_binding binding; }; @@ -137,8 +137,8 @@ static int nft_objref_map_init(const struct nft_ctx *ctx, if (!(set->flags & NFT_SET_OBJECT)) return -EINVAL; - priv->sreg = nft_parse_register(tb[NFTA_OBJREF_SET_SREG]); - err = nft_validate_register_load(priv->sreg, set->klen); + err = nft_parse_register_load(tb[NFTA_OBJREF_SET_SREG], &priv->sreg, + set->klen); if (err < 0) return err; diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c index 6a8495bd08bb..b9702236d310 100644 --- a/net/netfilter/nft_payload.c +++ b/net/netfilter/nft_payload.c @@ -664,7 +664,6 @@ static int nft_payload_set_init(const struct nft_ctx *ctx, priv->base = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE])); priv->offset = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET])); priv->len = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN])); - priv->sreg = nft_parse_register(tb[NFTA_PAYLOAD_SREG]); if (tb[NFTA_PAYLOAD_CSUM_TYPE]) priv->csum_type = @@ -697,7 +696,8 @@ static int nft_payload_set_init(const struct nft_ctx *ctx, return -EOPNOTSUPP; } - return nft_validate_register_load(priv->sreg, priv->len); + return nft_parse_register_load(tb[NFTA_PAYLOAD_SREG], &priv->sreg, + priv->len); } static int nft_payload_set_dump(struct sk_buff *skb, const struct nft_expr *expr) diff --git a/net/netfilter/nft_queue.c b/net/netfilter/nft_queue.c index 23265d757acb..9ba1de51ac07 100644 --- a/net/netfilter/nft_queue.c +++ b/net/netfilter/nft_queue.c @@ -19,10 +19,10 @@ static u32 jhash_initval __read_mostly; struct nft_queue { - enum nft_registers sreg_qnum:8; - u16 queuenum; - u16 queues_total; - u16 flags; + u8 sreg_qnum; + u16 queuenum; + u16 queues_total; + u16 flags; }; static void nft_queue_eval(const struct nft_expr *expr, @@ -111,8 +111,8 @@ static int nft_queue_sreg_init(const struct nft_ctx *ctx, struct nft_queue *priv = nft_expr_priv(expr); int err; - priv->sreg_qnum = nft_parse_register(tb[NFTA_QUEUE_SREG_QNUM]); - err = nft_validate_register_load(priv->sreg_qnum, sizeof(u32)); + err = nft_parse_register_load(tb[NFTA_QUEUE_SREG_QNUM], + &priv->sreg_qnum, sizeof(u32)); if (err < 0) return err; diff --git a/net/netfilter/nft_range.c b/net/netfilter/nft_range.c index 89efcc5a533d..e4a1c44d7f51 100644 --- a/net/netfilter/nft_range.c +++ b/net/netfilter/nft_range.c @@ -15,7 +15,7 @@ struct nft_range_expr { struct nft_data data_from; struct nft_data data_to; - enum nft_registers sreg:8; + u8 sreg; u8 len; enum nft_range_ops op:8; }; @@ -86,8 +86,8 @@ static int nft_range_init(const struct nft_ctx *ctx, const struct nft_expr *expr goto err2; } - priv->sreg = nft_parse_register(tb[NFTA_RANGE_SREG]); - err = nft_validate_register_load(priv->sreg, desc_from.len); + err = nft_parse_register_load(tb[NFTA_RANGE_SREG], &priv->sreg, + desc_from.len); if (err < 0) goto err2; diff --git a/net/netfilter/nft_redir.c b/net/netfilter/nft_redir.c index 2056051c0af0..ba09890dddb5 100644 --- a/net/netfilter/nft_redir.c +++ b/net/netfilter/nft_redir.c @@ -14,8 +14,8 @@ #include struct nft_redir { - enum nft_registers sreg_proto_min:8; - 
enum nft_registers sreg_proto_max:8; + u8 sreg_proto_min; + u8 sreg_proto_max; u16 flags; }; @@ -50,19 +50,15 @@ static int nft_redir_init(const struct nft_ctx *ctx, plen = sizeof_field(struct nf_nat_range, min_addr.all); if (tb[NFTA_REDIR_REG_PROTO_MIN]) { - priv->sreg_proto_min = - nft_parse_register(tb[NFTA_REDIR_REG_PROTO_MIN]); - - err = nft_validate_register_load(priv->sreg_proto_min, plen); + err = nft_parse_register_load(tb[NFTA_REDIR_REG_PROTO_MIN], + &priv->sreg_proto_min, plen); if (err < 0) return err; if (tb[NFTA_REDIR_REG_PROTO_MAX]) { - priv->sreg_proto_max = - nft_parse_register(tb[NFTA_REDIR_REG_PROTO_MAX]); - - err = nft_validate_register_load(priv->sreg_proto_max, - plen); + err = nft_parse_register_load(tb[NFTA_REDIR_REG_PROTO_MAX], + &priv->sreg_proto_max, + plen); if (err < 0) return err; } else { diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c index 242222dc52c3..37c728bdad41 100644 --- a/net/netfilter/nft_tproxy.c +++ b/net/netfilter/nft_tproxy.c @@ -13,9 +13,9 @@ #endif struct nft_tproxy { - enum nft_registers sreg_addr:8; - enum nft_registers sreg_port:8; - u8 family; + u8 sreg_addr; + u8 sreg_port; + u8 family; }; static void nft_tproxy_eval_v4(const struct nft_expr *expr, @@ -254,15 +254,15 @@ static int nft_tproxy_init(const struct nft_ctx *ctx, } if (tb[NFTA_TPROXY_REG_ADDR]) { - priv->sreg_addr = nft_parse_register(tb[NFTA_TPROXY_REG_ADDR]); - err = nft_validate_register_load(priv->sreg_addr, alen); + err = nft_parse_register_load(tb[NFTA_TPROXY_REG_ADDR], + &priv->sreg_addr, alen); if (err < 0) return err; } if (tb[NFTA_TPROXY_REG_PORT]) { - priv->sreg_port = nft_parse_register(tb[NFTA_TPROXY_REG_PORT]); - err = nft_validate_register_load(priv->sreg_port, sizeof(u16)); + err = nft_parse_register_load(tb[NFTA_TPROXY_REG_PORT], + &priv->sreg_port, sizeof(u16)); if (err < 0) return err; } From 95f80c88436f98f38abdeb791b2a979b24191091 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Mon, 25 Jan 2021 18:27:22 +0100 Subject: [PATCH 1081/1842] netfilter: nftables: add nft_parse_register_store() and use it MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 345023b0db315648ccc3c1a36aee88304a8b4d91 ] This new function combines the netlink register attribute parser and the store validation function. This update requires to replace: enum nft_registers dreg:8; in many of the expression private areas otherwise compiler complains with: error: cannot take address of bit-field ‘dreg’ when passing the register field as reference. 
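For illustration only (not part of the diff below), a minimal sketch of the converted pattern in an expression init callback, assuming a simplified private structure and the usual <net/netfilter/nf_tables.h> declarations; "example_expr" and "example_init" are hypothetical names:

	struct example_expr {
		u8 dreg;	/* was: enum nft_registers dreg:8; taking &priv->dreg is now legal */
	};

	static int example_init(const struct nft_ctx *ctx,
				const struct nlattr * const tb[],
				struct example_expr *priv)
	{
		/* parse the netlink attribute and validate the store in one call */
		return nft_parse_register_store(ctx, tb[NFTA_IMMEDIATE_DREG],
						&priv->dreg, NULL, NFT_DATA_VALUE,
						sizeof(u32));
	}
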
Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- include/net/netfilter/nf_tables.h | 8 +++--- include/net/netfilter/nf_tables_core.h | 6 ++--- include/net/netfilter/nft_fib.h | 2 +- include/net/netfilter/nft_meta.h | 2 +- net/bridge/netfilter/nft_meta_bridge.c | 5 ++-- net/netfilter/nf_tables_api.c | 34 ++++++++++++++++++++++---- net/netfilter/nft_bitwise.c | 13 +++++----- net/netfilter/nft_byteorder.c | 8 +++--- net/netfilter/nft_ct.c | 7 +++--- net/netfilter/nft_exthdr.c | 8 +++--- net/netfilter/nft_fib.c | 5 ++-- net/netfilter/nft_hash.c | 17 ++++++------- net/netfilter/nft_immediate.c | 6 ++--- net/netfilter/nft_lookup.c | 8 +++--- net/netfilter/nft_meta.c | 5 ++-- net/netfilter/nft_numgen.c | 15 +++++------- net/netfilter/nft_osf.c | 8 +++--- net/netfilter/nft_payload.c | 6 ++--- net/netfilter/nft_rt.c | 7 +++--- net/netfilter/nft_socket.c | 7 +++--- net/netfilter/nft_tunnel.c | 8 +++--- net/netfilter/nft_xfrm.c | 7 +++--- 22 files changed, 100 insertions(+), 92 deletions(-) diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index 06e7f84a6d12..b9948e7861f2 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -204,10 +204,10 @@ unsigned int nft_parse_register(const struct nlattr *attr); int nft_dump_register(struct sk_buff *skb, unsigned int attr, unsigned int reg); int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len); -int nft_validate_register_store(const struct nft_ctx *ctx, - enum nft_registers reg, - const struct nft_data *data, - enum nft_data_types type, unsigned int len); +int nft_parse_register_store(const struct nft_ctx *ctx, + const struct nlattr *attr, u8 *dreg, + const struct nft_data *data, + enum nft_data_types type, unsigned int len); /** * struct nft_userdata - user defined data associated with an object diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h index b7aff03a3f0f..fd10a7862fdc 100644 --- a/include/net/netfilter/nf_tables_core.h +++ b/include/net/netfilter/nf_tables_core.h @@ -27,7 +27,7 @@ struct nft_bitwise_fast_expr { u32 mask; u32 xor; u8 sreg; - enum nft_registers dreg:8; + u8 dreg; }; struct nft_cmp_fast_expr { @@ -40,7 +40,7 @@ struct nft_cmp_fast_expr { struct nft_immediate_expr { struct nft_data data; - enum nft_registers dreg:8; + u8 dreg; u8 dlen; }; @@ -60,7 +60,7 @@ struct nft_payload { enum nft_payload_bases base:8; u8 offset; u8 len; - enum nft_registers dreg:8; + u8 dreg; }; struct nft_payload_set { diff --git a/include/net/netfilter/nft_fib.h b/include/net/netfilter/nft_fib.h index 628b6fa579cd..237f3757637e 100644 --- a/include/net/netfilter/nft_fib.h +++ b/include/net/netfilter/nft_fib.h @@ -5,7 +5,7 @@ #include struct nft_fib { - enum nft_registers dreg:8; + u8 dreg; u8 result; u32 flags; }; diff --git a/include/net/netfilter/nft_meta.h b/include/net/netfilter/nft_meta.h index 946fa8c83798..2dce55c736f4 100644 --- a/include/net/netfilter/nft_meta.h +++ b/include/net/netfilter/nft_meta.h @@ -7,7 +7,7 @@ struct nft_meta { enum nft_meta_keys key:8; union { - enum nft_registers dreg:8; + u8 dreg; u8 sreg; }; }; diff --git a/net/bridge/netfilter/nft_meta_bridge.c b/net/bridge/netfilter/nft_meta_bridge.c index 8e8ffac037cd..97805ec424c1 100644 --- a/net/bridge/netfilter/nft_meta_bridge.c +++ b/net/bridge/netfilter/nft_meta_bridge.c @@ -87,9 +87,8 @@ static int nft_meta_bridge_get_init(const struct nft_ctx *ctx, return nft_meta_get_init(ctx, expr, tb); } - priv->dreg = 
nft_parse_register(tb[NFTA_META_DREG]); - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, len); + return nft_parse_register_store(ctx, tb[NFTA_META_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, len); } static struct nft_expr_type nft_meta_bridge_type; diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 91713ecd60e9..3c17fadaab5f 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -4414,6 +4414,12 @@ static int nf_tables_delset(struct net *net, struct sock *nlsk, return nft_delset(&ctx, set); } +static int nft_validate_register_store(const struct nft_ctx *ctx, + enum nft_registers reg, + const struct nft_data *data, + enum nft_data_types type, + unsigned int len); + static int nf_tables_bind_check_setelem(const struct nft_ctx *ctx, struct nft_set *set, const struct nft_set_iter *iter, @@ -8555,10 +8561,11 @@ EXPORT_SYMBOL_GPL(nft_parse_register_load); * A value of NULL for the data means that its runtime gathered * data. */ -int nft_validate_register_store(const struct nft_ctx *ctx, - enum nft_registers reg, - const struct nft_data *data, - enum nft_data_types type, unsigned int len) +static int nft_validate_register_store(const struct nft_ctx *ctx, + enum nft_registers reg, + const struct nft_data *data, + enum nft_data_types type, + unsigned int len) { int err; @@ -8590,7 +8597,24 @@ int nft_validate_register_store(const struct nft_ctx *ctx, return 0; } } -EXPORT_SYMBOL_GPL(nft_validate_register_store); + +int nft_parse_register_store(const struct nft_ctx *ctx, + const struct nlattr *attr, u8 *dreg, + const struct nft_data *data, + enum nft_data_types type, unsigned int len) +{ + int err; + u32 reg; + + reg = nft_parse_register(attr); + err = nft_validate_register_store(ctx, reg, data, type, len); + if (err < 0) + return err; + + *dreg = reg; + return 0; +} +EXPORT_SYMBOL_GPL(nft_parse_register_store); static const struct nla_policy nft_verdict_policy[NFTA_VERDICT_MAX + 1] = { [NFTA_VERDICT_CODE] = { .type = NLA_U32 }, diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c index 2157970b3cd3..47b0dba95054 100644 --- a/net/netfilter/nft_bitwise.c +++ b/net/netfilter/nft_bitwise.c @@ -17,7 +17,7 @@ struct nft_bitwise { u8 sreg; - enum nft_registers dreg:8; + u8 dreg; enum nft_bitwise_ops op:8; u8 len; struct nft_data mask; @@ -174,9 +174,9 @@ static int nft_bitwise_init(const struct nft_ctx *ctx, if (err < 0) return err; - priv->dreg = nft_parse_register(tb[NFTA_BITWISE_DREG]); - err = nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, priv->len); + err = nft_parse_register_store(ctx, tb[NFTA_BITWISE_DREG], + &priv->dreg, NULL, NFT_DATA_VALUE, + priv->len); if (err < 0) return err; @@ -320,9 +320,8 @@ static int nft_bitwise_fast_init(const struct nft_ctx *ctx, if (err < 0) return err; - priv->dreg = nft_parse_register(tb[NFTA_BITWISE_DREG]); - err = nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, sizeof(u32)); + err = nft_parse_register_store(ctx, tb[NFTA_BITWISE_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, sizeof(u32)); if (err < 0) return err; diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c index 0960563cd5a1..9d5947ab8d4e 100644 --- a/net/netfilter/nft_byteorder.c +++ b/net/netfilter/nft_byteorder.c @@ -17,7 +17,7 @@ struct nft_byteorder { u8 sreg; - enum nft_registers dreg:8; + u8 dreg; enum nft_byteorder_ops op:8; u8 len; u8 size; @@ -142,9 +142,9 @@ static int nft_byteorder_init(const struct nft_ctx *ctx, if (err < 0) 
return err; - priv->dreg = nft_parse_register(tb[NFTA_BYTEORDER_DREG]); - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, priv->len); + return nft_parse_register_store(ctx, tb[NFTA_BYTEORDER_DREG], + &priv->dreg, NULL, NFT_DATA_VALUE, + priv->len); } static int nft_byteorder_dump(struct sk_buff *skb, const struct nft_expr *expr) diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c index dbf54cca6083..781118465d46 100644 --- a/net/netfilter/nft_ct.c +++ b/net/netfilter/nft_ct.c @@ -27,7 +27,7 @@ struct nft_ct { enum nft_ct_keys key:8; enum ip_conntrack_dir dir:8; union { - enum nft_registers dreg:8; + u8 dreg; u8 sreg; }; }; @@ -499,9 +499,8 @@ static int nft_ct_get_init(const struct nft_ctx *ctx, } } - priv->dreg = nft_parse_register(tb[NFTA_CT_DREG]); - err = nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, len); + err = nft_parse_register_store(ctx, tb[NFTA_CT_DREG], &priv->dreg, NULL, + NFT_DATA_VALUE, len); if (err < 0) return err; diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c index f2b36b9c2b53..670dd146fb2b 100644 --- a/net/netfilter/nft_exthdr.c +++ b/net/netfilter/nft_exthdr.c @@ -19,7 +19,7 @@ struct nft_exthdr { u8 offset; u8 len; u8 op; - enum nft_registers dreg:8; + u8 dreg; u8 sreg; u8 flags; }; @@ -353,12 +353,12 @@ static int nft_exthdr_init(const struct nft_ctx *ctx, priv->type = nla_get_u8(tb[NFTA_EXTHDR_TYPE]); priv->offset = offset; priv->len = len; - priv->dreg = nft_parse_register(tb[NFTA_EXTHDR_DREG]); priv->flags = flags; priv->op = op; - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, priv->len); + return nft_parse_register_store(ctx, tb[NFTA_EXTHDR_DREG], + &priv->dreg, NULL, NFT_DATA_VALUE, + priv->len); } static int nft_exthdr_tcp_set_init(const struct nft_ctx *ctx, diff --git a/net/netfilter/nft_fib.c b/net/netfilter/nft_fib.c index 4dfdaeaf09a5..b10ce732b337 100644 --- a/net/netfilter/nft_fib.c +++ b/net/netfilter/nft_fib.c @@ -86,7 +86,6 @@ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr, return -EINVAL; priv->result = ntohl(nla_get_be32(tb[NFTA_FIB_RESULT])); - priv->dreg = nft_parse_register(tb[NFTA_FIB_DREG]); switch (priv->result) { case NFT_FIB_RESULT_OIF: @@ -106,8 +105,8 @@ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr, return -EINVAL; } - err = nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, len); + err = nft_parse_register_store(ctx, tb[NFTA_FIB_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, len); if (err < 0) return err; diff --git a/net/netfilter/nft_hash.c b/net/netfilter/nft_hash.c index 7ee6c6da50ae..f829f5289e16 100644 --- a/net/netfilter/nft_hash.c +++ b/net/netfilter/nft_hash.c @@ -15,7 +15,7 @@ struct nft_jhash { u8 sreg; - enum nft_registers dreg:8; + u8 dreg; u8 len; bool autogen_seed:1; u32 modulus; @@ -38,7 +38,7 @@ static void nft_jhash_eval(const struct nft_expr *expr, } struct nft_symhash { - enum nft_registers dreg:8; + u8 dreg; u32 modulus; u32 offset; }; @@ -83,8 +83,6 @@ static int nft_jhash_init(const struct nft_ctx *ctx, if (tb[NFTA_HASH_OFFSET]) priv->offset = ntohl(nla_get_be32(tb[NFTA_HASH_OFFSET])); - priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]); - err = nft_parse_u32_check(tb[NFTA_HASH_LEN], U8_MAX, &len); if (err < 0) return err; @@ -111,8 +109,8 @@ static int nft_jhash_init(const struct nft_ctx *ctx, get_random_bytes(&priv->seed, sizeof(priv->seed)); } - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, sizeof(u32)); + 
return nft_parse_register_store(ctx, tb[NFTA_HASH_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, sizeof(u32)); } static int nft_symhash_init(const struct nft_ctx *ctx, @@ -128,8 +126,6 @@ static int nft_symhash_init(const struct nft_ctx *ctx, if (tb[NFTA_HASH_OFFSET]) priv->offset = ntohl(nla_get_be32(tb[NFTA_HASH_OFFSET])); - priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]); - priv->modulus = ntohl(nla_get_be32(tb[NFTA_HASH_MODULUS])); if (priv->modulus < 1) return -ERANGE; @@ -137,8 +133,9 @@ static int nft_symhash_init(const struct nft_ctx *ctx, if (priv->offset + priv->modulus - 1 < priv->offset) return -EOVERFLOW; - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, sizeof(u32)); + return nft_parse_register_store(ctx, tb[NFTA_HASH_DREG], + &priv->dreg, NULL, NFT_DATA_VALUE, + sizeof(u32)); } static int nft_jhash_dump(struct sk_buff *skb, diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c index 5c9d88560a47..d0f67d325bdf 100644 --- a/net/netfilter/nft_immediate.c +++ b/net/netfilter/nft_immediate.c @@ -48,9 +48,9 @@ static int nft_immediate_init(const struct nft_ctx *ctx, priv->dlen = desc.len; - priv->dreg = nft_parse_register(tb[NFTA_IMMEDIATE_DREG]); - err = nft_validate_register_store(ctx, priv->dreg, &priv->data, - desc.type, desc.len); + err = nft_parse_register_store(ctx, tb[NFTA_IMMEDIATE_DREG], + &priv->dreg, &priv->data, desc.type, + desc.len); if (err < 0) goto err1; diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c index 9e87b6d39f51..b0f558b4fea5 100644 --- a/net/netfilter/nft_lookup.c +++ b/net/netfilter/nft_lookup.c @@ -18,7 +18,7 @@ struct nft_lookup { struct nft_set *set; u8 sreg; - enum nft_registers dreg:8; + u8 dreg; bool invert; struct nft_set_binding binding; }; @@ -100,9 +100,9 @@ static int nft_lookup_init(const struct nft_ctx *ctx, if (!(set->flags & NFT_SET_MAP)) return -EINVAL; - priv->dreg = nft_parse_register(tb[NFTA_LOOKUP_DREG]); - err = nft_validate_register_store(ctx, priv->dreg, NULL, - set->dtype, set->dlen); + err = nft_parse_register_store(ctx, tb[NFTA_LOOKUP_DREG], + &priv->dreg, NULL, set->dtype, + set->dlen); if (err < 0) return err; } else if (set->flags & NFT_SET_MAP) diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c index 65e231ec1884..a7e01e9952f1 100644 --- a/net/netfilter/nft_meta.c +++ b/net/netfilter/nft_meta.c @@ -535,9 +535,8 @@ int nft_meta_get_init(const struct nft_ctx *ctx, return -EOPNOTSUPP; } - priv->dreg = nft_parse_register(tb[NFTA_META_DREG]); - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, len); + return nft_parse_register_store(ctx, tb[NFTA_META_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, len); } EXPORT_SYMBOL_GPL(nft_meta_get_init); diff --git a/net/netfilter/nft_numgen.c b/net/netfilter/nft_numgen.c index f1fc824f9737..722cac1e90e0 100644 --- a/net/netfilter/nft_numgen.c +++ b/net/netfilter/nft_numgen.c @@ -16,7 +16,7 @@ static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state); struct nft_ng_inc { - enum nft_registers dreg:8; + u8 dreg; u32 modulus; atomic_t counter; u32 offset; @@ -66,11 +66,10 @@ static int nft_ng_inc_init(const struct nft_ctx *ctx, if (priv->offset + priv->modulus - 1 < priv->offset) return -EOVERFLOW; - priv->dreg = nft_parse_register(tb[NFTA_NG_DREG]); atomic_set(&priv->counter, priv->modulus - 1); - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, sizeof(u32)); + return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, 
sizeof(u32)); } static int nft_ng_dump(struct sk_buff *skb, enum nft_registers dreg, @@ -100,7 +99,7 @@ static int nft_ng_inc_dump(struct sk_buff *skb, const struct nft_expr *expr) } struct nft_ng_random { - enum nft_registers dreg:8; + u8 dreg; u32 modulus; u32 offset; }; @@ -140,10 +139,8 @@ static int nft_ng_random_init(const struct nft_ctx *ctx, prandom_init_once(&nft_numgen_prandom_state); - priv->dreg = nft_parse_register(tb[NFTA_NG_DREG]); - - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, sizeof(u32)); + return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, sizeof(u32)); } static int nft_ng_random_dump(struct sk_buff *skb, const struct nft_expr *expr) diff --git a/net/netfilter/nft_osf.c b/net/netfilter/nft_osf.c index 2c957629ea66..d82677e83400 100644 --- a/net/netfilter/nft_osf.c +++ b/net/netfilter/nft_osf.c @@ -6,7 +6,7 @@ #include struct nft_osf { - enum nft_registers dreg:8; + u8 dreg; u8 ttl; u32 flags; }; @@ -83,9 +83,9 @@ static int nft_osf_init(const struct nft_ctx *ctx, priv->flags = flags; } - priv->dreg = nft_parse_register(tb[NFTA_OSF_DREG]); - err = nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, NFT_OSF_MAXGENRELEN); + err = nft_parse_register_store(ctx, tb[NFTA_OSF_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, + NFT_OSF_MAXGENRELEN); if (err < 0) return err; diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c index b9702236d310..01878c16418c 100644 --- a/net/netfilter/nft_payload.c +++ b/net/netfilter/nft_payload.c @@ -144,10 +144,10 @@ static int nft_payload_init(const struct nft_ctx *ctx, priv->base = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE])); priv->offset = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET])); priv->len = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN])); - priv->dreg = nft_parse_register(tb[NFTA_PAYLOAD_DREG]); - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, priv->len); + return nft_parse_register_store(ctx, tb[NFTA_PAYLOAD_DREG], + &priv->dreg, NULL, NFT_DATA_VALUE, + priv->len); } static int nft_payload_dump(struct sk_buff *skb, const struct nft_expr *expr) diff --git a/net/netfilter/nft_rt.c b/net/netfilter/nft_rt.c index 7cfcb0e2f7ee..bcd01a63e38f 100644 --- a/net/netfilter/nft_rt.c +++ b/net/netfilter/nft_rt.c @@ -15,7 +15,7 @@ struct nft_rt { enum nft_rt_keys key:8; - enum nft_registers dreg:8; + u8 dreg; }; static u16 get_tcpmss(const struct nft_pktinfo *pkt, const struct dst_entry *skbdst) @@ -141,9 +141,8 @@ static int nft_rt_get_init(const struct nft_ctx *ctx, return -EOPNOTSUPP; } - priv->dreg = nft_parse_register(tb[NFTA_RT_DREG]); - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, len); + return nft_parse_register_store(ctx, tb[NFTA_RT_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, len); } static int nft_rt_get_dump(struct sk_buff *skb, diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c index 8a0125e966c8..f6d517185d9c 100644 --- a/net/netfilter/nft_socket.c +++ b/net/netfilter/nft_socket.c @@ -10,7 +10,7 @@ struct nft_socket { enum nft_socket_keys key:8; union { - enum nft_registers dreg:8; + u8 dreg; }; }; @@ -146,9 +146,8 @@ static int nft_socket_init(const struct nft_ctx *ctx, return -EOPNOTSUPP; } - priv->dreg = nft_parse_register(tb[NFTA_SOCKET_DREG]); - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, len); + return nft_parse_register_store(ctx, tb[NFTA_SOCKET_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, len); } static int 
nft_socket_dump(struct sk_buff *skb, diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c index d3eb953d0333..3b27926d5382 100644 --- a/net/netfilter/nft_tunnel.c +++ b/net/netfilter/nft_tunnel.c @@ -15,7 +15,7 @@ struct nft_tunnel { enum nft_tunnel_keys key:8; - enum nft_registers dreg:8; + u8 dreg; enum nft_tunnel_mode mode:8; }; @@ -93,8 +93,6 @@ static int nft_tunnel_get_init(const struct nft_ctx *ctx, return -EOPNOTSUPP; } - priv->dreg = nft_parse_register(tb[NFTA_TUNNEL_DREG]); - if (tb[NFTA_TUNNEL_MODE]) { priv->mode = ntohl(nla_get_be32(tb[NFTA_TUNNEL_MODE])); if (priv->mode > NFT_TUNNEL_MODE_MAX) @@ -103,8 +101,8 @@ static int nft_tunnel_get_init(const struct nft_ctx *ctx, priv->mode = NFT_TUNNEL_MODE_NONE; } - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, len); + return nft_parse_register_store(ctx, tb[NFTA_TUNNEL_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, len); } static int nft_tunnel_get_dump(struct sk_buff *skb, diff --git a/net/netfilter/nft_xfrm.c b/net/netfilter/nft_xfrm.c index 06d5cabf1d7c..cbbbc4ecad3a 100644 --- a/net/netfilter/nft_xfrm.c +++ b/net/netfilter/nft_xfrm.c @@ -24,7 +24,7 @@ static const struct nla_policy nft_xfrm_policy[NFTA_XFRM_MAX + 1] = { struct nft_xfrm { enum nft_xfrm_keys key:8; - enum nft_registers dreg:8; + u8 dreg; u8 dir; u8 spnum; }; @@ -86,9 +86,8 @@ static int nft_xfrm_get_init(const struct nft_ctx *ctx, priv->spnum = spnum; - priv->dreg = nft_parse_register(tb[NFTA_XFRM_DREG]); - return nft_validate_register_store(ctx, priv->dreg, NULL, - NFT_DATA_VALUE, len); + return nft_parse_register_store(ctx, tb[NFTA_XFRM_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, len); } /* Return true if key asks for daddr/saddr and current From 15cc30ac2a8d7185f8ebf97dd1ddd90a7c79783b Mon Sep 17 00:00:00 2001 From: Florian Westphal Date: Wed, 18 May 2022 20:15:31 +0200 Subject: [PATCH 1082/1842] netfilter: use get_random_u32 instead of prandom [ Upstream commit b1fd94e704571f98b21027340eecf821b2bdffba ] bh might occur while updating per-cpu rnd_state from user context, ie. local_out path. BUG: using smp_processor_id() in preemptible [00000000] code: nginx/2725 caller is nft_ng_random_eval+0x24/0x54 [nft_numgen] Call Trace: check_preemption_disabled+0xde/0xe0 nft_ng_random_eval+0x24/0x54 [nft_numgen] Use the random driver instead, this also avoids need for local prandom state. Moreover, prandom now uses the random driver since d4150779e60f ("random32: use real rng for non-deterministic randomness"). Based on earlier patch from Pablo Neira. 
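As a rough sketch (illustrative only, assuming get_random_u32() from <linux/random.h> and the reciprocal_scale() helper from <linux/kernel.h>), the numgen-style mapping after the conversion touches no per-cpu state, so it is safe in any context, and the result still lands in the configured range:

	/* hypothetical helper: returns a value in [offset, offset + modulus - 1] */
	static u32 example_ng_random(u32 modulus, u32 offset)
	{
		return reciprocal_scale(get_random_u32(), modulus) + offset;
	}
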
Fixes: 6b2faee0ca91 ("netfilter: nft_meta: place prandom handling in a helper") Fixes: 978d8f9055c3 ("netfilter: nft_numgen: add map lookups for numgen random operations") Signed-off-by: Florian Westphal Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- net/netfilter/nft_meta.c | 13 ++----------- net/netfilter/nft_numgen.c | 12 +++--------- 2 files changed, 5 insertions(+), 20 deletions(-) diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c index a7e01e9952f1..44d9b38e5f90 100644 --- a/net/netfilter/nft_meta.c +++ b/net/netfilter/nft_meta.c @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -32,8 +33,6 @@ #define NFT_META_SECS_PER_DAY 86400 #define NFT_META_DAYS_PER_WEEK 7 -static DEFINE_PER_CPU(struct rnd_state, nft_prandom_state); - static u8 nft_meta_weekday(void) { time64_t secs = ktime_get_real_seconds(); @@ -267,13 +266,6 @@ static bool nft_meta_get_eval_ifname(enum nft_meta_keys key, u32 *dest, return true; } -static noinline u32 nft_prandom_u32(void) -{ - struct rnd_state *state = this_cpu_ptr(&nft_prandom_state); - - return prandom_u32_state(state); -} - #ifdef CONFIG_IP_ROUTE_CLASSID static noinline bool nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest) @@ -385,7 +377,7 @@ void nft_meta_get_eval(const struct nft_expr *expr, break; #endif case NFT_META_PRANDOM: - *dest = nft_prandom_u32(); + *dest = get_random_u32(); break; #ifdef CONFIG_XFRM case NFT_META_SECPATH: @@ -514,7 +506,6 @@ int nft_meta_get_init(const struct nft_ctx *ctx, len = IFNAMSIZ; break; case NFT_META_PRANDOM: - prandom_init_once(&nft_prandom_state); len = sizeof(u32); break; #ifdef CONFIG_XFRM diff --git a/net/netfilter/nft_numgen.c b/net/netfilter/nft_numgen.c index 722cac1e90e0..4e43214e88de 100644 --- a/net/netfilter/nft_numgen.c +++ b/net/netfilter/nft_numgen.c @@ -9,12 +9,11 @@ #include #include #include +#include #include #include #include -static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state); - struct nft_ng_inc { u8 dreg; u32 modulus; @@ -104,12 +103,9 @@ struct nft_ng_random { u32 offset; }; -static u32 nft_ng_random_gen(struct nft_ng_random *priv) +static u32 nft_ng_random_gen(const struct nft_ng_random *priv) { - struct rnd_state *state = this_cpu_ptr(&nft_numgen_prandom_state); - - return reciprocal_scale(prandom_u32_state(state), priv->modulus) + - priv->offset; + return reciprocal_scale(get_random_u32(), priv->modulus) + priv->offset; } static void nft_ng_random_eval(const struct nft_expr *expr, @@ -137,8 +133,6 @@ static int nft_ng_random_init(const struct nft_ctx *ctx, if (priv->offset + priv->modulus - 1 < priv->offset) return -EOVERFLOW; - prandom_init_once(&nft_numgen_prandom_state); - return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg, NULL, NFT_DATA_VALUE, sizeof(u32)); } From 10eb239e293516d99f7df050a1a0c8a9cf484070 Mon Sep 17 00:00:00 2001 From: Damien Le Moal Date: Wed, 8 Jun 2022 10:13:02 +0900 Subject: [PATCH 1083/1842] scsi: scsi_debug: Fix zone transition to full condition [ Upstream commit 566d3c57eb526f32951af15866086e236ce1fc8a ] When a write command to a sequential write required or sequential write preferred zone result in the zone write pointer reaching the end of the zone, the zone condition must be set to full AND the number of implicitly or explicitly open zones updated to have a correct accounting for zone resources. However, the function zbc_inc_wp() only sets the zone condition to full without updating the open zone counters, resulting in a zone state machine breakage. 
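Condensed for illustration (field and constant names as in scsi_debug; the full change is in the diff below), the transition that has to happen together with the counter update looks like:

	if (zsp->z_cond == ZC2_IMPLICIT_OPEN)
		devip->nr_imp_open--;		/* e.g. 3 -> 2 */
	else if (zsp->z_cond == ZC3_EXPLICIT_OPEN)
		devip->nr_exp_open--;
	zsp->z_cond = ZC5_FULL;
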
Introduce the helper function zbc_set_zone_full() and use it in zbc_inc_wp() to correctly transition zones to the full condition. Link: https://lore.kernel.org/r/20220608011302.92061-1-damien.lemoal@opensource.wdc.com Fixes: f0d1cf9378bd ("scsi: scsi_debug: Add ZBC zone commands") Reviewed-by: Niklas Cassel Acked-by: Douglas Gilbert Signed-off-by: Damien Le Moal Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin --- drivers/scsi/scsi_debug.c | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c index 6b00de6b6f0e..5eb959b5f701 100644 --- a/drivers/scsi/scsi_debug.c +++ b/drivers/scsi/scsi_debug.c @@ -2746,6 +2746,24 @@ static void zbc_open_zone(struct sdebug_dev_info *devip, } } +static inline void zbc_set_zone_full(struct sdebug_dev_info *devip, + struct sdeb_zone_state *zsp) +{ + switch (zsp->z_cond) { + case ZC2_IMPLICIT_OPEN: + devip->nr_imp_open--; + break; + case ZC3_EXPLICIT_OPEN: + devip->nr_exp_open--; + break; + default: + WARN_ONCE(true, "Invalid zone %llu condition %x\n", + zsp->z_start, zsp->z_cond); + break; + } + zsp->z_cond = ZC5_FULL; +} + static void zbc_inc_wp(struct sdebug_dev_info *devip, unsigned long long lba, unsigned int num) { @@ -2758,7 +2776,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip, if (zsp->z_type == ZBC_ZONE_TYPE_SWR) { zsp->z_wp += num; if (zsp->z_wp >= zend) - zsp->z_cond = ZC5_FULL; + zbc_set_zone_full(devip, zsp); return; } @@ -2777,7 +2795,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip, n = num; } if (zsp->z_wp >= zend) - zsp->z_cond = ZC5_FULL; + zbc_set_zone_full(devip, zsp); num -= n; lba += n; From 505a375eea117985ed47cbcb5e1d43a37d2104c8 Mon Sep 17 00:00:00 2001 From: Jonathan Marek Date: Mon, 13 Jun 2022 18:10:19 -0400 Subject: [PATCH 1084/1842] drm/msm: use for_each_sgtable_sg to iterate over scatterlist [ Upstream commit 62b5e322fb6cc5a5a91fdeba0e4e57e75d9f4387 ] The dma_map_sgtable() call (used to invalidate cache) overwrites sgt->nents with 1, so msm_iommu_pagetable_map maps only the first physical segment. To fix this problem use for_each_sgtable_sg(), which uses orig_nents. Fixes: b145c6e65eb0 ("drm/msm: Add support to create a local pagetable") Signed-off-by: Jonathan Marek Link: https://lore.kernel.org/r/20220613221019.11399-1-jonathan@marek.ca Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/msm_iommu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 22ac7c692a81..ecab6287c1c3 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -58,7 +58,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, u64 addr = iova; unsigned int i; - for_each_sg(sgt->sgl, sg, sgt->nents, i) { + for_each_sgtable_sg(sgt, sg, i) { size_t size = sg->length; phys_addr_t phys = sg_phys(sg); From 516760f1d2979903eaad5b437256913c5cd98416 Mon Sep 17 00:00:00 2001 From: Jon Maxwell Date: Wed, 15 Jun 2022 11:15:40 +1000 Subject: [PATCH 1085/1842] bpf: Fix request_sock leak in sk lookup helpers [ Upstream commit 3046a827316c0e55fc563b4fb78c93b9ca5c7c37 ] A customer reported a request_socket leak in a Calico cloud environment. 
We found that a BPF program was doing a socket lookup with takes a refcnt on the socket and that it was finding the request_socket but returning the parent LISTEN socket via sk_to_full_sk() without decrementing the child request socket 1st, resulting in request_sock slab object leak. This patch retains the existing behaviour of returning full socks to the caller but it also decrements the child request_socket if one is present before doing so to prevent the leak. Thanks to Curtis Taylor for all the help in diagnosing and testing this. And thanks to Antoine Tenart for the reproducer and patch input. v2 of this patch contains, refactor as per Daniel Borkmann's suggestions to validate RCU flags on the listen socket so that it balances with bpf_sk_release() and update comments as per Martin KaFai Lau's suggestion. One small change to Daniels suggestion, put "sk = sk2" under "if (sk2 != sk)" to avoid an extra instruction. Fixes: f7355a6c0497 ("bpf: Check sk_fullsock() before returning from bpf_sk_lookup()") Fixes: edbf8c01de5a ("bpf: add skc_lookup_tcp helper") Co-developed-by: Antoine Tenart Signed-off-by: Antoine Tenart Signed-off-by: Jon Maxwell Signed-off-by: Daniel Borkmann Tested-by: Curtis Taylor Cc: Martin KaFai Lau Link: https://lore.kernel.org/bpf/56d6f898-bde0-bb25-3427-12a330b29fb8@iogearbox.net Link: https://lore.kernel.org/bpf/20220615011540.813025-1-jmaxwell37@gmail.com Signed-off-by: Sasha Levin --- net/core/filter.c | 34 ++++++++++++++++++++++++++++------ 1 file changed, 28 insertions(+), 6 deletions(-) diff --git a/net/core/filter.c b/net/core/filter.c index d348f1d3fb8f..246947fbc958 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -5982,10 +5982,21 @@ __bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len, ifindex, proto, netns_id, flags); if (sk) { - sk = sk_to_full_sk(sk); - if (!sk_fullsock(sk)) { + struct sock *sk2 = sk_to_full_sk(sk); + + /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk + * sock refcnt is decremented to prevent a request_sock leak. + */ + if (!sk_fullsock(sk2)) + sk2 = NULL; + if (sk2 != sk) { sock_gen_put(sk); - return NULL; + /* Ensure there is no need to bump sk2 refcnt */ + if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) { + WARN_ONCE(1, "Found non-RCU, unreferenced socket!"); + return NULL; + } + sk = sk2; } } @@ -6019,10 +6030,21 @@ bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len, flags); if (sk) { - sk = sk_to_full_sk(sk); - if (!sk_fullsock(sk)) { + struct sock *sk2 = sk_to_full_sk(sk); + + /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk + * sock refcnt is decremented to prevent a request_sock leak. + */ + if (!sk_fullsock(sk2)) + sk2 = NULL; + if (sk2 != sk) { sock_gen_put(sk); - return NULL; + /* Ensure there is no need to bump sk2 refcnt */ + if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) { + WARN_ONCE(1, "Found non-RCU, unreferenced socket!"); + return NULL; + } + sk = sk2; } } From 4ae116428e81f4c3785792d5647d445c8b3e0171 Mon Sep 17 00:00:00 2001 From: Samuel Holland Date: Wed, 15 Jun 2022 00:42:53 -0500 Subject: [PATCH 1086/1842] drm/sun4i: Fix crash during suspend after component bind failure [ Upstream commit 1342b5b23da9559a1578978eaff7f797d8a87d91 ] If the component driver fails to bind, or is unbound, the driver data for the top-level platform device points to a freed drm_device. If the system is then suspended, the driver passes this dangling pointer to drm_mode_config_helper_suspend(), which crashes. 
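For illustration, a simplified sketch of the suspend path (not the driver's exact code): anything that trusts dev_get_drvdata() dereferences the stale pointer unless the bind path only publishes it on success:

	static int example_sys_suspend(struct device *dev)
	{
		struct drm_device *drm = dev_get_drvdata(dev);

		/* if bind failed after dev_set_drvdata(), drm points to freed
		 * memory and this call crashes
		 */
		return drm_mode_config_helper_suspend(drm);
	}
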
Fix this by only setting the driver data while the platform driver holds a reference to the drm_device. Fixes: 624b4b48d9d8 ("drm: sun4i: Add support for suspending the display driver") Signed-off-by: Samuel Holland Reviewed-by: Jernej Skrabec Signed-off-by: Maxime Ripard Link: https://lore.kernel.org/r/20220615054254.16352-1-samuel@sholland.org Signed-off-by: Sasha Levin --- drivers/gpu/drm/sun4i/sun4i_drv.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c index 29861fc81b35..c5912fd53772 100644 --- a/drivers/gpu/drm/sun4i/sun4i_drv.c +++ b/drivers/gpu/drm/sun4i/sun4i_drv.c @@ -71,7 +71,6 @@ static int sun4i_drv_bind(struct device *dev) goto free_drm; } - dev_set_drvdata(dev, drm); drm->dev_private = drv; INIT_LIST_HEAD(&drv->frontend_list); INIT_LIST_HEAD(&drv->engine_list); @@ -112,6 +111,8 @@ static int sun4i_drv_bind(struct device *dev) drm_fbdev_generic_setup(drm, 32); + dev_set_drvdata(dev, drm); + return 0; finish_poll: @@ -128,6 +129,7 @@ static void sun4i_drv_unbind(struct device *dev) { struct drm_device *drm = dev_get_drvdata(dev); + dev_set_drvdata(dev, NULL); drm_dev_unregister(drm); drm_kms_helper_poll_fini(drm); drm_atomic_helper_shutdown(drm); From a51c199e4d2bbb8748c12e2ce846024dea012e57 Mon Sep 17 00:00:00 2001 From: Jakub Sitnicki Date: Thu, 16 Jun 2022 18:20:36 +0200 Subject: [PATCH 1087/1842] bpf, x86: Fix tail call count offset calculation on bpf2bpf call [ Upstream commit ff672c67ee7635ca1e28fb13729e8ef0d1f08ce5 ] On x86-64 the tail call count is passed from one BPF function to another through %rax. Additionally, on function entry, the tail call count value is stored on stack right after the BPF program stack, due to register shortage. The stored count is later loaded from stack either when performing a tail call - to check if we have not reached the tail call limit - or before calling another BPF function call in order to pass it via %rax. In the latter case, we miscalculate the offset at which the tail call count was stored on function entry. The JIT does not take into account that the allocated BPF program stack is always a multiple of 8 on x86, while the actual stack depth does not have to be. This leads to a load from an offset that belongs to the BPF stack, as shown in the example below: SEC("tc") int entry(struct __sk_buff *skb) { /* Have data on stack which size is not a multiple of 8 */ volatile char arr[1] = {}; return subprog_tail(skb); } int entry(struct __sk_buff * skb): 0: (b4) w2 = 0 1: (73) *(u8 *)(r10 -1) = r2 2: (85) call pc+1#bpf_prog_ce2f79bb5f3e06dd_F 3: (95) exit int entry(struct __sk_buff * skb): 0xffffffffa0201788: nop DWORD PTR [rax+rax*1+0x0] 0xffffffffa020178d: xor eax,eax 0xffffffffa020178f: push rbp 0xffffffffa0201790: mov rbp,rsp 0xffffffffa0201793: sub rsp,0x8 0xffffffffa020179a: push rax 0xffffffffa020179b: xor esi,esi 0xffffffffa020179d: mov BYTE PTR [rbp-0x1],sil 0xffffffffa02017a1: mov rax,QWORD PTR [rbp-0x9] !!! tail call count 0xffffffffa02017a8: call 0xffffffffa02017d8 !!! is at rbp-0x10 0xffffffffa02017ad: leave 0xffffffffa02017ae: ret Fix it by rounding up the BPF stack depth to a multiple of 8, when calculating the tail call count offset on stack. 
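A worked example of the offset arithmetic, matching the disassembly above: with a BPF stack depth of 1 byte, the prologue reserves round_up(1, 8) = 8 bytes of BPF stack plus 8 bytes for the saved tail call count, so the count sits at rbp-0x10, while the old expression resolves to rbp-0x9, inside the BPF stack:

	/* illustrative only, for stack_depth == 1 */
	off_old = -(1 + 8);		/* -9  (rbp-0x9): wrong, inside the BPF stack */
	off_new = -round_up(1, 8) - 8;	/* -16 (rbp-0x10): the saved tail call count  */
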
Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT") Signed-off-by: Jakub Sitnicki Signed-off-by: Daniel Borkmann Acked-by: Maciej Fijalkowski Acked-by: Daniel Borkmann Link: https://lore.kernel.org/bpf/20220616162037.535469-2-jakub@cloudflare.com Signed-off-by: Sasha Levin --- arch/x86/net/bpf_jit_comp.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index a0a7ead52698..1714e85eb26d 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -1261,8 +1261,9 @@ xadd: if (is_imm8(insn->off)) case BPF_JMP | BPF_CALL: func = (u8 *) __bpf_call_base + imm32; if (tail_call_reachable) { + /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */ EMIT3_off32(0x48, 0x8B, 0x85, - -(bpf_prog->aux->stack_depth + 8)); + -round_up(bpf_prog->aux->stack_depth, 8) - 8); if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7)) return -EINVAL; } else { From ab7f565ac70594576bc23ac3da24d13c68122171 Mon Sep 17 00:00:00 2001 From: Claudiu Manoil Date: Fri, 10 Jun 2022 11:40:37 +0300 Subject: [PATCH 1088/1842] phy: aquantia: Fix AN when higher speeds than 1G are not advertised [ Upstream commit 9b7fd1670a94a57d974795acebde843a5c1a354e ] Even when the eth port is resticted to work with speeds not higher than 1G, and so the eth driver is requesting the phy (via phylink) to advertise up to 1000BASET support, the aquantia phy device is still advertising for 2.5G and 5G speeds. Clear these advertising defaults when requested. Cc: Ondrej Spacek Fixes: 09c4c57f7bc41 ("net: phy: aquantia: add support for auto-negotiation configuration") Signed-off-by: Claudiu Manoil Link: https://lore.kernel.org/r/20220610084037.7625-1-claudiu.manoil@nxp.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/phy/aquantia_main.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/drivers/net/phy/aquantia_main.c b/drivers/net/phy/aquantia_main.c index 41e7c1432497..75a62d1cc737 100644 --- a/drivers/net/phy/aquantia_main.c +++ b/drivers/net/phy/aquantia_main.c @@ -34,6 +34,8 @@ #define MDIO_AN_VEND_PROV 0xc400 #define MDIO_AN_VEND_PROV_1000BASET_FULL BIT(15) #define MDIO_AN_VEND_PROV_1000BASET_HALF BIT(14) +#define MDIO_AN_VEND_PROV_5000BASET_FULL BIT(11) +#define MDIO_AN_VEND_PROV_2500BASET_FULL BIT(10) #define MDIO_AN_VEND_PROV_DOWNSHIFT_EN BIT(4) #define MDIO_AN_VEND_PROV_DOWNSHIFT_MASK GENMASK(3, 0) #define MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT 4 @@ -230,9 +232,20 @@ static int aqr_config_aneg(struct phy_device *phydev) phydev->advertising)) reg |= MDIO_AN_VEND_PROV_1000BASET_HALF; + /* Handle the case when the 2.5G and 5G speeds are not advertised */ + if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, + phydev->advertising)) + reg |= MDIO_AN_VEND_PROV_2500BASET_FULL; + + if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT, + phydev->advertising)) + reg |= MDIO_AN_VEND_PROV_5000BASET_FULL; + ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_VEND_PROV, MDIO_AN_VEND_PROV_1000BASET_HALF | - MDIO_AN_VEND_PROV_1000BASET_FULL, reg); + MDIO_AN_VEND_PROV_1000BASET_FULL | + MDIO_AN_VEND_PROV_2500BASET_FULL | + MDIO_AN_VEND_PROV_5000BASET_FULL, reg); if (ret < 0) return ret; if (ret > 0) From f299d3fbe431aa5556edc0670a401d2b7b796490 Mon Sep 17 00:00:00 2001 From: Xin Long Date: Tue, 18 May 2021 10:09:08 +0800 Subject: [PATCH 1089/1842] tipc: simplify the finalize work queue [ Upstream commit be07f056396d6bb40963c45a02951c566ddeef8e ] This patch 
is to use "struct work_struct" for the finalize work queue instead of "struct tipc_net_work", as it can get the "net" and "addr" from tipc_net's other members and there is no need to add extra net and addr in tipc_net by defining "struct tipc_net_work". Note that it's safe to get net from tn->bcl as bcl is always released after the finalize work queue is done. Signed-off-by: Xin Long Acked-by: Jon Maloy Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/tipc/core.c | 4 ++-- net/tipc/core.h | 8 +------- net/tipc/discover.c | 4 ++-- net/tipc/link.c | 5 +++++ net/tipc/link.h | 1 + net/tipc/net.c | 15 +++------------ 6 files changed, 14 insertions(+), 23 deletions(-) diff --git a/net/tipc/core.c b/net/tipc/core.c index 40c03085c0ea..96bfcb2986f5 100644 --- a/net/tipc/core.c +++ b/net/tipc/core.c @@ -60,7 +60,7 @@ static int __net_init tipc_init_net(struct net *net) tn->trial_addr = 0; tn->addr_trial_end = 0; tn->capabilities = TIPC_NODE_CAPABILITIES; - INIT_WORK(&tn->final_work.work, tipc_net_finalize_work); + INIT_WORK(&tn->work, tipc_net_finalize_work); memset(tn->node_id, 0, sizeof(tn->node_id)); memset(tn->node_id_string, 0, sizeof(tn->node_id_string)); tn->mon_threshold = TIPC_DEF_MON_THRESHOLD; @@ -112,7 +112,7 @@ static void __net_exit tipc_exit_net(struct net *net) tipc_detach_loopback(net); /* Make sure the tipc_net_finalize_work() finished */ - cancel_work_sync(&tn->final_work.work); + cancel_work_sync(&tn->work); tipc_net_stop(net); tipc_bcast_stop(net); diff --git a/net/tipc/core.h b/net/tipc/core.h index 992924a849be..73a26b0b9ca1 100644 --- a/net/tipc/core.h +++ b/net/tipc/core.h @@ -90,12 +90,6 @@ extern unsigned int tipc_net_id __read_mostly; extern int sysctl_tipc_rmem[3] __read_mostly; extern int sysctl_tipc_named_timeout __read_mostly; -struct tipc_net_work { - struct work_struct work; - struct net *net; - u32 addr; -}; - struct tipc_net { u8 node_id[NODE_ID_LEN]; u32 node_addr; @@ -150,7 +144,7 @@ struct tipc_net { struct tipc_crypto *crypto_tx; #endif /* Work item for net finalize */ - struct tipc_net_work final_work; + struct work_struct work; /* The numbers of work queues in schedule */ atomic_t wq_count; }; diff --git a/net/tipc/discover.c b/net/tipc/discover.c index d4ecacddb40c..14bc20604051 100644 --- a/net/tipc/discover.c +++ b/net/tipc/discover.c @@ -167,7 +167,7 @@ static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d, /* Apply trial address if we just left trial period */ if (!trial && !self) { - tipc_sched_net_finalize(net, tn->trial_addr); + schedule_work(&tn->work); msg_set_prevnode(buf_msg(d->skb), tn->trial_addr); msg_set_type(buf_msg(d->skb), DSC_REQ_MSG); } @@ -307,7 +307,7 @@ static void tipc_disc_timeout(struct timer_list *t) if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) { mod_timer(&d->timer, jiffies + TIPC_DISC_INIT); spin_unlock_bh(&d->lock); - tipc_sched_net_finalize(net, tn->trial_addr); + schedule_work(&tn->work); return; } diff --git a/net/tipc/link.c b/net/tipc/link.c index 7a353ff62844..064fdb8e50e1 100644 --- a/net/tipc/link.c +++ b/net/tipc/link.c @@ -344,6 +344,11 @@ char tipc_link_plane(struct tipc_link *l) return l->net_plane; } +struct net *tipc_link_net(struct tipc_link *l) +{ + return l->net; +} + void tipc_link_update_caps(struct tipc_link *l, u16 capabilities) { l->peer_caps = capabilities; diff --git a/net/tipc/link.h b/net/tipc/link.h index fc07232c9a12..a16f401fdabd 100644 --- a/net/tipc/link.h +++ b/net/tipc/link.h @@ -156,4 +156,5 @@ int tipc_link_bc_sync_rcv(struct tipc_link *l, 
struct tipc_msg *hdr, int tipc_link_bc_nack_rcv(struct tipc_link *l, struct sk_buff *skb, struct sk_buff_head *xmitq); bool tipc_link_too_silent(struct tipc_link *l); +struct net *tipc_link_net(struct tipc_link *l); #endif diff --git a/net/tipc/net.c b/net/tipc/net.c index 0bb2323201da..671cb4f9d563 100644 --- a/net/tipc/net.c +++ b/net/tipc/net.c @@ -41,6 +41,7 @@ #include "socket.h" #include "node.h" #include "bcast.h" +#include "link.h" #include "netlink.h" #include "monitor.h" @@ -138,19 +139,9 @@ static void tipc_net_finalize(struct net *net, u32 addr) void tipc_net_finalize_work(struct work_struct *work) { - struct tipc_net_work *fwork; + struct tipc_net *tn = container_of(work, struct tipc_net, work); - fwork = container_of(work, struct tipc_net_work, work); - tipc_net_finalize(fwork->net, fwork->addr); -} - -void tipc_sched_net_finalize(struct net *net, u32 addr) -{ - struct tipc_net *tn = tipc_net(net); - - tn->final_work.net = net; - tn->final_work.addr = addr; - schedule_work(&tn->final_work.work); + tipc_net_finalize(tipc_link_net(tn->bcl), tn->trial_addr); } void tipc_net_stop(struct net *net) From 361c5521c1e49843b710f455cae3c0a50b714323 Mon Sep 17 00:00:00 2001 From: Hoang Le Date: Fri, 17 Jun 2022 08:45:51 +0700 Subject: [PATCH 1090/1842] tipc: fix use-after-free Read in tipc_named_reinit [ Upstream commit 911600bf5a5e84bfda4d33ee32acc75ecf6159f0 ] syzbot found the following issue on: ================================================================== BUG: KASAN: use-after-free in tipc_named_reinit+0x94f/0x9b0 net/tipc/name_distr.c:413 Read of size 8 at addr ffff88805299a000 by task kworker/1:9/23764 CPU: 1 PID: 23764 Comm: kworker/1:9 Not tainted 5.18.0-rc4-syzkaller-00878-g17d49e6e8012 #0 Hardware name: Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Workqueue: events tipc_net_finalize_work Call Trace: __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106 print_address_description.constprop.0.cold+0xeb/0x495 mm/kasan/report.c:313 print_report mm/kasan/report.c:429 [inline] kasan_report.cold+0xf4/0x1c6 mm/kasan/report.c:491 tipc_named_reinit+0x94f/0x9b0 net/tipc/name_distr.c:413 tipc_net_finalize+0x234/0x3d0 net/tipc/net.c:138 process_one_work+0x996/0x1610 kernel/workqueue.c:2289 worker_thread+0x665/0x1080 kernel/workqueue.c:2436 kthread+0x2e9/0x3a0 kernel/kthread.c:376 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298 [...] ================================================================== In the commit d966ddcc3821 ("tipc: fix a deadlock when flushing scheduled work"), the cancel_work_sync() function just to make sure ONLY the work tipc_net_finalize_work() is executing/pending on any CPU completed before tipc namespace is destroyed through tipc_exit_net(). But this function is not guaranteed the work is the last queued. So, the destroyed instance may be accessed in the work which will try to enqueue later. In order to completely fix, we re-order the calling of cancel_work_sync() to make sure the work tipc_net_finalize_work() was last queued and it must be completed by calling cancel_work_sync(). Reported-by: syzbot+47af19f3307fc9c5c82e@syzkaller.appspotmail.com Fixes: d966ddcc3821 ("tipc: fix a deadlock when flushing scheduled work") Acked-by: Jon Maloy Signed-off-by: Ying Xue Signed-off-by: Hoang Le Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/tipc/core.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/net/tipc/core.c b/net/tipc/core.c index 96bfcb2986f5..7724499f516e 100644 --- a/net/tipc/core.c +++ b/net/tipc/core.c @@ -111,10 +111,9 @@ static void __net_exit tipc_exit_net(struct net *net) struct tipc_net *tn = tipc_net(net); tipc_detach_loopback(net); + tipc_net_stop(net); /* Make sure the tipc_net_finalize_work() finished */ cancel_work_sync(&tn->work); - tipc_net_stop(net); - tipc_bcast_stop(net); tipc_nametbl_stop(net); tipc_sk_rht_destroy(net); From c12a2c9b1b460ed72e6b3c33aac1ef51b0329b66 Mon Sep 17 00:00:00 2001 From: Lorenzo Bianconi Date: Thu, 16 Jun 2022 16:13:20 +0200 Subject: [PATCH 1091/1842] igb: fix a use-after-free issue in igb_clean_tx_ring [ Upstream commit 3f6a57ee8544ec3982f8a3cbcbf4aea7d47eb9ec ] Fix the following use-after-free bug in igb_clean_tx_ring routine when the NIC is running in XDP mode. The issue can be triggered redirecting traffic into the igb NIC and then closing the device while the traffic is flowing. [ 73.322719] CPU: 1 PID: 487 Comm: xdp_redirect Not tainted 5.18.3-apu2 #9 [ 73.330639] Hardware name: PC Engines APU2/APU2, BIOS 4.0.7 02/28/2017 [ 73.337434] RIP: 0010:refcount_warn_saturate+0xa7/0xf0 [ 73.362283] RSP: 0018:ffffc9000081f798 EFLAGS: 00010282 [ 73.367761] RAX: 0000000000000000 RBX: ffffc90000420f80 RCX: 0000000000000000 [ 73.375200] RDX: ffff88811ad22d00 RSI: ffff88811ad171e0 RDI: ffff88811ad171e0 [ 73.382590] RBP: 0000000000000900 R08: ffffffff82298f28 R09: 0000000000000058 [ 73.390008] R10: 0000000000000219 R11: ffffffff82280f40 R12: 0000000000000090 [ 73.397356] R13: ffff888102343a40 R14: ffff88810359e0e4 R15: 0000000000000000 [ 73.404806] FS: 00007ff38d31d740(0000) GS:ffff88811ad00000(0000) knlGS:0000000000000000 [ 73.413129] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 73.419096] CR2: 000055cff35f13f8 CR3: 0000000106391000 CR4: 00000000000406e0 [ 73.426565] Call Trace: [ 73.429087] [ 73.431314] igb_clean_tx_ring+0x43/0x140 [igb] [ 73.436002] igb_down+0x1d7/0x220 [igb] [ 73.439974] __igb_close+0x3c/0x120 [igb] [ 73.444118] igb_xdp+0x10c/0x150 [igb] [ 73.447983] ? igb_pci_sriov_configure+0x70/0x70 [igb] [ 73.453362] dev_xdp_install+0xda/0x110 [ 73.457371] dev_xdp_attach+0x1da/0x550 [ 73.461369] do_setlink+0xfd0/0x10f0 [ 73.465166] ? __nla_validate_parse+0x89/0xc70 [ 73.469714] rtnl_setlink+0x11a/0x1e0 [ 73.473547] rtnetlink_rcv_msg+0x145/0x3d0 [ 73.477709] ? rtnl_calcit.isra.0+0x130/0x130 [ 73.482258] netlink_rcv_skb+0x8d/0x110 [ 73.486229] netlink_unicast+0x230/0x340 [ 73.490317] netlink_sendmsg+0x215/0x470 [ 73.494395] __sys_sendto+0x179/0x190 [ 73.498268] ? move_addr_to_user+0x37/0x70 [ 73.502547] ? __sys_getsockname+0x84/0xe0 [ 73.506853] ? netlink_setsockopt+0x1c1/0x4a0 [ 73.511349] ? 
__sys_setsockopt+0xc8/0x1d0 [ 73.515636] __x64_sys_sendto+0x20/0x30 [ 73.519603] do_syscall_64+0x3b/0x80 [ 73.523399] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 73.528712] RIP: 0033:0x7ff38d41f20c [ 73.551866] RSP: 002b:00007fff3b945a68 EFLAGS: 00000246 ORIG_RAX: 000000000000002c [ 73.559640] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff38d41f20c [ 73.567066] RDX: 0000000000000034 RSI: 00007fff3b945b30 RDI: 0000000000000003 [ 73.574457] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000 [ 73.581852] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff3b945ab0 [ 73.589179] R13: 0000000000000000 R14: 0000000000000003 R15: 00007fff3b945b30 [ 73.596545] [ 73.598842] ---[ end trace 0000000000000000 ]--- Fixes: 9cbc948b5a20c ("igb: add XDP support") Signed-off-by: Lorenzo Bianconi Reviewed-by: Jesse Brandeburg Acked-by: Jesper Dangaard Brouer Link: https://lore.kernel.org/r/e5c01d549dc37bff18e46aeabd6fb28a7bcf84be.1655388571.git.lorenzo@kernel.org Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/igb/igb_main.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 5e67c9c119d2..758e468e677a 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -4813,8 +4813,11 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring) while (i != tx_ring->next_to_use) { union e1000_adv_tx_desc *eop_desc, *tx_desc; - /* Free all the Tx ring sk_buffs */ - dev_kfree_skb_any(tx_buffer->skb); + /* Free all the Tx ring sk_buffs or xdp frames */ + if (tx_buffer->type == IGB_TYPE_SKB) + dev_kfree_skb_any(tx_buffer->skb); + else + xdp_return_frame(tx_buffer->xdpf); /* unmap skb header data */ dma_unmap_single(tx_ring->dev, From 2e3216b929bb1c74ae8bea6042b9c85ad37240e8 Mon Sep 17 00:00:00 2001 From: Jay Vosburgh Date: Thu, 16 Jun 2022 12:32:40 -0700 Subject: [PATCH 1092/1842] bonding: ARP monitor spams NETDEV_NOTIFY_PEERS notifiers [ Upstream commit 7a9214f3d88cfdb099f3896e102a306b316d8707 ] The bonding ARP monitor fails to decrement send_peer_notif, the number of peer notifications (gratuitous ARP or ND) to be sent. This results in a continuous series of notifications. Correct this by decrementing the counter for each notification. 
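In other words (sketch only, mirroring the change in the diff below), send_peer_notif is a budget of notifications still owed, so each NETDEV_NOTIFY_PEERS event has to consume one:

	if (should_notify_peers) {
		bond->send_peer_notif--;	/* consume one pending notification */
		call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, bond->dev);
	}
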
Reported-by: Jonathan Toppins Signed-off-by: Jay Vosburgh Fixes: b0929915e035 ("bonding: Fix RTNL: assertion failed at net/core/rtnetlink.c for ab arp monitor") Link: https://lore.kernel.org/netdev/b2fd4147-8f50-bebd-963a-1a3e8d1d9715@redhat.com/ Tested-by: Jonathan Toppins Reviewed-by: Jonathan Toppins Link: https://lore.kernel.org/r/9400.1655407960@famine Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/bonding/bond_main.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index cbeb69bca0bb..9c4b45341fd2 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -3368,9 +3368,11 @@ static void bond_activebackup_arp_mon(struct bonding *bond) if (!rtnl_trylock()) return; - if (should_notify_peers) + if (should_notify_peers) { + bond->send_peer_notif--; call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, bond->dev); + } if (should_notify_rtnl) { bond_slave_state_notify(bond); bond_slave_link_notify(bond); From 363fd6e3461851826cc4dfd624104cb205c61bd0 Mon Sep 17 00:00:00 2001 From: Peilin Ye Date: Thu, 16 Jun 2022 16:43:36 -0700 Subject: [PATCH 1093/1842] net/sched: sch_netem: Fix arithmetic in netem_dump() for 32-bit platforms [ Upstream commit a2b1a5d40bd12b44322c2ccd40bb0ec1699708b6 ] As reported by Yuming, currently tc always show a latency of UINT_MAX for netem Qdisc's on 32-bit platforms: $ tc qdisc add dev dummy0 root netem latency 100ms $ tc qdisc show dev dummy0 qdisc netem 8001: root refcnt 2 limit 1000 delay 275s 275s ^^^^^^^^^^^^^^^^ Let us take a closer look at netem_dump(): qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency, UINT_MAX); qopt.latency is __u32, psched_tdiff_t is signed long, (psched_tdiff_t)(UINT_MAX) is negative for 32-bit platforms, so qopt.latency is always UINT_MAX. Fix it by using psched_time_t (u64) instead. Note: confusingly, users have two ways to specify 'latency': 1. normally, via '__u32 latency' in struct tc_netem_qopt; 2. via the TCA_NETEM_LATENCY64 attribute, which is s64. For the second case, theoretically 'latency' could be negative. This patch ignores that corner case, since it is broken (i.e. assigning a negative s64 to __u32) anyways, and should be handled separately. Thanks Ted Lin for the analysis [1] . 
[1] https://github.com/raspberrypi/linux/issues/3512 Reported-by: Yuming Chen Fixes: 112f9cb65643 ("netem: convert to qdisc_watchdog_schedule_ns") Reviewed-by: Cong Wang Signed-off-by: Peilin Ye Acked-by: Stephen Hemminger Link: https://lore.kernel.org/r/20220616234336.2443-1-yepeilin.cs@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/sched/sch_netem.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c index 0c345e43a09a..adc5407fd5d5 100644 --- a/net/sched/sch_netem.c +++ b/net/sched/sch_netem.c @@ -1146,9 +1146,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb) struct tc_netem_rate rate; struct tc_netem_slot slot; - qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency), + qopt.latency = min_t(psched_time_t, PSCHED_NS2TICKS(q->latency), UINT_MAX); - qopt.jitter = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->jitter), + qopt.jitter = min_t(psched_time_t, PSCHED_NS2TICKS(q->jitter), UINT_MAX); qopt.limit = q->limit; qopt.loss = q->loss; From d16a4339825e64f9ddcdff5277982d640bae933b Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 7 Jun 2022 15:08:38 +0400 Subject: [PATCH 1094/1842] drm/msm/mdp4: Fix refcount leak in mdp4_modeset_init_intf [ Upstream commit b9cc4598607cb7f7eae5c75fc1e3209cd52ff5e0 ] of_graph_get_remote_node() returns remote device node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. Fixes: 86418f90a4c1 ("drm: convert drivers to use of_graph_get_remote_node") Signed-off-by: Miaoqian Lin Reviewed-by: Dmitry Baryshkov Reviewed-by: Stephen Boyd Reviewed-by: Abhinav Kumar Patchwork: https://patchwork.freedesktop.org/patch/488473/ Link: https://lore.kernel.org/r/20220607110841.53889-1-linmq006@gmail.com Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index 913de5938782..b4d0bfc83d70 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -221,6 +221,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms, encoder = mdp4_lcdc_encoder_init(dev, panel_node); if (IS_ERR(encoder)) { DRM_DEV_ERROR(dev->dev, "failed to construct LCDC encoder\n"); + of_node_put(panel_node); return PTR_ERR(encoder); } @@ -230,6 +231,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms, connector = mdp4_lvds_connector_init(dev, panel_node, encoder); if (IS_ERR(connector)) { DRM_DEV_ERROR(dev->dev, "failed to initialize LVDS connector\n"); + of_node_put(panel_node); return PTR_ERR(connector); } From efb2b6916050715b0af521706332f0a2d01e7e85 Mon Sep 17 00:00:00 2001 From: Kuogee Hsieh Date: Mon, 6 Jun 2022 10:55:39 -0700 Subject: [PATCH 1095/1842] drm/msm/dp: check core_initialized before disable interrupts at dp_display_unbind() [ Upstream commit d80c3ba0ac247791a4ed7a0cd865a64906c8906a ] During msm initialize phase, dp_display_unbind() will be called to undo initializations had been done by dp_display_bind() previously if there is error happen at msm_drm_bind. In this case, core_initialized flag had to be check to make sure clocks is on before update DP controller register to disable HPD interrupts. Otherwise system will crash due to below NOC fatal error. 
QTISECLIB [01f01a7ad]CNOC2 ERROR: ERRLOG0_LOW = 0x00061007 QTISECLIB [01f01a7ad]GEM_NOC ERROR: ERRLOG0_LOW = 0x00001007 QTISECLIB [01f0371a0]CNOC2 ERROR: ERRLOG0_HIGH = 0x00000003 QTISECLIB [01f055297]GEM_NOC ERROR: ERRLOG0_HIGH = 0x00000003 QTISECLIB [01f072beb]CNOC2 ERROR: ERRLOG1_LOW = 0x00000024 QTISECLIB [01f0914b8]GEM_NOC ERROR: ERRLOG1_LOW = 0x00000042 QTISECLIB [01f0ae639]CNOC2 ERROR: ERRLOG1_HIGH = 0x00004002 QTISECLIB [01f0cc73f]GEM_NOC ERROR: ERRLOG1_HIGH = 0x00004002 QTISECLIB [01f0ea092]CNOC2 ERROR: ERRLOG2_LOW = 0x0009020c QTISECLIB [01f10895f]GEM_NOC ERROR: ERRLOG2_LOW = 0x0ae9020c QTISECLIB [01f125ae1]CNOC2 ERROR: ERRLOG2_HIGH = 0x00000000 QTISECLIB [01f143be7]GEM_NOC ERROR: ERRLOG2_HIGH = 0x00000000 QTISECLIB [01f16153a]CNOC2 ERROR: ERRLOG3_LOW = 0x00000000 QTISECLIB [01f17fe07]GEM_NOC ERROR: ERRLOG3_LOW = 0x00000000 QTISECLIB [01f19cf89]CNOC2 ERROR: ERRLOG3_HIGH = 0x00000000 QTISECLIB [01f1bb08e]GEM_NOC ERROR: ERRLOG3_HIGH = 0x00000000 QTISECLIB [01f1d8a31]CNOC2 ERROR: SBM1 FAULTINSTATUS0_LOW = 0x00000002 QTISECLIB [01f1f72a4]GEM_NOC ERROR: SBM0 FAULTINSTATUS0_LOW = 0x00000001 QTISECLIB [01f21a217]CNOC3 ERROR: ERRLOG0_LOW = 0x00000006 QTISECLIB [01f23dfd3]NOC error fatal changes in v2: -- drop the first patch (drm/msm: enable msm irq after all initializations are done successfully at msm_drm_init()) since the problem had been fixed by other patch Fixes: 570d3e5d28db ("drm/msm/dp: stop event kernel thread when DP unbind") Signed-off-by: Kuogee Hsieh Reviewed-by: Stephen Boyd Patchwork: https://patchwork.freedesktop.org/patch/488387/ Link: https://lore.kernel.org/r/1654538139-7450-1-git-send-email-quic_khsieh@quicinc.com Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dp/dp_display.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index ebd05678a27b..47bdddb860e5 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -268,7 +268,8 @@ static void dp_display_unbind(struct device *dev, struct device *master, } /* disable all HPD interrupts */ - dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false); + if (dp->core_initialized) + dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false); kthread_stop(dp->ev_tsk); From 3d67cb00cbbb776e2165a807bc7fe114293ffdd2 Mon Sep 17 00:00:00 2001 From: Kuogee Hsieh Date: Tue, 3 Nov 2020 14:53:36 -0800 Subject: [PATCH 1096/1842] drm/msm/dp: fixes wrong connection state caused by failure of link train [ Upstream commit 62671d2ef24bca1e2e1709a59a5bfb5c423cdc8e ] Connection state is not set correctly happen when either failure of link train due to cable unplugged in the middle of aux channel reading or cable plugged in while in suspended state. This patch fixes these problems. This patch also replace ST_SUSPEND_PENDING with ST_DISPLAY_OFF. Changes in V2: -- Add more information to commit message. 
Changes in V3: -- change base Changes in V4: -- add Fixes tag Signed-off-by: Kuogee Hsieh Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dp/dp_display.c | 42 ++++++++++++++--------------- drivers/gpu/drm/msm/dp/dp_panel.c | 5 ++++ 2 files changed, 25 insertions(+), 22 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index 47bdddb860e5..4b18ab71ae59 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -45,7 +45,7 @@ enum { ST_CONNECT_PENDING, ST_CONNECTED, ST_DISCONNECT_PENDING, - ST_SUSPEND_PENDING, + ST_DISPLAY_OFF, ST_SUSPENDED, }; @@ -531,7 +531,7 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data) mutex_lock(&dp->event_mutex); state = dp->hpd_state; - if (state == ST_SUSPEND_PENDING) { + if (state == ST_DISPLAY_OFF || state == ST_SUSPENDED) { mutex_unlock(&dp->event_mutex); return 0; } @@ -553,14 +553,14 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data) hpd->hpd_high = 1; ret = dp_display_usbpd_configure_cb(&dp->pdev->dev); - if (ret) { /* failed */ + if (ret) { /* link train failed */ hpd->hpd_high = 0; dp->hpd_state = ST_DISCONNECTED; + } else { + /* start sentinel checking in case of missing uevent */ + dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout); } - /* start sanity checking */ - dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout); - mutex_unlock(&dp->event_mutex); /* uevent will complete connection part */ @@ -612,11 +612,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data) mutex_lock(&dp->event_mutex); state = dp->hpd_state; - if (state == ST_SUSPEND_PENDING) { - mutex_unlock(&dp->event_mutex); - return 0; - } - if (state == ST_DISCONNECT_PENDING || state == ST_DISCONNECTED) { mutex_unlock(&dp->event_mutex); return 0; @@ -643,7 +638,7 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data) */ dp_display_usbpd_disconnect_cb(&dp->pdev->dev); - /* start sanity checking */ + /* start sentinel checking in case of missing uevent */ dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND); /* signal the disconnect event early to ensure proper teardown */ @@ -682,7 +677,7 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data) /* irq_hpd can happen at either connected or disconnected state */ state = dp->hpd_state; - if (state == ST_SUSPEND_PENDING) { + if (state == ST_DISPLAY_OFF) { mutex_unlock(&dp->event_mutex); return 0; } @@ -1119,7 +1114,7 @@ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id) } if (hpd_isr_status & DP_DP_IRQ_HPD_INT_MASK) { - /* delete connect pending event first */ + /* stop sentinel connect pending checking */ dp_del_event(dp, EV_CONNECT_PENDING_TIMEOUT); dp_add_event(dp, EV_IRQ_HPD_INT, 0, 0); } @@ -1252,13 +1247,10 @@ static int dp_pm_resume(struct device *dev) status = dp_catalog_hpd_get_state_status(dp->catalog); - if (status) { + if (status) dp->dp_display.is_connected = true; - } else { + else dp->dp_display.is_connected = false; - /* make sure next resume host_init be called */ - dp->core_initialized = false; - } mutex_unlock(&dp->event_mutex); @@ -1280,6 +1272,9 @@ static int dp_pm_suspend(struct device *dev) dp->hpd_state = ST_SUSPENDED; + /* host_init will be called at pm_resume */ + dp->core_initialized = false; + mutex_unlock(&dp->event_mutex); return 0; @@ -1412,6 +1407,7 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder) mutex_lock(&dp_display->event_mutex); 
+ /* stop sentinel checking */ dp_del_event(dp_display, EV_CONNECT_PENDING_TIMEOUT); rc = dp_display_set_mode(dp, &dp_display->dp_mode); @@ -1430,7 +1426,7 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder) state = dp_display->hpd_state; - if (state == ST_SUSPEND_PENDING) + if (state == ST_DISPLAY_OFF) dp_display_host_init(dp_display); dp_display_enable(dp_display, 0); @@ -1442,7 +1438,8 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder) dp_display_unprepare(dp); } - if (state == ST_SUSPEND_PENDING) + /* manual kick off plug event to train link */ + if (state == ST_DISPLAY_OFF) dp_add_event(dp_display, EV_IRQ_HPD_INT, 0, 0); /* completed connection */ @@ -1474,6 +1471,7 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder) mutex_lock(&dp_display->event_mutex); + /* stop sentinel checking */ dp_del_event(dp_display, EV_DISCONNECT_PENDING_TIMEOUT); dp_display_disable(dp_display, 0); @@ -1487,7 +1485,7 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder) /* completed disconnection */ dp_display->hpd_state = ST_DISCONNECTED; } else { - dp_display->hpd_state = ST_SUSPEND_PENDING; + dp_display->hpd_state = ST_DISPLAY_OFF; } mutex_unlock(&dp_display->event_mutex); diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c index 2768d1d306f0..550871ba6e5a 100644 --- a/drivers/gpu/drm/msm/dp/dp_panel.c +++ b/drivers/gpu/drm/msm/dp/dp_panel.c @@ -196,6 +196,11 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel, &panel->aux->ddc); if (!dp_panel->edid) { DRM_ERROR("panel edid read failed\n"); + /* check edid read fail is due to unplug */ + if (!dp_catalog_hpd_get_state_status(panel->catalog)) { + rc = -ETIMEDOUT; + goto end; + } /* fail safe edid */ mutex_lock(&connector->dev->mode_config.mutex); From d11cb082151f5276e6e506b3b8c512f8cc57ae83 Mon Sep 17 00:00:00 2001 From: Kuogee Hsieh Date: Tue, 3 Nov 2020 12:49:00 -0800 Subject: [PATCH 1097/1842] drm/msm/dp: deinitialize mainlink if link training failed [ Upstream commit 231a04fcc6cb5b0e5f72c015d36462a17355f925 ] DP compo phy have to be enable to start link training. When link training failed phy need to be disabled so that next link traning can be proceed smoothly at next plug in. This patch de-initialize mainlink to disable phy if link training failed. This prevent system crash due to disp_cc_mdss_dp_link_intf_clk stuck at "off" state. This patch also perform checking power_on flag at dp_display_enable() and dp_display_disable() to avoid crashing when unplug cable while display is off. 
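[Editor's illustration] A rough sketch of the two guards described above, using made-up names rather than the msm/dp driver's structures: teardown runs when link training fails so the PHY is left in a clean state for the next plug-in, and both enable and disable bail out early based on a power_on flag so an unplug while the display is off cannot tear down hardware that was never brought up.

/* Sketch only: the flag-guarded enable/disable pattern, hypothetical names. */
#include <stdbool.h>
#include <stdio.h>

struct fake_dp {
	bool power_on;
};

static int fake_link_train(void)
{
	return -1;			/* pretend training failed */
}

static void fake_phy_off(struct fake_dp *dp)
{
	printf("phy powered off, link clocks released\n");
	dp->power_on = false;
}

static int fake_dp_enable(struct fake_dp *dp)
{
	if (dp->power_on)		/* link already set up, nothing to do */
		return 0;

	if (fake_link_train()) {
		fake_phy_off(dp);	/* leave hardware clean for next plug */
		return -1;
	}
	dp->power_on = true;
	return 0;
}

static void fake_dp_disable(struct fake_dp *dp)
{
	if (!dp->power_on)		/* unplug while display is off: no-op */
		return;
	fake_phy_off(dp);
}

int main(void)
{
	struct fake_dp dp = { .power_on = false };

	fake_dp_enable(&dp);		/* training fails, phy cleaned up */
	fake_dp_disable(&dp);		/* guard returns early, no double teardown */
	return 0;
}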
Signed-off-by: Kuogee Hsieh Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dp/dp_catalog.c | 2 +- drivers/gpu/drm/msm/dp/dp_catalog.h | 2 +- drivers/gpu/drm/msm/dp/dp_ctrl.c | 40 +++++++++++++++++++++++++++-- drivers/gpu/drm/msm/dp/dp_display.c | 15 ++++++++++- drivers/gpu/drm/msm/dp/dp_panel.c | 2 +- 5 files changed, 55 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c index aeca8b2ac5c6..2da6982efdbf 100644 --- a/drivers/gpu/drm/msm/dp/dp_catalog.c +++ b/drivers/gpu/drm/msm/dp/dp_catalog.c @@ -572,7 +572,7 @@ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog) dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, DP_DP_HPD_CTRL_HPD_EN); } -u32 dp_catalog_hpd_get_state_status(struct dp_catalog *dp_catalog) +u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog) { struct dp_catalog_private *catalog = container_of(dp_catalog, struct dp_catalog_private, dp_catalog); diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.h b/drivers/gpu/drm/msm/dp/dp_catalog.h index 6d257dbebf29..176a9020a520 100644 --- a/drivers/gpu/drm/msm/dp/dp_catalog.h +++ b/drivers/gpu/drm/msm/dp/dp_catalog.h @@ -97,7 +97,7 @@ void dp_catalog_ctrl_enable_irq(struct dp_catalog *dp_catalog, bool enable); void dp_catalog_hpd_config_intr(struct dp_catalog *dp_catalog, u32 intr_mask, bool en); void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog); -u32 dp_catalog_hpd_get_state_status(struct dp_catalog *dp_catalog); +u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog); u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog); void dp_catalog_ctrl_phy_reset(struct dp_catalog *dp_catalog); int dp_catalog_ctrl_update_vx_px(struct dp_catalog *dp_catalog, u8 v_level, diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c index c83a1650437d..b9ca844ce2ad 100644 --- a/drivers/gpu/drm/msm/dp/dp_ctrl.c +++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c @@ -1460,6 +1460,30 @@ static int dp_ctrl_reinitialize_mainlink(struct dp_ctrl_private *ctrl) return ret; } +static int dp_ctrl_deinitialize_mainlink(struct dp_ctrl_private *ctrl) +{ + struct dp_io *dp_io; + struct phy *phy; + int ret; + + dp_io = &ctrl->parser->io; + phy = dp_io->phy; + + dp_catalog_ctrl_mainlink_ctrl(ctrl->catalog, false); + + dp_catalog_ctrl_reset(ctrl->catalog); + + ret = dp_power_clk_enable(ctrl->power, DP_CTRL_PM, false); + if (ret) { + DRM_ERROR("Failed to disable link clocks. ret=%d\n", ret); + } + + phy_power_off(phy); + phy_exit(phy); + + return 0; +} + static int dp_ctrl_link_maintenance(struct dp_ctrl_private *ctrl) { int ret = 0; @@ -1640,8 +1664,7 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl) if (rc) return rc; - while (--link_train_max_retries && - !atomic_read(&ctrl->dp_ctrl.aborted)) { + while (--link_train_max_retries) { rc = dp_ctrl_reinitialize_mainlink(ctrl); if (rc) { DRM_ERROR("Failed to reinitialize mainlink. 
rc=%d\n", @@ -1656,6 +1679,10 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl) break; } else if (training_step == DP_TRAINING_1) { /* link train_1 failed */ + if (!dp_catalog_link_is_connected(ctrl->catalog)) { + break; + } + rc = dp_ctrl_link_rate_down_shift(ctrl); if (rc < 0) { /* already in RBR = 1.6G */ if (cr.lane_0_1 & DP_LANE0_1_CR_DONE) { @@ -1675,6 +1702,10 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl) } } else if (training_step == DP_TRAINING_2) { /* link train_2 failed, lower lane rate */ + if (!dp_catalog_link_is_connected(ctrl->catalog)) { + break; + } + rc = dp_ctrl_link_lane_down_shift(ctrl); if (rc < 0) { /* end with failure */ @@ -1695,6 +1726,11 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl) */ if (rc == 0) /* link train successfully */ dp_ctrl_push_idle(dp_ctrl); + else { + /* link training failed */ + dp_ctrl_deinitialize_mainlink(ctrl); + rc = -ECONNRESET; + } return rc; } diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index 4b18ab71ae59..d504cf68283a 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -556,6 +556,11 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data) if (ret) { /* link train failed */ hpd->hpd_high = 0; dp->hpd_state = ST_DISCONNECTED; + + if (ret == -ECONNRESET) { /* cable unplugged */ + dp->core_initialized = false; + } + } else { /* start sentinel checking in case of missing uevent */ dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout); @@ -827,6 +832,11 @@ static int dp_display_enable(struct dp_display_private *dp, u32 data) dp_display = g_dp_display; + if (dp_display->power_on) { + DRM_DEBUG_DP("Link already setup, return\n"); + return 0; + } + rc = dp_ctrl_on_stream(dp->ctrl); if (!rc) dp_display->power_on = true; @@ -859,6 +869,9 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data) dp_display = g_dp_display; + if (!dp_display->power_on) + return 0; + /* wait only if audio was enabled */ if (dp_display->audio_enabled) { /* signal the disconnect event */ @@ -1245,7 +1258,7 @@ static int dp_pm_resume(struct device *dev) dp_catalog_ctrl_hpd_config(dp->catalog); - status = dp_catalog_hpd_get_state_status(dp->catalog); + status = dp_catalog_link_is_connected(dp->catalog); if (status) dp->dp_display.is_connected = true; diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c index 550871ba6e5a..4e8a19114e87 100644 --- a/drivers/gpu/drm/msm/dp/dp_panel.c +++ b/drivers/gpu/drm/msm/dp/dp_panel.c @@ -197,7 +197,7 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel, if (!dp_panel->edid) { DRM_ERROR("panel edid read failed\n"); /* check edid read fail is due to unplug */ - if (!dp_catalog_hpd_get_state_status(panel->catalog)) { + if (!dp_catalog_link_is_connected(panel->catalog)) { rc = -ETIMEDOUT; goto end; } From 61f8f4034c04231148e7580ae51a2a74171c04a0 Mon Sep 17 00:00:00 2001 From: Kuogee Hsieh Date: Tue, 3 Nov 2020 12:49:02 -0800 Subject: [PATCH 1098/1842] drm/msm/dp: promote irq_hpd handle to handle link training correctly [ Upstream commit 26b8d66a399e625f3aa2c02ccbab1bff2e00040c ] Some dongles require link training done at irq_hpd request instead of plugin request. This patch promote irq_hpd handler to handle link training and setup hpd_state correctly. 
Signed-off-by: Kuogee Hsieh Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dp/dp_display.c | 25 +++++++++++++++++++++---- 1 file changed, 21 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index d504cf68283a..f1f777baa2c4 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -476,10 +476,9 @@ static int dp_display_handle_irq_hpd(struct dp_display_private *dp) sink_request = dp->link->sink_request; if (sink_request & DS_PORT_STATUS_CHANGED) { - dp_add_event(dp, EV_USER_NOTIFICATION, false, 0); if (dp_display_is_sink_count_zero(dp)) { DRM_DEBUG_DP("sink count is zero, nothing to do\n"); - return 0; + return -ENOTCONN; } return dp_display_process_hpd_high(dp); @@ -496,7 +495,9 @@ static int dp_display_handle_irq_hpd(struct dp_display_private *dp) static int dp_display_usbpd_attention_cb(struct device *dev) { int rc = 0; + u32 sink_request; struct dp_display_private *dp; + struct dp_usbpd *hpd; if (!dev) { DRM_ERROR("invalid dev\n"); @@ -510,10 +511,26 @@ static int dp_display_usbpd_attention_cb(struct device *dev) return -ENODEV; } + hpd = dp->usbpd; + /* check for any test request issued by sink */ rc = dp_link_process_request(dp->link); - if (!rc) - dp_display_handle_irq_hpd(dp); + if (!rc) { + sink_request = dp->link->sink_request; + if (sink_request & DS_PORT_STATUS_CHANGED) { + /* same as unplugged */ + hpd->hpd_high = 0; + dp->hpd_state = ST_DISCONNECT_PENDING; + dp_add_event(dp, EV_USER_NOTIFICATION, false, 0); + } + + rc = dp_display_handle_irq_hpd(dp); + + if (!rc && (sink_request & DS_PORT_STATUS_CHANGED)) { + hpd->hpd_high = 1; + dp->hpd_state = ST_CONNECT_PENDING; + } + } return rc; } From acf76125bb2b247d27cb4ca57fda772c8a691fe9 Mon Sep 17 00:00:00 2001 From: Kuogee Hsieh Date: Wed, 18 Nov 2020 13:00:14 -0800 Subject: [PATCH 1099/1842] drm/msm/dp: fix connect/disconnect handled at irq_hpd [ Upstream commit c58eb1b54feefc3a47fab78addd14083bc941c44 ] Some usb type-c dongle use irq_hpd request to perform device connection and disconnection. This patch add handling of both connection and disconnection are based on the state of hpd_state and sink_count. 
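[Editor's illustration] The decision this patch implements can be summarized as a small state check (sketch with illustrative names, not the driver's actual structs): a zero sink count on an irq_hpd event is treated like an unplug, a non-zero count while disconnected like a fresh plug, and anything else leaves the state alone.

/* Illustrative sketch of the irq_hpd connect/disconnect decision. */
#include <stdio.h>

enum hpd_state {
	DISCONNECTED,
	CONNECT_PENDING,
	CONNECTED,
	DISCONNECT_PENDING,
};

struct hpd_ctx {
	enum hpd_state state;
	unsigned int sink_count;	/* as read back from the sink */
};

static enum hpd_state port_status_changed(const struct hpd_ctx *ctx)
{
	if (ctx->sink_count == 0) {
		/* behaves like an unplug, but only if something was attached */
		if (ctx->state != DISCONNECTED)
			return DISCONNECT_PENDING;
	} else if (ctx->state == DISCONNECTED) {
		/* behaves like a plug: go train the link */
		return CONNECT_PENDING;
	}
	return ctx->state;		/* otherwise nothing to do */
}

int main(void)
{
	struct hpd_ctx ctx = { .state = CONNECTED, .sink_count = 0 };

	printf("next state: %d\n", port_status_changed(&ctx));	/* DISCONNECT_PENDING */

	ctx.state = DISCONNECTED;
	ctx.sink_count = 1;
	printf("next state: %d\n", port_status_changed(&ctx));	/* CONNECT_PENDING */
	return 0;
}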
Changes in V2: -- add dp_display_handle_port_ststus_changed() -- fix kernel test robot complaint Changes in V3: -- add encoder_mode_set into struct dp_display_private Reported-by: kernel test robot Fixes: 26b8d66a399e ("drm/msm/dp: promote irq_hpd handle to handle link training correctly") Tested-by: Stephen Boyd Signed-off-by: Kuogee Hsieh Signed-off-by: Rob Clark Signed-off-by: Sasha Levin --- drivers/gpu/drm/msm/dp/dp_display.c | 92 +++++++++++++++++------------ 1 file changed, 55 insertions(+), 37 deletions(-) diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c index f1f777baa2c4..a3de1d0523ea 100644 --- a/drivers/gpu/drm/msm/dp/dp_display.c +++ b/drivers/gpu/drm/msm/dp/dp_display.c @@ -102,6 +102,8 @@ struct dp_display_private { struct dp_display_mode dp_mode; struct msm_dp dp_display; + bool encoder_mode_set; + /* wait for audio signaling */ struct completion audio_comp; @@ -306,13 +308,24 @@ static void dp_display_send_hpd_event(struct msm_dp *dp_display) drm_helper_hpd_irq_event(connector->dev); } -static int dp_display_send_hpd_notification(struct dp_display_private *dp, - bool hpd) + +static void dp_display_set_encoder_mode(struct dp_display_private *dp) { - static bool encoder_mode_set; struct msm_drm_private *priv = dp->dp_display.drm_dev->dev_private; struct msm_kms *kms = priv->kms; + if (!dp->encoder_mode_set && dp->dp_display.encoder && + kms->funcs->set_encoder_mode) { + kms->funcs->set_encoder_mode(kms, + dp->dp_display.encoder, false); + + dp->encoder_mode_set = true; + } +} + +static int dp_display_send_hpd_notification(struct dp_display_private *dp, + bool hpd) +{ if ((hpd && dp->dp_display.is_connected) || (!hpd && !dp->dp_display.is_connected)) { DRM_DEBUG_DP("HPD already %s\n", (hpd ? "on" : "off")); @@ -325,15 +338,6 @@ static int dp_display_send_hpd_notification(struct dp_display_private *dp, dp->dp_display.is_connected = hpd; - if (dp->dp_display.is_connected && dp->dp_display.encoder - && !encoder_mode_set - && kms->funcs->set_encoder_mode) { - kms->funcs->set_encoder_mode(kms, - dp->dp_display.encoder, false); - DRM_DEBUG_DP("set_encoder_mode() Completed\n"); - encoder_mode_set = true; - } - dp_display_send_hpd_event(&dp->dp_display); return 0; @@ -369,7 +373,6 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp) dp_add_event(dp, EV_USER_NOTIFICATION, true, 0); - end: return rc; } @@ -386,6 +389,8 @@ static void dp_display_host_init(struct dp_display_private *dp) if (dp->usbpd->orientation == ORIENTATION_CC2) flip = true; + dp_display_set_encoder_mode(dp); + dp_power_init(dp->power, flip); dp_ctrl_host_init(dp->ctrl, flip); dp_aux_init(dp->aux); @@ -469,24 +474,42 @@ static void dp_display_handle_video_request(struct dp_display_private *dp) } } +static int dp_display_handle_port_ststus_changed(struct dp_display_private *dp) +{ + int rc = 0; + + if (dp_display_is_sink_count_zero(dp)) { + DRM_DEBUG_DP("sink count is zero, nothing to do\n"); + if (dp->hpd_state != ST_DISCONNECTED) { + dp->hpd_state = ST_DISCONNECT_PENDING; + dp_add_event(dp, EV_USER_NOTIFICATION, false, 0); + } + } else { + if (dp->hpd_state == ST_DISCONNECTED) { + dp->hpd_state = ST_CONNECT_PENDING; + rc = dp_display_process_hpd_high(dp); + if (rc) + dp->hpd_state = ST_DISCONNECTED; + } + } + + return rc; +} + static int dp_display_handle_irq_hpd(struct dp_display_private *dp) { - u32 sink_request; + u32 sink_request = dp->link->sink_request; - sink_request = dp->link->sink_request; - - if (sink_request & DS_PORT_STATUS_CHANGED) { - if 
(dp_display_is_sink_count_zero(dp)) { - DRM_DEBUG_DP("sink count is zero, nothing to do\n"); - return -ENOTCONN; + if (dp->hpd_state == ST_DISCONNECTED) { + if (sink_request & DP_LINK_STATUS_UPDATED) { + DRM_ERROR("Disconnected, no DP_LINK_STATUS_UPDATED\n"); + return -EINVAL; } - - return dp_display_process_hpd_high(dp); } dp_ctrl_handle_sink_request(dp->ctrl); - if (dp->link->sink_request & DP_TEST_LINK_VIDEO_PATTERN) + if (sink_request & DP_TEST_LINK_VIDEO_PATTERN) dp_display_handle_video_request(dp); return 0; @@ -517,19 +540,10 @@ static int dp_display_usbpd_attention_cb(struct device *dev) rc = dp_link_process_request(dp->link); if (!rc) { sink_request = dp->link->sink_request; - if (sink_request & DS_PORT_STATUS_CHANGED) { - /* same as unplugged */ - hpd->hpd_high = 0; - dp->hpd_state = ST_DISCONNECT_PENDING; - dp_add_event(dp, EV_USER_NOTIFICATION, false, 0); - } - - rc = dp_display_handle_irq_hpd(dp); - - if (!rc && (sink_request & DS_PORT_STATUS_CHANGED)) { - hpd->hpd_high = 1; - dp->hpd_state = ST_CONNECT_PENDING; - } + if (sink_request & DS_PORT_STATUS_CHANGED) + rc = dp_display_handle_port_ststus_changed(dp); + else + rc = dp_display_handle_irq_hpd(dp); } return rc; @@ -694,6 +708,7 @@ static int dp_disconnect_pending_timeout(struct dp_display_private *dp, u32 data static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data) { u32 state; + int ret; mutex_lock(&dp->event_mutex); @@ -704,7 +719,10 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data) return 0; } - dp_display_usbpd_attention_cb(&dp->pdev->dev); + ret = dp_display_usbpd_attention_cb(&dp->pdev->dev); + if (ret == -ECONNRESET) { /* cable unplugged */ + dp->core_initialized = false; + } mutex_unlock(&dp->event_mutex); From cec9867ee55478ef5dcb2adf030fe0c442a4c4ee Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Mon, 20 Jun 2022 01:35:06 -0700 Subject: [PATCH 1100/1842] erspan: do not assume transport header is always set [ Upstream commit 301bd140ed0b24f0da660874c7e8a47dad8c8222 ] Rewrite tests in ip6erspan_tunnel_xmit() and erspan_fb_xmit() to not assume transport header is set. 
syzbot reported: WARNING: CPU: 0 PID: 1350 at include/linux/skbuff.h:2911 skb_transport_header include/linux/skbuff.h:2911 [inline] WARNING: CPU: 0 PID: 1350 at include/linux/skbuff.h:2911 ip6erspan_tunnel_xmit+0x15af/0x2eb0 net/ipv6/ip6_gre.c:963 Modules linked in: CPU: 0 PID: 1350 Comm: aoe_tx0 Not tainted 5.19.0-rc2-syzkaller-00160-g274295c6e53f #0 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014 RIP: 0010:skb_transport_header include/linux/skbuff.h:2911 [inline] RIP: 0010:ip6erspan_tunnel_xmit+0x15af/0x2eb0 net/ipv6/ip6_gre.c:963 Code: 0f 47 f0 40 88 b5 7f fe ff ff e8 8c 16 4b f9 89 de bf ff ff ff ff e8 a0 12 4b f9 66 83 fb ff 0f 85 1d f1 ff ff e8 71 16 4b f9 <0f> 0b e9 43 f0 ff ff e8 65 16 4b f9 48 8d 85 30 ff ff ff ba 60 00 RSP: 0018:ffffc90005daf910 EFLAGS: 00010293 RAX: 0000000000000000 RBX: 000000000000ffff RCX: 0000000000000000 RDX: ffff88801f032100 RSI: ffffffff882e8d3f RDI: 0000000000000003 RBP: ffffc90005dafab8 R08: 0000000000000003 R09: 000000000000ffff R10: 000000000000ffff R11: 0000000000000000 R12: ffff888024f21d40 R13: 000000000000a288 R14: 00000000000000b0 R15: ffff888025a2e000 FS: 0000000000000000(0000) GS:ffff88802c800000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000001b2e425000 CR3: 000000006d099000 CR4: 0000000000152ef0 Call Trace: __netdev_start_xmit include/linux/netdevice.h:4805 [inline] netdev_start_xmit include/linux/netdevice.h:4819 [inline] xmit_one net/core/dev.c:3588 [inline] dev_hard_start_xmit+0x188/0x880 net/core/dev.c:3604 sch_direct_xmit+0x19f/0xbe0 net/sched/sch_generic.c:342 __dev_xmit_skb net/core/dev.c:3815 [inline] __dev_queue_xmit+0x14a1/0x3900 net/core/dev.c:4219 dev_queue_xmit include/linux/netdevice.h:2994 [inline] tx+0x6a/0xc0 drivers/block/aoe/aoenet.c:63 kthread+0x1e7/0x3b0 drivers/block/aoe/aoecmd.c:1229 kthread+0x2e9/0x3a0 kernel/kthread.c:376 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:302 Fixes: d5db21a3e697 ("erspan: auto detect truncated ipv6 packets.") Reported-by: syzbot Signed-off-by: Eric Dumazet Cc: William Tu Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/ip_gre.c | 15 ++++++++++----- net/ipv6/ip6_gre.c | 15 ++++++++++----- 2 files changed, 20 insertions(+), 10 deletions(-) diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c index a7e32be8714f..6ab5c50aa7a8 100644 --- a/net/ipv4/ip_gre.c +++ b/net/ipv4/ip_gre.c @@ -519,7 +519,6 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev) int tunnel_hlen; int version; int nhoff; - int thoff; tun_info = skb_tunnel_info(skb); if (unlikely(!tun_info || !(tun_info->mode & IP_TUNNEL_INFO_TX) || @@ -553,10 +552,16 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev) (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff)) truncate = true; - thoff = skb_transport_header(skb) - skb_mac_header(skb); - if (skb->protocol == htons(ETH_P_IPV6) && - (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)) - truncate = true; + if (skb->protocol == htons(ETH_P_IPV6)) { + int thoff; + + if (skb_transport_header_was_set(skb)) + thoff = skb_transport_header(skb) - skb_mac_header(skb); + else + thoff = nhoff + sizeof(struct ipv6hdr); + if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff) + truncate = true; + } if (version == 1) { erspan_build_header(skb, ntohl(tunnel_id_to_key32(key->tun_id)), diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c index 3f88ba6555ab..9e0890738d93 100644 --- a/net/ipv6/ip6_gre.c +++ b/net/ipv6/ip6_gre.c @@ -944,7 +944,6 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb, __be16 proto; __u32 mtu; int nhoff; - int thoff; if (!pskb_inet_may_pull(skb)) goto tx_err; @@ -965,10 +964,16 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb, (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff)) truncate = true; - thoff = skb_transport_header(skb) - skb_mac_header(skb); - if (skb->protocol == htons(ETH_P_IPV6) && - (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)) - truncate = true; + if (skb->protocol == htons(ETH_P_IPV6)) { + int thoff; + + if (skb_transport_header_was_set(skb)) + thoff = skb_transport_header(skb) - skb_mac_header(skb); + else + thoff = nhoff + sizeof(struct ipv6hdr); + if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff) + truncate = true; + } if (skb_cow_head(skb, dev->needed_headroom ?: t->hlen)) goto tx_err; From e82376b632473790781a182b10823a4b5dbf6606 Mon Sep 17 00:00:00 2001 From: Ziyang Xuan Date: Mon, 20 Jun 2022 12:35:08 +0800 Subject: [PATCH 1101/1842] net/tls: fix tls_sk_proto_close executed repeatedly [ Upstream commit 69135c572d1f84261a6de2a1268513a7e71753e2 ] After setting the sock ktls, update ctx->sk_proto to sock->sk_prot by tls_update(), so now ctx->sk_proto->close is tls_sk_proto_close(). When close the sock, tls_sk_proto_close() is called for sock->sk_prot->close is tls_sk_proto_close(). But ctx->sk_proto->close() will be executed later in tls_sk_proto_close(). Thus tls_sk_proto_close() executed repeatedly occurred. That will trigger the following bug. ================================================================= KASAN: null-ptr-deref in range [0x0000000000000010-0x0000000000000017] RIP: 0010:tls_sk_proto_close+0xd8/0xaf0 net/tls/tls_main.c:306 Call Trace: tls_sk_proto_close+0x356/0xaf0 net/tls/tls_main.c:329 inet_release+0x12e/0x280 net/ipv4/af_inet.c:428 __sock_release+0xcd/0x280 net/socket.c:650 sock_close+0x18/0x20 net/socket.c:1365 Updating a proto which is same with sock->sk_prot is incorrect. Add proto and sock->sk_prot equality check at the head of tls_update() to fix it. 
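[Editor's illustration] A simplified userspace sketch of the mechanism described above (stand-in types, not net/tls code): a wrapper saves a pointer to the protocol it wraps and chains through that pointer from its own close routine; if a later update overwrites the saved pointer with the wrapper's own proto, the wrapper ends up re-invoking itself instead of the original close.

/* Illustrative sketch only; the kernel structures and flow are simplified. */
#include <stdio.h>

struct proto {
	const char *name;
	void (*close)(void);
};

static struct proto *saved_proto;	/* plays the role of ctx->sk_proto */

static void tcp_close_stub(void)
{
	printf("tcp close\n");
}

static void tls_close_wrapper(void)
{
	printf("tls close, chaining to %s\n", saved_proto->name);
	saved_proto->close();		/* meant to reach the original close */
}

static struct proto tcp_proto = { .name = "tcp", .close = tcp_close_stub };
static struct proto tls_proto = { .name = "tls", .close = tls_close_wrapper };

int main(void)
{
	saved_proto = &tcp_proto;	/* correct: remember the wrapped proto */

	/* The buggy update would instead do: saved_proto = &tls_proto;
	 * after which tls_close_wrapper() calls itself rather than tcp's close.
	 */
	tls_proto.close();
	return 0;
}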
Fixes: 95fa145479fb ("bpf: sockmap/tls, close can race with map free") Reported-by: syzbot+29c3c12f3214b85ad081@syzkaller.appspotmail.com Signed-off-by: Ziyang Xuan Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/tls/tls_main.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c index 58d22d6b86ae..9492528f5852 100644 --- a/net/tls/tls_main.c +++ b/net/tls/tls_main.c @@ -787,6 +787,9 @@ static void tls_update(struct sock *sk, struct proto *p, { struct tls_context *ctx; + if (sk->sk_prot == p) + return; + ctx = tls_get_ctx(sk); if (likely(ctx)) { ctx->sk_write_space = write_space; From 20119c1e0fff89542ff3272ace87e04cf6ee6bea Mon Sep 17 00:00:00 2001 From: Gerd Hoffmann Date: Mon, 20 Jun 2022 09:15:47 +0200 Subject: [PATCH 1102/1842] udmabuf: add back sanity check [ Upstream commit 05b252cccb2e5c3f56119d25de684b4f810ba40a ] Check vm_fault->pgoff before using it. When we removed the warning, we also removed the check. Fixes: 7b26e4e2119d ("udmabuf: drop WARN_ON() check.") Reported-by: zdi-disclosures@trendmicro.com Suggested-by: Linus Torvalds Signed-off-by: Gerd Hoffmann Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- drivers/dma-buf/udmabuf.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c index cfbf10128aae..2e3b76519b49 100644 --- a/drivers/dma-buf/udmabuf.c +++ b/drivers/dma-buf/udmabuf.c @@ -26,8 +26,11 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; struct udmabuf *ubuf = vma->vm_private_data; + pgoff_t pgoff = vmf->pgoff; - vmf->page = ubuf->pages[vmf->pgoff]; + if (pgoff >= ubuf->pagecount) + return VM_FAULT_SIGBUS; + vmf->page = ubuf->pages[pgoff]; get_page(vmf->page); return 0; } From b60c375ad140f9ea4c18e59b9fa3f1ef80951522 Mon Sep 17 00:00:00 2001 From: Jie2x Zhou Date: Thu, 16 Jun 2022 15:40:46 +0800 Subject: [PATCH 1103/1842] selftests: netfilter: correct PKTGEN_SCRIPT_PATHS in nft_concat_range.sh [ Upstream commit 5d79d8af8dec58bf709b3124d09d9572edd9c617 ] Before change: make -C netfilter TEST: performance net,port [SKIP] perf not supported port,net [SKIP] perf not supported net6,port [SKIP] perf not supported port,proto [SKIP] perf not supported net6,port,mac [SKIP] perf not supported net6,port,mac,proto [SKIP] perf not supported net,mac [SKIP] perf not supported After change: net,mac [ OK ] baseline (drop from netdev hook): 2061098pps baseline hash (non-ranged entries): 1606741pps baseline rbtree (match on first field only): 1191607pps set with 1000 full, ranged entries: 1639119pps ok 8 selftests: netfilter: nft_concat_range.sh Fixes: 611973c1e06f ("selftests: netfilter: Introduce tests for sets with range concatenation") Reported-by: kernel test robot Signed-off-by: Jie2x Zhou Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- tools/testing/selftests/netfilter/nft_concat_range.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/netfilter/nft_concat_range.sh b/tools/testing/selftests/netfilter/nft_concat_range.sh index b5eef5ffb58e..af3461cb5c40 100755 --- a/tools/testing/selftests/netfilter/nft_concat_range.sh +++ b/tools/testing/selftests/netfilter/nft_concat_range.sh @@ -31,7 +31,7 @@ BUGS="flush_remove_add reload" # List of possible paths to pktgen script from kernel tree for performance tests PKTGEN_SCRIPT_PATHS=" - ../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh + 
../../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh pktgen/pktgen_bench_xmit_mode_netif_receive.sh" # Definition of set types: From cc649a78654af08db77db44558ca96edb63e3bbb Mon Sep 17 00:00:00 2001 From: Julien Grall Date: Fri, 17 Jun 2022 11:30:37 +0100 Subject: [PATCH 1104/1842] x86/xen: Remove undefined behavior in setup_features() [ Upstream commit ecb6237fa397b7b810d798ad19322eca466dbab1 ] 1 << 31 is undefined. So switch to 1U << 31. Fixes: 5ead97c84fa7 ("xen: Core Xen implementation") Signed-off-by: Julien Grall Reviewed-by: Juergen Gross Link: https://lore.kernel.org/r/20220617103037.57828-1-julien@xen.org Signed-off-by: Juergen Gross Signed-off-by: Sasha Levin --- drivers/xen/features.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/xen/features.c b/drivers/xen/features.c index 25c053b09605..2c306de228db 100644 --- a/drivers/xen/features.c +++ b/drivers/xen/features.c @@ -29,6 +29,6 @@ void xen_setup_features(void) if (HYPERVISOR_xen_version(XENVER_get_features, &fi) < 0) break; for (j = 0; j < 32; j++) - xen_features[i * 32 + j] = !!(fi.submap & 1< Date: Fri, 10 Jun 2022 19:14:20 +0800 Subject: [PATCH 1105/1842] MIPS: Remove repetitive increase irq_err_count [ Upstream commit c81aba8fde2aee4f5778ebab3a1d51bd2ef48e4c ] commit 979934da9e7a ("[PATCH] mips: update IRQ handling for vr41xx") added a function irq_dispatch, and it'll increase irq_err_count when the get_irq callback returns a negative value, but increase irq_err_count in get_irq was not removed. And also, modpost complains once gpio-vr41xx drivers become modules. ERROR: modpost: "irq_err_count" [drivers/gpio/gpio-vr41xx.ko] undefined! So it would be a good idea to remove repetitive increase irq_err_count in get_irq callback. Fixes: 27fdd325dace ("MIPS: Update VR41xx GPIO driver to use gpiolib") Fixes: 979934da9e7a ("[PATCH] mips: update IRQ handling for vr41xx") Reported-by: k2ci Signed-off-by: huhai Signed-off-by: Genjian Zhang Signed-off-by: Thomas Bogendoerfer Signed-off-by: Sasha Levin --- arch/mips/vr41xx/common/icu.c | 2 -- drivers/gpio/gpio-vr41xx.c | 2 -- 2 files changed, 4 deletions(-) diff --git a/arch/mips/vr41xx/common/icu.c b/arch/mips/vr41xx/common/icu.c index 7b7f25b4b057..9240bcdbe74e 100644 --- a/arch/mips/vr41xx/common/icu.c +++ b/arch/mips/vr41xx/common/icu.c @@ -640,8 +640,6 @@ static int icu_get_irq(unsigned int irq) printk(KERN_ERR "spurious ICU interrupt: %04x,%04x\n", pend1, pend2); - atomic_inc(&irq_err_count); - return -1; } diff --git a/drivers/gpio/gpio-vr41xx.c b/drivers/gpio/gpio-vr41xx.c index 98cd715ccc33..8d09b619c166 100644 --- a/drivers/gpio/gpio-vr41xx.c +++ b/drivers/gpio/gpio-vr41xx.c @@ -217,8 +217,6 @@ static int giu_get_irq(unsigned int irq) printk(KERN_ERR "spurious GIU interrupt: %04x(%04x),%04x(%04x)\n", maskl, pendl, maskh, pendh); - atomic_inc(&irq_err_count); - return -EINVAL; } From 7b564e3254b7db5fbfbf11a824627a6c31b932b4 Mon Sep 17 00:00:00 2001 From: David Howells Date: Tue, 21 Jun 2022 15:59:57 +0100 Subject: [PATCH 1106/1842] afs: Fix dynamic root getattr [ Upstream commit cb78d1b5efffe4cf97e16766329dd7358aed3deb ] The recent patch to make afs_getattr consult the server didn't account for the pseudo-inodes employed by the dynamic root-type afs superblock not having a volume or a server to access, and thus an oops occurs if such a directory is stat'd. Fix this by checking to see if the vnode->volume pointer actually points anywhere before following it in afs_getattr(). This can be tested by stat'ing a directory in /afs. 
It may be sufficient just to do "ls /afs" and the oops looks something like: BUG: kernel NULL pointer dereference, address: 0000000000000020 ... RIP: 0010:afs_getattr+0x8b/0x14b ... Call Trace: vfs_statx+0x79/0xf5 vfs_fstatat+0x49/0x62 Fixes: 2aeb8c86d499 ("afs: Fix afs_getattr() to refetch file status if callback break occurred") Reported-by: Marc Dionne Signed-off-by: David Howells Reviewed-by: Marc Dionne Tested-by: Marc Dionne cc: linux-afs@lists.infradead.org Link: https://lore.kernel.org/r/165408450783.1031787.7941404776393751186.stgit@warthog.procyon.org.uk/ Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- fs/afs/inode.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/afs/inode.c b/fs/afs/inode.c index 7e7a9454bcb9..826fae22a8cc 100644 --- a/fs/afs/inode.c +++ b/fs/afs/inode.c @@ -734,7 +734,8 @@ int afs_getattr(const struct path *path, struct kstat *stat, _enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation); - if (!(query_flags & AT_STATX_DONT_SYNC) && + if (vnode->volume && + !(query_flags & AT_STATX_DONT_SYNC) && !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) { key = afs_request_key(vnode->volume->cell); if (IS_ERR(key)) From 40b3815b2c9027061fb74c1d21a1e9b8f486977d Mon Sep 17 00:00:00 2001 From: Anatolii Gerasymenko Date: Mon, 20 Jun 2022 09:47:05 +0200 Subject: [PATCH 1107/1842] ice: ethtool: advertise 1000M speeds properly [ Upstream commit c3d184c83ff4b80167e34edfc3d21df424bf27ff ] In current implementation ice_update_phy_type enables all link modes for selected speed. This approach doesn't work for 1000M speeds, because both copper (1000baseT) and optical (1000baseX) standards cannot be enabled at once. Fix this, by adding the function `ice_set_phy_type_from_speed()` for 1000M speeds. Fixes: 48cb27f2fd18 ("ice: Implement handlers for ethtool PHY/link operations") Signed-off-by: Anatolii Gerasymenko Tested-by: Gurucharan (A Contingent worker at Intel) Signed-off-by: Tony Nguyen Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/ice/ice_ethtool.c | 39 +++++++++++++++++++- 1 file changed, 38 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 421fc707f80a..060897eb9cab 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -2174,6 +2174,42 @@ ice_setup_autoneg(struct ice_port_info *p, struct ethtool_link_ksettings *ks, return err; } +/** + * ice_set_phy_type_from_speed - set phy_types based on speeds + * and advertised modes + * @ks: ethtool link ksettings struct + * @phy_type_low: pointer to the lower part of phy_type + * @phy_type_high: pointer to the higher part of phy_type + * @adv_link_speed: targeted link speeds bitmap + */ +static void +ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks, + u64 *phy_type_low, u64 *phy_type_high, + u16 adv_link_speed) +{ + /* Handle 1000M speed in a special way because ice_update_phy_type + * enables all link modes, but having mixed copper and optical + * standards is not supported. 
+ */ + adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB; + + if (ethtool_link_ksettings_test_link_mode(ks, advertising, + 1000baseT_Full)) + *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T | + ICE_PHY_TYPE_LOW_1G_SGMII; + + if (ethtool_link_ksettings_test_link_mode(ks, advertising, + 1000baseKX_Full)) + *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX; + + if (ethtool_link_ksettings_test_link_mode(ks, advertising, + 1000baseX_Full)) + *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX | + ICE_PHY_TYPE_LOW_1000BASE_LX; + + ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed); +} + /** * ice_set_link_ksettings - Set Speed and Duplex * @netdev: network interface device structure @@ -2310,7 +2346,8 @@ ice_set_link_ksettings(struct net_device *netdev, adv_link_speed = curr_link_speed; /* Convert the advertise link speeds to their corresponded PHY_TYPE */ - ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed); + ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high, + adv_link_speed); if (!autoneg_changed && adv_link_speed == curr_link_speed) { netdev_info(netdev, "Nothing changed, exiting without setting anything.\n"); From 7d7450363fdfd17222f70f2014a5abe2a1fe2e2b Mon Sep 17 00:00:00 2001 From: Aidan MacDonald Date: Mon, 20 Jun 2022 21:05:56 +0100 Subject: [PATCH 1108/1842] regmap-irq: Fix a bug in regmap_irq_enable() for type_in_mask chips [ Upstream commit 485037ae9a095491beb7f893c909a76cc4f9d1e7 ] When enabling a type_in_mask irq, the type_buf contents must be AND'd with the mask of the IRQ we're enabling to avoid enabling other IRQs by accident, which can happen if several type_in_mask irqs share a mask register. Fixes: bc998a730367 ("regmap: irq: handle HW using separate rising/falling edge interrupts") Signed-off-by: Aidan MacDonald Link: https://lore.kernel.org/r/20220620200644.1961936-2-aidanmacdonald.0x0@gmail.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/base/regmap/regmap-irq.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c index 87c5c421e0f4..4466f8bdab2e 100644 --- a/drivers/base/regmap/regmap-irq.c +++ b/drivers/base/regmap/regmap-irq.c @@ -220,6 +220,7 @@ static void regmap_irq_enable(struct irq_data *data) struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data); struct regmap *map = d->map; const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq); + unsigned int reg = irq_data->reg_offset / map->reg_stride; unsigned int mask, type; type = irq_data->type.type_falling_val | irq_data->type.type_rising_val; @@ -236,14 +237,14 @@ static void regmap_irq_enable(struct irq_data *data) * at the corresponding offset in regmap_irq_set_type(). */ if (d->chip->type_in_mask && type) - mask = d->type_buf[irq_data->reg_offset / map->reg_stride]; + mask = d->type_buf[reg] & irq_data->mask; else mask = irq_data->mask; if (d->chip->clear_on_unmask) d->clear_status = true; - d->mask_buf[irq_data->reg_offset / map->reg_stride] &= ~mask; + d->mask_buf[reg] &= ~mask; } static void regmap_irq_disable(struct irq_data *data) From 3bccf82169c503b7baa1ee469f0098f51ca2de7b Mon Sep 17 00:00:00 2001 From: Kai-Heng Feng Date: Tue, 21 Jun 2022 15:10:56 -0700 Subject: [PATCH 1109/1842] igb: Make DMA faster when CPU is active on the PCIe link [ Upstream commit 4e0effd9007ea0be31f7488611eb3824b4541554 ] Intel I210 on some Intel Alder Lake platforms can only achieve ~750Mbps Tx speed via iperf. 
The RR2DCDELAY shows around 0x2xxx DMA delay, which will be significantly lower when 1) ASPM is disabled or 2) SoC package c-state stays above PC3. When the RR2DCDELAY is around 0x1xxx the Tx speed can reach to ~950Mbps. According to the I210 datasheet "8.26.1 PCIe Misc. Register - PCIEMISC", "DMA Idle Indication" doesn't seem to tie to DMA coalesce anymore, so set it to 1b for "DMA is considered idle when there is no Rx or Tx AND when there are no TLPs indicating that CPU is active detected on the PCIe link (such as the host executes CSR or Configuration register read or write operation)" and performing Tx should also fall under "active CPU on PCIe link" case. In addition to that, commit b6e0c419f040 ("igb: Move DMA Coalescing init code to separate function.") seems to wrongly changed from enabling E1000_PCIEMISC_LX_DECISION to disabling it, also fix that. Fixes: b6e0c419f040 ("igb: Move DMA Coalescing init code to separate function.") Signed-off-by: Kai-Heng Feng Tested-by: Gurucharan (A Contingent worker at Intel) Signed-off-by: Tony Nguyen Link: https://lore.kernel.org/r/20220621221056.604304-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/igb/igb_main.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 758e468e677a..4e51f4bb58ff 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -9829,11 +9829,10 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba) struct e1000_hw *hw = &adapter->hw; u32 dmac_thr; u16 hwm; + u32 reg; if (hw->mac.type > e1000_82580) { if (adapter->flags & IGB_FLAG_DMAC) { - u32 reg; - /* force threshold to 0. */ wr32(E1000_DMCTXTH, 0); @@ -9866,7 +9865,6 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba) /* Disable BMC-to-OS Watchdog Enable */ if (hw->mac.type != e1000_i354) reg &= ~E1000_DMACR_DC_BMC2OSW_EN; - wr32(E1000_DMACR, reg); /* no lower threshold to disable @@ -9883,12 +9881,12 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba) */ wr32(E1000_DMCTXTH, (IGB_MIN_TXPBSIZE - (IGB_TX_BUF_4096 + adapter->max_frame_size)) >> 6); + } - /* make low power state decision controlled - * by DMA coal - */ + if (hw->mac.type >= e1000_i210 || + (adapter->flags & IGB_FLAG_DMAC)) { reg = rd32(E1000_PCIEMISC); - reg &= ~E1000_PCIEMISC_LX_DECISION; + reg |= E1000_PCIEMISC_LX_DECISION; wr32(E1000_PCIEMISC, reg); } /* endif adapter->dmac is not disabled */ } else if (hw->mac.type == e1000_82580) { From 340fbdc8011f2dc678f622c5ce1cbb5ab8305de7 Mon Sep 17 00:00:00 2001 From: Stephan Gerhold Date: Tue, 21 Jun 2022 13:48:44 +0200 Subject: [PATCH 1110/1842] virtio_net: fix xdp_rxq_info bug after suspend/resume [ Upstream commit 8af52fe9fd3bf5e7478da99193c0632276e1dfce ] The following sequence currently causes a driver bug warning when using virtio_net: # ip link set eth0 up # echo mem > /sys/power/state (or e.g. # rtcwake -s 10 -m mem) # ip link set eth0 down Missing register, driver bug WARNING: CPU: 0 PID: 375 at net/core/xdp.c:138 xdp_rxq_info_unreg+0x58/0x60 Call trace: xdp_rxq_info_unreg+0x58/0x60 virtnet_close+0x58/0xac __dev_close_many+0xac/0x140 __dev_change_flags+0xd8/0x210 dev_change_flags+0x24/0x64 do_setlink+0x230/0xdd0 ... This happens because virtnet_freeze() frees the receive_queue completely (including struct xdp_rxq_info) but does not call xdp_rxq_info_unreg(). 
Similarly, virtnet_restore() sets up the receive_queue again but does not call xdp_rxq_info_reg(). Actually, parts of virtnet_freeze_down() and virtnet_restore_up() are almost identical to virtnet_close() and virtnet_open(): only the calls to xdp_rxq_info_(un)reg() are missing. This means that we can fix this easily and avoid such problems in the future by just calling virtnet_close()/open() from the freeze/restore handlers. Aside from adding the missing xdp_rxq_info calls the only difference is that the refill work is only cancelled if netif_running(). However, this should not make any functional difference since the refill work should only be active if the network interface is actually up. Fixes: 754b8a21a96d ("virtio_net: setup xdp_rxq_info") Signed-off-by: Stephan Gerhold Acked-by: Jesper Dangaard Brouer Acked-by: Jason Wang Link: https://lore.kernel.org/r/20220621114845.3650258-1-stephan.gerhold@kernkonzept.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/virtio_net.c | 25 ++++++------------------- 1 file changed, 6 insertions(+), 19 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index cbe47eed7cc3..ad9064df3deb 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -2366,7 +2366,6 @@ static const struct ethtool_ops virtnet_ethtool_ops = { static void virtnet_freeze_down(struct virtio_device *vdev) { struct virtnet_info *vi = vdev->priv; - int i; /* Make sure no work handler is accessing the device */ flush_work(&vi->config_work); @@ -2374,14 +2373,8 @@ static void virtnet_freeze_down(struct virtio_device *vdev) netif_tx_lock_bh(vi->dev); netif_device_detach(vi->dev); netif_tx_unlock_bh(vi->dev); - cancel_delayed_work_sync(&vi->refill); - - if (netif_running(vi->dev)) { - for (i = 0; i < vi->max_queue_pairs; i++) { - napi_disable(&vi->rq[i].napi); - virtnet_napi_tx_disable(&vi->sq[i].napi); - } - } + if (netif_running(vi->dev)) + virtnet_close(vi->dev); } static int init_vqs(struct virtnet_info *vi); @@ -2389,7 +2382,7 @@ static int init_vqs(struct virtnet_info *vi); static int virtnet_restore_up(struct virtio_device *vdev) { struct virtnet_info *vi = vdev->priv; - int err, i; + int err; err = init_vqs(vi); if (err) @@ -2398,15 +2391,9 @@ static int virtnet_restore_up(struct virtio_device *vdev) virtio_device_ready(vdev); if (netif_running(vi->dev)) { - for (i = 0; i < vi->curr_queue_pairs; i++) - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL)) - schedule_delayed_work(&vi->refill, 0); - - for (i = 0; i < vi->max_queue_pairs; i++) { - virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi); - virtnet_napi_tx_enable(vi, vi->sq[i].vq, - &vi->sq[i].napi); - } + err = virtnet_open(vi->dev); + if (err) + return err; } netif_tx_lock_bh(vi->dev); From afbc954e78962028a6806e34f3cc40e236f01770 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Mon, 20 Jun 2022 12:13:52 -0700 Subject: [PATCH 1111/1842] Revert "net/tls: fix tls_sk_proto_close executed repeatedly" [ Upstream commit 1b205d948fbb06a7613d87dcea0ff5fd8a08ed91 ] This reverts commit 69135c572d1f84261a6de2a1268513a7e71753e2. This commit was just papering over the issue, ULP should not get ->update() called with its own sk_prot. Each ULP would need to add this check. 
Fixes: 69135c572d1f ("net/tls: fix tls_sk_proto_close executed repeatedly") Signed-off-by: Jakub Kicinski Reviewed-by: John Fastabend Link: https://lore.kernel.org/r/20220620191353.1184629-1-kuba@kernel.org Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- net/tls/tls_main.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c index 9492528f5852..58d22d6b86ae 100644 --- a/net/tls/tls_main.c +++ b/net/tls/tls_main.c @@ -787,9 +787,6 @@ static void tls_update(struct sock *sk, struct proto *p, { struct tls_context *ctx; - if (sk->sk_prot == p) - return; - ctx = tls_get_ctx(sk); if (likely(ctx)) { ctx->sk_write_space = write_space; From fe06c692cd7e3bf12b2561ba2d5f6490e8082a8f Mon Sep 17 00:00:00 2001 From: Chaitanya Kulkarni Date: Mon, 9 Nov 2020 16:33:42 -0800 Subject: [PATCH 1112/1842] nvme: centralize setting the timeout in nvme_alloc_request [ Upstream commit 0d2e7c840b178bf9a47bd0de89d8f9182fa71d86 ] The function nvme_alloc_request() is called from different context (I/O and Admin queue) where callers do not consider the I/O timeout when called from I/O queue context. Update nvme_alloc_request() to set the default I/O and Admin timeout value based on whether the queuedata is set or not. Signed-off-by: Chaitanya Kulkarni Reviewed-by: Sagi Grimberg Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 11 +++++++++-- drivers/nvme/host/lightnvm.c | 3 ++- drivers/nvme/host/pci.c | 2 -- 3 files changed, 11 insertions(+), 5 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 0aa68da51ed7..4a7154cbca50 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -553,6 +553,11 @@ struct request *nvme_alloc_request(struct request_queue *q, if (IS_ERR(req)) return req; + if (req->q->queuedata) + req->timeout = NVME_IO_TIMEOUT; + else /* no queuedata implies admin queue */ + req->timeout = ADMIN_TIMEOUT; + req->cmd_flags |= REQ_FAILFAST_DRIVER; nvme_clear_nvme_request(req); nvme_req(req)->cmd = cmd; @@ -927,7 +932,8 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, if (IS_ERR(req)) return PTR_ERR(req); - req->timeout = timeout ? timeout : ADMIN_TIMEOUT; + if (timeout) + req->timeout = timeout; if (buffer && bufflen) { ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL); @@ -1097,7 +1103,8 @@ static int nvme_submit_user_cmd(struct request_queue *q, if (IS_ERR(req)) return PTR_ERR(req); - req->timeout = timeout ? timeout : ADMIN_TIMEOUT; + if (timeout) + req->timeout = timeout; nvme_req(req)->flags |= NVME_REQ_USERCMD; if (ubuffer && bufflen) { diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 8e562d0f2c30..88a7c8eac455 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -774,7 +774,8 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q, goto err_cmd; } - rq->timeout = timeout ? 
timeout : ADMIN_TIMEOUT; + if (timeout) + rq->timeout = timeout; if (ppa_buf && ppa_len) { ppa_list = dma_pool_alloc(dev->dma_pool, GFP_KERNEL, &ppa_dma); diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 7de24a10dd92..f2d0148d4050 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -1356,7 +1356,6 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) return BLK_EH_RESET_TIMER; } - abort_req->timeout = ADMIN_TIMEOUT; abort_req->end_io_data = NULL; blk_execute_rq_nowait(abort_req->q, NULL, abort_req, 0, abort_endio); @@ -2283,7 +2282,6 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode) if (IS_ERR(req)) return PTR_ERR(req); - req->timeout = ADMIN_TIMEOUT; req->end_io_data = nvmeq; init_completion(&nvmeq->delete_done); From 3ee62a1f0701ab55370f84a8a786c7ae9ae029e4 Mon Sep 17 00:00:00 2001 From: Chaitanya Kulkarni Date: Mon, 9 Nov 2020 18:24:00 -0800 Subject: [PATCH 1113/1842] nvme: split nvme_alloc_request() [ Upstream commit 39dfe84451b4526a8054cc5a127337bca980dfa3 ] Right now nvme_alloc_request() allocates a request from block layer based on the value of the qid. When qid set to NVME_QID_ANY it used blk_mq_alloc_request() else blk_mq_alloc_request_hctx(). The function nvme_alloc_request() is called from different context, The only place where it uses non NVME_QID_ANY value is for fabrics connect commands :- nvme_submit_sync_cmd() NVME_QID_ANY nvme_features() NVME_QID_ANY nvme_sec_submit() NVME_QID_ANY nvmf_reg_read32() NVME_QID_ANY nvmf_reg_read64() NVME_QID_ANY nvmf_reg_write32() NVME_QID_ANY nvmf_connect_admin_queue() NVME_QID_ANY nvme_submit_user_cmd() NVME_QID_ANY nvme_alloc_request() nvme_keep_alive() NVME_QID_ANY nvme_alloc_request() nvme_timeout() NVME_QID_ANY nvme_alloc_request() nvme_delete_queue() NVME_QID_ANY nvme_alloc_request() nvmet_passthru_execute_cmd() NVME_QID_ANY nvme_alloc_request() nvmf_connect_io_queue() QID __nvme_submit_sync_cmd() nvme_alloc_request() With passthru nvme_alloc_request() now falls into the I/O fast path such that blk_mq_alloc_request_hctx() is never gets called and that adds additional branch check in fast path. Split the nvme_alloc_request() into nvme_alloc_request() and nvme_alloc_request_qid(). Replace each call of the nvme_alloc_request() with NVME_QID_ANY param with a call to newly added nvme_alloc_request() without NVME_QID_ANY. Replace a call to nvme_alloc_request() with QID param with a call to newly added nvme_alloc_request() and nvme_alloc_request_qid() based on the qid value set in the __nvme_submit_sync_cmd(). Signed-off-by: Chaitanya Kulkarni Reviewed-by: Logan Gunthorpe Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 52 +++++++++++++++++++++++----------- drivers/nvme/host/lightnvm.c | 5 ++-- drivers/nvme/host/nvme.h | 2 ++ drivers/nvme/host/pci.c | 4 +-- drivers/nvme/target/passthru.c | 2 +- 5 files changed, 42 insertions(+), 23 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 4a7154cbca50..68395dcd067c 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -538,21 +538,14 @@ static inline void nvme_clear_nvme_request(struct request *req) } } -struct request *nvme_alloc_request(struct request_queue *q, - struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid) +static inline unsigned int nvme_req_op(struct nvme_command *cmd) { - unsigned op = nvme_is_write(cmd) ? 
REQ_OP_DRV_OUT : REQ_OP_DRV_IN; - struct request *req; - - if (qid == NVME_QID_ANY) { - req = blk_mq_alloc_request(q, op, flags); - } else { - req = blk_mq_alloc_request_hctx(q, op, flags, - qid ? qid - 1 : 0); - } - if (IS_ERR(req)) - return req; + return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN; +} +static inline void nvme_init_request(struct request *req, + struct nvme_command *cmd) +{ if (req->q->queuedata) req->timeout = NVME_IO_TIMEOUT; else /* no queuedata implies admin queue */ @@ -561,11 +554,33 @@ struct request *nvme_alloc_request(struct request_queue *q, req->cmd_flags |= REQ_FAILFAST_DRIVER; nvme_clear_nvme_request(req); nvme_req(req)->cmd = cmd; +} +struct request *nvme_alloc_request(struct request_queue *q, + struct nvme_command *cmd, blk_mq_req_flags_t flags) +{ + struct request *req; + + req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags); + if (!IS_ERR(req)) + nvme_init_request(req, cmd); return req; } EXPORT_SYMBOL_GPL(nvme_alloc_request); +struct request *nvme_alloc_request_qid(struct request_queue *q, + struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid) +{ + struct request *req; + + req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags, + qid ? qid - 1 : 0); + if (!IS_ERR(req)) + nvme_init_request(req, cmd); + return req; +} +EXPORT_SYMBOL_GPL(nvme_alloc_request_qid); + static int nvme_toggle_streams(struct nvme_ctrl *ctrl, bool enable) { struct nvme_command c; @@ -928,7 +943,10 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, struct request *req; int ret; - req = nvme_alloc_request(q, cmd, flags, qid); + if (qid == NVME_QID_ANY) + req = nvme_alloc_request(q, cmd, flags); + else + req = nvme_alloc_request_qid(q, cmd, flags, qid); if (IS_ERR(req)) return PTR_ERR(req); @@ -1099,7 +1117,7 @@ static int nvme_submit_user_cmd(struct request_queue *q, void *meta = NULL; int ret; - req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY); + req = nvme_alloc_request(q, cmd, 0); if (IS_ERR(req)) return PTR_ERR(req); @@ -1174,8 +1192,8 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl) { struct request *rq; - rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, BLK_MQ_REQ_RESERVED, - NVME_QID_ANY); + rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, + BLK_MQ_REQ_RESERVED); if (IS_ERR(rq)) return PTR_ERR(rq); diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c index 88a7c8eac455..470cef3abec3 100644 --- a/drivers/nvme/host/lightnvm.c +++ b/drivers/nvme/host/lightnvm.c @@ -653,7 +653,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q, nvme_nvm_rqtocmd(rqd, ns, cmd); - rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY); + rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0); if (IS_ERR(rq)) return rq; @@ -767,8 +767,7 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q, DECLARE_COMPLETION_ONSTACK(wait); int ret = 0; - rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0, - NVME_QID_ANY); + rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0); if (IS_ERR(rq)) { ret = -ENOMEM; goto err_cmd; diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 95b9657cabaf..8e40a6306e53 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -662,6 +662,8 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl); #define NVME_QID_ANY -1 struct request *nvme_alloc_request(struct request_queue *q, + struct nvme_command *cmd, blk_mq_req_flags_t flags); +struct request *nvme_alloc_request_qid(struct request_queue *q, struct 
nvme_command *cmd, blk_mq_req_flags_t flags, int qid); void nvme_cleanup_cmd(struct request *req); blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index f2d0148d4050..07a4d5d387cd 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -1350,7 +1350,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) req->tag, nvmeq->qid); abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd, - BLK_MQ_REQ_NOWAIT, NVME_QID_ANY); + BLK_MQ_REQ_NOWAIT); if (IS_ERR(abort_req)) { atomic_inc(&dev->ctrl.abort_limit); return BLK_EH_RESET_TIMER; @@ -2278,7 +2278,7 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode) cmd.delete_queue.opcode = opcode; cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid); - req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY); + req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT); if (IS_ERR(req)) return PTR_ERR(req); diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c index 8ee94f056898..d24251ece502 100644 --- a/drivers/nvme/target/passthru.c +++ b/drivers/nvme/target/passthru.c @@ -244,7 +244,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req) q = ns->queue; } - rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY); + rq = nvme_alloc_request(q, req->cmd, 0); if (IS_ERR(rq)) { status = NVME_SC_INTERNAL; goto out_put_ns; From e7ccaa1abacf6f8b4fe4e0a06cbb9e2051a4d79e Mon Sep 17 00:00:00 2001 From: Chaitanya Kulkarni Date: Sun, 28 Feb 2021 18:06:06 -0800 Subject: [PATCH 1114/1842] nvme: mark nvme_setup_passsthru() inline [ Upstream commit 7a36604668b9b1f84126ef0342144ba5b07e518f ] Since nvmet_setup_passthru() function falls in fast path when called from the NVMeOF passthru backend, make it inline. Signed-off-by: Chaitanya Kulkarni Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 68395dcd067c..d81b0cff15e0 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -683,7 +683,7 @@ static void nvme_assign_write_stream(struct nvme_ctrl *ctrl, req->q->write_hints[streamid] += blk_rq_bytes(req) >> 9; } -static void nvme_setup_passthrough(struct request *req, +static inline void nvme_setup_passthrough(struct request *req, struct nvme_command *cmd) { memcpy(cmd, nvme_req(req)->cmd, sizeof(*cmd)); From ba388d4e9a684fc7a84b7cbdf9887efe11f4fde5 Mon Sep 17 00:00:00 2001 From: Chaitanya Kulkarni Date: Sun, 28 Feb 2021 18:06:08 -0800 Subject: [PATCH 1115/1842] nvme: don't check nvme_req flags for new req MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit c03fd85de293a4f65fcb94a795bf4c12a432bb6c ] nvme_clear_request() has a check for flag REQ_DONTPREP and it is called from nvme_init_request() and nvme_setuo_cmd(). The function nvme_init_request() is called from nvme_alloc_request() and nvme_alloc_request_qid(). From these two callers new request is allocated everytime. For newly allocated request RQF_DONTPREP is never set. 
Since after getting a tag, block layer sets the req->rq_flags == 0 and never sets the REQ_DONTPREP when returning the request :- nvme_alloc_request() blk_mq_alloc_request() blk_mq_rq_ctx_init() rq->rq_flags = 0 <---- nvme_alloc_request_qid() blk_mq_alloc_request_hctx() blk_mq_rq_ctx_init() rq->rq_flags = 0 <---- The block layer does set req->rq_flags but REQ_DONTPREP is not one of them and that is set by the driver. That means we can unconditinally set the REQ_DONTPREP value to the rq->rq_flags when nvme_init_request()->nvme_clear_request() is called from above two callers. Move the check for REQ_DONTPREP from nvme_clear_nvme_request() into nvme_setup_cmd(). This is needed since nvme_alloc_request() now gets called from fast path when NVMeOF target is configured with passthru backend to avoid unnecessary checks in the fast path. Signed-off-by: Chaitanya Kulkarni Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index d81b0cff15e0..c42ad0b8247b 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -531,11 +531,9 @@ EXPORT_SYMBOL_NS_GPL(nvme_put_ns, NVME_TARGET_PASSTHRU); static inline void nvme_clear_nvme_request(struct request *req) { - if (!(req->rq_flags & RQF_DONTPREP)) { - nvme_req(req)->retries = 0; - nvme_req(req)->flags = 0; - req->rq_flags |= RQF_DONTPREP; - } + nvme_req(req)->retries = 0; + nvme_req(req)->flags = 0; + req->rq_flags |= RQF_DONTPREP; } static inline unsigned int nvme_req_op(struct nvme_command *cmd) @@ -854,7 +852,8 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, struct nvme_ctrl *ctrl = nvme_req(req)->ctrl; blk_status_t ret = BLK_STS_OK; - nvme_clear_nvme_request(req); + if (!(req->rq_flags & RQF_DONTPREP)) + nvme_clear_nvme_request(req); memset(cmd, 0, sizeof(*cmd)); switch (req_op(req)) { From 938f594266a671b7cc90bf96d2bee27f9e4f3339 Mon Sep 17 00:00:00 2001 From: Keith Busch Date: Wed, 17 Mar 2021 13:37:02 -0700 Subject: [PATCH 1116/1842] nvme-pci: allocate nvme_command within driver pdu [ Upstream commit af7fae857ea22e9c2aef812e1321d9c5c206edde ] Except for pci, all the nvme transport drivers allocate a command within the driver's pdu. Align pci with everyone else by allocating the nvme command within pci's pdu and replace the .queue_rq() stack variable with this. 
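A rough sketch of the per-request layout this relies on; the two struct members mirror the nvme_iod hunk below, the rest is illustrative:

/* Sketch: the command now lives in the driver's per-request PDU.
 * blk_mq_rq_to_pdu() returns the space the block layer reserves for
 * the driver behind each request, so no on-stack copy is needed. */
struct example_iod {
	struct nvme_request req;
	struct nvme_command cmd;	/* embedded instead of a stack variable */
	/* ... mapping state ... */
};

static void example_queue_rq(struct request *rq)
{
	struct example_iod *iod = blk_mq_rq_to_pdu(rq);
	struct nvme_command *cmnd = &iod->cmd;

	/* build and submit *cmnd as before ... */
}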
Signed-off-by: Keith Busch Reviewed-by: Jens Axboe Reviewed-by: Sagi Grimberg Reviewed-by: Chaitanya Kulkarni Reviewed-by: Himanshu Madhani Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/pci.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 07a4d5d387cd..31c6938e5045 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -224,6 +224,7 @@ struct nvme_queue { */ struct nvme_iod { struct nvme_request req; + struct nvme_command cmd; struct nvme_queue *nvmeq; bool use_sgl; int aborted; @@ -917,7 +918,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx, struct nvme_dev *dev = nvmeq->dev; struct request *req = bd->rq; struct nvme_iod *iod = blk_mq_rq_to_pdu(req); - struct nvme_command cmnd; + struct nvme_command *cmnd = &iod->cmd; blk_status_t ret; iod->aborted = 0; @@ -931,24 +932,24 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx, if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags))) return BLK_STS_IOERR; - ret = nvme_setup_cmd(ns, req, &cmnd); + ret = nvme_setup_cmd(ns, req, cmnd); if (ret) return ret; if (blk_rq_nr_phys_segments(req)) { - ret = nvme_map_data(dev, req, &cmnd); + ret = nvme_map_data(dev, req, cmnd); if (ret) goto out_free_cmd; } if (blk_integrity_rq(req)) { - ret = nvme_map_metadata(dev, req, &cmnd); + ret = nvme_map_metadata(dev, req, cmnd); if (ret) goto out_unmap_data; } blk_mq_start_request(req); - nvme_submit_cmd(nvmeq, &cmnd, bd->last); + nvme_submit_cmd(nvmeq, cmnd, bd->last); return BLK_STS_OK; out_unmap_data: nvme_unmap_data(dev, req); From 169f7d770552437671b998b63560209982615cb6 Mon Sep 17 00:00:00 2001 From: Enzo Matsumiya Date: Fri, 5 Nov 2021 23:08:57 -0300 Subject: [PATCH 1117/1842] nvme-pci: add NO APST quirk for Kioxia device [ Upstream commit 5a6254d55e2a9f7919ead8580d7aa0c7a382b26a ] This particular Kioxia device times out and aborts I/O during any load, but it's more easily observable with discards (fstrim). The device gets to a state that is also not possible to use "nvme set-feature" to disable APST. Booting with nvme_core.default_ps_max_latency=0 solves the issue. We had a dozen or so of these devices behaving this same way in customer environments. Signed-off-by: Enzo Matsumiya Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index c42ad0b8247b..9ec3ac367a76 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -2699,6 +2699,20 @@ static const struct nvme_core_quirk_entry core_quirks[] = { .vid = 0x14a4, .fr = "22301111", .quirks = NVME_QUIRK_SIMPLE_SUSPEND, + }, + { + /* + * This Kioxia CD6-V Series / HPE PE8030 device times out and + * aborts I/O during any load, but more easily reproducible + * with discards (fstrim). + * + * The device is left in a state where it is also not possible + * to use "nvme set-feature" to disable APST, but booting with + * nvme_core.default_ps_max_latency=0 works. 
+ */ + .vid = 0x1e0f, + .mn = "KCD6XVUL6T40", + .quirks = NVME_QUIRK_NO_APST, } }; From 30531e0d7b5d9418903a4413280fb06bf923e68e Mon Sep 17 00:00:00 2001 From: Christoph Hellwig Date: Fri, 17 Jun 2022 10:29:42 +0200 Subject: [PATCH 1118/1842] nvme: move the Samsung X5 quirk entry to the core quirks [ Upstream commit e6487833182a8a0187f0292aca542fc163ccd03e ] This device shares the PCI ID with the Samsung 970 Evo Plus that does not need or want the quirks. Move the the quirk entry to the core table based on the model number instead. Fixes: bc360b0b1611 ("nvme-pci: add quirks for Samsung X5 SSDs") Signed-off-by: Christoph Hellwig Reviewed-by: Pankaj Raghav Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 14 ++++++++++++++ drivers/nvme/host/pci.c | 4 ---- 2 files changed, 14 insertions(+), 4 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 9ec3ac367a76..af2902d70b19 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -2713,6 +2713,20 @@ static const struct nvme_core_quirk_entry core_quirks[] = { .vid = 0x1e0f, .mn = "KCD6XVUL6T40", .quirks = NVME_QUIRK_NO_APST, + }, + { + /* + * The external Samsung X5 SSD fails initialization without a + * delay before checking if it is ready and has a whole set of + * other problems. To make this even more interesting, it + * shares the PCI ID with internal Samsung 970 Evo Plus that + * does not need or want these quirks. + */ + .vid = 0x144d, + .mn = "Samsung Portable SSD X5", + .quirks = NVME_QUIRK_DELAY_BEFORE_CHK_RDY | + NVME_QUIRK_NO_DEEPEST_PS | + NVME_QUIRK_IGNORE_DEV_SUBNQN, } }; diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 31c6938e5045..9e633f4dcec7 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -3265,10 +3265,6 @@ static const struct pci_device_id nvme_id_table[] = { NVME_QUIRK_128_BYTES_SQES | NVME_QUIRK_SHARED_TAGS | NVME_QUIRK_SKIP_CID_GEN }, - { PCI_DEVICE(0x144d, 0xa808), /* Samsung X5 */ - .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY| - NVME_QUIRK_NO_DEEPEST_PS | - NVME_QUIRK_IGNORE_DEV_SUBNQN, }, { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) }, { 0, } }; From abe487a88a5daa3b26e415105b4ad2e84624a461 Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Thu, 23 Jun 2022 11:29:48 +0300 Subject: [PATCH 1119/1842] gpio: winbond: Fix error code in winbond_gpio_get() [ Upstream commit 9ca766eaea2e87b8b773bff04ee56c055cb76d4e ] This error path returns 1, but it should instead propagate the negative error code from winbond_sio_enter(). 
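The change follows the usual pattern of carrying status codes in a signed int rather than squeezing them through a bool; a minimal sketch, assuming the winbond_sio_enter() usage shown in the hunk below:

/* Sketch: propagate the callee's negative errno unchanged.
 * Storing the result in a bool ("val") collapsed it to true/false,
 * so the caller saw 1 instead of the real error code. */
static int example_get(unsigned long base)
{
	int ret;

	ret = winbond_sio_enter(base);	/* 0 on success, negative errno on failure */
	if (ret)
		return ret;		/* not "return 1" */

	/* ... read the pin state, winbond_sio_leave(), return the value ... */
	return 0;
}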
Fixes: a0d65009411c ("gpio: winbond: Add driver") Signed-off-by: Dan Carpenter Reviewed-by: Andy Shevchenko Signed-off-by: Bartosz Golaszewski Signed-off-by: Sasha Levin --- drivers/gpio/gpio-winbond.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/gpio/gpio-winbond.c b/drivers/gpio/gpio-winbond.c index 7f8f5b02e31d..4b61d975cc0e 100644 --- a/drivers/gpio/gpio-winbond.c +++ b/drivers/gpio/gpio-winbond.c @@ -385,12 +385,13 @@ static int winbond_gpio_get(struct gpio_chip *gc, unsigned int offset) unsigned long *base = gpiochip_get_data(gc); const struct winbond_gpio_info *info; bool val; + int ret; winbond_gpio_get_info(&offset, &info); - val = winbond_sio_enter(*base); - if (val) - return val; + ret = winbond_sio_enter(*base); + if (ret) + return ret; winbond_sio_select_logical(*base, info->dev); From 5ee016f6120ae6c64f9f8953405590e6d4b604e1 Mon Sep 17 00:00:00 2001 From: Thomas Richter Date: Fri, 10 Jun 2022 15:19:00 +0200 Subject: [PATCH 1120/1842] s390/cpumf: Handle events cycles and instructions identical [ Upstream commit be857b7f77d130dbbd47c91fc35198b040f35865 ] Events CPU_CYCLES and INSTRUCTIONS can be submitted with two different perf_event attribute::type values: - PERF_TYPE_HARDWARE: when invoked via perf tool predefined events name cycles or cpu-cycles or instructions. - pmu->type: when invoked via perf tool event name cpu_cf/CPU_CYLCES/ or cpu_cf/INSTRUCTIONS/. This invocation also selects the PMU to which the event belongs. Handle both type of invocations identical for events CPU_CYLCES and INSTRUCTIONS. They address the same hardware. The result is different when event modifier exclude_kernel is also set. Invocation with event modifier for user space event counting fails. Output before: # perf stat -e cpum_cf/cpu_cycles/u -- true Performance counter stats for 'true': cpum_cf/cpu_cycles/u 0.000761033 seconds time elapsed 0.000076000 seconds user 0.000725000 seconds sys # Output after: # perf stat -e cpum_cf/cpu_cycles/u -- true Performance counter stats for 'true': 349,613 cpum_cf/cpu_cycles/u 0.000844143 seconds time elapsed 0.000079000 seconds user 0.000800000 seconds sys # Fixes: 6a82e23f45fe ("s390/cpumf: Adjust registration of s390 PMU device drivers") Signed-off-by: Thomas Richter Acked-by: Sumanth Korikkar [agordeev@linux.ibm.com corrected commit ID of Fixes commit] Signed-off-by: Alexander Gordeev Signed-off-by: Sasha Levin --- arch/s390/kernel/perf_cpum_cf.c | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c index 0eb1d1cc53a8..dddb32e53db8 100644 --- a/arch/s390/kernel/perf_cpum_cf.c +++ b/arch/s390/kernel/perf_cpum_cf.c @@ -292,6 +292,26 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type) return err; } +/* Events CPU_CYLCES and INSTRUCTIONS can be submitted with two different + * attribute::type values: + * - PERF_TYPE_HARDWARE: + * - pmu->type: + * Handle both type of invocations identical. They address the same hardware. + * The result is different when event modifiers exclude_kernel and/or + * exclude_user are also set. 
+ */ +static int cpumf_pmu_event_type(struct perf_event *event) +{ + u64 ev = event->attr.config; + + if (cpumf_generic_events_basic[PERF_COUNT_HW_CPU_CYCLES] == ev || + cpumf_generic_events_basic[PERF_COUNT_HW_INSTRUCTIONS] == ev || + cpumf_generic_events_user[PERF_COUNT_HW_CPU_CYCLES] == ev || + cpumf_generic_events_user[PERF_COUNT_HW_INSTRUCTIONS] == ev) + return PERF_TYPE_HARDWARE; + return PERF_TYPE_RAW; +} + static int cpumf_pmu_event_init(struct perf_event *event) { unsigned int type = event->attr.type; @@ -301,7 +321,7 @@ static int cpumf_pmu_event_init(struct perf_event *event) err = __hw_perf_event_init(event, type); else if (event->pmu->type == type) /* Registered as unknown PMU */ - err = __hw_perf_event_init(event, PERF_TYPE_RAW); + err = __hw_perf_event_init(event, cpumf_pmu_event_type(event)); else return -ENOENT; From 58c3a27e9c233bcb203624ce4f4132423fe3318d Mon Sep 17 00:00:00 2001 From: Haibo Chen Date: Mon, 25 Apr 2022 16:41:00 +0800 Subject: [PATCH 1121/1842] iio: mma8452: fix probe fail when device tree compatible is used. [ Upstream commit fe18894930a025617114aa8ca0adbf94d5bffe89 ] Correct the logic for the probe. First check of_match_table, if not meet, then check i2c_driver.id_table. If both not meet, then return fail. Fixes: a47ac019e7e8 ("iio: mma8452: Fix probe failing when an i2c_device_id is used") Signed-off-by: Haibo Chen Link: https://lore.kernel.org/r/1650876060-17577-1-git-send-email-haibo.chen@nxp.com Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- drivers/iio/accel/mma8452.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c index e7e280282774..67463be797de 100644 --- a/drivers/iio/accel/mma8452.c +++ b/drivers/iio/accel/mma8452.c @@ -1542,11 +1542,13 @@ static int mma8452_probe(struct i2c_client *client, mutex_init(&data->lock); data->chip_info = device_get_match_data(&client->dev); - if (!data->chip_info && id) { - data->chip_info = &mma_chip_info_table[id->driver_data]; - } else { - dev_err(&client->dev, "unknown device model\n"); - return -ENODEV; + if (!data->chip_info) { + if (id) { + data->chip_info = &mma_chip_info_table[id->driver_data]; + } else { + dev_err(&client->dev, "unknown device model\n"); + return -ENODEV; + } } data->vdd_reg = devm_regulator_get(&client->dev, "vdd"); From a547662534ca62e70c41ee36776ec4008f836268 Mon Sep 17 00:00:00 2001 From: Baruch Siach Date: Mon, 30 May 2022 11:50:26 +0300 Subject: [PATCH 1122/1842] iio: adc: vf610: fix conversion mode sysfs node name [ Upstream commit f1a633b15cd5371a2a83f02c513984e51132dd68 ] The documentation missed the "in_" prefix for this IIO_SHARED_BY_DIR entry. 
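From user space the attribute is therefore reached under the standard "in_" prefix; a small sketch of reading it (the device index is a placeholder, the real one depends on the system):

/* Sketch: read the renamed sysfs attribute; "iio:device0" is assumed. */
#include <stdio.h>

int main(void)
{
	char mode[32];
	FILE *f = fopen("/sys/bus/iio/devices/iio:device0/in_conversion_mode", "r");

	if (!f)
		return 1;
	if (fgets(mode, sizeof(mode), f))
		printf("conversion mode: %s", mode);
	fclose(f);
	return 0;
}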
Fixes: bf04c1a367e3 ("iio: adc: vf610: implement configurable conversion modes") Signed-off-by: Baruch Siach Acked-by: Haibo Chen Link: https://lore.kernel.org/r/560dc93fafe5ef7e9a409885fd20b6beac3973d8.1653900626.git.baruch@tkos.co.il Signed-off-by: Jonathan Cameron Signed-off-by: Sasha Levin --- Documentation/ABI/testing/sysfs-bus-iio-vf610 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Documentation/ABI/testing/sysfs-bus-iio-vf610 b/Documentation/ABI/testing/sysfs-bus-iio-vf610 index 308a6756d3bf..491ead804488 100644 --- a/Documentation/ABI/testing/sysfs-bus-iio-vf610 +++ b/Documentation/ABI/testing/sysfs-bus-iio-vf610 @@ -1,4 +1,4 @@ -What: /sys/bus/iio/devices/iio:deviceX/conversion_mode +What: /sys/bus/iio/devices/iio:deviceX/in_conversion_mode KernelVersion: 4.2 Contact: linux-iio@vger.kernel.org Description: From 116c3e81b0537ac6ea89445ff235edc8d9e64d51 Mon Sep 17 00:00:00 2001 From: Andy Shevchenko Date: Mon, 20 Jun 2022 13:43:16 +0300 Subject: [PATCH 1123/1842] usb: typec: wcove: Drop wrong dependency to INTEL_SOC_PMIC [ Upstream commit 9ef165406308515dcf2e3f6e97b39a1c56d86db5 ] Intel SoC PMIC is a generic name for all PMICs that are used on Intel platforms. In particular, INTEL_SOC_PMIC kernel configuration option refers to Crystal Cove PMIC, which has never been a part of any Intel Broxton hardware. Drop wrong dependency from Kconfig. Note, the correct dependency is satisfied via ACPI PMIC OpRegion driver, which the Type-C depends on. Fixes: d2061f9cc32d ("usb: typec: add driver for Intel Whiskey Cove PMIC USB Type-C PHY") Reported-by: Hans de Goede Reviewed-by: Guenter Roeck Reviewed-by: Heikki Krogerus Reviewed-by: Hans de Goede Signed-off-by: Andy Shevchenko Link: https://lore.kernel.org/r/20220620104316.57592-1-andriy.shevchenko@linux.intel.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Sasha Levin --- drivers/usb/typec/tcpm/Kconfig | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/usb/typec/tcpm/Kconfig b/drivers/usb/typec/tcpm/Kconfig index 557f392fe24d..073fd2ea5e0b 100644 --- a/drivers/usb/typec/tcpm/Kconfig +++ b/drivers/usb/typec/tcpm/Kconfig @@ -56,7 +56,6 @@ config TYPEC_WCOVE tristate "Intel WhiskeyCove PMIC USB Type-C PHY driver" depends on ACPI depends on MFD_INTEL_PMC_BXT - depends on INTEL_SOC_PMIC depends on BXT_WC_PMIC_OPREGION help This driver adds support for USB Type-C on Intel Broxton platforms From b8142a84657eb464e4d7b2f6e509fdf70629bcb2 Mon Sep 17 00:00:00 2001 From: Mathias Nyman Date: Thu, 23 Jun 2022 14:19:43 +0300 Subject: [PATCH 1124/1842] xhci: turn off port power in shutdown commit 83810f84ecf11dfc5a9414a8b762c3501b328185 upstream. If ports are not turned off in shutdown then runtime suspended self-powered USB devices may survive in U3 link state over S5. During subsequent boot, if firmware sends an IPC command to program the port in DISCONNECT state, it will time out, causing significant delay in the boot time. Turning off roothub port power is also recommended in xhci specification 4.19.4 "Port Power" in the additional note. 
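A condensed sketch of the resulting shutdown flow, based on the hunks below; the switch from spin_lock_irq() to spin_lock_irqsave() matters because xhci_set_port_power() may drop and re-acquire the lock and needs the saved flags:

/* Sketch of the shutdown sequence; quirk handling and the shared-hcd
 * timer teardown are omitted, see the diff below for the full version. */
static void example_shutdown(struct xhci_hcd *xhci, struct usb_hcd *main_hcd,
			     struct usb_hcd *shared_hcd)
{
	unsigned long flags;
	int i;

	spin_lock_irqsave(&xhci->lock, flags);
	xhci_halt(xhci);

	for (i = 0; i < xhci->usb2_rhub.num_ports; i++)
		xhci_set_port_power(xhci, main_hcd, i, false, &flags);
	for (i = 0; i < xhci->usb3_rhub.num_ports; i++)
		xhci_set_port_power(xhci, shared_hcd, i, false, &flags);

	spin_unlock_irqrestore(&xhci->lock, flags);
}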
Cc: stable@vger.kernel.org Signed-off-by: Mathias Nyman Link: https://lore.kernel.org/r/20220623111945.1557702-3-mathias.nyman@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/host/xhci-hub.c | 2 +- drivers/usb/host/xhci.c | 15 +++++++++++++-- drivers/usb/host/xhci.h | 2 ++ 3 files changed, 16 insertions(+), 3 deletions(-) diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c index 1eb3b5deb940..94adae8b19f0 100644 --- a/drivers/usb/host/xhci-hub.c +++ b/drivers/usb/host/xhci-hub.c @@ -566,7 +566,7 @@ struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd) * It will release and re-aquire the lock while calling ACPI * method. */ -static void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, +void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, u16 index, bool on, unsigned long *flags) __must_hold(&xhci->lock) { diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index a1ed5e0d0612..997de5f294f1 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -775,6 +775,8 @@ static void xhci_stop(struct usb_hcd *hcd) void xhci_shutdown(struct usb_hcd *hcd) { struct xhci_hcd *xhci = hcd_to_xhci(hcd); + unsigned long flags; + int i; if (xhci->quirks & XHCI_SPURIOUS_REBOOT) usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev)); @@ -790,12 +792,21 @@ void xhci_shutdown(struct usb_hcd *hcd) del_timer_sync(&xhci->shared_hcd->rh_timer); } - spin_lock_irq(&xhci->lock); + spin_lock_irqsave(&xhci->lock, flags); xhci_halt(xhci); + + /* Power off USB2 ports*/ + for (i = 0; i < xhci->usb2_rhub.num_ports; i++) + xhci_set_port_power(xhci, xhci->main_hcd, i, false, &flags); + + /* Power off USB3 ports*/ + for (i = 0; i < xhci->usb3_rhub.num_ports; i++) + xhci_set_port_power(xhci, xhci->shared_hcd, i, false, &flags); + /* Workaround for spurious wakeups at shutdown with HSW */ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) xhci_reset(xhci, XHCI_RESET_SHORT_USEC); - spin_unlock_irq(&xhci->lock); + spin_unlock_irqrestore(&xhci->lock, flags); xhci_cleanup_msix(xhci); diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index a46bbf5beffa..0c66424b34ba 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -2162,6 +2162,8 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, u16 wIndex, int xhci_hub_status_data(struct usb_hcd *hcd, char *buf); int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1); struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd); +void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, u16 index, + bool on, unsigned long *flags); void xhci_hc_died(struct xhci_hcd *xhci); From 114080d04ae489e24509a0a348c283fb5ea3c5db Mon Sep 17 00:00:00 2001 From: Tanveer Alam Date: Thu, 23 Jun 2022 14:19:44 +0300 Subject: [PATCH 1125/1842] xhci-pci: Allow host runtime PM as default for Intel Raptor Lake xHCI commit 7516da47a349e74de623243a27f9b8a91446bf4f upstream. In the same way as Intel Alder Lake TCSS (Type-C Subsystem) the Raptor Lake TCSS xHCI needs to be runtime suspended whenever possible to allow the TCSS hardware block to enter D3cold and thus save energy. 
Cc: stable@kernel.org Signed-off-by: Tanveer Alam Signed-off-by: Mathias Nyman Link: https://lore.kernel.org/r/20220623111945.1557702-4-mathias.nyman@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/host/xhci-pci.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index 886279755804..8ba0c1371fed 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -61,6 +61,7 @@ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI 0x464e #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed +#define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI 0xa71e #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba @@ -265,7 +266,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI || pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI || pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI || - pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI)) + pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI || + pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI)) xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; if (pdev->vendor == PCI_VENDOR_ID_ETRON && From d87dec22fdf51920be1739aca2e8c901d878017c Mon Sep 17 00:00:00 2001 From: Utkarsh Patel Date: Thu, 23 Jun 2022 14:19:45 +0300 Subject: [PATCH 1126/1842] xhci-pci: Allow host runtime PM as default for Intel Meteor Lake xHCI commit 8ffdc53a60049f3930afe161dc51c67959c8d83d upstream. Meteor Lake TCSS(Type-C Subsystem) xHCI needs to be runtime suspended whenever possible to allow the TCSS hardware block to enter D3cold and thus save energy. Cc: stable@kernel.org Signed-off-by: Utkarsh Patel Signed-off-by: Mathias Nyman Link: https://lore.kernel.org/r/20220623111945.1557702-5-mathias.nyman@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/host/xhci-pci.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c index 8ba0c1371fed..8952492d43be 100644 --- a/drivers/usb/host/xhci-pci.c +++ b/drivers/usb/host/xhci-pci.c @@ -62,6 +62,7 @@ #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI 0x464e #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed #define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI 0xa71e +#define PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI 0x7ec0 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba @@ -267,7 +268,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI || pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI || pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI || - pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI)) + pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI || + pdev->device == PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI)) xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; if (pdev->vendor == PCI_VENDOR_ID_ETRON && From 54604108be64c59c38827083c24d260b9820dd6e Mon Sep 17 00:00:00 2001 From: Alan Stern Date: Mon, 13 Jun 2022 10:17:03 -0400 Subject: [PATCH 1127/1842] usb: gadget: Fix non-unique driver names in raw-gadget driver commit f2d8c2606825317b77db1f9ba0fc26ef26160b30 upstream. 
In a report for a separate bug (which has already been fixed by commit 5f0b5f4d50fa "usb: gadget: fix race when gadget driver register via ioctl") in the raw-gadget driver, the syzbot console log included error messages caused by attempted registration of a new driver with the same name as an existing driver: > kobject_add_internal failed for raw-gadget with -EEXIST, don't try to register things with the same name in the same directory. > UDC core: USB Raw Gadget: driver registration failed: -17 > misc raw-gadget: fail, usb_gadget_register_driver returned -17 These errors arise because raw_gadget.c registers a separate UDC driver for each of the UDC instances it creates, but these drivers all have the same name: "raw-gadget". Until recently this wasn't a problem, but when the "gadget" bus was added and UDC drivers were registered on this bus, it became possible for name conflicts to cause the registrations to fail. The reason is simply that the bus code in the driver core uses the driver name as a sysfs directory name (e.g., /sys/bus/gadget/drivers/raw-gadget/), and you can't create two directories with the same pathname. To fix this problem, the driver names used by raw-gadget are made distinct by appending a unique ID number: "raw-gadget.N", with a different value of N for each driver instance. And to avoid the proliferation of error handling code in the raw_ioctl_init() routine, the error return paths are refactored into the common pattern (goto statements leading to cleanup code at the end of the routine). Link: https://lore.kernel.org/all/0000000000008c664105dffae2eb@google.com/ Fixes: fc274c1e9973 "USB: gadget: Add a new bus for gadgets" CC: Andrey Konovalov CC: Reported-and-tested-by: syzbot+02b16343704b3af1667e@syzkaller.appspotmail.com Reviewed-by: Andrey Konovalov Acked-by: Hillf Danton Signed-off-by: Alan Stern Link: https://lore.kernel.org/r/YqdG32w+3h8c1s7z@rowland.harvard.edu Signed-off-by: Greg Kroah-Hartman --- drivers/usb/gadget/legacy/raw_gadget.c | 62 +++++++++++++++++++------- 1 file changed, 46 insertions(+), 16 deletions(-) diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c index 34cecd3660bf..a8154813af58 100644 --- a/drivers/usb/gadget/legacy/raw_gadget.c +++ b/drivers/usb/gadget/legacy/raw_gadget.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include #include @@ -35,6 +36,9 @@ MODULE_LICENSE("GPL"); /*----------------------------------------------------------------------*/ +static DEFINE_IDA(driver_id_numbers); +#define DRIVER_DRIVER_NAME_LENGTH_MAX 32 + #define RAW_EVENT_QUEUE_SIZE 16 struct raw_event_queue { @@ -160,6 +164,9 @@ struct raw_dev { /* Reference to misc device: */ struct device *dev; + /* Make driver names unique */ + int driver_id_number; + /* Protected by lock: */ enum dev_state state; bool gadget_registered; @@ -188,6 +195,7 @@ static struct raw_dev *dev_new(void) spin_lock_init(&dev->lock); init_completion(&dev->ep0_done); raw_event_queue_init(&dev->queue); + dev->driver_id_number = -1; return dev; } @@ -198,6 +206,9 @@ static void dev_free(struct kref *kref) kfree(dev->udc_name); kfree(dev->driver.udc_name); + kfree(dev->driver.driver.name); + if (dev->driver_id_number >= 0) + ida_free(&driver_id_numbers, dev->driver_id_number); if (dev->req) { if (dev->ep0_urb_queued) usb_ep_dequeue(dev->gadget->ep0, dev->req); @@ -421,6 +432,7 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) struct usb_raw_init arg; char *udc_driver_name; char *udc_device_name; + char 
*driver_driver_name; unsigned long flags; if (copy_from_user(&arg, (void __user *)value, sizeof(arg))) @@ -439,36 +451,44 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) return -EINVAL; } + ret = ida_alloc(&driver_id_numbers, GFP_KERNEL); + if (ret < 0) + return ret; + dev->driver_id_number = ret; + + driver_driver_name = kmalloc(DRIVER_DRIVER_NAME_LENGTH_MAX, GFP_KERNEL); + if (!driver_driver_name) { + ret = -ENOMEM; + goto out_free_driver_id_number; + } + snprintf(driver_driver_name, DRIVER_DRIVER_NAME_LENGTH_MAX, + DRIVER_NAME ".%d", dev->driver_id_number); + udc_driver_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL); - if (!udc_driver_name) - return -ENOMEM; + if (!udc_driver_name) { + ret = -ENOMEM; + goto out_free_driver_driver_name; + } ret = strscpy(udc_driver_name, &arg.driver_name[0], UDC_NAME_LENGTH_MAX); - if (ret < 0) { - kfree(udc_driver_name); - return ret; - } + if (ret < 0) + goto out_free_udc_driver_name; ret = 0; udc_device_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL); if (!udc_device_name) { - kfree(udc_driver_name); - return -ENOMEM; + ret = -ENOMEM; + goto out_free_udc_driver_name; } ret = strscpy(udc_device_name, &arg.device_name[0], UDC_NAME_LENGTH_MAX); - if (ret < 0) { - kfree(udc_driver_name); - kfree(udc_device_name); - return ret; - } + if (ret < 0) + goto out_free_udc_device_name; ret = 0; spin_lock_irqsave(&dev->lock, flags); if (dev->state != STATE_DEV_OPENED) { dev_dbg(dev->dev, "fail, device is not opened\n"); - kfree(udc_driver_name); - kfree(udc_device_name); ret = -EINVAL; goto out_unlock; } @@ -483,14 +503,24 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) dev->driver.suspend = gadget_suspend; dev->driver.resume = gadget_resume; dev->driver.reset = gadget_reset; - dev->driver.driver.name = DRIVER_NAME; + dev->driver.driver.name = driver_driver_name; dev->driver.udc_name = udc_device_name; dev->driver.match_existing_only = 1; dev->state = STATE_DEV_INITIALIZED; + spin_unlock_irqrestore(&dev->lock, flags); + return ret; out_unlock: spin_unlock_irqrestore(&dev->lock, flags); +out_free_udc_device_name: + kfree(udc_device_name); +out_free_udc_driver_name: + kfree(udc_driver_name); +out_free_driver_driver_name: + kfree(driver_driver_name); +out_free_driver_id_number: + ida_free(&driver_id_numbers, dev->driver_id_number); return ret; } From 656eca37aae1f7dad26693ab1bd1c1d4f110741c Mon Sep 17 00:00:00 2001 From: Alan Stern Date: Wed, 22 Jun 2022 10:46:31 -0400 Subject: [PATCH 1128/1842] USB: gadget: Fix double-free bug in raw_gadget driver commit 90bc2af24638659da56397ff835f3c95a948f991 upstream. Re-reading a recently merged fix to the raw_gadget driver showed that it inadvertently introduced a double-free bug in a failure pathway. If raw_ioctl_init() encounters an error after the driver ID number has been allocated, it deallocates the ID number before returning. But when dev_free() runs later on, it will then try to deallocate the ID number a second time. Closely related to this issue is another error in the recent fix: The ID number is stored in the raw_dev structure before the code checks to see whether the structure has already been initialized, in which case the new ID number would overwrite the earlier value. The solution to both bugs is to keep the new ID number in a local variable, and store it in the raw_dev structure only after the check for prior initialization. No errors can occur after that point, so the double-free will never happen. 
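The pattern is general: keep a freshly allocated resource in a local and publish it into the long-lived object only after every check has passed, so error paths free it exactly once. A minimal sketch with names loosely following the driver:

/* Sketch: commit the ID into the device only on success; until then
 * the error paths own it and dev_free() never sees a stale value. */
static int example_init(struct raw_dev *dev)
{
	int id = ida_alloc(&driver_id_numbers, GFP_KERNEL);

	if (id < 0)
		return id;

	/* ... further allocations and validation; each failure path does
	 *     ida_free(&driver_id_numbers, id); return err; */

	dev->driver_id_number = id;	/* publish only once nothing can fail */
	return 0;
}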
Fixes: f2d8c2606825 ("usb: gadget: Fix non-unique driver names in raw-gadget driver") CC: Andrey Konovalov CC: Signed-off-by: Alan Stern Link: https://lore.kernel.org/r/YrMrRw5AyIZghN0v@rowland.harvard.edu Signed-off-by: Greg Kroah-Hartman --- drivers/usb/gadget/legacy/raw_gadget.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/drivers/usb/gadget/legacy/raw_gadget.c b/drivers/usb/gadget/legacy/raw_gadget.c index a8154813af58..b496ca937dee 100644 --- a/drivers/usb/gadget/legacy/raw_gadget.c +++ b/drivers/usb/gadget/legacy/raw_gadget.c @@ -429,6 +429,7 @@ static int raw_release(struct inode *inode, struct file *fd) static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) { int ret = 0; + int driver_id_number; struct usb_raw_init arg; char *udc_driver_name; char *udc_device_name; @@ -451,10 +452,9 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) return -EINVAL; } - ret = ida_alloc(&driver_id_numbers, GFP_KERNEL); - if (ret < 0) - return ret; - dev->driver_id_number = ret; + driver_id_number = ida_alloc(&driver_id_numbers, GFP_KERNEL); + if (driver_id_number < 0) + return driver_id_number; driver_driver_name = kmalloc(DRIVER_DRIVER_NAME_LENGTH_MAX, GFP_KERNEL); if (!driver_driver_name) { @@ -462,7 +462,7 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) goto out_free_driver_id_number; } snprintf(driver_driver_name, DRIVER_DRIVER_NAME_LENGTH_MAX, - DRIVER_NAME ".%d", dev->driver_id_number); + DRIVER_NAME ".%d", driver_id_number); udc_driver_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL); if (!udc_driver_name) { @@ -506,6 +506,7 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) dev->driver.driver.name = driver_driver_name; dev->driver.udc_name = udc_device_name; dev->driver.match_existing_only = 1; + dev->driver_id_number = driver_id_number; dev->state = STATE_DEV_INITIALIZED; spin_unlock_irqrestore(&dev->lock, flags); @@ -520,7 +521,7 @@ static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) out_free_driver_driver_name: kfree(driver_driver_name); out_free_driver_id_number: - ida_free(&driver_id_numbers, dev->driver_id_number); + ida_free(&driver_id_numbers, driver_id_number); return ret; } From 2d7bdb6a5a37e57b2b93e138f543e8accdad2531 Mon Sep 17 00:00:00 2001 From: Xu Yang Date: Thu, 23 Jun 2022 11:02:42 +0800 Subject: [PATCH 1129/1842] usb: chipidea: udc: check request status before setting device address commit b24346a240b36cfc4df194d145463874985aa29b upstream. The complete() function may be called even though request is not completed. In this case, it's necessary to check request status so as not to set device address wrongly. 
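A short sketch of the guard a gadget completion handler needs, mirroring the three-line hunk below:

/* Sketch: a ->complete() callback can run for requests that did not
 * finish successfully (e.g. dequeued or errored), so only act on
 * success before touching the device address. */
static void example_setup_status_complete(struct usb_ep *ep,
					  struct usb_request *req)
{
	if (req->status < 0)
		return;

	/* ... hw_usb_set_address(), wake up waiters, etc. ... */
}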
Fixes: 10775eb17bee ("usb: chipidea: udc: update gadget states according to ch9") cc: Signed-off-by: Xu Yang Link: https://lore.kernel.org/r/20220623030242.41796-1-xu.yang_2@nxp.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/chipidea/udc.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c index 5f35cdd2cf1d..67d8da04848e 100644 --- a/drivers/usb/chipidea/udc.c +++ b/drivers/usb/chipidea/udc.c @@ -1034,6 +1034,9 @@ isr_setup_status_complete(struct usb_ep *ep, struct usb_request *req) struct ci_hdrc *ci = req->context; unsigned long flags; + if (req->status < 0) + return; + if (ci->setaddr) { hw_usb_set_address(ci, ci->address); ci->setaddr = false; From 4b6cdcff7cb8b9324a7fb2cb4460f523b9d4674d Mon Sep 17 00:00:00 2001 From: Jaegeuk Kim Date: Tue, 31 May 2022 18:27:09 -0700 Subject: [PATCH 1130/1842] f2fs: attach inline_data after setting compression commit 4cde00d50707c2ef6647b9b96b2cb40b6eb24397 upstream. This fixes the below corruption. [345393.335389] F2FS-fs (vdb): sanity_check_inode: inode (ino=6d0, mode=33206) should not have inline_data, run fsck to fix Cc: Fixes: 677a82b44ebf ("f2fs: fix to do sanity check for inline inode") Signed-off-by: Jaegeuk Kim Signed-off-by: Greg Kroah-Hartman --- fs/f2fs/namei.c | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c index 6ae2beabe578..72b109685db4 100644 --- a/fs/f2fs/namei.c +++ b/fs/f2fs/namei.c @@ -91,8 +91,6 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) if (test_opt(sbi, INLINE_XATTR)) set_inode_flag(inode, FI_INLINE_XATTR); - if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode)) - set_inode_flag(inode, FI_INLINE_DATA); if (f2fs_may_inline_dentry(inode)) set_inode_flag(inode, FI_INLINE_DENTRY); @@ -109,10 +107,6 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) f2fs_init_extent_tree(inode, NULL); - stat_inc_inline_xattr(inode); - stat_inc_inline_inode(inode); - stat_inc_inline_dir(inode); - F2FS_I(inode)->i_flags = f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED); @@ -129,6 +123,14 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) set_compress_context(inode); } + /* Should enable inline_data after compression set */ + if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode)) + set_inode_flag(inode, FI_INLINE_DATA); + + stat_inc_inline_xattr(inode); + stat_inc_inline_inode(inode); + stat_inc_inline_dir(inode); + f2fs_set_inode_flags(inode); trace_f2fs_new_inode(inode, 0); @@ -317,6 +319,9 @@ static void set_compress_inode(struct f2fs_sb_info *sbi, struct inode *inode, if (!is_extension_exist(name, ext[i], false)) continue; + /* Do not use inline_data with compression */ + stat_dec_inline_inode(inode); + clear_inode_flag(inode, FI_INLINE_DATA); set_compress_context(inode); return; } From f26379e19958a5164373d1e96c08a53742704f22 Mon Sep 17 00:00:00 2001 From: Dmitry Rokosov Date: Tue, 24 May 2022 18:14:45 +0000 Subject: [PATCH 1131/1842] iio:chemical:ccs811: rearrange iio trigger get and register commit d710359c0b445e8c03e24f19ae2fb79ce7282260 upstream. IIO trigger interface function iio_trigger_get() should be called after iio_trigger_register() (or its devm analogue) strictly, because of iio_trigger_get() acquires module refcnt based on the trigger->owner pointer, which is initialized inside iio_trigger_register() to THIS_MODULE. 
If this call order is wrong, the next iio_trigger_put() (from sysfs callback or "delete module" path) will dereference "default" module refcnt, which is incorrect behaviour. Fixes: f1f065d7ac30 ("iio: chemical: ccs811: Add support for data ready trigger") Signed-off-by: Dmitry Rokosov Reviewed-by: Andy Shevchenko Link: https://lore.kernel.org/r/20220524181150.9240-5-ddrokosov@sberdevices.ru Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/chemical/ccs811.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/iio/chemical/ccs811.c b/drivers/iio/chemical/ccs811.c index 60dd87e96f5f..384d167b4fd6 100644 --- a/drivers/iio/chemical/ccs811.c +++ b/drivers/iio/chemical/ccs811.c @@ -500,11 +500,11 @@ static int ccs811_probe(struct i2c_client *client, data->drdy_trig->dev.parent = &client->dev; data->drdy_trig->ops = &ccs811_trigger_ops; iio_trigger_set_drvdata(data->drdy_trig, indio_dev); - indio_dev->trig = data->drdy_trig; - iio_trigger_get(indio_dev->trig); ret = iio_trigger_register(data->drdy_trig); if (ret) goto err_poweroff; + + indio_dev->trig = iio_trigger_get(data->drdy_trig); } ret = iio_triggered_buffer_setup(indio_dev, NULL, From e26dcf627971aa0029fd192543505fe61f5d965b Mon Sep 17 00:00:00 2001 From: Dmitry Rokosov Date: Tue, 24 May 2022 18:14:39 +0000 Subject: [PATCH 1132/1842] iio:accel:bma180: rearrange iio trigger get and register commit e5f3205b04d7f95a2ef43bce4b454a7f264d6923 upstream. IIO trigger interface function iio_trigger_get() should be called after iio_trigger_register() (or its devm analogue) strictly, because of iio_trigger_get() acquires module refcnt based on the trigger->owner pointer, which is initialized inside iio_trigger_register() to THIS_MODULE. If this call order is wrong, the next iio_trigger_put() (from sysfs callback or "delete module" path) will dereference "default" module refcnt, which is incorrect behaviour. Fixes: 0668a4e4d297 ("iio: accel: bma180: Fix indio_dev->trig assignment") Signed-off-by: Dmitry Rokosov Reviewed-by: Andy Shevchenko Link: https://lore.kernel.org/r/20220524181150.9240-2-ddrokosov@sberdevices.ru Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/accel/bma180.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/iio/accel/bma180.c b/drivers/iio/accel/bma180.c index da56488182d0..6aa5a72c89b2 100644 --- a/drivers/iio/accel/bma180.c +++ b/drivers/iio/accel/bma180.c @@ -1068,11 +1068,12 @@ static int bma180_probe(struct i2c_client *client, data->trig->dev.parent = dev; data->trig->ops = &bma180_trigger_ops; iio_trigger_set_drvdata(data->trig, indio_dev); - indio_dev->trig = iio_trigger_get(data->trig); ret = iio_trigger_register(data->trig); if (ret) goto err_trigger_free; + + indio_dev->trig = iio_trigger_get(data->trig); } ret = iio_triggered_buffer_setup(indio_dev, NULL, From 42caf44906d6de02c4649e98e4a8bd03c4c53fa1 Mon Sep 17 00:00:00 2001 From: Dmitry Rokosov Date: Tue, 24 May 2022 18:14:43 +0000 Subject: [PATCH 1133/1842] iio:accel:mxc4005: rearrange iio trigger get and register commit 9354c224c9b4f55847a0de3e968cba2ebf15af3b upstream. IIO trigger interface function iio_trigger_get() should be called after iio_trigger_register() (or its devm analogue) strictly, because of iio_trigger_get() acquires module refcnt based on the trigger->owner pointer, which is initialized inside iio_trigger_register() to THIS_MODULE. 
If this call order is wrong, the next iio_trigger_put() (from sysfs callback or "delete module" path) will dereference "default" module refcnt, which is incorrect behaviour. Fixes: 47196620c82f ("iio: mxc4005: add data ready trigger for mxc4005") Signed-off-by: Dmitry Rokosov Reviewed-by: Andy Shevchenko Link: https://lore.kernel.org/r/20220524181150.9240-4-ddrokosov@sberdevices.ru Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/accel/mxc4005.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/iio/accel/mxc4005.c b/drivers/iio/accel/mxc4005.c index 5a2b0ffbb145..ecd9d8ad5928 100644 --- a/drivers/iio/accel/mxc4005.c +++ b/drivers/iio/accel/mxc4005.c @@ -461,8 +461,6 @@ static int mxc4005_probe(struct i2c_client *client, data->dready_trig->dev.parent = &client->dev; data->dready_trig->ops = &mxc4005_trigger_ops; iio_trigger_set_drvdata(data->dready_trig, indio_dev); - indio_dev->trig = data->dready_trig; - iio_trigger_get(indio_dev->trig); ret = devm_iio_trigger_register(&client->dev, data->dready_trig); if (ret) { @@ -470,6 +468,8 @@ static int mxc4005_probe(struct i2c_client *client, "failed to register trigger\n"); return ret; } + + indio_dev->trig = iio_trigger_get(data->dready_trig); } return devm_iio_device_register(&client->dev, indio_dev); From c1ec7d52a218d1efe7d6a7c3dbafb3d3022392c9 Mon Sep 17 00:00:00 2001 From: Haibo Chen Date: Wed, 15 Jun 2022 19:31:58 +0800 Subject: [PATCH 1134/1842] iio: accel: mma8452: ignore the return value of reset operation commit bf745142cc0a3e1723f9207fb0c073c88464b7b4 upstream. On fxls8471, after set the reset bit, the device will reset immediately, will not give ACK. So ignore the return value of this reset operation, let the following code logic to check whether the reset operation works. Signed-off-by: Haibo Chen Fixes: ecabae713196 ("iio: mma8452: Initialise before activating") Reviewed-by: Hans de Goede Link: https://lore.kernel.org/r/1655292718-14287-1-git-send-email-haibo.chen@nxp.com Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/accel/mma8452.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c index 67463be797de..b12e80464706 100644 --- a/drivers/iio/accel/mma8452.c +++ b/drivers/iio/accel/mma8452.c @@ -1496,10 +1496,14 @@ static int mma8452_reset(struct i2c_client *client) int i; int ret; - ret = i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2, + /* + * Find on fxls8471, after config reset bit, it reset immediately, + * and will not give ACK, so here do not check the return value. + * The following code will read the reset register, and check whether + * this reset works. + */ + i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2, MMA8452_CTRL_REG2_RST); - if (ret < 0) - return ret; for (i = 0; i < 10; i++) { usleep_range(100, 200); From 399788e819a17c43f275516b375b4314b6827191 Mon Sep 17 00:00:00 2001 From: Zheyu Ma Date: Tue, 10 May 2022 17:24:31 +0800 Subject: [PATCH 1135/1842] iio: gyro: mpu3050: Fix the error handling in mpu3050_power_up() commit b2f5ad97645e1deb5ca9bcb7090084b92cae35d2 upstream. The driver should disable regulators when fails at regmap_update_bits(). 
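A minimal sketch of the error-path rule this fix applies: whatever was enabled earlier in the function must be rolled back before an early return. It assumes the regulators are enabled earlier in the same function, as the added rollback in the hunk below implies:

/* Sketch: undo regulator_bulk_enable() on the failure path. */
static int example_power_up(struct mpu3050 *mpu3050)
{
	int ret;

	ret = regulator_bulk_enable(ARRAY_SIZE(mpu3050->regs), mpu3050->regs);
	if (ret)
		return ret;

	ret = regmap_update_bits(mpu3050->map, MPU3050_PWR_MGM,
				 MPU3050_PWR_MGM_SLEEP, 0);
	if (ret) {
		regulator_bulk_disable(ARRAY_SIZE(mpu3050->regs), mpu3050->regs);
		return ret;
	}

	return 0;
}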
Signed-off-by: Zheyu Ma Reviewed-by: Linus Walleij Cc: Link: https://lore.kernel.org/r/20220510092431.1711284-1-zheyuma97@gmail.com Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/gyro/mpu3050-core.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c index 39e1c4306c47..84c6ad4bcccb 100644 --- a/drivers/iio/gyro/mpu3050-core.c +++ b/drivers/iio/gyro/mpu3050-core.c @@ -872,6 +872,7 @@ static int mpu3050_power_up(struct mpu3050 *mpu3050) ret = regmap_update_bits(mpu3050->map, MPU3050_PWR_MGM, MPU3050_PWR_MGM_SLEEP, 0); if (ret) { + regulator_bulk_disable(ARRAY_SIZE(mpu3050->regs), mpu3050->regs); dev_err(mpu3050->dev, "error setting power mode\n"); return ret; } From b07a30a774b3c3e584a68dc91779c68ea2da4813 Mon Sep 17 00:00:00 2001 From: Vincent Whitchurch Date: Thu, 19 May 2022 11:19:25 +0200 Subject: [PATCH 1136/1842] iio: trigger: sysfs: fix use-after-free on remove commit 78601726d4a59a291acc5a52da1d3a0a6831e4e8 upstream. Ensure that the irq_work has completed before the trigger is freed. ================================================================== BUG: KASAN: use-after-free in irq_work_run_list Read of size 8 at addr 0000000064702248 by task python3/25 Call Trace: irq_work_run_list irq_work_tick update_process_times tick_sched_handle tick_sched_timer __hrtimer_run_queues hrtimer_interrupt Allocated by task 25: kmem_cache_alloc_trace iio_sysfs_trig_add dev_attr_store sysfs_kf_write kernfs_fop_write_iter new_sync_write vfs_write ksys_write sys_write Freed by task 25: kfree iio_sysfs_trig_remove dev_attr_store sysfs_kf_write kernfs_fop_write_iter new_sync_write vfs_write ksys_write sys_write ================================================================== Fixes: f38bc926d022 ("staging:iio:sysfs-trigger: Use irq_work to properly active trigger") Signed-off-by: Vincent Whitchurch Reviewed-by: Lars-Peter Clausen Link: https://lore.kernel.org/r/20220519091925.1053897-1-vincent.whitchurch@axis.com Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/trigger/iio-trig-sysfs.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/iio/trigger/iio-trig-sysfs.c b/drivers/iio/trigger/iio-trig-sysfs.c index e09e58072872..2277d6336ac0 100644 --- a/drivers/iio/trigger/iio-trig-sysfs.c +++ b/drivers/iio/trigger/iio-trig-sysfs.c @@ -196,6 +196,7 @@ static int iio_sysfs_trigger_remove(int id) } iio_trigger_unregister(t->trig); + irq_work_sync(&t->work); iio_trigger_free(t->trig); list_del(&t->l); From 3e0af68b99b8aab0a3b7e2a92292825d001a5cc2 Mon Sep 17 00:00:00 2001 From: Olivier Moysan Date: Thu, 9 Jun 2022 11:52:34 +0200 Subject: [PATCH 1137/1842] iio: adc: stm32: fix maximum clock rate for stm32mp15x commit 990539486e7e311fb5dab1bf4d85d1a8973ae644 upstream. Change maximum STM32 ADC input clock rate to 36MHz, as specified in STM32MP15x datasheets. 
Fixes: d58c67d1d851 ("iio: adc: stm32-adc: add support for STM32MP1") Signed-off-by: Olivier Moysan Reviewed-by: Fabrice Gasnier Link: https://lore.kernel.org/r/20220609095234.375925-1-olivier.moysan@foss.st.com Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/adc/stm32-adc-core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c index a83199b212a4..3b9a8acc7045 100644 --- a/drivers/iio/adc/stm32-adc-core.c +++ b/drivers/iio/adc/stm32-adc-core.c @@ -797,7 +797,7 @@ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = { static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = { .regs = &stm32h7_adc_common_regs, .clk_sel = stm32h7_adc_clk_sel, - .max_clk_rate_hz = 40000000, + .max_clk_rate_hz = 36000000, .has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD, .num_irqs = 2, }; From e3ebb9d16ce13772f0b094fbd5d02bef42da83e9 Mon Sep 17 00:00:00 2001 From: Jean-Baptiste Maneyrol Date: Thu, 9 Jun 2022 12:23:01 +0200 Subject: [PATCH 1138/1842] iio: imu: inv_icm42600: Fix broken icm42600 (chip id 0 value) commit 106b391e1b859100a3f38f0ad874236e9be06bde upstream. The 0 value used for INV_CHIP_ICM42600 was not working since the match in i2c/spi was checking against NULL value. To keep this check, add a first INV_CHIP_INVALID 0 value as safe guard. Fixes: 31c24c1e93c3 ("iio: imu: inv_icm42600: add core of new inv_icm42600 driver") Signed-off-by: Jean-Baptiste Maneyrol Link: https://lore.kernel.org/r/20220609102301.4794-1-jmaneyrol@invensense.com Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/imu/inv_icm42600/inv_icm42600.h | 1 + drivers/iio/imu/inv_icm42600/inv_icm42600_core.c | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600.h b/drivers/iio/imu/inv_icm42600/inv_icm42600.h index c0f5059b13b3..995a9dc06521 100644 --- a/drivers/iio/imu/inv_icm42600/inv_icm42600.h +++ b/drivers/iio/imu/inv_icm42600/inv_icm42600.h @@ -17,6 +17,7 @@ #include "inv_icm42600_buffer.h" enum inv_icm42600_chip { + INV_CHIP_INVALID, INV_CHIP_ICM42600, INV_CHIP_ICM42602, INV_CHIP_ICM42605, diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c index 8bd77185ccb7..dcbd4e928851 100644 --- a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c +++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c @@ -565,7 +565,7 @@ int inv_icm42600_core_probe(struct regmap *regmap, int chip, int irq, bool open_drain; int ret; - if (chip < 0 || chip >= INV_CHIP_NB) { + if (chip <= INV_CHIP_INVALID || chip >= INV_CHIP_NB) { dev_err(dev, "invalid chip = %d\n", chip); return -ENODEV; } From 62284d45e26d53423fcdb5b5cf0bdb8af4e366c9 Mon Sep 17 00:00:00 2001 From: Yannick Brosseau Date: Mon, 16 May 2022 16:39:38 -0400 Subject: [PATCH 1139/1842] iio: adc: stm32: Fix ADCs iteration in irq handler commit d2214cca4d3eadc74eac9e30301ec7cad5355f00 upstream. The irq handler was only checking the mask for the first ADCs in the case of the F4 and H7 generation, since it was iterating up to the num_irq value. This patch add the maximum number of ADC in the common register, which map to the number of entries of eoc_msk and ovr_msk in stm32_adc_common_regs. This allow the handler to check all ADCs in that module. Tested on a STM32F429NIH6. 
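A sketch of why the loop bound changes: on STM32F4 three ADC instances share a single interrupt line, so iterating over num_irqs skipped the status bits of the other instances. Field names are taken from the hunks below; the dispatch itself is elided:

/* Sketch: scan the shared status register once per ADC instance,
 * not once per interrupt line. */
static void example_adc_irq_handler(struct stm32_adc_priv *priv, u32 status)
{
	unsigned int i;

	for (i = 0; i < priv->cfg->num_adcs; i++) {
		if ((status & priv->cfg->regs->eoc_msk[i] &&
		     stm32_adc_eoc_enabled(priv, i)) ||
		    (status & priv->cfg->regs->ovr_msk[i])) {
			/* dispatch to the handler of ADC instance i */
		}
	}
}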
Fixes: 695e2f5c289b ("iio: adc: stm32-adc: fix a regression when using dma and irq") Signed-off-by: Yannick Brosseau Reviewed-by: Fabrice Gasnier Link: https://lore.kernel.org/r/20220516203939.3498673-2-yannick.brosseau@gmail.com Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/adc/stm32-adc-core.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c index 3b9a8acc7045..20fc867e3998 100644 --- a/drivers/iio/adc/stm32-adc-core.c +++ b/drivers/iio/adc/stm32-adc-core.c @@ -64,6 +64,7 @@ struct stm32_adc_priv; * @max_clk_rate_hz: maximum analog clock rate (Hz, from datasheet) * @has_syscfg: SYSCFG capability flags * @num_irqs: number of interrupt lines + * @num_adcs: maximum number of ADC instances in the common registers */ struct stm32_adc_priv_cfg { const struct stm32_adc_common_regs *regs; @@ -71,6 +72,7 @@ struct stm32_adc_priv_cfg { u32 max_clk_rate_hz; unsigned int has_syscfg; unsigned int num_irqs; + unsigned int num_adcs; }; /** @@ -333,7 +335,7 @@ static void stm32_adc_irq_handler(struct irq_desc *desc) * before invoking the interrupt handler (e.g. call ISR only for * IRQ-enabled ADCs). */ - for (i = 0; i < priv->cfg->num_irqs; i++) { + for (i = 0; i < priv->cfg->num_adcs; i++) { if ((status & priv->cfg->regs->eoc_msk[i] && stm32_adc_eoc_enabled(priv, i)) || (status & priv->cfg->regs->ovr_msk[i])) @@ -784,6 +786,7 @@ static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = { .clk_sel = stm32f4_adc_clk_sel, .max_clk_rate_hz = 36000000, .num_irqs = 1, + .num_adcs = 3, }; static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = { @@ -792,6 +795,7 @@ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = { .max_clk_rate_hz = 36000000, .has_syscfg = HAS_VBOOSTER, .num_irqs = 1, + .num_adcs = 2, }; static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = { @@ -800,6 +804,7 @@ static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = { .max_clk_rate_hz = 36000000, .has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD, .num_irqs = 2, + .num_adcs = 2, }; static const struct of_device_id stm32_adc_of_match[] = { From d579c893dd6ccb636ad80069f5b6287380386307 Mon Sep 17 00:00:00 2001 From: Yannick Brosseau Date: Mon, 16 May 2022 16:39:39 -0400 Subject: [PATCH 1140/1842] iio: adc: stm32: Fix IRQs on STM32F4 by removing custom spurious IRQs message commit 99bded02dae5e1e2312813506c41dc8db2fb656c upstream. The check for spurious IRQs introduced in 695e2f5c289bb assumed that the bits in the control and status registers are aligned. This is true for the H7 and MP1 version, but not the F4. The interrupt was then never handled on the F4. Instead of increasing the complexity of the comparison and check each bit specifically, we remove this check completely and rely on the generic handler for spurious IRQs. 
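In other words, the per-driver diagnostic is replaced by the kernel's own spurious-interrupt accounting: a handler that recognizes nothing in its status register simply returns IRQ_NONE and lets the IRQ core count, and if necessary disable, a misbehaving line. A minimal sketch of that contract, with invented register and struct names:

#include <linux/bitops.h>
#include <linux/interrupt.h>
#include <linux/io.h>

#define MY_ADC_ISR	0x10
#define MY_ADC_ISR_EOC	BIT(0)
#define MY_ADC_ISR_OVR	BIT(1)

struct my_adc {
	void __iomem *base;
};

static irqreturn_t my_adc_isr(int irq, void *data)
{
	struct my_adc *adc = data;
	u32 status = readl_relaxed(adc->base + MY_ADC_ISR);

	if (status & MY_ADC_ISR_OVR)
		return IRQ_WAKE_THREAD;	/* defer overrun recovery to the thread */

	if (status & MY_ADC_ISR_EOC) {
		/* ... consume the conversion result ... */
		return IRQ_HANDLED;
	}

	/* Nothing we own: no local warning, the IRQ core accounts for it. */
	return IRQ_NONE;
}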
Fixes: 695e2f5c289b ("iio: adc: stm32-adc: fix a regression when using dma and irq") Signed-off-by: Yannick Brosseau Reviewed-by: Fabrice Gasnier Link: https://lore.kernel.org/r/20220516203939.3498673-3-yannick.brosseau@gmail.com Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/adc/stm32-adc.c | 10 ---------- 1 file changed, 10 deletions(-) diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c index 9939dee01743..e60ad48196ff 100644 --- a/drivers/iio/adc/stm32-adc.c +++ b/drivers/iio/adc/stm32-adc.c @@ -1265,7 +1265,6 @@ static irqreturn_t stm32_adc_threaded_isr(int irq, void *data) struct stm32_adc *adc = iio_priv(indio_dev); const struct stm32_adc_regspec *regs = adc->cfg->regs; u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg); - u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg); /* Check ovr status right now, as ovr mask should be already disabled */ if (status & regs->isr_ovr.mask) { @@ -1280,11 +1279,6 @@ static irqreturn_t stm32_adc_threaded_isr(int irq, void *data) return IRQ_HANDLED; } - if (!(status & mask)) - dev_err_ratelimited(&indio_dev->dev, - "Unexpected IRQ: IER=0x%08x, ISR=0x%08x\n", - mask, status); - return IRQ_NONE; } @@ -1294,10 +1288,6 @@ static irqreturn_t stm32_adc_isr(int irq, void *data) struct stm32_adc *adc = iio_priv(indio_dev); const struct stm32_adc_regspec *regs = adc->cfg->regs; u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg); - u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg); - - if (!(status & mask)) - return IRQ_WAKE_THREAD; if (status & regs->isr_ovr.mask) { /* From d40514d4403a89e00c17815140b3ec74d8ddd9b6 Mon Sep 17 00:00:00 2001 From: Hans de Goede Date: Fri, 6 May 2022 11:50:40 +0200 Subject: [PATCH 1141/1842] iio: adc: axp288: Override TS pin bias current for some models MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 048058399f19d43cf21de9f5d36cd8144337d004 upstream. Since commit 9bcf15f75cac ("iio: adc: axp288: Fix TS-pin handling") we preserve the bias current set by the firmware at boot. This fixes issues we were seeing on various models. Some models like the Nuvision Solo 10 Draw tablet actually need the old hardcoded 80ųA bias current for battery temperature monitoring to work properly. Add a quirk entry for the Nuvision Solo 10 Draw to the DMI quirk table to restore setting the bias current to 80ųA on this model. 
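For reference, the usual way such a table is consumed at probe time is a single dmi_first_match() lookup, with the per-machine value carried in .driver_data. The sketch below shows only that pattern; the register and mask names are placeholders rather than the actual axp288_adc defines.

#include <linux/dmi.h>
#include <linux/regmap.h>

static void apply_ts_bias_quirk(struct regmap *regmap)
{
	const struct dmi_system_id *id;

	id = dmi_first_match(axp288_adc_ts_bias_override);
	if (!id)
		return;			/* no override for this machine */

	/* Force the bias current the matching table entry prescribes. */
	regmap_update_bits(regmap, TS_PIN_CTRL_REG, TS_BIAS_CURRENT_MASK,
			   (uintptr_t)id->driver_data);
}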
Fixes: 9bcf15f75cac ("iio: adc: axp288: Fix TS-pin handling") BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215882 Signed-off-by: Hans de Goede Link: https://lore.kernel.org/r/20220506095040.21008-1-hdegoede@redhat.com Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/adc/axp288_adc.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/iio/adc/axp288_adc.c b/drivers/iio/adc/axp288_adc.c index 5f5e8b39e4d2..84dbe9e2f0ef 100644 --- a/drivers/iio/adc/axp288_adc.c +++ b/drivers/iio/adc/axp288_adc.c @@ -196,6 +196,14 @@ static const struct dmi_system_id axp288_adc_ts_bias_override[] = { }, .driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA, }, + { + /* Nuvision Solo 10 Draw */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "TMAX"), + DMI_MATCH(DMI_PRODUCT_NAME, "TM101W610L"), + }, + .driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA, + }, {} }; From 501652a2ad5450b4908e1f204ce75b2414c305b7 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 24 May 2022 11:45:17 +0400 Subject: [PATCH 1142/1842] iio: adc: adi-axi-adc: Fix refcount leak in adi_axi_adc_attach_client commit ada7b0c0dedafd7d059115adf49e48acba3153a8 upstream. of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. Fixes: ef04070692a2 ("iio: adc: adi-axi-adc: add support for AXI ADC IP core") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220524074517.45268-1-linmq006@gmail.com Cc: Signed-off-by: Jonathan Cameron Signed-off-by: Greg Kroah-Hartman --- drivers/iio/adc/adi-axi-adc.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/iio/adc/adi-axi-adc.c b/drivers/iio/adc/adi-axi-adc.c index 9109da2d2e15..cbe1011a2408 100644 --- a/drivers/iio/adc/adi-axi-adc.c +++ b/drivers/iio/adc/adi-axi-adc.c @@ -334,16 +334,19 @@ static struct adi_axi_adc_client *adi_axi_adc_attach_client(struct device *dev) if (!try_module_get(cl->dev->driver->owner)) { mutex_unlock(®istered_clients_lock); + of_node_put(cln); return ERR_PTR(-ENODEV); } get_device(cl->dev); cl->info = info; mutex_unlock(®istered_clients_lock); + of_node_put(cln); return cl; } mutex_unlock(®istered_clients_lock); + of_node_put(cln); return ERR_PTR(-EPROBE_DEFER); } From 6c0839cf1b9e1b3c88da6af76794583cbfae8da3 Mon Sep 17 00:00:00 2001 From: Liang He Date: Fri, 17 Jun 2022 19:53:23 +0800 Subject: [PATCH 1143/1842] xtensa: xtfpga: Fix refcount leak bug in setup commit 173940b3ae40114d4179c251a98ee039dc9cd5b3 upstream. In machine_setup(), of_find_compatible_node() will return a node pointer with refcount incremented. We should use of_node_put() when it is not used anymore. 
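Both xtensa fixes in this series apply the same rule. A minimal sketch of the corrected shape, mirroring the machine_setup() hunk below with the surrounding code omitted:

#include <linux/of.h>

static int __init machine_setup_sketch(void)
{
	struct device_node *eth = NULL;

	/*
	 * of_find_compatible_node() returns the node with an elevated
	 * refcount (or NULL), so drop the reference once the node is no
	 * longer needed; of_node_put(NULL) is a safe no-op.
	 */
	eth = of_find_compatible_node(eth, NULL, "opencores,ethoc");
	if (eth)
		update_local_mac(eth);
	of_node_put(eth);

	return 0;
}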
Cc: stable@vger.kernel.org Signed-off-by: Liang He Message-Id: <20220617115323.4046905-1-windhl@126.com> Signed-off-by: Max Filippov Signed-off-by: Greg Kroah-Hartman --- arch/xtensa/platforms/xtfpga/setup.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/xtensa/platforms/xtfpga/setup.c b/arch/xtensa/platforms/xtfpga/setup.c index 538e6748e85a..c79c1d09ea86 100644 --- a/arch/xtensa/platforms/xtfpga/setup.c +++ b/arch/xtensa/platforms/xtfpga/setup.c @@ -133,6 +133,7 @@ static int __init machine_setup(void) if ((eth = of_find_compatible_node(eth, NULL, "opencores,ethoc"))) update_local_mac(eth); + of_node_put(eth); return 0; } arch_initcall(machine_setup); From af0ff2da01521144bc11194f4c26485d7c9cee73 Mon Sep 17 00:00:00 2001 From: Liang He Date: Fri, 17 Jun 2022 20:44:32 +0800 Subject: [PATCH 1144/1842] xtensa: Fix refcount leak bug in time.c commit a0117dc956429f2ede17b323046e1968d1849150 upstream. In calibrate_ccount(), of_find_compatible_node() will return a node pointer with refcount incremented. We should use of_node_put() when it is not used anymore. Cc: stable@vger.kernel.org Signed-off-by: Liang He Message-Id: <20220617124432.4049006-1-windhl@126.com> Signed-off-by: Max Filippov Signed-off-by: Greg Kroah-Hartman --- arch/xtensa/kernel/time.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/xtensa/kernel/time.c b/arch/xtensa/kernel/time.c index 77971fe4cc95..8e81ba63ed38 100644 --- a/arch/xtensa/kernel/time.c +++ b/arch/xtensa/kernel/time.c @@ -154,6 +154,7 @@ static void __init calibrate_ccount(void) cpu = of_find_compatible_node(NULL, NULL, "cdns,xtensa-cpu"); if (cpu) { clk = of_clk_get(cpu, 0); + of_node_put(cpu); if (!IS_ERR(clk)) { ccount_freq = clk_get_rate(clk); return; From a1c902349ad5656903589d672ad163a23a2a99b4 Mon Sep 17 00:00:00 2001 From: Helge Deller Date: Tue, 7 Jun 2022 12:57:58 +0200 Subject: [PATCH 1145/1842] parisc/stifb: Fix fb_is_primary_device() only available with CONFIG_FB_STI commit 1d0811b03eb30b2f0793acaa96c6ce90b8b9c87a upstream. 
Fix this build error noticed by the kernel test robot: drivers/video/console/sticore.c:1132:5: error: redefinition of 'fb_is_primary_device' arch/parisc/include/asm/fb.h:18:19: note: previous definition of 'fb_is_primary_device' Signed-off-by: Helge Deller Reported-by: kernel test robot Cc: stable@vger.kernel.org # v5.10+ Signed-off-by: Greg Kroah-Hartman --- arch/parisc/include/asm/fb.h | 2 +- drivers/video/console/sticore.c | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/parisc/include/asm/fb.h b/arch/parisc/include/asm/fb.h index d63a2acb91f2..55d29c4f716e 100644 --- a/arch/parisc/include/asm/fb.h +++ b/arch/parisc/include/asm/fb.h @@ -12,7 +12,7 @@ static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma, pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE; } -#if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI) +#if defined(CONFIG_FB_STI) int fb_is_primary_device(struct fb_info *info); #else static inline int fb_is_primary_device(struct fb_info *info) diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c index 77622ef401d8..68fb531f245a 100644 --- a/drivers/video/console/sticore.c +++ b/drivers/video/console/sticore.c @@ -1127,6 +1127,7 @@ int sti_call(const struct sti_struct *sti, unsigned long func, return ret; } +#if defined(CONFIG_FB_STI) /* check if given fb_info is the primary device */ int fb_is_primary_device(struct fb_info *info) { @@ -1142,6 +1143,7 @@ int fb_is_primary_device(struct fb_info *info) return (sti->info == info); } EXPORT_SYMBOL(fb_is_primary_device); +#endif MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer"); MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines"); From ca144919afd4cd5c297ccca05a027f6392edb066 Mon Sep 17 00:00:00 2001 From: Helge Deller Date: Sun, 26 Jun 2022 11:50:43 +0200 Subject: [PATCH 1146/1842] parisc: Enable ARCH_HAS_STRICT_MODULE_RWX commit 0a1355db36718178becd2bfe728a023933d73123 upstream. Fix a boot crash on a c8000 machine as reported by Dave. Basically it changes patch_map() to return an alias mapping to the to-be-patched code in order to prevent writing to write-protected memory. Signed-off-by: Helge Deller Suggested-by: John David Anglin Cc: stable@vger.kernel.org # v5.2+ Link: https://lore.kernel.org/all/e8ec39e8-25f8-e6b4-b7ed-4cb23efc756e@bell.net/ Signed-off-by: Greg Kroah-Hartman --- arch/parisc/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig index 14f3252f2da0..2d89f79f460c 100644 --- a/arch/parisc/Kconfig +++ b/arch/parisc/Kconfig @@ -11,6 +11,7 @@ config PARISC select ARCH_WANT_FRAME_POINTERS select ARCH_HAS_ELF_RANDOMIZE select ARCH_HAS_STRICT_KERNEL_RWX + select ARCH_HAS_STRICT_MODULE_RWX select ARCH_HAS_UBSAN_SANITIZE_ALL select ARCH_NO_SG_CHAIN select ARCH_SUPPORTS_MEMORY_FAILURE From 1f5a9205a3be6406af488cd6511c481a6d6e9b71 Mon Sep 17 00:00:00 2001 From: "Naveen N. Rao" Date: Thu, 9 Jun 2022 16:03:28 +0530 Subject: [PATCH 1147/1842] powerpc: Enable execve syscall exit tracepoint commit ec6d0dde71d760aa60316f8d1c9a1b0d99213529 upstream. On execve[at], we are zero'ing out most of the thread register state including gpr[0], which contains the syscall number. Due to this, we fail to trigger the syscall exit tracepoint properly. Fix this by retaining gpr[0] in the thread register state. Before this patch: # tail /sys/kernel/debug/tracing/trace cat-123 [000] ..... 
61.449351: sys_execve(filename: 7fffa6b23448, argv: 7fffa6b233e0, envp: 7fffa6b233f8) cat-124 [000] ..... 62.428481: sys_execve(filename: 7fffa6b23448, argv: 7fffa6b233e0, envp: 7fffa6b233f8) echo-125 [000] ..... 65.813702: sys_execve(filename: 7fffa6b23378, argv: 7fffa6b233a0, envp: 7fffa6b233b0) echo-125 [000] ..... 65.822214: sys_execveat(fd: 0, filename: 1009ac48, argv: 7ffff65d0c98, envp: 7ffff65d0ca8, flags: 0) After this patch: # tail /sys/kernel/debug/tracing/trace cat-127 [000] ..... 100.416262: sys_execve(filename: 7fffa41b3448, argv: 7fffa41b33e0, envp: 7fffa41b33f8) cat-127 [000] ..... 100.418203: sys_execve -> 0x0 echo-128 [000] ..... 103.873968: sys_execve(filename: 7fffa41b3378, argv: 7fffa41b33a0, envp: 7fffa41b33b0) echo-128 [000] ..... 103.875102: sys_execve -> 0x0 echo-128 [000] ..... 103.882097: sys_execveat(fd: 0, filename: 1009ac48, argv: 7fffd10d2148, envp: 7fffd10d2158, flags: 0) echo-128 [000] ..... 103.883225: sys_execveat -> 0x0 Cc: stable@vger.kernel.org Signed-off-by: Naveen N. Rao Tested-by: Sumit Dubey2 Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220609103328.41306-1-naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/kernel/process.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c index cfb8fd76afb4..c43cc26bde5d 100644 --- a/arch/powerpc/kernel/process.c +++ b/arch/powerpc/kernel/process.c @@ -1800,7 +1800,7 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp) tm_reclaim_current(0); #endif - memset(regs->gpr, 0, sizeof(regs->gpr)); + memset(®s->gpr[1], 0, sizeof(regs->gpr) - sizeof(regs->gpr[0])); regs->ctr = 0; regs->link = 0; regs->xer = 0; From 7db1ba660b07bc89fd608f30bff583d35ba34353 Mon Sep 17 00:00:00 2001 From: Andrew Donnellan Date: Tue, 14 Jun 2022 23:49:52 +1000 Subject: [PATCH 1148/1842] powerpc/rtas: Allow ibm,platform-dump RTAS call with null buffer address commit 7bc08056a6dabc3a1442216daf527edf61ac24b6 upstream. Add a special case to block_rtas_call() to allow the ibm,platform-dump RTAS call through the RTAS filter if the buffer address is 0. According to PAPR, ibm,platform-dump is called with a null buffer address to notify the platform firmware that processing of a particular dump is finished. Without this, on a pseries machine with CONFIG_PPC_RTAS_FILTER enabled, an application such as rtas_errd that is attempting to retrieve a dump will encounter an error at the end of the retrieval process. 
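Put differently, the filter now has to distinguish "transfer dump data into this buffer" from "tell firmware I am finished with this dump"; only the former carries a buffer that must lie in the RMO region. A compressed sketch of that decision (the name field is the one visible in the hunk below; the helper itself is illustrative):

#include <linux/string.h>

/*
 * For ibm,platform-dump, a zero buffer address is the end-of-processing
 * notification defined by PAPR: there is no user buffer to range-check.
 */
static bool platform_dump_done_notification(const struct rtas_filter *f,
					    unsigned long buf_base)
{
	return !strcmp(f->name, "ibm,platform-dump") && buf_base == 0;
}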
Fixes: bd59380c5ba4 ("powerpc/rtas: Restrict RTAS requests from userspace") Cc: stable@vger.kernel.org Reported-by: Sathvika Vasireddy Signed-off-by: Andrew Donnellan Reviewed-by: Tyrel Datwyler Reviewed-by: Nathan Lynch Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220614134952.156010-1-ajd@linux.ibm.com Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/kernel/rtas.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c index cf421eb7f90d..bf962051af0a 100644 --- a/arch/powerpc/kernel/rtas.c +++ b/arch/powerpc/kernel/rtas.c @@ -1040,7 +1040,7 @@ static struct rtas_filter rtas_filters[] __ro_after_init = { { "get-time-of-day", -1, -1, -1, -1, -1 }, { "ibm,get-vpd", -1, 0, -1, 1, 2 }, { "ibm,lpar-perftools", -1, 2, 3, -1, -1 }, - { "ibm,platform-dump", -1, 4, 5, -1, -1 }, + { "ibm,platform-dump", -1, 4, 5, -1, -1 }, /* Special cased */ { "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 }, { "ibm,scan-log-dump", -1, 0, 1, -1, -1 }, { "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 }, @@ -1087,6 +1087,15 @@ static bool block_rtas_call(int token, int nargs, size = 1; end = base + size - 1; + + /* + * Special case for ibm,platform-dump - NULL buffer + * address is used to indicate end of dump processing + */ + if (!strcmp(f->name, "ibm,platform-dump") && + base == 0) + return false; + if (!in_rmo_buf(base, end)) goto err; } From f78acc4288edf747e8376d50c92cb427e3193ec6 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Tue, 21 Jun 2022 16:08:49 +0200 Subject: [PATCH 1149/1842] powerpc/powernv: wire up rng during setup_arch commit f3eac426657d985b97c92fa5f7ae1d43f04721f3 upstream. The platform's RNG must be available before random_init() in order to be useful for initial seeding, which in turn means that it needs to be called from setup_arch(), rather than from an init call. Complicating things, however, is that POWER8 systems need some per-cpu state and kmalloc, which isn't available at this stage. So we split things up into an early phase and a later opportunistic phase. This commit also removes some noisy log messages that don't add much. Fixes: a4da0d50b2a0 ("powerpc: Implement arch_get_random_long/int() for powernv") Cc: stable@vger.kernel.org # v3.13+ Signed-off-by: Jason A. 
Donenfeld Reviewed-by: Christophe Leroy [mpe: Add of_node_put(), use pnv naming, minor change log editing] Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220621140849.127227-1-Jason@zx2c4.com Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/platforms/powernv/powernv.h | 2 + arch/powerpc/platforms/powernv/rng.c | 52 ++++++++++++++++-------- arch/powerpc/platforms/powernv/setup.c | 2 + 3 files changed, 40 insertions(+), 16 deletions(-) diff --git a/arch/powerpc/platforms/powernv/powernv.h b/arch/powerpc/platforms/powernv/powernv.h index 11df4e16a1cc..528946ee7a77 100644 --- a/arch/powerpc/platforms/powernv/powernv.h +++ b/arch/powerpc/platforms/powernv/powernv.h @@ -42,4 +42,6 @@ ssize_t memcons_copy(struct memcons *mc, char *to, loff_t pos, size_t count); u32 memcons_get_size(struct memcons *mc); struct memcons *memcons_init(struct device_node *node, const char *mc_prop_name); +void pnv_rng_init(void); + #endif /* _POWERNV_H */ diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c index 69c344c8884f..2b5a1a41234c 100644 --- a/arch/powerpc/platforms/powernv/rng.c +++ b/arch/powerpc/platforms/powernv/rng.c @@ -17,6 +17,7 @@ #include #include #include +#include "powernv.h" #define DARN_ERR 0xFFFFFFFFFFFFFFFFul @@ -28,7 +29,6 @@ struct powernv_rng { static DEFINE_PER_CPU(struct powernv_rng *, powernv_rng); - int powernv_hwrng_present(void) { struct powernv_rng *rng; @@ -98,9 +98,6 @@ static int initialise_darn(void) return 0; } } - - pr_warn("Unable to use DARN for get_random_seed()\n"); - return -EIO; } @@ -163,32 +160,55 @@ static __init int rng_create(struct device_node *dn) rng_init_per_cpu(rng, dn); - pr_info_once("Registering arch random hook.\n"); - ppc_md.get_random_seed = powernv_get_random_long; return 0; } -static __init int rng_init(void) +static int __init pnv_get_random_long_early(unsigned long *v) { struct device_node *dn; - int rc; + + if (!slab_is_available()) + return 0; + + if (cmpxchg(&ppc_md.get_random_seed, pnv_get_random_long_early, + NULL) != pnv_get_random_long_early) + return 0; for_each_compatible_node(dn, NULL, "ibm,power-rng") { - rc = rng_create(dn); - if (rc) { - pr_err("Failed creating rng for %pOF (%d).\n", - dn, rc); + if (rng_create(dn)) continue; - } - /* Create devices for hwrng driver */ of_platform_device_create(dn, NULL, NULL); } - initialise_darn(); + if (!ppc_md.get_random_seed) + return 0; + return ppc_md.get_random_seed(v); +} +void __init pnv_rng_init(void) +{ + struct device_node *dn; + + /* Prefer darn over the rest. */ + if (!initialise_darn()) + return; + + dn = of_find_compatible_node(NULL, NULL, "ibm,power-rng"); + if (dn) + ppc_md.get_random_seed = pnv_get_random_long_early; + + of_node_put(dn); +} + +static int __init pnv_rng_late_init(void) +{ + unsigned long v; + /* In case it wasn't called during init for some other reason. 
*/ + if (ppc_md.get_random_seed == pnv_get_random_long_early) + pnv_get_random_long_early(&v); return 0; } -machine_subsys_initcall(powernv, rng_init); +machine_subsys_initcall(powernv, pnv_rng_late_init); diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c index 4426a109ec2f..1a2f12dc0552 100644 --- a/arch/powerpc/platforms/powernv/setup.c +++ b/arch/powerpc/platforms/powernv/setup.c @@ -193,6 +193,8 @@ static void __init pnv_setup_arch(void) pnv_check_guarded_cores(); /* XXX PMCS */ + + pnv_rng_init(); } static void __init pnv_init(void) From fb70bd86751ad2dd7d8db15a60d2bf8ce9803d72 Mon Sep 17 00:00:00 2001 From: Alexander Stein Date: Tue, 10 May 2022 07:46:12 +0200 Subject: [PATCH 1150/1842] ARM: dts: imx7: Move hsic_phy power domain to HSIC PHY node commit 552ca27929ab28b341ae9b2629f0de3a84c98ee8 upstream. Move the power domain to its actual user. This keeps the power domain enabled even when the USB host is runtime suspended. This is necessary to detect any downstream events, like device attach. Fixes: 02f8eb40ef7b ("ARM: dts: imx7s: Add power domain for imx7d HSIC") Suggested-by: Jun Li Signed-off-by: Alexander Stein Reviewed-by: Fabio Estevam Signed-off-by: Shawn Guo Signed-off-by: Greg Kroah-Hartman --- arch/arm/boot/dts/imx7s.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi index 84d9cc13afb9..9e1b0af0aa43 100644 --- a/arch/arm/boot/dts/imx7s.dtsi +++ b/arch/arm/boot/dts/imx7s.dtsi @@ -102,6 +102,7 @@ usbphynop3: usbphynop3 { compatible = "usb-nop-xceiv"; clocks = <&clks IMX7D_USB_HSIC_ROOT_CLK>; clock-names = "main_clk"; + power-domains = <&pgc_hsic_phy>; #phy-cells = <0>; }; @@ -1104,7 +1105,6 @@ usbh: usb@30b30000 { compatible = "fsl,imx7d-usb", "fsl,imx27-usb"; reg = <0x30b30000 0x200>; interrupts = ; - power-domains = <&pgc_hsic_phy>; clocks = <&clks IMX7D_USB_CTRL_CLK>; fsl,usbphy = <&usbphynop3>; fsl,usbmisc = <&usbmisc3 0>; From 59fdf108144c046e39e8ecb0dade7426fde772a6 Mon Sep 17 00:00:00 2001 From: Lucas Stach Date: Wed, 11 May 2022 18:08:23 +0200 Subject: [PATCH 1151/1842] ARM: dts: imx6qdl: correct PU regulator ramp delay commit 93a8ba2a619816d631bd69e9ce2172b4d7a481b8 upstream. Contrary to what was believed at the time, the ramp delay of 150us is not plenty for the PU LDO with the default step time of 512 pulses of the 24MHz clock. Measurements have shown that after enabling the LDO the voltage on VDDPU_CAP jumps to ~750mV in the first step and after that the regulator executes the normal ramp up as defined by the step size control. This means it takes the regulator between 360us and 370us to ramp up to the nominal 1.15V voltage for this power domain. With the old setting of the ramp delay the power up of the PU GPC domain would happen in the middle of the regulator ramp with the voltage being at around 900mV. Apparently this was enough for most units to properly power up the peripherals in the domain and execute the reset. Some units however, fail to power up properly, especially when the chip is at a low temperature. In that case any access to the GPU registers would yield an incorrect result with no way to recover from this situation. Change the ramp delay to 380us to cover the measured ramp up time with a bit of additional slack. 
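As a rough cross-check of the new value (the 25mV step size used here is an assumption, not something the patch states): one step takes 512 / 24MHz, about 21.3us; ramping from the measured ~750mV first-step jump to the nominal 1.15V covers roughly 400mV, i.e. 16 further steps; 17 steps of 21.3us come to about 362us, which lands inside the measured 360us to 370us window and comfortably under the new 380us enable ramp delay.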
Fixes: 40130d327f72 ("ARM: dts: imx6qdl: Allow disabling the PU regulator, add a enable ramp delay") Signed-off-by: Lucas Stach Signed-off-by: Shawn Guo Signed-off-by: Greg Kroah-Hartman --- arch/arm/boot/dts/imx6qdl.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi index 7a8837cbe21b..7858ae5d39df 100644 --- a/arch/arm/boot/dts/imx6qdl.dtsi +++ b/arch/arm/boot/dts/imx6qdl.dtsi @@ -756,7 +756,7 @@ reg_pu: regulator-vddpu { regulator-name = "vddpu"; regulator-min-microvolt = <725000>; regulator-max-microvolt = <1450000>; - regulator-enable-ramp-delay = <150>; + regulator-enable-ramp-delay = <380>; anatop-reg-offset = <0x140>; anatop-vol-bit-shift = <9>; anatop-vol-bit-width = <5>; From 68f28d52e6cbab8dcfa249cac4356d1d0573e868 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 23 May 2022 18:55:13 +0400 Subject: [PATCH 1152/1842] ARM: exynos: Fix refcount leak in exynos_map_pmu commit c4c79525042a4a7df96b73477feaf232fe44ae81 upstream. of_find_matching_node() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. of_node_put() checks null pointer. Fixes: fce9e5bb2526 ("ARM: EXYNOS: Add support for mapping PMU base address via DT") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220523145513.12341-1-linmq006@gmail.com Signed-off-by: Krzysztof Kozlowski Signed-off-by: Greg Kroah-Hartman --- arch/arm/mach-exynos/exynos.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c index 83d1d1327f96..1276585f72c5 100644 --- a/arch/arm/mach-exynos/exynos.c +++ b/arch/arm/mach-exynos/exynos.c @@ -149,6 +149,7 @@ static void exynos_map_pmu(void) np = of_find_matching_node(NULL, exynos_dt_pmu_match); if (np) pmu_base_addr = of_iomap(np, 0); + of_node_put(np); } static void __init exynos_init_irq(void) From 30bbfeb480ae8b5ee43199d72417b232590440c2 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 26 May 2022 11:53:22 +0400 Subject: [PATCH 1153/1842] soc: bcm: brcmstb: pm: pm-arm: Fix refcount leak in brcmstb_pm_probe commit 37d838de369b07b596c19ff3662bf0293fdb09ee upstream. of_find_matching_node() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. In brcmstb_init_sram, it pass dn to of_address_to_resource(), of_address_to_resource() will call of_find_device_by_node() to take reference, so we should release the reference returned by of_find_matching_node(). 
Fixes: 0b741b8234c8 ("soc: bcm: brcmstb: Add support for S2/S3/S5 suspend states (ARM)") Signed-off-by: Miaoqian Lin Reviewed-by: Andy Shevchenko Signed-off-by: Florian Fainelli Signed-off-by: Greg Kroah-Hartman --- drivers/soc/bcm/brcmstb/pm/pm-arm.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/soc/bcm/brcmstb/pm/pm-arm.c b/drivers/soc/bcm/brcmstb/pm/pm-arm.c index b1062334e608..c6ec7d95bcfc 100644 --- a/drivers/soc/bcm/brcmstb/pm/pm-arm.c +++ b/drivers/soc/bcm/brcmstb/pm/pm-arm.c @@ -780,6 +780,7 @@ static int brcmstb_pm_probe(struct platform_device *pdev) } ret = brcmstb_init_sram(dn); + of_node_put(dn); if (ret) { pr_err("error setting up SRAM for PM\n"); return ret; From 44a5b3a073e5aaa5720929dba95b2725eb32bb65 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Wed, 1 Jun 2022 13:05:48 +0400 Subject: [PATCH 1154/1842] ARM: Fix refcount leak in axxia_boot_secondary commit 7c7ff68daa93d8c4cdea482da4f2429c0398fcde upstream. of_find_compatible_node() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. Add missing of_node_put() to avoid refcount leak. Fixes: 1d22924e1c4e ("ARM: Add platform support for LSI AXM55xx SoC") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220601090548.47616-1-linmq006@gmail.com' Signed-off-by: Arnd Bergmann Signed-off-by: Greg Kroah-Hartman --- arch/arm/mach-axxia/platsmp.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/arm/mach-axxia/platsmp.c b/arch/arm/mach-axxia/platsmp.c index 512943eae30a..2e203626eda5 100644 --- a/arch/arm/mach-axxia/platsmp.c +++ b/arch/arm/mach-axxia/platsmp.c @@ -39,6 +39,7 @@ static int axxia_boot_secondary(unsigned int cpu, struct task_struct *idle) return -ENOENT; syscon = of_iomap(syscon_np, 0); + of_node_put(syscon_np); if (!syscon) return -ENOMEM; From 889aad2203e09eed2071ca8985c25e9d6aea5735 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 2 Jun 2022 08:17:21 +0400 Subject: [PATCH 1155/1842] memory: samsung: exynos5422-dmc: Fix refcount leak in of_get_dram_timings commit 1332661e09304b7b8e84e5edc11811ba08d12abe upstream. of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. This function doesn't call of_node_put() in some error paths. To unify the structure, Add put_node label and goto it on errors. 
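The resulting shape of the function is the usual single-exit cleanup pattern. A stripped-down sketch follows; the property name, buffer name and elided parsing are illustrative, only the structure matters:

#include <linux/device.h>
#include <linux/of.h>

static int of_get_dram_timings_sketch(struct device *dev,
				      struct device_node *np)
{
	struct device_node *np_ddr;
	u32 *timing_row;
	int ret = 0;

	np_ddr = of_parse_phandle(np, "device-handle", 0);
	if (!np_ddr)
		return -EINVAL;

	timing_row = devm_kmalloc_array(dev, TIMING_COUNT, sizeof(u32),
					GFP_KERNEL);
	if (!timing_row) {
		ret = -ENOMEM;
		goto put_node;		/* every failure path funnels here */
	}

	/* ... read and convert the timings from np_ddr ... */

put_node:
	of_node_put(np_ddr);		/* dropped exactly once, on all paths */
	return ret;
}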
Fixes: 6e7674c3c6df ("memory: Add DMC driver for Exynos5422") Signed-off-by: Miaoqian Lin Reviewed-by: Lukasz Luba Link: https://lore.kernel.org/r/20220602041721.64348-1-linmq006@gmail.com Signed-off-by: Krzysztof Kozlowski Signed-off-by: Greg Kroah-Hartman --- drivers/memory/samsung/exynos5422-dmc.c | 29 +++++++++++++++---------- 1 file changed, 18 insertions(+), 11 deletions(-) diff --git a/drivers/memory/samsung/exynos5422-dmc.c b/drivers/memory/samsung/exynos5422-dmc.c index 049a1356f7dd..5cec30d2dc7d 100644 --- a/drivers/memory/samsung/exynos5422-dmc.c +++ b/drivers/memory/samsung/exynos5422-dmc.c @@ -1192,33 +1192,39 @@ static int of_get_dram_timings(struct exynos5_dmc *dmc) dmc->timing_row = devm_kmalloc_array(dmc->dev, TIMING_COUNT, sizeof(u32), GFP_KERNEL); - if (!dmc->timing_row) - return -ENOMEM; + if (!dmc->timing_row) { + ret = -ENOMEM; + goto put_node; + } dmc->timing_data = devm_kmalloc_array(dmc->dev, TIMING_COUNT, sizeof(u32), GFP_KERNEL); - if (!dmc->timing_data) - return -ENOMEM; + if (!dmc->timing_data) { + ret = -ENOMEM; + goto put_node; + } dmc->timing_power = devm_kmalloc_array(dmc->dev, TIMING_COUNT, sizeof(u32), GFP_KERNEL); - if (!dmc->timing_power) - return -ENOMEM; + if (!dmc->timing_power) { + ret = -ENOMEM; + goto put_node; + } dmc->timings = of_lpddr3_get_ddr_timings(np_ddr, dmc->dev, DDR_TYPE_LPDDR3, &dmc->timings_arr_size); if (!dmc->timings) { - of_node_put(np_ddr); dev_warn(dmc->dev, "could not get timings from DT\n"); - return -EINVAL; + ret = -EINVAL; + goto put_node; } dmc->min_tck = of_lpddr3_get_min_tck(np_ddr, dmc->dev); if (!dmc->min_tck) { - of_node_put(np_ddr); dev_warn(dmc->dev, "could not get tck from DT\n"); - return -EINVAL; + ret = -EINVAL; + goto put_node; } /* Sorted array of OPPs with frequency ascending */ @@ -1232,13 +1238,14 @@ static int of_get_dram_timings(struct exynos5_dmc *dmc) clk_period_ps); } - of_node_put(np_ddr); /* Take the highest frequency's timings as 'bypass' */ dmc->bypass_timing_row = dmc->timing_row[idx - 1]; dmc->bypass_timing_data = dmc->timing_data[idx - 1]; dmc->bypass_timing_power = dmc->timing_power[idx - 1]; +put_node: + of_node_put(np_ddr); return ret; } From c980392af1473d6d5662f70d8089c8e6d85144a4 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Sun, 5 Jun 2022 11:58:41 +0400 Subject: [PATCH 1156/1842] ARM: cns3xxx: Fix refcount leak in cns3xxx_init commit 1ba904b6b16e08de5aed7c1349838d9cd0d178c5 upstream. of_find_compatible_node() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. Add missing of_node_put() to avoid refcount leak. 
Fixes: 415f59142d9d ("ARM: cns3xxx: initial DT support") Signed-off-by: Miaoqian Lin Acked-by: Krzysztof Halasa Signed-off-by: Arnd Bergmann Signed-off-by: Greg Kroah-Hartman --- arch/arm/mach-cns3xxx/core.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm/mach-cns3xxx/core.c b/arch/arm/mach-cns3xxx/core.c index e4f4b20b83a2..3fc4ec830e3a 100644 --- a/arch/arm/mach-cns3xxx/core.c +++ b/arch/arm/mach-cns3xxx/core.c @@ -372,6 +372,7 @@ static void __init cns3xxx_init(void) /* De-Asscer SATA Reset */ cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SATA)); } + of_node_put(dn); dn = of_find_compatible_node(NULL, NULL, "cavium,cns3420-sdhci"); if (of_device_is_available(dn)) { @@ -385,6 +386,7 @@ static void __init cns3xxx_init(void) cns3xxx_pwr_clk_en(CNS3XXX_PWR_CLK_EN(SDIO)); cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SDIO)); } + of_node_put(dn); pm_power_off = cns3xxx_power_off; From 959bbaf5b7a9cc5504b88e3a5b87a11937df6366 Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Sat, 11 Jun 2022 03:32:30 +0900 Subject: [PATCH 1157/1842] modpost: fix section mismatch check for exported init/exit sections commit 28438794aba47a27e922857d27b31b74e8559143 upstream. Since commit f02e8a6596b7 ("module: Sort exported symbols"), EXPORT_SYMBOL* is placed in the individual section ___ksymtab(_gpl)+ (3 leading underscores instead of 2). Since then, modpost cannot detect the bad combination of EXPORT_SYMBOL and __init/__exit. Fix the .fromsec field. Fixes: f02e8a6596b7 ("module: Sort exported symbols") Signed-off-by: Masahiro Yamada Reviewed-by: Nick Desaulniers Signed-off-by: Greg Kroah-Hartman --- scripts/mod/modpost.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c index 79aef50ede17..e48742760fec 100644 --- a/scripts/mod/modpost.c +++ b/scripts/mod/modpost.c @@ -1119,7 +1119,7 @@ static const struct sectioncheck sectioncheck[] = { }, /* Do not export init/exit functions or data */ { - .fromsec = { "__ksymtab*", NULL }, + .fromsec = { "___ksymtab*", NULL }, .bad_tosec = { INIT_SECTIONS, EXIT_SECTIONS, NULL }, .mismatch = EXPORT_TO_INIT_EXIT, .symbol_white_list = { DEFAULT_SYMBOL_WHITE_LIST, NULL }, From feb5ab798698a05d14e74804dd09ba352c714553 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Mon, 20 Jun 2022 11:03:48 +0200 Subject: [PATCH 1158/1842] random: update comment from copy_to_user() -> copy_to_iter() commit 63b8ea5e4f1a87dea4d3114293fc8e96a8f193d7 upstream. This comment wasn't updated when we moved from read() to read_iter(), so this patch makes the trivial fix. Fixes: 1b388e7765f2 ("random: convert to using fops->read_iter()") Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index da6d74d757e6..f769d858eda7 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -452,7 +452,7 @@ static ssize_t get_random_bytes_user(struct iov_iter *iter) /* * Immediately overwrite the ChaCha key at index 4 with random - * bytes, in case userspace causes copy_to_user() below to sleep + * bytes, in case userspace causes copy_to_iter() below to sleep * forever, so that we still retain forward secrecy in that case. 
*/ crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE); From 95d73d510b8ae6652b48709bc0f9283a18f8b092 Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Fri, 24 Jun 2022 04:11:47 +0900 Subject: [PATCH 1159/1842] kbuild: link vmlinux only once for CONFIG_TRIM_UNUSED_KSYMS (2nd attempt) commit 53632ba87d9f302a8d97a11ec2f4f4eec7bb75ea upstream. If CONFIG_TRIM_UNUSED_KSYMS is enabled and the kernel is built from a pristine state, the vmlinux is linked twice. Commit 3fdc7d3fe4c0 ("kbuild: link vmlinux only once for CONFIG_TRIM_UNUSED_KSYMS") explains why this happens, but it did not fix the issue at all. Now I realized I had applied a wrong patch. In v1 patch [1], the autoksyms_recursive target correctly recurses to "$(MAKE) -f $(srctree)/Makefile autoksyms_recursive". In v2 patch [2], I accidentally dropped the diff line, and it recurses to "$(MAKE) -f $(srctree)/Makefile vmlinux". Restore the code I intended in v1. [1]: https://lore.kernel.org/linux-kbuild/1521045861-22418-8-git-send-email-yamada.masahiro@socionext.com/ [2]: https://lore.kernel.org/linux-kbuild/1521166725-24157-8-git-send-email-yamada.masahiro@socionext.com/ Fixes: 3fdc7d3fe4c0 ("kbuild: link vmlinux only once for CONFIG_TRIM_UNUSED_KSYMS") Signed-off-by: Masahiro Yamada Tested-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 57434487c2b4..97c37c3ab0da 100644 --- a/Makefile +++ b/Makefile @@ -1156,7 +1156,7 @@ KBUILD_MODULES := 1 autoksyms_recursive: descend modules.order $(Q)$(CONFIG_SHELL) $(srctree)/scripts/adjust_autoksyms.sh \ - "$(MAKE) -f $(srctree)/Makefile vmlinux" + "$(MAKE) -f $(srctree)/Makefile autoksyms_recursive" endif autoksyms_h := $(if $(CONFIG_TRIM_UNUSED_KSYMS), include/generated/autoksyms.h) From 1cca46c20541d8521c855f7eace5e96f4f005923 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 11 Jun 2022 17:10:15 +0200 Subject: [PATCH 1160/1842] powerpc/pseries: wire up rng during setup_arch() commit e561e472a3d441753bd012333b057f48fef1045b upstream. The platform's RNG must be available before random_init() in order to be useful for initial seeding, which in turn means that it needs to be called from setup_arch(), rather than from an init call. Fortunately, each platform already has a setup_arch function pointer, which means it's easy to wire this up. This commit also removes some noisy log messages that don't add much. Fixes: a489043f4626 ("powerpc/pseries: Implement arch_get_random_long() based on H_RANDOM") Cc: stable@vger.kernel.org # v3.13+ Signed-off-by: Jason A. Donenfeld Reviewed-by: Christophe Leroy Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220611151015.548325-4-Jason@zx2c4.com Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/platforms/pseries/pseries.h | 2 ++ arch/powerpc/platforms/pseries/rng.c | 11 +++-------- arch/powerpc/platforms/pseries/setup.c | 2 ++ 3 files changed, 7 insertions(+), 8 deletions(-) diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h index 593840847cd3..ada9601aaff1 100644 --- a/arch/powerpc/platforms/pseries/pseries.h +++ b/arch/powerpc/platforms/pseries/pseries.h @@ -114,4 +114,6 @@ int dlpar_workqueue_init(void); void pseries_setup_security_mitigations(void); void pseries_lpar_read_hblkrm_characteristics(void); +void pseries_rng_init(void); + #endif /* _PSERIES_PSERIES_H */ diff --git a/arch/powerpc/platforms/pseries/rng.c b/arch/powerpc/platforms/pseries/rng.c index 6268545947b8..6ddfdeaace9e 100644 --- a/arch/powerpc/platforms/pseries/rng.c +++ b/arch/powerpc/platforms/pseries/rng.c @@ -10,6 +10,7 @@ #include #include #include +#include "pseries.h" static int pseries_get_random_long(unsigned long *v) @@ -24,19 +25,13 @@ static int pseries_get_random_long(unsigned long *v) return 0; } -static __init int rng_init(void) +void __init pseries_rng_init(void) { struct device_node *dn; dn = of_find_compatible_node(NULL, NULL, "ibm,random"); if (!dn) - return -ENODEV; - - pr_info("Registering arch random hook.\n"); - + return; ppc_md.get_random_seed = pseries_get_random_long; - of_node_put(dn); - return 0; } -machine_subsys_initcall(pseries, rng_init); diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index 47dfada140e1..0eac9ca782c2 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -824,6 +824,8 @@ static void __init pSeries_setup_arch(void) if (swiotlb_force == SWIOTLB_FORCE) ppc_swiotlb_enable = 1; + + pseries_rng_init(); } static void pseries_panic(char *str) From deb587b1a48d3ac4a29378fc238a4677944dff83 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Wed, 29 Jun 2022 08:59:54 +0200 Subject: [PATCH 1161/1842] Linux 5.10.127 Link: https://lore.kernel.org/r/20220627111933.455024953@linuxfoundation.org Tested-by: Jon Hunter Tested-by: Florian Fainelli Tested-by: Shuah Khan Tested-by: Guenter Roeck Tested-by: Salvatore Bonaccorso Tested-by: Sudip Mukherjee Tested-by: Hulk Robot Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 97c37c3ab0da..e3eb9ba19f86 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 126 +SUBLEVEL = 127 EXTRAVERSION = NAME = Dare mighty things From 52dc7f3f6fa151f5d64b3e6fe95369c21600e72c Mon Sep 17 00:00:00 2001 From: Amir Goldstein Date: Thu, 30 Jun 2022 08:43:21 +0300 Subject: [PATCH 1162/1842] MAINTAINERS: add Amir as xfs maintainer for 5.10.y This is an attempt to direct the bots and human that are testing LTS 5.10.y towards the maintainer of xfs in the 5.10.y tree. This is not an upstream MAINTAINERS entry and 5.15.y and 5.4.y will have their own LTS xfs maintainer entries. Update Darrick's email address from upstream and add Amir as xfs maintaier for the 5.10.y tree. Suggested-by: Darrick J. Wong Link: https://lore.kernel.org/linux-xfs/Yrx6%2F0UmYyuBPjEr@magnolia/ Signed-off-by: Amir Goldstein Reviewed-by: Darrick J. 
Wong Signed-off-by: Greg Kroah-Hartman --- MAINTAINERS | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/MAINTAINERS b/MAINTAINERS index 7c118b507912..4d10e79030a9 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -19246,7 +19246,8 @@ F: arch/x86/xen/*swiotlb* F: drivers/xen/*swiotlb* XFS FILESYSTEM -M: Darrick J. Wong +M: Amir Goldstein +M: Darrick J. Wong M: linux-xfs@vger.kernel.org L: linux-xfs@vger.kernel.org S: Supported From 069fff50d4008970642a5380c3022e76dd8e7336 Mon Sep 17 00:00:00 2001 From: Christoph Hellwig Date: Tue, 2 Feb 2021 13:13:23 +0100 Subject: [PATCH 1163/1842] drm: remove drm_fb_helper_modinit commit bf22c9ec39da90ce866d5f625d616f28bc733dc1 upstream. drm_fb_helper_modinit has a lot of boilerplate for what is not very simple functionality. Just open code it in the only caller using IS_ENABLED and IS_MODULE, and skip the find_module check as a request_module is harmless if the module is already loaded (and not other caller has this find_module check either). Acked-by: Daniel Vetter Signed-off-by: Christoph Hellwig Signed-off-by: Jessica Yu Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/drm_crtc_helper_internal.h | 10 ---------- drivers/gpu/drm/drm_fb_helper.c | 21 -------------------- drivers/gpu/drm/drm_kms_helper_common.c | 23 +++++++++++----------- 3 files changed, 11 insertions(+), 43 deletions(-) diff --git a/drivers/gpu/drm/drm_crtc_helper_internal.h b/drivers/gpu/drm/drm_crtc_helper_internal.h index 25ce42e79995..61e09f8a8d0f 100644 --- a/drivers/gpu/drm/drm_crtc_helper_internal.h +++ b/drivers/gpu/drm/drm_crtc_helper_internal.h @@ -32,16 +32,6 @@ #include #include -/* drm_fb_helper.c */ -#ifdef CONFIG_DRM_FBDEV_EMULATION -int drm_fb_helper_modinit(void); -#else -static inline int drm_fb_helper_modinit(void) -{ - return 0; -} -#endif - /* drm_dp_aux_dev.c */ #ifdef CONFIG_DRM_DP_AUX_CHARDEV int drm_dp_aux_dev_init(void); diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index 8033467db4be..ac5d61e65124 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -2271,24 +2271,3 @@ void drm_fbdev_generic_setup(struct drm_device *dev, drm_client_register(&fb_helper->client); } EXPORT_SYMBOL(drm_fbdev_generic_setup); - -/* The Kconfig DRM_KMS_HELPER selects FRAMEBUFFER_CONSOLE (if !EXPERT) - * but the module doesn't depend on any fb console symbols. At least - * attempt to load fbcon to avoid leaving the system without a usable console. - */ -int __init drm_fb_helper_modinit(void) -{ -#if defined(CONFIG_FRAMEBUFFER_CONSOLE_MODULE) && !defined(CONFIG_EXPERT) - const char name[] = "fbcon"; - struct module *fbcon; - - mutex_lock(&module_mutex); - fbcon = find_module(name); - mutex_unlock(&module_mutex); - - if (!fbcon) - request_module_nowait(name); -#endif - return 0; -} -EXPORT_SYMBOL(drm_fb_helper_modinit); diff --git a/drivers/gpu/drm/drm_kms_helper_common.c b/drivers/gpu/drm/drm_kms_helper_common.c index 221a8528c993..f933da1656eb 100644 --- a/drivers/gpu/drm/drm_kms_helper_common.c +++ b/drivers/gpu/drm/drm_kms_helper_common.c @@ -64,19 +64,18 @@ MODULE_PARM_DESC(edid_firmware, static int __init drm_kms_helper_init(void) { - int ret; + /* + * The Kconfig DRM_KMS_HELPER selects FRAMEBUFFER_CONSOLE (if !EXPERT) + * but the module doesn't depend on any fb console symbols. At least + * attempt to load fbcon to avoid leaving the system without a usable + * console. 
+ */ + if (IS_ENABLED(CONFIG_DRM_FBDEV_EMULATION) && + IS_MODULE(CONFIG_FRAMEBUFFER_CONSOLE) && + !IS_ENABLED(CONFIG_EXPERT)) + request_module_nowait("fbcon"); - /* Call init functions from specific kms helpers here */ - ret = drm_fb_helper_modinit(); - if (ret < 0) - goto out; - - ret = drm_dp_aux_dev_init(); - if (ret < 0) - goto out; - -out: - return ret; + return drm_dp_aux_dev_init(); } static void __exit drm_kms_helper_exit(void) From c4ff3ffe0138234774602152fe67e3a898c615c6 Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Mon, 27 Jun 2022 12:22:09 +0900 Subject: [PATCH 1164/1842] tick/nohz: unexport __init-annotated tick_nohz_full_setup() commit 2390095113e98fc52fffe35c5206d30d9efe3f78 upstream. EXPORT_SYMBOL and __init is a bad combination because the .init.text section is freed up after the initialization. Hence, modules cannot use symbols annotated __init. The access to a freed symbol may end up with kernel panic. modpost used to detect it, but it had been broken for a decade. Commit 28438794aba4 ("modpost: fix section mismatch check for exported init/exit sections") fixed it so modpost started to warn it again, then this showed up: MODPOST vmlinux.symvers WARNING: modpost: vmlinux.o(___ksymtab_gpl+tick_nohz_full_setup+0x0): Section mismatch in reference from the variable __ksymtab_tick_nohz_full_setup to the function .init.text:tick_nohz_full_setup() The symbol tick_nohz_full_setup is exported and annotated __init Fix this by removing the __init annotation of tick_nohz_full_setup or drop the export. Drop the export because tick_nohz_full_setup() is only called from the built-in code in kernel/sched/isolation.c. Fixes: ae9e557b5be2 ("time: Export tick start/stop functions for rcutorture") Reported-by: Linus Torvalds Signed-off-by: Masahiro Yamada Tested-by: Paul E. McKenney Signed-off-by: Linus Torvalds Cc: Thomas Backlund Signed-off-by: Greg Kroah-Hartman --- kernel/time/tick-sched.c | 1 - 1 file changed, 1 deletion(-) diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c index 9c352357fc8b..92fb738813f3 100644 --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c @@ -425,7 +425,6 @@ void __init tick_nohz_full_setup(cpumask_var_t cpumask) cpumask_copy(tick_nohz_full_mask, cpumask); tick_nohz_full_running = true; } -EXPORT_SYMBOL_GPL(tick_nohz_full_setup); static int tick_nohz_cpu_down(unsigned int cpu) { From 09c9902cd80a07c2e69024f96f049985047e64b8 Mon Sep 17 00:00:00 2001 From: Coly Li Date: Fri, 27 May 2022 23:28:16 +0800 Subject: [PATCH 1165/1842] bcache: memset on stack variables in bch_btree_check() and bch_sectors_dirty_init() commit 7d6b902ea0e02b2a25c480edf471cbaa4ebe6b3c upstream. The local variables check_state (in bch_btree_check()) and state (in bch_sectors_dirty_init()) should be fully filled by 0, because before allocating them on stack, they were dynamically allocated by kzalloc(). 
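The underlying point is easy to miss when converting kzalloc()'d state to on-stack state: heap memory from kzalloc() arrives zeroed, stack memory does not, so any field the code only ever reads (flags, counters, stop markers) must be cleared explicitly before use. A tiny illustration with made-up field names:

#include <linux/string.h>
#include <linux/types.h>

struct check_state_demo {
	int	total_threads;
	int	key_idx;
	bool	stop;			/* only read until someone sets it */
};

static void run_check_demo(void)
{
	struct check_state_demo state;	/* stack variable: contents are garbage */

	memset(&state, 0, sizeof(state));	/* what the patch adds */
	state.total_threads = 4;
	state.key_idx = 0;

	/* state.stop is now reliably false rather than leftover stack data. */
}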
Signed-off-by: Coly Li Link: https://lore.kernel.org/r/20220527152818.27545-2-colyli@suse.de Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- drivers/md/bcache/btree.c | 1 + drivers/md/bcache/writeback.c | 1 + 2 files changed, 2 insertions(+) diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c index f64834785c8b..b47c00dea0f2 100644 --- a/drivers/md/bcache/btree.c +++ b/drivers/md/bcache/btree.c @@ -2017,6 +2017,7 @@ int bch_btree_check(struct cache_set *c) if (c->root->level == 0) return 0; + memset(&check_state, 0, sizeof(struct btree_check_state)); check_state.c = c; check_state.total_threads = bch_btree_chkthread_nr(); check_state.key_idx = 0; diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c index 0145046a45f4..a878b959fbcd 100644 --- a/drivers/md/bcache/writeback.c +++ b/drivers/md/bcache/writeback.c @@ -901,6 +901,7 @@ void bch_sectors_dirty_init(struct bcache_device *d) return; } + memset(&state, 0, sizeof(struct bch_dirty_init_state)); state.c = c; state.d = d; state.total_threads = bch_btre_dirty_init_thread_nr(); From db3f8110c3b0e1185f6288331cb428978708fb79 Mon Sep 17 00:00:00 2001 From: Rustam Kovhaev Date: Mon, 27 Jun 2022 09:51:36 +0300 Subject: [PATCH 1166/1842] xfs: use kmem_cache_free() for kmem_cache objects commit c30a0cbd07ecc0eec7b3cd568f7b1c7bb7913f93 upstream. For kmalloc() allocations SLOB prepends the blocks with a 4-byte header, and it puts the size of the allocated blocks in that header. Blocks allocated with kmem_cache_alloc() allocations do not have that header. SLOB explodes when you allocate memory with kmem_cache_alloc() and then try to free it with kfree() instead of kmem_cache_free(). SLOB will assume that there is a header when there is none, read some garbage to size variable and corrupt the adjacent objects, which eventually leads to hang or panic. Let's make XFS work with SLOB by using proper free function. Fixes: 9749fee83f38 ("xfs: enable the xfs_defer mechanism to process extents to free") Signed-off-by: Rustam Kovhaev Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_extfree_item.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/fs/xfs/xfs_extfree_item.c b/fs/xfs/xfs_extfree_item.c index 5c0395256bd1..11474770d630 100644 --- a/fs/xfs/xfs_extfree_item.c +++ b/fs/xfs/xfs_extfree_item.c @@ -482,7 +482,7 @@ xfs_extent_free_finish_item( free->xefi_startblock, free->xefi_blockcount, &free->xefi_oinfo, free->xefi_skip_discard); - kmem_free(free); + kmem_cache_free(xfs_bmap_free_item_zone, free); return error; } @@ -502,7 +502,7 @@ xfs_extent_free_cancel_item( struct xfs_extent_free_item *free; free = container_of(item, struct xfs_extent_free_item, xefi_list); - kmem_free(free); + kmem_cache_free(xfs_bmap_free_item_zone, free); } const struct xfs_defer_op_type xfs_extent_free_defer_type = { @@ -564,7 +564,7 @@ xfs_agfl_free_finish_item( extp->ext_len = free->xefi_blockcount; efdp->efd_next_extent++; - kmem_free(free); + kmem_cache_free(xfs_bmap_free_item_zone, free); return error; } From 0cdccc05da76a87a4e04d03eb812bacc33864ad9 Mon Sep 17 00:00:00 2001 From: Brian Foster Date: Mon, 27 Jun 2022 09:51:37 +0300 Subject: [PATCH 1167/1842] xfs: punch out data fork delalloc blocks on COW writeback failure commit 5ca5916b6bc93577c360c06cb7cdf71adb9b5faf upstream. 
If writeback I/O to a COW extent fails, the COW fork blocks are punched out and the data fork blocks left alone. It is possible for COW fork blocks to overlap non-shared data fork blocks (due to cowextsz hint prealloc), however, and writeback unconditionally maps to the COW fork whenever blocks exist at the corresponding offset of the page undergoing writeback. This means it's quite possible for a COW fork extent to overlap delalloc data fork blocks, writeback to convert and map to the COW fork blocks, writeback to fail, and finally for ioend completion to cancel the COW fork blocks and leave stale data fork delalloc blocks around in the inode. The blocks are effectively stale because writeback failure also discards dirty page state. If this occurs, it is likely to trigger assert failures, free space accounting corruption and failures in unrelated file operations. For example, a subsequent reflink attempt of the affected file to a new target file will trip over the stale delalloc in the source file and fail. Several of these issues are occasionally reproduced by generic/648, but are reproducible on demand with the right sequence of operations and timely I/O error injection. To fix this problem, update the ioend failure path to also punch out underlying data fork delalloc blocks on I/O error. This is analogous to the writeback submission failure path in xfs_discard_page() where we might fail to map data fork delalloc blocks and consistent with the successful COW writeback completion path, which is responsible for unmapping from the data fork and remapping in COW fork blocks. Fixes: 787eb485509f ("xfs: fix and streamline error handling in xfs_end_io") Signed-off-by: Brian Foster Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_aops.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 4304c6416fbb..4b76a32d2f16 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -145,6 +145,7 @@ xfs_end_ioend( struct iomap_ioend *ioend) { struct xfs_inode *ip = XFS_I(ioend->io_inode); + struct xfs_mount *mp = ip->i_mount; xfs_off_t offset = ioend->io_offset; size_t size = ioend->io_size; unsigned int nofs_flag; @@ -160,18 +161,26 @@ xfs_end_ioend( /* * Just clean up the in-memory strutures if the fs has been shut down. */ - if (XFS_FORCED_SHUTDOWN(ip->i_mount)) { + if (XFS_FORCED_SHUTDOWN(mp)) { error = -EIO; goto done; } /* - * Clean up any COW blocks on an I/O error. + * Clean up all COW blocks and underlying data fork delalloc blocks on + * I/O error. The delalloc punch is required because this ioend was + * mapped to blocks in the COW fork and the associated pages are no + * longer dirty. If we don't remove delalloc blocks here, they become + * stale and can corrupt free space accounting on unmount. */ error = blk_status_to_errno(ioend->io_bio->bi_status); if (unlikely(error)) { - if (ioend->io_flags & IOMAP_F_SHARED) + if (ioend->io_flags & IOMAP_F_SHARED) { xfs_reflink_cancel_cow_range(ip, offset, size, true); + xfs_bmap_punch_delalloc_range(ip, + XFS_B_TO_FSBT(mp, offset), + XFS_B_TO_FSB(mp, size)); + } goto done; } From 1e76bd4c67224a645558314c0097d5b5a338bba9 Mon Sep 17 00:00:00 2001 From: Yang Xu Date: Mon, 27 Jun 2022 09:51:38 +0300 Subject: [PATCH 1168/1842] xfs: Fix the free logic of state in xfs_attr_node_hasname commit a1de97fe296c52eafc6590a3506f4bbd44ecb19a upstream. 
When testing xfstests xfs/126 on lastest upstream kernel, it will hang on some machine. Adding a getxattr operation after xattr corrupted, I can reproduce it 100%. The deadlock as below: [983.923403] task:setfattr state:D stack: 0 pid:17639 ppid: 14687 flags:0x00000080 [ 983.923405] Call Trace: [ 983.923410] __schedule+0x2c4/0x700 [ 983.923412] schedule+0x37/0xa0 [ 983.923414] schedule_timeout+0x274/0x300 [ 983.923416] __down+0x9b/0xf0 [ 983.923451] ? xfs_buf_find.isra.29+0x3c8/0x5f0 [xfs] [ 983.923453] down+0x3b/0x50 [ 983.923471] xfs_buf_lock+0x33/0xf0 [xfs] [ 983.923490] xfs_buf_find.isra.29+0x3c8/0x5f0 [xfs] [ 983.923508] xfs_buf_get_map+0x4c/0x320 [xfs] [ 983.923525] xfs_buf_read_map+0x53/0x310 [xfs] [ 983.923541] ? xfs_da_read_buf+0xcf/0x120 [xfs] [ 983.923560] xfs_trans_read_buf_map+0x1cf/0x360 [xfs] [ 983.923575] ? xfs_da_read_buf+0xcf/0x120 [xfs] [ 983.923590] xfs_da_read_buf+0xcf/0x120 [xfs] [ 983.923606] xfs_da3_node_read+0x1f/0x40 [xfs] [ 983.923621] xfs_da3_node_lookup_int+0x69/0x4a0 [xfs] [ 983.923624] ? kmem_cache_alloc+0x12e/0x270 [ 983.923637] xfs_attr_node_hasname+0x6e/0xa0 [xfs] [ 983.923651] xfs_has_attr+0x6e/0xd0 [xfs] [ 983.923664] xfs_attr_set+0x273/0x320 [xfs] [ 983.923683] xfs_xattr_set+0x87/0xd0 [xfs] [ 983.923686] __vfs_removexattr+0x4d/0x60 [ 983.923688] __vfs_removexattr_locked+0xac/0x130 [ 983.923689] vfs_removexattr+0x4e/0xf0 [ 983.923690] removexattr+0x4d/0x80 [ 983.923693] ? __check_object_size+0xa8/0x16b [ 983.923695] ? strncpy_from_user+0x47/0x1a0 [ 983.923696] ? getname_flags+0x6a/0x1e0 [ 983.923697] ? _cond_resched+0x15/0x30 [ 983.923699] ? __sb_start_write+0x1e/0x70 [ 983.923700] ? mnt_want_write+0x28/0x50 [ 983.923701] path_removexattr+0x9b/0xb0 [ 983.923702] __x64_sys_removexattr+0x17/0x20 [ 983.923704] do_syscall_64+0x5b/0x1a0 [ 983.923705] entry_SYSCALL_64_after_hwframe+0x65/0xca [ 983.923707] RIP: 0033:0x7f080f10ee1b When getxattr calls xfs_attr_node_get function, xfs_da3_node_lookup_int fails with EFSCORRUPTED in xfs_attr_node_hasname because we have use blocktrash to random it in xfs/126. So it free state in internal and xfs_attr_node_get doesn't do xfs_buf_trans release job. Then subsequent removexattr will hang because of it. This bug was introduced by kernel commit 07120f1abdff ("xfs: Add xfs_has_attr and subroutines"). It adds xfs_attr_node_hasname helper and said caller will be responsible for freeing the state in this case. But xfs_attr_node_hasname will free state itself instead of caller if xfs_da3_node_lookup_int fails. Fix this bug by moving the step of free state into caller. [amir: this text from original commit is not relevant for 5.10 backport: Also, use "goto error/out" instead of returning error directly in xfs_attr_node_addname_find_attr and xfs_attr_node_removename_setup function because we should free state ourselves. ] Fixes: 07120f1abdff ("xfs: Add xfs_has_attr and subroutines") Signed-off-by: Yang Xu Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/libxfs/xfs_attr.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c index 96ac7e562b87..fcca36bbd997 100644 --- a/fs/xfs/libxfs/xfs_attr.c +++ b/fs/xfs/libxfs/xfs_attr.c @@ -876,21 +876,18 @@ xfs_attr_node_hasname( state = xfs_da_state_alloc(args); if (statep != NULL) - *statep = NULL; + *statep = state; /* * Search to see if name exists, and get back a pointer to it. 
*/ error = xfs_da3_node_lookup_int(state, &retval); - if (error) { - xfs_da_state_free(state); - return error; - } + if (error) + retval = error; - if (statep != NULL) - *statep = state; - else + if (!statep) xfs_da_state_free(state); + return retval; } From 071e750ffb3dc625cc92826950c26554f161a32c Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Mon, 27 Jun 2022 09:51:39 +0300 Subject: [PATCH 1169/1842] xfs: remove all COW fork extents when remounting readonly commit 089558bc7ba785c03815a49c89e28ad9b8de51f9 upstream. [backport xfs_icwalk -> xfs_eofblocks for 5.10.y] As part of multiple customer escalations due to file data corruption after copy on write operations, I wrote some fstests that use fsstress to hammer on COW to shake things loose. Regrettably, I caught some filesystem shutdowns due to incorrect rmap operations with the following loop: mount # (0) fsstress & # (1) while true; do fsstress mount -o remount,ro # (2) fsstress mount -o remount,rw # (3) done When (2) happens, notice that (1) is still running. xfs_remount_ro will call xfs_blockgc_stop to walk the inode cache to free all the COW extents, but the blockgc mechanism races with (1)'s reader threads to take IOLOCKs and loses, which means that it doesn't clean them all out. Call such a file (A). When (3) happens, xfs_remount_rw calls xfs_reflink_recover_cow, which walks the ondisk refcount btree and frees any COW extent that it finds. This function does not check the inode cache, which means that incore COW forks of inode (A) is now inconsistent with the ondisk metadata. If one of those former COW extents are allocated and mapped into another file (B) and someone triggers a COW to the stale reservation in (A), A's dirty data will be written into (B) and once that's done, those blocks will be transferred to (A)'s data fork without bumping the refcount. The results are catastrophic -- file (B) and the refcount btree are now corrupt. Solve this race by forcing the xfs_blockgc_free_space to run synchronously, which causes xfs_icwalk to return to inodes that were skipped because the blockgc code couldn't take the IOLOCK. This is safe to do here because the VFS has already prohibited new writer threads. Fixes: 10ddf64e420f ("xfs: remove leftover CoW reservations when remounting ro") Signed-off-by: Darrick J. Wong Reviewed-by: Dave Chinner Reviewed-by: Chandan Babu R Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_super.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 5ebd6cdc44a7..05cea7788d49 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1695,7 +1695,10 @@ static int xfs_remount_ro( struct xfs_mount *mp) { - int error; + struct xfs_eofblocks eofb = { + .eof_flags = XFS_EOF_FLAGS_SYNC, + }; + int error; /* * Cancel background eofb scanning so it cannot race with the final @@ -1703,8 +1706,13 @@ xfs_remount_ro( */ xfs_stop_block_reaping(mp); - /* Get rid of any leftover CoW reservations... */ - error = xfs_icache_free_cowblocks(mp, NULL); + /* + * Clear out all remaining COW staging extents and speculative post-EOF + * preallocations so that we don't leave inodes requiring inactivation + * cleanups during reclaim on a read-only mount. We must process every + * cached inode, so this requires a synchronous cache scan. 
+ */ + error = xfs_icache_free_cowblocks(mp, &eofb); if (error) { xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE); return error; From 6b734f7b7071859f582b5acb95abb97e1276a030 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Mon, 27 Jun 2022 09:51:40 +0300 Subject: [PATCH 1170/1842] xfs: check sb_meta_uuid for dabuf buffer recovery commit 09654ed8a18cfd45027a67d6cbca45c9ea54feab upstream. Got a report that a repeated crash test of a container host would eventually fail with a log recovery error preventing the system from mounting the root filesystem. It manifested as a directory leaf node corruption on writeback like so: XFS (loop0): Mounting V5 Filesystem XFS (loop0): Starting recovery (logdev: internal) XFS (loop0): Metadata corruption detected at xfs_dir3_leaf_check_int+0x99/0xf0, xfs_dir3_leaf1 block 0x12faa158 XFS (loop0): Unmount and run xfs_repair XFS (loop0): First 128 bytes of corrupted metadata buffer: 00000000: 00 00 00 00 00 00 00 00 3d f1 00 00 e1 9e d5 8b ........=....... 00000010: 00 00 00 00 12 fa a1 58 00 00 00 29 00 00 1b cc .......X...).... 00000020: 91 06 78 ff f7 7e 4a 7d 8d 53 86 f2 ac 47 a8 23 ..x..~J}.S...G.# 00000030: 00 00 00 00 17 e0 00 80 00 43 00 00 00 00 00 00 .........C...... 00000040: 00 00 00 2e 00 00 00 08 00 00 17 2e 00 00 00 0a ................ 00000050: 02 35 79 83 00 00 00 30 04 d3 b4 80 00 00 01 50 .5y....0.......P 00000060: 08 40 95 7f 00 00 02 98 08 41 fe b7 00 00 02 d4 .@.......A...... 00000070: 0d 62 ef a7 00 00 01 f2 14 50 21 41 00 00 00 0c .b.......P!A.... XFS (loop0): Corruption of in-memory data (0x8) detected at xfs_do_force_shutdown+0x1a/0x20 (fs/xfs/xfs_buf.c:1514). Shutting down. XFS (loop0): Please unmount the filesystem and rectify the problem(s) XFS (loop0): log mount/recovery failed: error -117 XFS (loop0): log mount failed Tracing indicated that we were recovering changes from a transaction at LSN 0x29/0x1c16 into a buffer that had an LSN of 0x29/0x1d57. That is, log recovery was overwriting a buffer with newer changes on disk than was in the transaction. Tracing indicated that we were hitting the "recovery immediately" case in xfs_buf_log_recovery_lsn(), and hence it was ignoring the LSN in the buffer. The code was extracting the LSN correctly, then ignoring it because the UUID in the buffer did not match the superblock UUID. The problem arises because the UUID check uses the wrong UUID - it should be checking the sb_meta_uuid, not sb_uuid. This filesystem has sb_uuid != sb_meta_uuid (which is fine), and the buffer has the correct matching sb_meta_uuid in it, it's just the code checked it against the wrong superblock uuid. The is no corruption in the filesystem, and failing to recover the buffer due to a write verifier failure means the recovery bug did not propagate the corruption to disk. Hence there is no corruption before or after this bug has manifested, the impact is limited simply to an unmountable filesystem.... This was missed back in 2015 during an audit of incorrect sb_uuid usage that resulted in commit fcfbe2c4ef42 ("xfs: log recovery needs to validate against sb_meta_uuid") that fixed the magic32 buffers to validate against sb_meta_uuid instead of sb_uuid. It missed the magicda buffers.... Fixes: ce748eaa65f2 ("xfs: create new metadata UUID field and incompat flag") Signed-off-by: Dave Chinner Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. 
Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_buf_item_recover.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c index d44e8b4a3391..1d649462d731 100644 --- a/fs/xfs/xfs_buf_item_recover.c +++ b/fs/xfs/xfs_buf_item_recover.c @@ -805,7 +805,7 @@ xlog_recover_get_buf_lsn( } if (lsn != (xfs_lsn_t)-1) { - if (!uuid_equal(&mp->m_sb.sb_uuid, uuid)) + if (!uuid_equal(&mp->m_sb.sb_meta_uuid, uuid)) goto recover_immediately; return lsn; } From 6a656280e775821f75c0b9ca599b10174210fbdb Mon Sep 17 00:00:00 2001 From: "Naveen N. Rao" Date: Mon, 16 May 2022 12:44:22 +0530 Subject: [PATCH 1171/1842] powerpc/ftrace: Remove ftrace init tramp once kernel init is complete commit 84ade0a6655bee803d176525ef457175cbf4df22 upstream. Stop using the ftrace trampoline for init section once kernel init is complete. Fixes: 67361cf8071286 ("powerpc/ftrace: Handle large kernel configs") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Naveen N. Rao Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220516071422.463738-1-naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/include/asm/ftrace.h | 4 +++- arch/powerpc/kernel/trace/ftrace.c | 15 ++++++++++++--- arch/powerpc/mm/mem.c | 2 ++ 3 files changed, 17 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h index bc76970b6ee5..e647dfcb3191 100644 --- a/arch/powerpc/include/asm/ftrace.h +++ b/arch/powerpc/include/asm/ftrace.h @@ -96,7 +96,7 @@ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name #endif /* PPC64_ELF_ABI_v1 */ #endif /* CONFIG_FTRACE_SYSCALLS */ -#ifdef CONFIG_PPC64 +#if defined(CONFIG_PPC64) && defined(CONFIG_FUNCTION_TRACER) #include static inline void this_cpu_disable_ftrace(void) @@ -120,11 +120,13 @@ static inline u8 this_cpu_get_ftrace_enabled(void) return get_paca()->ftrace_enabled; } +void ftrace_free_init_tramp(void); #else /* CONFIG_PPC64 */ static inline void this_cpu_disable_ftrace(void) { } static inline void this_cpu_enable_ftrace(void) { } static inline void this_cpu_set_ftrace_enabled(u8 ftrace_enabled) { } static inline u8 this_cpu_get_ftrace_enabled(void) { return 1; } +static inline void ftrace_free_init_tramp(void) { } #endif /* CONFIG_PPC64 */ #endif /* !__ASSEMBLY__ */ diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c index 42761ebec9f7..d24aea4fed7a 100644 --- a/arch/powerpc/kernel/trace/ftrace.c +++ b/arch/powerpc/kernel/trace/ftrace.c @@ -336,9 +336,7 @@ static int setup_mcount_compiler_tramp(unsigned long tramp) /* Is this a known long jump tramp? */ for (i = 0; i < NUM_FTRACE_TRAMPS; i++) - if (!ftrace_tramps[i]) - break; - else if (ftrace_tramps[i] == tramp) + if (ftrace_tramps[i] == tramp) return 0; /* Is this a known plt tramp? 
*/ @@ -882,6 +880,17 @@ void arch_ftrace_update_code(int command) extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[]; +void ftrace_free_init_tramp(void) +{ + int i; + + for (i = 0; i < NUM_FTRACE_TRAMPS && ftrace_tramps[i]; i++) + if (ftrace_tramps[i] == (unsigned long)ftrace_tramp_init) { + ftrace_tramps[i] = 0; + return; + } +} + int __init ftrace_dyn_arch_init(void) { int i; diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index 22eb1c718e62..1ed276d2305f 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -51,6 +51,7 @@ #include #include #include +#include #include @@ -347,6 +348,7 @@ void free_initmem(void) mark_initmem_nx(); init_mem_is_free = true; free_initmem_default(POISON_FREE_INITMEM); + ftrace_free_init_tramp(); } /** From 2d10984d99ac2b652b4f31efd2e059957f1fa51f Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Tue, 28 Jun 2022 20:20:14 +0300 Subject: [PATCH 1172/1842] net: mscc: ocelot: allow unregistered IP multicast flooding Flooding of unregistered IP multicast has been broken (both to other switch ports and to the CPU) since the ocelot driver introduction, and up until commit 4cf35a2b627a ("net: mscc: ocelot: fix broken IP multicast flooding"), a bug fix for commit 421741ea5672 ("net: mscc: ocelot: offload bridge port flags to device") from v5.12. The driver used to set PGID_MCIPV4 and PGID_MCIPV6 to the empty port mask (0), which made unregistered IPv4/IPv6 multicast go nowhere, and without ever modifying that port mask at runtime. The expectation is that such packets are treated as broadcast, and flooded according to the forwarding domain (to the CPU if the port is standalone, or to the CPU and other bridged ports, if under a bridge). Since the aforementioned commit, the limitation has been lifted by responding to SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS events emitted by the bridge. As for host flooding, DSA synthesizes another call to ocelot_port_bridge_flags() on the NPI port which ensures that the CPU gets the unregistered multicast traffic it might need, for example for smcroute to work between standalone ports. But between v4.18 and v5.12, IP multicast flooding has remained unfixed. Delete the inexplicable premature optimization of clearing PGID_MCIPV4 and PGID_MCIPV6 as part of the init sequence, and allow unregistered IP multicast to be flooded freely according to the forwarding domain established by PGID_SRC, by explicitly programming PGID_MCIPV4 and PGID_MCIPV6 towards all physical ports plus the CPU port module. 
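For illustration, the mask written into those PGIDs can be sketched with a small stand-alone C program (an approximation only, not driver code; the plain-C genmask() helper and the four-port switch are assumptions for the example):

#include <stdio.h>

/* stand-in for the kernel's GENMASK(): bits l..h set, inclusive */
static unsigned long genmask(unsigned int h, unsigned int l)
{
	return (~0UL << l) & (~0UL >> (8 * sizeof(unsigned long) - 1 - h));
}

int main(void)
{
	unsigned int num_phys_ports = 4;	/* assumed example switch */

	/*
	 * Front ports occupy bits 0..num_phys_ports-1 and the CPU port
	 * module is bit num_phys_ports, so flooding unregistered IP
	 * multicast everywhere means setting bits 0..num_phys_ports.
	 */
	printf("PGID_MCIPV4/6 mask: 0x%lx\n", genmask(num_phys_ports, 0));
	return 0;
}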
Fixes: a556c76adc05 ("net: mscc: Add initial Ocelot switch support") Cc: stable@kernel.org Signed-off-by: Vladimir Oltean Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/mscc/ocelot.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c index a06466ecca12..a55861ea4206 100644 --- a/drivers/net/ethernet/mscc/ocelot.c +++ b/drivers/net/ethernet/mscc/ocelot.c @@ -1593,8 +1593,12 @@ int ocelot_init(struct ocelot *ocelot) ocelot_write_rix(ocelot, ANA_PGID_PGID_PGID(GENMASK(ocelot->num_phys_ports, 0)), ANA_PGID_PGID, PGID_MC); - ocelot_write_rix(ocelot, 0, ANA_PGID_PGID, PGID_MCIPV4); - ocelot_write_rix(ocelot, 0, ANA_PGID_PGID, PGID_MCIPV6); + ocelot_write_rix(ocelot, + ANA_PGID_PGID_PGID(GENMASK(ocelot->num_phys_ports, 0)), + ANA_PGID_PGID, PGID_MCIPV4); + ocelot_write_rix(ocelot, + ANA_PGID_PGID_PGID(GENMASK(ocelot->num_phys_ports, 0)), + ANA_PGID_PGID, PGID_MCIPV6); /* Allow manual injection via DEVCPU_QS registers, and byte swap these * registers endianness. From ea86c1430c83aa91f2c4122408922e34a1279775 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Sat, 2 Jul 2022 16:39:25 +0200 Subject: [PATCH 1173/1842] Linux 5.10.128 Link: https://lore.kernel.org/r/20220630133230.676254336@linuxfoundation.org Tested-by: Jon Hunter Tested-by: Salvatore Bonaccorso Tested-by: Florian Fainelli Tested-by: Shuah Khan Tested-by: Guenter Roeck Tested-by: Hulk Robot Tested-by: Linux Kernel Functional Testing Tested-by: Sudip Mukherjee Tested-by: Pavel Machek (CIP) Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index e3eb9ba19f86..b89ad8a987db 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 127 +SUBLEVEL = 128 EXTRAVERSION = NAME = Dare mighty things From 03b9e016598f6f7f6676d4e1c927e11a1863aeae Mon Sep 17 00:00:00 2001 From: Ruili Ji Date: Wed, 22 Jun 2022 14:20:22 +0800 Subject: [PATCH 1174/1842] drm/amdgpu: To flush tlb for MMHUB of RAVEN series commit 5cb0e3fb2c54eabfb3f932a1574bff1774946bc0 upstream. amdgpu: [mmhub0] no-retry page fault (src_id:0 ring:40 vmid:8 pasid:32769, for process test_basic pid 3305 thread test_basic pid 3305) amdgpu: in page starting at address 0x00007ff990003000 from IH client 0x12 (VMC) amdgpu: VM_L2_PROTECTION_FAULT_STATUS:0x00840051 amdgpu: Faulty UTCL2 client ID: MP1 (0x0) amdgpu: MORE_FAULTS: 0x1 amdgpu: WALKER_ERROR: 0x0 amdgpu: PERMISSION_FAULTS: 0x5 amdgpu: MAPPING_ERROR: 0x0 amdgpu: RW: 0x1 When memory is allocated by kfd, no one triggers the tlb flush for MMHUB0. There is page fault from MMHUB0. 
v2:fix indentation v3:change subject and fix indentation Signed-off-by: Ruili Ji Reviewed-by: Philip Yang Reviewed-by: Aaron Liu Acked-by: Alex Deucher Signed-off-by: Alex Deucher Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c index fb6230c62daa..d3a974d10552 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c @@ -689,7 +689,8 @@ int amdgpu_amdkfd_flush_gpu_tlb_pasid(struct kgd_dev *kgd, uint16_t pasid) const uint32_t flush_type = 0; bool all_hub = false; - if (adev->family == AMDGPU_FAMILY_AI) + if (adev->family == AMDGPU_FAMILY_AI || + adev->family == AMDGPU_FAMILY_RV) all_hub = true; return amdgpu_gmc_flush_gpu_tlb_pasid(adev, pasid, flush_type, all_hub); From e77804158b30b5c7a0638062ab8adde625104d3b Mon Sep 17 00:00:00 2001 From: Nicolas Dichtel Date: Thu, 23 Jun 2022 14:00:15 +0200 Subject: [PATCH 1175/1842] ipv6: take care of disable_policy when restoring routes commit 3b0dc529f56b5f2328244130683210be98f16f7f upstream. When routes corresponding to addresses are restored by fixup_permanent_addr(), the dst_nopolicy parameter was not set. The typical use case is a user that configures an address on a down interface and then puts this interface up. Let's take care of this flag in addrconf_f6i_alloc(), so that every caller benefits from it. CC: stable@kernel.org CC: David Forster Fixes: df789fe75206 ("ipv6: Provide ipv6 version of "disable_policy" sysctl") Reported-by: Siwar Zitouni Signed-off-by: Nicolas Dichtel Reviewed-by: David Ahern Link: https://lore.kernel.org/r/20220623120015.32640-1-nicolas.dichtel@6wind.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- net/ipv6/addrconf.c | 4 ---- net/ipv6/route.c | 9 ++++++++- 2 files changed, 8 insertions(+), 5 deletions(-) diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 0562fb321959..05317e6f48f8 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -1102,10 +1102,6 @@ ipv6_add_addr(struct inet6_dev *idev, struct ifa6_config *cfg, goto out; } - if (net->ipv6.devconf_all->disable_policy || - idev->cnf.disable_policy) - f6i->dst_nopolicy = true; - neigh_parms_data_state_setall(idev->nd_parms); ifa->addr = *cfg->pfx; diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 6ace9f0ac22f..e67505c6d856 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -4479,8 +4479,15 @@ struct fib6_info *addrconf_f6i_alloc(struct net *net, } f6i = ip6_route_info_create(&cfg, gfp_flags, NULL); - if (!IS_ERR(f6i)) + if (!IS_ERR(f6i)) { f6i->dst_nocount = true; + + if (!anycast && + (net->ipv6.devconf_all->disable_policy || + idev->cnf.disable_policy)) + f6i->dst_nopolicy = true; + } + return f6i; } From 0b99c4a1893612adaa4e751e2750bf999896dedd Mon Sep 17 00:00:00 2001 From: Pablo Greco Date: Sat, 25 Jun 2022 09:15:02 -0300 Subject: [PATCH 1176/1842] nvme-pci: add NVME_QUIRK_BOGUS_NID for ADATA XPG SX6000LNP (AKA SPECTRIX S40G) commit 1629de0e0373e04d68e88e6d9d3071fbf70b7ea8 upstream. ADATA XPG SPECTRIX S40G drives report bogus eui64 values that appear to be the same across drives in one system. Quirk them out so they are not marked as "non globally unique" duplicates.
Before: [ 2.258919] nvme nvme1: pci function 0000:06:00.0 [ 2.264898] nvme nvme2: pci function 0000:05:00.0 [ 2.323235] nvme nvme1: failed to set APST feature (2) [ 2.326153] nvme nvme2: failed to set APST feature (2) [ 2.333935] nvme nvme1: allocated 64 MiB host memory buffer. [ 2.336492] nvme nvme2: allocated 64 MiB host memory buffer. [ 2.339611] nvme nvme1: 7/0/0 default/read/poll queues [ 2.341805] nvme nvme2: 7/0/0 default/read/poll queues [ 2.346114] nvme1n1: p1 [ 2.347197] nvme nvme2: globally duplicate IDs for nsid 1 After: [ 2.427715] nvme nvme1: pci function 0000:06:00.0 [ 2.427771] nvme nvme2: pci function 0000:05:00.0 [ 2.488154] nvme nvme2: failed to set APST feature (2) [ 2.489895] nvme nvme1: failed to set APST feature (2) [ 2.498773] nvme nvme2: allocated 64 MiB host memory buffer. [ 2.500587] nvme nvme1: allocated 64 MiB host memory buffer. [ 2.504113] nvme nvme2: 7/0/0 default/read/poll queues [ 2.507026] nvme nvme1: 7/0/0 default/read/poll queues [ 2.509467] nvme nvme2: Ignoring bogus Namespace Identifiers [ 2.512804] nvme nvme1: Ignoring bogus Namespace Identifiers [ 2.513698] nvme1n1: p1 Signed-off-by: Pablo Greco Reviewed-by: Keith Busch Reviewed-by: Chaitanya Kulkarni Cc: Signed-off-by: Christoph Hellwig Signed-off-by: Greg Kroah-Hartman --- drivers/nvme/host/pci.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 9e633f4dcec7..3622c5c9515f 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -3245,7 +3245,8 @@ static const struct pci_device_id nvme_id_table[] = { { PCI_DEVICE(0x1d1d, 0x2601), /* CNEX Granby */ .driver_data = NVME_QUIRK_LIGHTNVM, }, { PCI_DEVICE(0x10ec, 0x5762), /* ADATA SX6000LNP */ - .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, }, + .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN | + NVME_QUIRK_BOGUS_NID, }, { PCI_DEVICE(0x1cc1, 0x8201), /* ADATA SX8200PNP 512GB */ .driver_data = NVME_QUIRK_NO_DEEPEST_PS | NVME_QUIRK_IGNORE_DEV_SUBNQN, }, From e6a7d30b650a813bf82ef208ef1a85e2900c416a Mon Sep 17 00:00:00 2001 From: Chris Ye Date: Tue, 31 May 2022 17:09:54 -0700 Subject: [PATCH 1177/1842] nvdimm: Fix badblocks clear off-by-one error commit ef9102004a87cb3f8b26e000a095a261fc0467d3 upstream. nvdimm_clear_badblocks_region() validates badblock clearing requests against the span of the region, however it compares the inclusive badblock request range to the exclusive region range. Fix up the off-by-one error. 
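The corrected bounds check can be sketched stand-alone (illustration only; in_region() and the sample numbers are made up rather than taken from libnvdimm):

#include <stdbool.h>
#include <stdio.h>

/* does the clear request [phys, phys + cleared) stay inside the region? */
static bool in_region(unsigned long long phys, unsigned long long cleared,
		      unsigned long long ndr_start, unsigned long long ndr_size)
{
	unsigned long long ndr_end = ndr_start + ndr_size - 1;	/* inclusive */

	/* compare the request's inclusive last byte against ndr_end */
	return phys >= ndr_start && phys + cleared - 1 <= ndr_end;
}

int main(void)
{
	/*
	 * A request that ends exactly at the last byte of the region is
	 * valid; the old "(phys + cleared) > ndr_end" test rejected it.
	 */
	printf("%d\n", in_region(0x1000, 0x1000, 0x0, 0x2000));	/* prints 1 */
	return 0;
}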
Fixes: 23f498448362 ("libnvdimm: rework region badblocks clearing") Cc: Signed-off-by: Chris Ye Reviewed-by: Vishal Verma Link: https://lore.kernel.org/r/165404219489.2445897.9792886413715690399.stgit@dwillia2-xfh Signed-off-by: Dan Williams Signed-off-by: Greg Kroah-Hartman --- drivers/nvdimm/bus.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c index 2304c6183822..9ec59960f216 100644 --- a/drivers/nvdimm/bus.c +++ b/drivers/nvdimm/bus.c @@ -187,8 +187,8 @@ static int nvdimm_clear_badblocks_region(struct device *dev, void *data) ndr_end = nd_region->ndr_start + nd_region->ndr_size - 1; /* make sure we are in the region */ - if (ctx->phys < nd_region->ndr_start - || (ctx->phys + ctx->cleared) > ndr_end) + if (ctx->phys < nd_region->ndr_start || + (ctx->phys + ctx->cleared - 1) > ndr_end) return 0; sector = (ctx->phys - nd_region->ndr_start) / 512; From e188bbdb92292cc7b13103c3e1be6b845ce24a3b Mon Sep 17 00:00:00 2001 From: Liam Howlett Date: Fri, 24 Jun 2022 01:17:58 +0000 Subject: [PATCH 1178/1842] powerpc/prom_init: Fix kernel config grep commit 6886da5f49e6d86aad76807a93f3eef5e4f01b10 upstream. When searching for config options, use the KCONFIG_CONFIG shell variable so that builds using non-standard config locations work. Fixes: 26deb04342e3 ("powerpc: prepare string/mem functions for KASAN") Cc: stable@vger.kernel.org # v5.2+ Signed-off-by: Liam R. Howlett Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220624011745.4060795-1-Liam.Howlett@oracle.com Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/kernel/prom_init_check.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/kernel/prom_init_check.sh b/arch/powerpc/kernel/prom_init_check.sh index b183ab9c5107..dfa5f729f774 100644 --- a/arch/powerpc/kernel/prom_init_check.sh +++ b/arch/powerpc/kernel/prom_init_check.sh @@ -13,7 +13,7 @@ # If you really need to reference something from prom_init.o add # it to the list below: -grep "^CONFIG_KASAN=y$" .config >/dev/null +grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null if [ $? -eq 0 ] then MEM_FUNCS="__memcpy __memset" From 68a34e478ad58af3b464d37cd14a4d136f1876b9 Mon Sep 17 00:00:00 2001 From: Christophe Leroy Date: Thu, 23 Jun 2022 10:56:17 +0200 Subject: [PATCH 1179/1842] powerpc/book3e: Fix PUD allocation size in map_kernel_page() commit 986481618023e18e187646b0fff05a3c337531cb upstream. Commit 2fb4706057bc ("powerpc: add support for folded p4d page tables") erroneously changed PUD setup to a mix of PMD and PUD. Fix it. While at it, use PTE_TABLE_SIZE instead of PAGE_SIZE for PTE tables in order to avoid any confusion. 
Fixes: 2fb4706057bc ("powerpc: add support for folded p4d page tables") Cc: stable@vger.kernel.org # v5.8+ Signed-off-by: Christophe Leroy Acked-by: Mike Rapoport Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/95ddfd6176d53e6c85e13bd1c358359daa56775f.1655974558.git.christophe.leroy@csgroup.eu Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/mm/nohash/book3e_pgtable.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c b/arch/powerpc/mm/nohash/book3e_pgtable.c index 77884e24281d..3d845e001c87 100644 --- a/arch/powerpc/mm/nohash/book3e_pgtable.c +++ b/arch/powerpc/mm/nohash/book3e_pgtable.c @@ -95,8 +95,8 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot) pgdp = pgd_offset_k(ea); p4dp = p4d_offset(pgdp, ea); if (p4d_none(*p4dp)) { - pmdp = early_alloc_pgtable(PMD_TABLE_SIZE); - p4d_populate(&init_mm, p4dp, pmdp); + pudp = early_alloc_pgtable(PUD_TABLE_SIZE); + p4d_populate(&init_mm, p4dp, pudp); } pudp = pud_offset(p4dp, ea); if (pud_none(*pudp)) { @@ -105,7 +105,7 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot) } pmdp = pmd_offset(pudp, ea); if (!pmd_present(*pmdp)) { - ptep = early_alloc_pgtable(PAGE_SIZE); + ptep = early_alloc_pgtable(PTE_TABLE_SIZE); pmd_populate_kernel(&init_mm, pmdp, ptep); } ptep = pte_offset_kernel(pmdp, ea); From 213c550deb6b46e8e5556cb90ead18b2227dd223 Mon Sep 17 00:00:00 2001 From: "Naveen N. Rao" Date: Tue, 28 Jun 2022 00:41:19 +0530 Subject: [PATCH 1180/1842] powerpc/bpf: Fix use of user_pt_regs in uapi MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit b21bd5a4b130f8370861478d2880985daace5913 upstream. Trying to build a .c file that includes : $ cat test_bpf_headers.c #include throws the below error: /usr/include/linux/bpf_perf_event.h:14:28: error: field ‘regs’ has incomplete type 14 | bpf_user_pt_regs_t regs; | ^~~~ This is because we typedef bpf_user_pt_regs_t to 'struct user_pt_regs' in arch/powerpc/include/uaps/asm/bpf_perf_event.h, but 'struct user_pt_regs' is not exposed to userspace. Powerpc has both pt_regs and user_pt_regs structures. However, unlike arm64 and s390, we expose user_pt_regs to userspace as just 'pt_regs'. As such, we should typedef bpf_user_pt_regs_t to 'struct pt_regs' for userspace. Within the kernel though, we want to typedef bpf_user_pt_regs_t to 'struct user_pt_regs'. Remove arch/powerpc/include/uapi/asm/bpf_perf_event.h so that the uapi/asm-generic version of the header is exposed to userspace. Introduce arch/powerpc/include/asm/bpf_perf_event.h so that we can typedef bpf_user_pt_regs_t to 'struct user_pt_regs' for use within the kernel. Note that this was not showing up with the bpf selftest build since tools/include/uapi/asm/bpf_perf_event.h didn't include the powerpc variant. Fixes: a6460b03f945ee ("powerpc/bpf: Fix broken uapi for BPF_PROG_TYPE_PERF_EVENT") Cc: stable@vger.kernel.org # v4.20+ Signed-off-by: Naveen N. 
Rao [mpe: Use typical naming for header include guard] Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220627191119.142867-1-naveen.n.rao@linux.vnet.ibm.com Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/include/asm/bpf_perf_event.h | 9 +++++++++ arch/powerpc/include/uapi/asm/bpf_perf_event.h | 9 --------- 2 files changed, 9 insertions(+), 9 deletions(-) create mode 100644 arch/powerpc/include/asm/bpf_perf_event.h delete mode 100644 arch/powerpc/include/uapi/asm/bpf_perf_event.h diff --git a/arch/powerpc/include/asm/bpf_perf_event.h b/arch/powerpc/include/asm/bpf_perf_event.h new file mode 100644 index 000000000000..e8a7b4ffb58c --- /dev/null +++ b/arch/powerpc/include/asm/bpf_perf_event.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_BPF_PERF_EVENT_H +#define _ASM_POWERPC_BPF_PERF_EVENT_H + +#include + +typedef struct user_pt_regs bpf_user_pt_regs_t; + +#endif /* _ASM_POWERPC_BPF_PERF_EVENT_H */ diff --git a/arch/powerpc/include/uapi/asm/bpf_perf_event.h b/arch/powerpc/include/uapi/asm/bpf_perf_event.h deleted file mode 100644 index 5e1e648aeec4..000000000000 --- a/arch/powerpc/include/uapi/asm/bpf_perf_event.h +++ /dev/null @@ -1,9 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ -#ifndef _UAPI__ASM_BPF_PERF_EVENT_H__ -#define _UAPI__ASM_BPF_PERF_EVENT_H__ - -#include - -typedef struct user_pt_regs bpf_user_pt_regs_t; - -#endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */ From 9bf2b0757b04c78dc5d6e3a198acca98457b32a1 Mon Sep 17 00:00:00 2001 From: Heinz Mauelshagen Date: Tue, 28 Jun 2022 00:37:22 +0200 Subject: [PATCH 1181/1842] dm raid: fix accesses beyond end of raid member array commit 332bd0778775d0cf105c4b9e03e460b590749916 upstream. On dm-raid table load (using raid_ctr), dm-raid allocates an array rs->devs[rs->raid_disks] for the raid device members. rs->raid_disks is defined by the number of raid metadata and image tupples passed into the target's constructor. In the case of RAID layout changes being requested, that number can be different from the current number of members for existing raid sets as defined in their superblocks. Example RAID layout changes include: - raid1 legs being added/removed - raid4/5/6/10 number of stripes changed (stripe reshaping) - takeover to higher raid level (e.g. raid5 -> raid6) When accessing array members, rs->raid_disks must be used in control loops instead of the potentially larger value in rs->md.raid_disks. Otherwise it will cause memory access beyond the end of the rs->devs array. Fix this by changing code that is prone to out-of-bounds access. Also fix validate_raid_redundancy() to validate all devices that are added. Also, use braces to help clean up raid_iterate_devices(). The out-of-bounds memory accesses was discovered using KASAN. This commit was verified to pass all LVM2 RAID tests (with KASAN enabled). 
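As a stand-alone illustration of the loop-bound rule the fix enforces (a toy model only; struct toy_rs and its fields are invented for the example and are not dm-raid code):

#include <stdio.h>
#include <stdlib.h>

struct toy_rs {
	unsigned int raid_disks;	/* number of devs[] actually allocated */
	unsigned int md_raid_disks;	/* superblock count, may be larger */
	int *devs;
};

static int iterate_devs(const struct toy_rs *rs)
{
	unsigned int i;
	int sum = 0;

	/*
	 * Bound the walk by rs->raid_disks: indexing devs[i] for
	 * i >= raid_disks would read past the end of the allocation,
	 * which is the kind of access KASAN flagged in the driver.
	 */
	for (i = 0; i < rs->raid_disks; i++)
		sum += rs->devs[i];
	return sum;
}

int main(void)
{
	struct toy_rs rs = { .raid_disks = 3, .md_raid_disks = 5 };

	rs.devs = calloc(rs.raid_disks, sizeof(*rs.devs));
	if (!rs.devs)
		return 1;
	printf("walked %u of the %u slots the superblock claims, sum %d\n",
	       rs.raid_disks, rs.md_raid_disks, iterate_devs(&rs));
	free(rs.devs);
	return 0;
}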
Cc: stable@vger.kernel.org Signed-off-by: Heinz Mauelshagen Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/dm-raid.c | 34 ++++++++++++++++++---------------- 1 file changed, 18 insertions(+), 16 deletions(-) diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c index f5083b4a0195..4e94200e0142 100644 --- a/drivers/md/dm-raid.c +++ b/drivers/md/dm-raid.c @@ -1002,12 +1002,13 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size) static int validate_raid_redundancy(struct raid_set *rs) { unsigned int i, rebuild_cnt = 0; - unsigned int rebuilds_per_group = 0, copies; + unsigned int rebuilds_per_group = 0, copies, raid_disks; unsigned int group_size, last_group_start; - for (i = 0; i < rs->md.raid_disks; i++) - if (!test_bit(In_sync, &rs->dev[i].rdev.flags) || - !rs->dev[i].rdev.sb_page) + for (i = 0; i < rs->raid_disks; i++) + if (!test_bit(FirstUse, &rs->dev[i].rdev.flags) && + ((!test_bit(In_sync, &rs->dev[i].rdev.flags) || + !rs->dev[i].rdev.sb_page))) rebuild_cnt++; switch (rs->md.level) { @@ -1047,8 +1048,9 @@ static int validate_raid_redundancy(struct raid_set *rs) * A A B B C * C D D E E */ + raid_disks = min(rs->raid_disks, rs->md.raid_disks); if (__is_raid10_near(rs->md.new_layout)) { - for (i = 0; i < rs->md.raid_disks; i++) { + for (i = 0; i < raid_disks; i++) { if (!(i % copies)) rebuilds_per_group = 0; if ((!rs->dev[i].rdev.sb_page || @@ -1071,10 +1073,10 @@ static int validate_raid_redundancy(struct raid_set *rs) * results in the need to treat the last (potentially larger) * set differently. */ - group_size = (rs->md.raid_disks / copies); - last_group_start = (rs->md.raid_disks / group_size) - 1; + group_size = (raid_disks / copies); + last_group_start = (raid_disks / group_size) - 1; last_group_start *= group_size; - for (i = 0; i < rs->md.raid_disks; i++) { + for (i = 0; i < raid_disks; i++) { if (!(i % copies) && !(i > last_group_start)) rebuilds_per_group = 0; if ((!rs->dev[i].rdev.sb_page || @@ -1589,7 +1591,7 @@ static sector_t __rdev_sectors(struct raid_set *rs) { int i; - for (i = 0; i < rs->md.raid_disks; i++) { + for (i = 0; i < rs->raid_disks; i++) { struct md_rdev *rdev = &rs->dev[i].rdev; if (!test_bit(Journal, &rdev->flags) && @@ -3732,13 +3734,13 @@ static int raid_iterate_devices(struct dm_target *ti, unsigned int i; int r = 0; - for (i = 0; !r && i < rs->md.raid_disks; i++) - if (rs->dev[i].data_dev) - r = fn(ti, - rs->dev[i].data_dev, - 0, /* No offset on data devs */ - rs->md.dev_sectors, - data); + for (i = 0; !r && i < rs->raid_disks; i++) { + if (rs->dev[i].data_dev) { + r = fn(ti, rs->dev[i].data_dev, + 0, /* No offset on data devs */ + rs->md.dev_sectors, data); + } + } return r; } From d8bca518d5272fe349e0a722fdb9e3acb661f3f0 Mon Sep 17 00:00:00 2001 From: Mikulas Patocka Date: Wed, 29 Jun 2022 13:40:57 -0400 Subject: [PATCH 1182/1842] dm raid: fix KASAN warning in raid5_add_disks commit 617b365872a247480e9dcd50a32c8d1806b21861 upstream. There's a KASAN warning in raid5_add_disk when running the LVM testsuite. The warning happens in the test lvconvert-raid-reshape-linear_to_raid6-single-type.sh. We fix the warning by verifying that rdev->saved_raid_disk is within limits. 
Cc: stable@vger.kernel.org Signed-off-by: Mikulas Patocka Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman --- drivers/md/raid5.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 02767866b9ff..c8cafdb094aa 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -8004,6 +8004,7 @@ static int raid5_add_disk(struct mddev *mddev, struct md_rdev *rdev) */ if (rdev->saved_raid_disk >= 0 && rdev->saved_raid_disk >= first && + rdev->saved_raid_disk <= last && conf->disks[rdev->saved_raid_disk].rdev == NULL) first = rdev->saved_raid_disk; From ed03a650fb573a3b846dfb3b35471aeee0e5d41b Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Sat, 11 Jun 2022 00:20:23 +0200 Subject: [PATCH 1183/1842] s390/archrandom: simplify back to earlier design and initialize earlier commit e4f74400308cb8abde5fdc9cad609c2aba32110c upstream. s390x appears to present two RNG interfaces: - a "TRNG" that gathers entropy using some hardware function; and - a "DRBG" that takes in a seed and expands it. Previously, the TRNG was wired up to arch_get_random_{long,int}(), but it was observed that this was being called really frequently, resulting in high overhead. So it was changed to be wired up to arch_get_random_ seed_{long,int}(), which was a reasonable decision. Later on, the DRBG was then wired up to arch_get_random_{long,int}(), with a complicated buffer filling thread, to control overhead and rate. Fortunately, none of the performance issues matter much now. The RNG always attempts to use arch_get_random_seed_{long,int}() first, which means a complicated implementation of arch_get_random_{long,int}() isn't really valuable or useful to have around. And it's only used when reseeding, which means it won't hit the high throughput complications that were faced before. So this commit returns to an earlier design of just calling the TRNG in arch_get_random_seed_{long,int}(), and returning false in arch_get_ random_{long,int}(). Part of what makes the simplification possible is that the RNG now seeds itself using the TRNG at bootup. But this only works if the TRNG is detected early in boot, before random_init() is called. So this commit also causes that check to happen in setup_arch(). Cc: stable@vger.kernel.org Cc: Harald Freudenberger Cc: Ingo Franzki Cc: Juergen Christ Cc: Heiko Carstens Signed-off-by: Jason A. Donenfeld Link: https://lore.kernel.org/r/20220610222023.378448-1-Jason@zx2c4.com Reviewed-by: Harald Freudenberger Acked-by: Heiko Carstens Signed-off-by: Alexander Gordeev Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman --- arch/s390/crypto/arch_random.c | 111 +---------------------------- arch/s390/include/asm/archrandom.h | 13 ++-- arch/s390/kernel/setup.c | 5 ++ 3 files changed, 14 insertions(+), 115 deletions(-) diff --git a/arch/s390/crypto/arch_random.c b/arch/s390/crypto/arch_random.c index 4cbb4b6d85a8..1f2d40993c4d 100644 --- a/arch/s390/crypto/arch_random.c +++ b/arch/s390/crypto/arch_random.c @@ -2,126 +2,17 @@ /* * s390 arch random implementation. * - * Copyright IBM Corp. 2017, 2018 + * Copyright IBM Corp. 2017, 2020 * Author(s): Harald Freudenberger - * - * The s390_arch_random_generate() function may be called from random.c - * in interrupt context. So this implementation does the best to be very - * fast. There is a buffer of random data which is asynchronously checked - * and filled by a workqueue thread. - * If there are enough bytes in the buffer the s390_arch_random_generate() - * just delivers these bytes. 
Otherwise false is returned until the - * worker thread refills the buffer. - * The worker fills the rng buffer by pulling fresh entropy from the - * high quality (but slow) true hardware random generator. This entropy - * is then spread over the buffer with an pseudo random generator PRNG. - * As the arch_get_random_seed_long() fetches 8 bytes and the calling - * function add_interrupt_randomness() counts this as 1 bit entropy the - * distribution needs to make sure there is in fact 1 bit entropy contained - * in 8 bytes of the buffer. The current values pull 32 byte entropy - * and scatter this into a 2048 byte buffer. So 8 byte in the buffer - * will contain 1 bit of entropy. - * The worker thread is rescheduled based on the charge level of the - * buffer but at least with 500 ms delay to avoid too much CPU consumption. - * So the max. amount of rng data delivered via arch_get_random_seed is - * limited to 4k bytes per second. */ #include #include #include -#include #include -#include #include DEFINE_STATIC_KEY_FALSE(s390_arch_random_available); atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0); EXPORT_SYMBOL(s390_arch_random_counter); - -#define ARCH_REFILL_TICKS (HZ/2) -#define ARCH_PRNG_SEED_SIZE 32 -#define ARCH_RNG_BUF_SIZE 2048 - -static DEFINE_SPINLOCK(arch_rng_lock); -static u8 *arch_rng_buf; -static unsigned int arch_rng_buf_idx; - -static void arch_rng_refill_buffer(struct work_struct *); -static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer); - -bool s390_arch_random_generate(u8 *buf, unsigned int nbytes) -{ - /* max hunk is ARCH_RNG_BUF_SIZE */ - if (nbytes > ARCH_RNG_BUF_SIZE) - return false; - - /* lock rng buffer */ - if (!spin_trylock(&arch_rng_lock)) - return false; - - /* try to resolve the requested amount of bytes from the buffer */ - arch_rng_buf_idx -= nbytes; - if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) { - memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes); - atomic64_add(nbytes, &s390_arch_random_counter); - spin_unlock(&arch_rng_lock); - return true; - } - - /* not enough bytes in rng buffer, refill is done asynchronously */ - spin_unlock(&arch_rng_lock); - - return false; -} -EXPORT_SYMBOL(s390_arch_random_generate); - -static void arch_rng_refill_buffer(struct work_struct *unused) -{ - unsigned int delay = ARCH_REFILL_TICKS; - - spin_lock(&arch_rng_lock); - if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) { - /* buffer is exhausted and needs refill */ - u8 seed[ARCH_PRNG_SEED_SIZE]; - u8 prng_wa[240]; - /* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */ - cpacf_trng(NULL, 0, seed, sizeof(seed)); - /* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */ - memset(prng_wa, 0, sizeof(prng_wa)); - cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED, - &prng_wa, NULL, 0, seed, sizeof(seed)); - cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN, - &prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0); - arch_rng_buf_idx = ARCH_RNG_BUF_SIZE; - } - delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE; - spin_unlock(&arch_rng_lock); - - /* kick next check */ - queue_delayed_work(system_long_wq, &arch_rng_work, delay); -} - -static int __init s390_arch_random_init(void) -{ - /* all the needed PRNO subfunctions available ? 
*/ - if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) && - cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) { - - /* alloc arch random working buffer */ - arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL); - if (!arch_rng_buf) - return -ENOMEM; - - /* kick worker queue job to fill the random buffer */ - queue_delayed_work(system_long_wq, - &arch_rng_work, ARCH_REFILL_TICKS); - - /* enable arch random to the outside world */ - static_branch_enable(&s390_arch_random_available); - } - - return 0; -} -arch_initcall(s390_arch_random_init); diff --git a/arch/s390/include/asm/archrandom.h b/arch/s390/include/asm/archrandom.h index de61ce562052..2c6e1c6ecbe7 100644 --- a/arch/s390/include/asm/archrandom.h +++ b/arch/s390/include/asm/archrandom.h @@ -2,7 +2,7 @@ /* * Kernel interface for the s390 arch_random_* functions * - * Copyright IBM Corp. 2017 + * Copyright IBM Corp. 2017, 2020 * * Author: Harald Freudenberger * @@ -15,12 +15,11 @@ #include #include +#include DECLARE_STATIC_KEY_FALSE(s390_arch_random_available); extern atomic64_t s390_arch_random_counter; -bool s390_arch_random_generate(u8 *buf, unsigned int nbytes); - static inline bool __must_check arch_get_random_long(unsigned long *v) { return false; @@ -34,7 +33,9 @@ static inline bool __must_check arch_get_random_int(unsigned int *v) static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { if (static_branch_likely(&s390_arch_random_available)) { - return s390_arch_random_generate((u8 *)v, sizeof(*v)); + cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v)); + atomic64_add(sizeof(*v), &s390_arch_random_counter); + return true; } return false; } @@ -42,7 +43,9 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v) static inline bool __must_check arch_get_random_seed_int(unsigned int *v) { if (static_branch_likely(&s390_arch_random_available)) { - return s390_arch_random_generate((u8 *)v, sizeof(*v)); + cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v)); + atomic64_add(sizeof(*v), &s390_arch_random_counter); + return true; } return false; } diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c index f9f8721dc532..520cf5a152cf 100644 --- a/arch/s390/kernel/setup.c +++ b/arch/s390/kernel/setup.c @@ -1009,6 +1009,11 @@ static void __init setup_randomness(void) if (stsi(vmms, 3, 2, 2) == 0 && vmms->count) add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count); memblock_free((unsigned long) vmms, PAGE_SIZE); + +#ifdef CONFIG_ARCH_RANDOM + if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG)) + static_branch_enable(&s390_arch_random_available); +#endif } /* From 6a0b9512a6aa7b7835d8138f5ffdcb4789c093d4 Mon Sep 17 00:00:00 2001 From: Chuck Lever Date: Thu, 30 Jun 2022 16:48:18 -0400 Subject: [PATCH 1184/1842] SUNRPC: Fix READ_PLUS crasher commit a23dd544debcda4ee4a549ec7de59e85c3c8345c upstream. Looks like there are still cases when "space_left - frag1bytes" can legitimately exceed PAGE_SIZE. Ensure that xdr->end always remains within the current encode buffer. 
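The clamping can be sketched stand-alone as follows (illustration only; next_buffer_len() and TOY_PAGE_SIZE are assumptions, not the SUNRPC code):

#include <stddef.h>
#include <stdio.h>

#define TOY_PAGE_SIZE 4096u

static size_t next_buffer_len(size_t space_left, size_t frag1bytes)
{
	size_t avail = space_left - frag1bytes;	/* space after the first fragment */

	/* the next encode buffer holds at most one page, never more than avail */
	return avail >= TOY_PAGE_SIZE ? TOY_PAGE_SIZE : avail;
}

int main(void)
{
	/*
	 * Before the fix the comparison used nbytes instead of frag1bytes,
	 * so the available space could be misjudged and xdr->end pushed
	 * past the buffer.
	 */
	printf("%zu\n", next_buffer_len(6000, 500));	/* clamped to 4096 */
	printf("%zu\n", next_buffer_len(3000, 500));	/* 2500 */
	return 0;
}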
Reported-by: Bruce Fields Reported-by: Zorro Lang Link: https://bugzilla.kernel.org/show_bug.cgi?id=216151 Fixes: 6c254bf3b637 ("SUNRPC: Fix the calculation of xdr->end in xdr_get_next_encode_buffer()") Signed-off-by: Chuck Lever Signed-off-by: Greg Kroah-Hartman --- net/sunrpc/xdr.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c index c8ed6d3d5762..d84bb5037bb5 100644 --- a/net/sunrpc/xdr.c +++ b/net/sunrpc/xdr.c @@ -752,7 +752,7 @@ static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr, */ xdr->p = (void *)p + frag2bytes; space_left = xdr->buf->buflen - xdr->buf->len; - if (space_left - nbytes >= PAGE_SIZE) + if (space_left - frag1bytes >= PAGE_SIZE) xdr->end = (void *)p + PAGE_SIZE; else xdr->end = (void *)p + space_left - frag1bytes; From 8f74cb27c2b4872fd14bf046201fa7b36a46885e Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Wed, 29 Jun 2022 08:26:40 +0800 Subject: [PATCH 1185/1842] net: rose: fix UAF bugs caused by timer handler commit 9cc02ede696272c5271a401e4f27c262359bc2f6 upstream. There are UAF bugs in rose_heartbeat_expiry(), rose_timer_expiry() and rose_idletimer_expiry(). The root cause is that del_timer() could not stop the timer handler that is running and the refcount of sock is not managed properly. One of the UAF bugs is shown below: (thread 1) | (thread 2) | rose_bind | rose_connect | rose_start_heartbeat rose_release | (wait a time) case ROSE_STATE_0 | rose_destroy_socket | rose_heartbeat_expiry rose_stop_heartbeat | sock_put(sk) | ... sock_put(sk) // FREE | | bh_lock_sock(sk) // USE The sock is deallocated by sock_put() in rose_release() and then used by bh_lock_sock() in rose_heartbeat_expiry(). Although rose_destroy_socket() calls rose_stop_heartbeat(), it could not stop the timer that is running. The KASAN report triggered by POC is shown below: BUG: KASAN: use-after-free in _raw_spin_lock+0x5a/0x110 Write of size 4 at addr ffff88800ae59098 by task swapper/3/0 ... Call Trace: dump_stack_lvl+0xbf/0xee print_address_description+0x7b/0x440 print_report+0x101/0x230 ? irq_work_single+0xbb/0x140 ? _raw_spin_lock+0x5a/0x110 kasan_report+0xed/0x120 ? _raw_spin_lock+0x5a/0x110 kasan_check_range+0x2bd/0x2e0 _raw_spin_lock+0x5a/0x110 rose_heartbeat_expiry+0x39/0x370 ? rose_start_heartbeat+0xb0/0xb0 call_timer_fn+0x2d/0x1c0 ? rose_start_heartbeat+0xb0/0xb0 expire_timers+0x1f3/0x320 __run_timers+0x3ff/0x4d0 run_timer_softirq+0x41/0x80 __do_softirq+0x233/0x544 irq_exit_rcu+0x41/0xa0 sysvec_apic_timer_interrupt+0x8c/0xb0 asm_sysvec_apic_timer_interrupt+0x1b/0x20 RIP: 0010:default_idle+0xb/0x10 RSP: 0018:ffffc9000012fea0 EFLAGS: 00000202 RAX: 000000000000bcae RBX: ffff888006660f00 RCX: 000000000000bcae RDX: 0000000000000001 RSI: ffffffff843a11c0 RDI: ffffffff843a1180 RBP: dffffc0000000000 R08: dffffc0000000000 R09: ffffed100da36d46 R10: dfffe9100da36d47 R11: ffffffff83cf0950 R12: 0000000000000000 R13: 1ffff11000ccc1e0 R14: ffffffff8542af28 R15: dffffc0000000000 ... 
Allocated by task 146: __kasan_kmalloc+0xc4/0xf0 sk_prot_alloc+0xdd/0x1a0 sk_alloc+0x2d/0x4e0 rose_create+0x7b/0x330 __sock_create+0x2dd/0x640 __sys_socket+0xc7/0x270 __x64_sys_socket+0x71/0x80 do_syscall_64+0x43/0x90 entry_SYSCALL_64_after_hwframe+0x46/0xb0 Freed by task 152: kasan_set_track+0x4c/0x70 kasan_set_free_info+0x1f/0x40 ____kasan_slab_free+0x124/0x190 kfree+0xd3/0x270 __sk_destruct+0x314/0x460 rose_release+0x2fa/0x3b0 sock_close+0xcb/0x230 __fput+0x2d9/0x650 task_work_run+0xd6/0x160 exit_to_user_mode_loop+0xc7/0xd0 exit_to_user_mode_prepare+0x4e/0x80 syscall_exit_to_user_mode+0x20/0x40 do_syscall_64+0x4f/0x90 entry_SYSCALL_64_after_hwframe+0x46/0xb0 This patch adds refcount of sock when we use functions such as rose_start_heartbeat() and so on to start timer, and decreases the refcount of sock when timer is finished or deleted by functions such as rose_stop_heartbeat() and so on. As a result, the UAF bugs could be mitigated. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Duoming Zhou Tested-by: Duoming Zhou Link: https://lore.kernel.org/r/20220629002640.5693-1-duoming@zju.edu.cn Signed-off-by: Paolo Abeni Signed-off-by: Greg Kroah-Hartman --- net/rose/rose_timer.c | 34 +++++++++++++++++++--------------- 1 file changed, 19 insertions(+), 15 deletions(-) diff --git a/net/rose/rose_timer.c b/net/rose/rose_timer.c index b3138fc2e552..f06ddbed3fed 100644 --- a/net/rose/rose_timer.c +++ b/net/rose/rose_timer.c @@ -31,89 +31,89 @@ static void rose_idletimer_expiry(struct timer_list *); void rose_start_heartbeat(struct sock *sk) { - del_timer(&sk->sk_timer); + sk_stop_timer(sk, &sk->sk_timer); sk->sk_timer.function = rose_heartbeat_expiry; sk->sk_timer.expires = jiffies + 5 * HZ; - add_timer(&sk->sk_timer); + sk_reset_timer(sk, &sk->sk_timer, sk->sk_timer.expires); } void rose_start_t1timer(struct sock *sk) { struct rose_sock *rose = rose_sk(sk); - del_timer(&rose->timer); + sk_stop_timer(sk, &rose->timer); rose->timer.function = rose_timer_expiry; rose->timer.expires = jiffies + rose->t1; - add_timer(&rose->timer); + sk_reset_timer(sk, &rose->timer, rose->timer.expires); } void rose_start_t2timer(struct sock *sk) { struct rose_sock *rose = rose_sk(sk); - del_timer(&rose->timer); + sk_stop_timer(sk, &rose->timer); rose->timer.function = rose_timer_expiry; rose->timer.expires = jiffies + rose->t2; - add_timer(&rose->timer); + sk_reset_timer(sk, &rose->timer, rose->timer.expires); } void rose_start_t3timer(struct sock *sk) { struct rose_sock *rose = rose_sk(sk); - del_timer(&rose->timer); + sk_stop_timer(sk, &rose->timer); rose->timer.function = rose_timer_expiry; rose->timer.expires = jiffies + rose->t3; - add_timer(&rose->timer); + sk_reset_timer(sk, &rose->timer, rose->timer.expires); } void rose_start_hbtimer(struct sock *sk) { struct rose_sock *rose = rose_sk(sk); - del_timer(&rose->timer); + sk_stop_timer(sk, &rose->timer); rose->timer.function = rose_timer_expiry; rose->timer.expires = jiffies + rose->hb; - add_timer(&rose->timer); + sk_reset_timer(sk, &rose->timer, rose->timer.expires); } void rose_start_idletimer(struct sock *sk) { struct rose_sock *rose = rose_sk(sk); - del_timer(&rose->idletimer); + sk_stop_timer(sk, &rose->idletimer); if (rose->idle > 0) { rose->idletimer.function = rose_idletimer_expiry; rose->idletimer.expires = jiffies + rose->idle; - add_timer(&rose->idletimer); + sk_reset_timer(sk, &rose->idletimer, rose->idletimer.expires); } } void rose_stop_heartbeat(struct sock *sk) { - del_timer(&sk->sk_timer); + sk_stop_timer(sk, &sk->sk_timer); } void 
rose_stop_timer(struct sock *sk) { - del_timer(&rose_sk(sk)->timer); + sk_stop_timer(sk, &rose_sk(sk)->timer); } void rose_stop_idletimer(struct sock *sk) { - del_timer(&rose_sk(sk)->idletimer); + sk_stop_timer(sk, &rose_sk(sk)->idletimer); } static void rose_heartbeat_expiry(struct timer_list *t) @@ -130,6 +130,7 @@ static void rose_heartbeat_expiry(struct timer_list *t) (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) { bh_unlock_sock(sk); rose_destroy_socket(sk); + sock_put(sk); return; } break; @@ -152,6 +153,7 @@ static void rose_heartbeat_expiry(struct timer_list *t) rose_start_heartbeat(sk); bh_unlock_sock(sk); + sock_put(sk); } static void rose_timer_expiry(struct timer_list *t) @@ -181,6 +183,7 @@ static void rose_timer_expiry(struct timer_list *t) break; } bh_unlock_sock(sk); + sock_put(sk); } static void rose_idletimer_expiry(struct timer_list *t) @@ -205,4 +208,5 @@ static void rose_idletimer_expiry(struct timer_list *t) sock_set_flag(sk, SOCK_DEAD); } bh_unlock_sock(sk); + sock_put(sk); } From c0a28f2ddf9a76cd6d94714a44bb25e120885e26 Mon Sep 17 00:00:00 2001 From: Jose Alonso Date: Tue, 28 Jun 2022 12:13:02 -0300 Subject: [PATCH 1186/1842] net: usb: ax88179_178a: Fix packet receiving commit f8ebb3ac881b17712e1d5967c97ab1806b16d3d6 upstream. This patch corrects packet receiving in ax88179_rx_fixup. - problem observed: ifconfig shows allways a lot of 'RX Errors' while packets are received normally. This occurs because ax88179_rx_fixup does not recognise properly the usb urb received. The packets are normally processed and at the end, the code exits with 'return 0', generating RX Errors. (pkt_cnt==-2 and ptk_hdr over field rx_hdr trying to identify another packet there) This is a usb urb received by "tcpdump -i usbmon2 -X" on a little-endian CPU: 0x0000: eeee f8e3 3b19 87a0 94de 80e3 daac 0800 ^ packet 1 start (pkt_len = 0x05ec) ^^^^ IP alignment pseudo header ^ ethernet packet start last byte ethernet packet v padding (8-bytes aligned) vvvv vvvv 0x05e0: c92d d444 1420 8a69 83dd 272f e82b 9811 0x05f0: eeee f8e3 3b19 87a0 94de 80e3 daac 0800 ... ^ packet 2 0x0be0: eeee f8e3 3b19 87a0 94de 80e3 daac 0800 ... 0x1130: 9d41 9171 8a38 0ec5 eeee f8e3 3b19 87a0 ... 0x1720: 8cfc 15ff 5e4c e85c eeee f8e3 3b19 87a0 ... 0x1d10: ecfa 2a3a 19ab c78c eeee f8e3 3b19 87a0 ... 0x2070: eeee f8e3 3b19 87a0 94de 80e3 daac 0800 ... ^ packet 7 0x2120: 7c88 4ca5 5c57 7dcc 0d34 7577 f778 7e0a 0x2130: f032 e093 7489 0740 3008 ec05 0000 0080 ====1==== ====2==== hdr_off ^ pkt_len = 0x05ec ^^^^ AX_RXHDR_*=0x00830 ^^^^ ^ pkt_len = 0 ^^^^ AX_RXHDR_DROP_ERR=0x80000000 ^^^^ ^ 0x2140: 3008 ec05 0000 0080 3008 5805 0000 0080 0x2150: 3008 ec05 0000 0080 3008 ec05 0000 0080 0x2160: 3008 5803 0000 0080 3008 c800 0000 0080 ===11==== ===12==== ===13==== ===14==== 0x2170: 0000 0000 0e00 3821 ^^^^ ^^^^ rx_hdr ^^^^ pkt_cnt=14 ^^^^ hdr_off=0x2138 ^^^^ ^^^^ padding The dump shows that pkt_cnt is the number of entrys in the per-packet metadata. It is "2 * packet count". Each packet have two entrys. The first have a valid value (pkt_len and AX_RXHDR_*) and the second have a dummy-header 0x80000000 (pkt_len=0 with AX_RXHDR_DROP_ERR). Why exists dummy-header for each packet?!? My guess is that this was done probably to align the entry for each packet to 64-bits and maintain compatibility with old firmware. There is also a padding (0x00000000) before the rx_hdr to align the end of rx_hdr to 64-bit. Note that packets have a alignment of 64-bits (8-bytes). 
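The 8-byte stride used to walk from one packet to the next can be sketched stand-alone (illustration only; the helper name and sample values are assumptions):

#include <stdio.h>

/* round a packet length up to the next 64-bit (8-byte) boundary */
static unsigned int pkt_len_plus_padd(unsigned int pkt_len)
{
	return (pkt_len + 7) & ~7u;	/* same effect as (pkt_len + 7) & 0xfff8 here */
}

int main(void)
{
	/* 0x05ec from the dump above: the next packet starts 0x05f0 later */
	printf("%u\n", pkt_len_plus_padd(0x05ec));	/* 1520 == 0x5f0 */
	printf("%u\n", pkt_len_plus_padd(64));		/* already aligned: 64 */
	return 0;
}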
This patch assumes that the dummy-header and the last padding are optional. So it preserves semantics and recognises the same valid packets as the current code. This patch was made using only the dumpfile information and tested with only one device: 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet Fixes: 57bc3d3ae8c1 ("net: usb: ax88179_178a: Fix out-of-bounds accesses in RX fixup") Fixes: e2ca90c276e1 ("ax88179_178a: ASIX AX88179_178A USB 3.0/2.0 to gigabit ethernet adapter driver") Signed-off-by: Jose Alonso Acked-by: Paolo Abeni Link: https://lore.kernel.org/r/d6970bb04bf67598af4d316eaeb1792040b18cfd.camel@gmail.com Signed-off-by: Paolo Abeni Signed-off-by: Greg Kroah-Hartman --- drivers/net/usb/ax88179_178a.c | 105 ++++++++++++++++++++++++--------- 1 file changed, 78 insertions(+), 27 deletions(-) diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c index 0b0cbcee1920..79a53fe245e5 100644 --- a/drivers/net/usb/ax88179_178a.c +++ b/drivers/net/usb/ax88179_178a.c @@ -1471,6 +1471,42 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb) * are bundled into this buffer and where we can find an array of * per-packet metadata (which contains elements encoded into u16). */ + + /* SKB contents for current firmware: + * + * ... + * + * + * ... + * + * + * + * where: + * contains pkt_len bytes: + * 2 bytes of IP alignment pseudo header + * packet received + * contains 4 bytes: + * pkt_len and fields AX_RXHDR_* + * 0-7 bytes to terminate at + * 8 bytes boundary (64-bit). + * 4 bytes to make rx_hdr terminate at + * 8 bytes boundary (64-bit) + * contains 4 bytes: + * pkt_len=0 and AX_RXHDR_DROP_ERR + * contains 4 bytes: + * pkt_cnt and hdr_off (offset of + * ) + * + * pkt_cnt is number of entrys in the per-packet metadata. + * In current firmware there is 2 entrys per packet. + * The first points to the packet and the + * second is a dummy header. + * This was done probably to align fields in 64-bit and + * maintain compatibility with old firmware. + * This code assumes that and are + * optional. + */ + if (skb->len < 4) return 0; skb_trim(skb, skb->len - 4); @@ -1484,51 +1520,66 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb) /* Make sure that the bounds of the metadata array are inside the SKB * (and in front of the counter at the end). 
*/ - if (pkt_cnt * 2 + hdr_off > skb->len) + if (pkt_cnt * 4 + hdr_off > skb->len) return 0; pkt_hdr = (u32 *)(skb->data + hdr_off); /* Packets must not overlap the metadata array */ skb_trim(skb, hdr_off); - for (; ; pkt_cnt--, pkt_hdr++) { + for (; pkt_cnt > 0; pkt_cnt--, pkt_hdr++) { + u16 pkt_len_plus_padd; u16 pkt_len; le32_to_cpus(pkt_hdr); pkt_len = (*pkt_hdr >> 16) & 0x1fff; + pkt_len_plus_padd = (pkt_len + 7) & 0xfff8; - if (pkt_len > skb->len) + /* Skip dummy header used for alignment + */ + if (pkt_len == 0) + continue; + + if (pkt_len_plus_padd > skb->len) return 0; /* Check CRC or runt packet */ - if (((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) == 0) && - pkt_len >= 2 + ETH_HLEN) { - bool last = (pkt_cnt == 0); - - if (last) { - ax_skb = skb; - } else { - ax_skb = skb_clone(skb, GFP_ATOMIC); - if (!ax_skb) - return 0; - } - ax_skb->len = pkt_len; - /* Skip IP alignment pseudo header */ - skb_pull(ax_skb, 2); - skb_set_tail_pointer(ax_skb, ax_skb->len); - ax_skb->truesize = pkt_len + sizeof(struct sk_buff); - ax88179_rx_checksum(ax_skb, pkt_hdr); - - if (last) - return 1; - - usbnet_skb_return(dev, ax_skb); + if ((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) || + pkt_len < 2 + ETH_HLEN) { + dev->net->stats.rx_errors++; + skb_pull(skb, pkt_len_plus_padd); + continue; } - /* Trim this packet away from the SKB */ - if (!skb_pull(skb, (pkt_len + 7) & 0xFFF8)) + /* last packet */ + if (pkt_len_plus_padd == skb->len) { + skb_trim(skb, pkt_len); + + /* Skip IP alignment pseudo header */ + skb_pull(skb, 2); + + skb->truesize = SKB_TRUESIZE(pkt_len_plus_padd); + ax88179_rx_checksum(skb, pkt_hdr); + return 1; + } + + ax_skb = skb_clone(skb, GFP_ATOMIC); + if (!ax_skb) return 0; + skb_trim(ax_skb, pkt_len); + + /* Skip IP alignment pseudo header */ + skb_pull(ax_skb, 2); + + skb->truesize = pkt_len_plus_padd + + SKB_DATA_ALIGN(sizeof(struct sk_buff)); + ax88179_rx_checksum(ax_skb, pkt_hdr); + usbnet_skb_return(dev, ax_skb); + + skb_pull(skb, pkt_len_plus_padd); } + + return 0; } static struct sk_buff * From f7b8fb4584456c5cdc3400b14d69d28a0c4a701a Mon Sep 17 00:00:00 2001 From: Jason Wang Date: Fri, 17 Jun 2022 15:29:49 +0800 Subject: [PATCH 1187/1842] virtio-net: fix race between ndo_open() and virtio_device_ready() commit 50c0ada627f56c92f5953a8bf9158b045ad026a1 upstream. We currently call virtio_device_ready() after netdev registration. Since ndo_open() can be called immediately after register_netdev, this means there exists a race between ndo_open() and virtio_device_ready(): the driver may start to use the device before DRIVER_OK which violates the spec. Fix this by switching to use register_netdevice() and protect the virtio_device_ready() with rtnl_lock() to make sure ndo_open() can only be called after virtio_device_ready(). Fixes: 4baf1e33d0842 ("virtio_net: enable VQs early") Signed-off-by: Jason Wang Message-Id: <20220617072949.30734-1-jasowang@redhat.com> Signed-off-by: Michael S. 
Tsirkin Signed-off-by: Greg Kroah-Hartman --- drivers/net/virtio_net.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index ad9064df3deb..37178b078ee3 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -3171,14 +3171,20 @@ static int virtnet_probe(struct virtio_device *vdev) } } - err = register_netdev(dev); + /* serialize netdev register + virtio_device_ready() with ndo_open() */ + rtnl_lock(); + + err = register_netdevice(dev); if (err) { pr_debug("virtio_net: registering device failed\n"); + rtnl_unlock(); goto free_failover; } virtio_device_ready(vdev); + rtnl_unlock(); + err = virtnet_cpu_notif_add(vi); if (err) { pr_debug("virtio_net: registering cpu notifier failed\n"); From 3f55912a1a98d3ab10c42293c7750f1d6fff36aa Mon Sep 17 00:00:00 2001 From: Dimitris Michailidis Date: Wed, 22 Jun 2022 17:02:34 -0700 Subject: [PATCH 1188/1842] selftests/net: pass ipv6_args to udpgso_bench's IPv6 TCP test commit b968080808f7f28b89aa495b7402ba48eb17ee93 upstream. udpgso_bench.sh has been running its IPv6 TCP test with IPv4 arguments since its initial conmit. Looks like a typo. Fixes: 3a687bef148d ("selftests: udp gso benchmark") Cc: willemb@google.com Signed-off-by: Dimitris Michailidis Acked-by: Willem de Bruijn Link: https://lore.kernel.org/r/20220623000234.61774-1-dmichail@fungible.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/net/udpgso_bench.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/net/udpgso_bench.sh b/tools/testing/selftests/net/udpgso_bench.sh index 80b5d352702e..dc932fd65363 100755 --- a/tools/testing/selftests/net/udpgso_bench.sh +++ b/tools/testing/selftests/net/udpgso_bench.sh @@ -120,7 +120,7 @@ run_all() { run_udp "${ipv4_args}" echo "ipv6" - run_tcp "${ipv4_args}" + run_tcp "${ipv6_args}" run_udp "${ipv6_args}" } From 0b2499c8014fc06733202a2bd36451b96ed7cd2b Mon Sep 17 00:00:00 2001 From: Doug Berger Date: Wed, 22 Jun 2022 20:02:04 -0700 Subject: [PATCH 1189/1842] net: dsa: bcm_sf2: force pause link settings commit 7c97bc0128b2eecc703106112679a69d446d1a12 upstream. The pause settings reported by the PHY should also be applied to the GMII port status override otherwise the switch will not generate pause frames towards the link partner despite the advertisement saying otherwise. Fixes: 246d7f773c13 ("net: dsa: add Broadcom SF2 switch driver") Signed-off-by: Doug Berger Signed-off-by: Florian Fainelli Link: https://lore.kernel.org/r/20220623030204.1966851-1-f.fainelli@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/dsa/bcm_sf2.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c index b712b4f27efd..c6563d212476 100644 --- a/drivers/net/dsa/bcm_sf2.c +++ b/drivers/net/dsa/bcm_sf2.c @@ -774,6 +774,11 @@ static void bcm_sf2_sw_mac_link_up(struct dsa_switch *ds, int port, if (duplex == DUPLEX_FULL) reg |= DUPLX_MODE; + if (tx_pause) + reg |= TXFLOW_CNTL; + if (rx_pause) + reg |= RXFLOW_CNTL; + core_writel(priv, reg, offset); } From bec1be0a745ab420718217e3e0d9542a75108989 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Wed, 22 Jun 2022 21:20:39 -0700 Subject: [PATCH 1190/1842] net: tun: unlink NAPI from device on destruction commit 3b9bc84d311104906d2b4995a9a02d7b7ddab2db upstream. Syzbot found a race between tun file and device destruction. 
NAPIs live in struct tun_file which can get destroyed before the netdev so we have to del them explicitly. The current code is missing deleting the NAPI if the queue was detached first. Fixes: 943170998b20 ("tun: enable NAPI for TUN/TAP driver") Reported-by: syzbot+b75c138e9286ac742647@syzkaller.appspotmail.com Link: https://lore.kernel.org/r/20220623042039.2274708-1-kuba@kernel.org Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/tun.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 55ce141c93c7..aa8ee7542c3c 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -733,6 +733,7 @@ static void tun_detach_all(struct net_device *dev) sock_put(&tfile->sk); } list_for_each_entry_safe(tfile, tmp, &tun->disabled, next) { + tun_napi_del(tfile); tun_enable_queue(tfile); tun_queue_purge(tfile); xdp_rxq_info_unreg(&tfile->xdp_rxq); From c70ca16f72b21d5e33d030b4f2da6df3abf0f293 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Wed, 22 Jun 2022 21:21:05 -0700 Subject: [PATCH 1191/1842] net: tun: stop NAPI when detaching queues commit a8fc8cb5692aebb9c6f7afd4265366d25dcd1d01 upstream. While looking at a syzbot report I noticed the NAPI only gets disabled before it's deleted. I think that user can detach the queue before destroying the device and the NAPI will never be stopped. Fixes: 943170998b20 ("tun: enable NAPI for TUN/TAP driver") Acked-by: Petar Penkov Link: https://lore.kernel.org/r/20220623042105.2274812-1-kuba@kernel.org Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/tun.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/drivers/net/tun.c b/drivers/net/tun.c index aa8ee7542c3c..a2715ffc7c9c 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -279,6 +279,12 @@ static void tun_napi_init(struct tun_struct *tun, struct tun_file *tfile, } } +static void tun_napi_enable(struct tun_file *tfile) +{ + if (tfile->napi_enabled) + napi_enable(&tfile->napi); +} + static void tun_napi_disable(struct tun_file *tfile) { if (tfile->napi_enabled) @@ -659,8 +665,10 @@ static void __tun_detach(struct tun_file *tfile, bool clean) if (clean) { RCU_INIT_POINTER(tfile->tun, NULL); sock_put(&tfile->sk); - } else + } else { tun_disable_queue(tun, tfile); + tun_napi_disable(tfile); + } synchronize_net(); tun_flow_delete_by_queue(tun, tun->numqueues + 1); @@ -814,6 +822,7 @@ static int tun_attach(struct tun_struct *tun, struct file *file, if (tfile->detached) { tun_enable_queue(tfile); + tun_napi_enable(tfile); } else { sock_hold(&tfile->sk); tun_napi_init(tun, tfile, napi, napi_frags); From 9c06d84855bdb38fd9160dcb5708db482c358c73 Mon Sep 17 00:00:00 2001 From: Enguerrand de Ribaucourt Date: Thu, 23 Jun 2022 15:46:44 +0200 Subject: [PATCH 1192/1842] net: dp83822: disable false carrier interrupt commit c96614eeab663646f57f67aa591e015abd8bd0ba upstream. When unplugging an Ethernet cable, false carrier events were produced by the PHY at a very high rate. Once the false carrier counter full, an interrupt was triggered every few clock cycles until the cable was replugged. This resulted in approximately 10k/s interrupts. Since the false carrier counter (FCSCR) is never used, we can safely disable this interrupt. In addition to improving performance, this also solved MDIO read timeouts I was randomly encountering with an i.MX8 fec MAC because of the interrupt flood. The interrupt count and MDIO timeout fix were tested on a v5.4.110 kernel. 
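For reference, the interrupt-enable update keeps the usual read-modify-write shape. A minimal sketch of the resulting dp83822_config_intr() logic, where MISR1_REG is only a placeholder for the driver's MISR1 register constant and only the enable bits visible in the hunk below are listed:

        misr_status = phy_read(phydev, MISR1_REG);
        if (misr_status < 0)
                return misr_status;

        /* DP83822_FALSE_CARRIER_HF_INT_EN is intentionally left out so the
         * never-read false carrier counter can no longer flood us */
        misr_status |= (DP83822_RX_ERR_HF_INT_EN |
                        DP83822_LINK_STAT_INT_EN |
                        DP83822_ENERGY_DET_INT_EN |
                        DP83822_LINK_QUAL_INT_EN);

        return phy_write(phydev, MISR1_REG, misr_status);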
Fixes: 87461f7a58ab ("net: phy: DP83822 initial driver submission") Signed-off-by: Enguerrand de Ribaucourt Reviewed-by: Andrew Lunn Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/phy/dp83822.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c index 3d75b98f3051..29fabe15cdcf 100644 --- a/drivers/net/phy/dp83822.c +++ b/drivers/net/phy/dp83822.c @@ -244,7 +244,6 @@ static int dp83822_config_intr(struct phy_device *phydev) return misr_status; misr_status |= (DP83822_RX_ERR_HF_INT_EN | - DP83822_FALSE_CARRIER_HF_INT_EN | DP83822_LINK_STAT_INT_EN | DP83822_ENERGY_DET_INT_EN | DP83822_LINK_QUAL_INT_EN); From a42bd00f00357ba43c1b0b5235dc62d0a0dee936 Mon Sep 17 00:00:00 2001 From: Enguerrand de Ribaucourt Date: Thu, 23 Jun 2022 15:46:45 +0200 Subject: [PATCH 1193/1842] net: dp83822: disable rx error interrupt commit 0e597e2affb90d6ea48df6890d882924acf71e19 upstream. Some RX errors, notably when disconnecting the cable, increase the RCSR register. Once half full (0x7fff), an interrupt flood is generated. I measured ~3k/s interrupts even after the RX errors transfer was stopped. Since we don't read and clear the RCSR register, we should disable this interrupt. Fixes: 87461f7a58ab ("net: phy: DP83822 initial driver submission") Signed-off-by: Enguerrand de Ribaucourt Reviewed-by: Andrew Lunn Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/phy/dp83822.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c index 29fabe15cdcf..3a8849716459 100644 --- a/drivers/net/phy/dp83822.c +++ b/drivers/net/phy/dp83822.c @@ -243,8 +243,7 @@ static int dp83822_config_intr(struct phy_device *phydev) if (misr_status < 0) return misr_status; - misr_status |= (DP83822_RX_ERR_HF_INT_EN | - DP83822_LINK_STAT_INT_EN | + misr_status |= (DP83822_LINK_STAT_INT_EN | DP83822_ENERGY_DET_INT_EN | DP83822_LINK_QUAL_INT_EN); From 9de276dfb20c797c8a495fcb6e9720a1d799041e Mon Sep 17 00:00:00 2001 From: Kamal Heib Date: Wed, 25 May 2022 16:20:29 +0300 Subject: [PATCH 1194/1842] RDMA/qedr: Fix reporting QP timeout attribute MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 118f767413ada4eef7825fbd4af7c0866f883441 upstream. Make sure to save the passed QP timeout attribute when the QP gets modified, so when calling query QP the right value is reported and not the converted value that is required by the firmware. This issue was found while running the pyverbs tests. 
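A small worked example of why the converted value cannot simply be reported back (sketch only, based on the conversion visible in the hunk below):

        u8  attr_timeout;        /* IB verbs attribute as passed by the ULP */
        u32 fw_ack_timeout;      /* value programmed into the firmware */

        fw_ack_timeout = attr_timeout ?
                         1 << max_t(int, attr_timeout - 8, 0) : 0;

        /* attr_timeout = 14 yields 64, but attr_timeout = 1..8 all yield 1,
         * so the original attribute cannot be recovered from the firmware
         * field; caching it in qp->timeout is what lets qedr_query_qp()
         * report the exact value that was set. */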
Fixes: cecbcddf6461 ("qedr: Add support for QP verbs") Link: https://lore.kernel.org/r/20220525132029.84813-1-kamalheib1@gmail.com Signed-off-by: Kamal Heib Acked-by: Michal Kalderon  Signed-off-by: Leon Romanovsky Signed-off-by: Greg Kroah-Hartman --- drivers/infiniband/hw/qedr/qedr.h | 1 + drivers/infiniband/hw/qedr/verbs.c | 4 +++- 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/hw/qedr/qedr.h b/drivers/infiniband/hw/qedr/qedr.h index 9dde70373a55..8ef6eecc42a0 100644 --- a/drivers/infiniband/hw/qedr/qedr.h +++ b/drivers/infiniband/hw/qedr/qedr.h @@ -418,6 +418,7 @@ struct qedr_qp { u32 sq_psn; u32 qkey; u32 dest_qp_num; + u8 timeout; /* Relevant to qps created from kernel space only (ULPs) */ u8 prev_wqe_size; diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c index eeb87f31cd25..f7b97b8e81a4 100644 --- a/drivers/infiniband/hw/qedr/verbs.c +++ b/drivers/infiniband/hw/qedr/verbs.c @@ -2622,6 +2622,8 @@ int qedr_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr, 1 << max_t(int, attr->timeout - 8, 0); else qp_params.ack_timeout = 0; + + qp->timeout = attr->timeout; } if (attr_mask & IB_QP_RETRY_CNT) { @@ -2781,7 +2783,7 @@ int qedr_query_qp(struct ib_qp *ibqp, rdma_ah_set_dgid_raw(&qp_attr->ah_attr, ¶ms.dgid.bytes[0]); rdma_ah_set_port_num(&qp_attr->ah_attr, 1); rdma_ah_set_sl(&qp_attr->ah_attr, 0); - qp_attr->timeout = params.timeout; + qp_attr->timeout = qp->timeout; qp_attr->rnr_retry = params.rnr_retry; qp_attr->retry_cnt = params.retry_cnt; qp_attr->min_rnr_timer = params.min_rnr_nak_timer; From b0cab8b517aeaf2592c3479294f934209c41a26f Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Tue, 21 Jun 2022 09:25:44 +0400 Subject: [PATCH 1195/1842] RDMA/cm: Fix memory leak in ib_cm_insert_listen commit 2990f223ffa7bb25422956b9f79f9176a5b38346 upstream. cm_alloc_id_priv() allocates resource for the cm_id_priv. When cm_init_listen() fails it doesn't free it, leading to memory leak. Add the missing error unwind. Fixes: 98f67156a80f ("RDMA/cm: Simplify establishing a listen cm_id") Link: https://lore.kernel.org/r/20220621052546.4821-1-linmq006@gmail.com Signed-off-by: Miaoqian Lin Signed-off-by: Jason Gunthorpe Signed-off-by: Greg Kroah-Hartman --- drivers/infiniband/core/cm.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c index ee568bdf3c78..3cc7a23fa69f 100644 --- a/drivers/infiniband/core/cm.c +++ b/drivers/infiniband/core/cm.c @@ -1280,8 +1280,10 @@ struct ib_cm_id *ib_cm_insert_listen(struct ib_device *device, return ERR_CAST(cm_id_priv); err = cm_init_listen(cm_id_priv, service_id, 0); - if (err) + if (err) { + ib_destroy_cm_id(&cm_id_priv->id); return ERR_PTR(err); + } spin_lock_irq(&cm_id_priv->lock); listen_id_priv = cm_insert_listen(cm_id_priv, cm_handler); From fae2a9fb1eaf348ad8732f90d42ebbb971bd7e95 Mon Sep 17 00:00:00 2001 From: Tao Liu Date: Mon, 27 Jun 2022 22:00:04 +0800 Subject: [PATCH 1196/1842] linux/dim: Fix divide by 0 in RDMA DIM commit 0fe3dbbefb74a8575f61d7801b08dbc50523d60d upstream. Fix a divide 0 error in rdma_dim_stats_compare() when prev->cpe_ratio == 0. 
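The guarded macro short-circuits before the division ever happens. A stand-alone user-space check of the new behaviour (illustration only, not kernel code):

        #include <stdio.h>
        #include <stdlib.h>

        /* guarded form introduced by this patch */
        #define IS_SIGNIFICANT_DIFF(val, ref) \
                ((ref) && (((100UL * abs((val) - (ref))) / (ref)) > 10))

        int main(void)
        {
                printf("%d\n", IS_SIGNIFICANT_DIFF(5, 0));     /* 0: ref == 0, no division */
                printf("%d\n", IS_SIGNIFICANT_DIFF(120, 100)); /* 1: 20% difference */
                printf("%d\n", IS_SIGNIFICANT_DIFF(105, 100)); /* 0: 5% difference */
                return 0;
        }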
CallTrace: Hardware name: H3C R4900 G3/RS33M2C9S, BIOS 2.00.37P21 03/12/2020 task: ffff880194b78000 task.stack: ffffc90006714000 RIP: 0010:backport_rdma_dim+0x10e/0x240 [mlx_compat] RSP: 0018:ffff880c10e83ec0 EFLAGS: 00010202 RAX: 0000000000002710 RBX: ffff88096cd7f780 RCX: 0000000000000064 RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000000000001 RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000000 R12: 000000001d7c6c09 R13: ffff88096cd7f780 R14: ffff880b174fe800 R15: 0000000000000000 FS: 0000000000000000(0000) GS:ffff880c10e80000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00000000a0965b00 CR3: 000000000200a003 CR4: 00000000007606e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: ib_poll_handler+0x43/0x80 [ib_core] irq_poll_softirq+0xae/0x110 __do_softirq+0xd1/0x28c irq_exit+0xde/0xf0 do_IRQ+0x54/0xe0 common_interrupt+0x8f/0x8f ? cpuidle_enter_state+0xd9/0x2a0 ? cpuidle_enter_state+0xc7/0x2a0 ? do_idle+0x170/0x1d0 ? cpu_startup_entry+0x6f/0x80 ? start_secondary+0x1b9/0x210 ? secondary_startup_64+0xa5/0xb0 Code: 0f 87 e1 00 00 00 8b 4c 24 14 44 8b 43 14 89 c8 4d 63 c8 44 29 c0 99 31 d0 29 d0 31 d2 48 98 48 8d 04 80 48 8d 04 80 48 c1 e0 02 <49> f7 f1 48 83 f8 0a 0f 86 c1 00 00 00 44 39 c1 7f 10 48 89 df RIP: backport_rdma_dim+0x10e/0x240 [mlx_compat] RSP: ffff880c10e83ec0 Fixes: f4915455dcf0 ("linux/dim: Implement RDMA adaptive moderation (DIM)") Link: https://lore.kernel.org/r/20220627140004.3099-1-thomas.liu@ucloud.cn Signed-off-by: Tao Liu Reviewed-by: Max Gurtovoy Acked-by: Leon Romanovsky Signed-off-by: Jason Gunthorpe Signed-off-by: Greg Kroah-Hartman --- include/linux/dim.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/dim.h b/include/linux/dim.h index b698266d0035..6c5733981563 100644 --- a/include/linux/dim.h +++ b/include/linux/dim.h @@ -21,7 +21,7 @@ * We consider 10% difference as significant. */ #define IS_SIGNIFICANT_DIFF(val, ref) \ - (((100UL * abs((val) - (ref))) / (ref)) > 10) + ((ref) && (((100UL * abs((val) - (ref))) / (ref)) > 10)) /* * Calculate the gap between two values. From eb1757ca20b8eff7b20373c47198b17bf6fb581d Mon Sep 17 00:00:00 2001 From: Oliver Neukum Date: Tue, 28 Jun 2022 11:35:17 +0200 Subject: [PATCH 1197/1842] usbnet: fix memory allocation in helpers commit e65af5403e462ccd7dff6a045a886c64da598c2e upstream. usbnet provides some helper functions that are also used in the context of reset() operations. During a reset the other drivers on a device are unable to operate. As that can be block drivers, a driver for another interface cannot use paging in its memory allocations without risking a deadlock. Use GFP_NOIO in the helpers. 
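An alternative way to get the same guarantee is to scope an entire reset path as NOIO so that nested GFP_KERNEL allocations are demoted automatically. A minimal sketch, not what this patch does, with example_reset() being a purely hypothetical caller:

        static int example_reset(struct usbnet *dev, size_t size)
        {
                unsigned int noio_flags = memalloc_noio_save();
                void *buf = kmalloc(size, GFP_KERNEL); /* acts as GFP_NOIO here */
                int ret = -ENOMEM;

                if (buf) {
                        /* ... issue the control transfers using buf ... */
                        kfree(buf);
                        ret = 0;
                }
                memalloc_noio_restore(noio_flags);
                return ret;
        }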
Fixes: 877bd862f32b8 ("usbnet: introduce usbnet 3 command helpers") Signed-off-by: Oliver Neukum Link: https://lore.kernel.org/r/20220628093517.7469-1-oneukum@suse.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/usb/usbnet.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c index 402390b1a66b..74a833ad7aa9 100644 --- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -1969,7 +1969,7 @@ static int __usbnet_read_cmd(struct usbnet *dev, u8 cmd, u8 reqtype, cmd, reqtype, value, index, size); if (size) { - buf = kmalloc(size, GFP_KERNEL); + buf = kmalloc(size, GFP_NOIO); if (!buf) goto out; } @@ -2001,7 +2001,7 @@ static int __usbnet_write_cmd(struct usbnet *dev, u8 cmd, u8 reqtype, cmd, reqtype, value, index, size); if (data) { - buf = kmemdup(data, size, GFP_KERNEL); + buf = kmemdup(data, size, GFP_NOIO); if (!buf) goto out; } else { From db82bb6054048fac62809aeed21950b05335de06 Mon Sep 17 00:00:00 2001 From: YueHaibing Date: Tue, 28 Jun 2022 11:31:34 +0800 Subject: [PATCH 1198/1842] net: ipv6: unexport __init-annotated seg6_hmac_net_init() commit 53ad46169fe2996fe1b623ba6c9c4fa33847876f upstream. As of commit 5801f064e351 ("net: ipv6: unexport __init-annotated seg6_hmac_init()"), EXPORT_SYMBOL and __init is a bad combination because the .init.text section is freed up after the initialization. Hence, modules cannot use symbols annotated __init. The access to a freed symbol may end up with kernel panic. This remove the EXPORT_SYMBOL to fix modpost warning: WARNING: modpost: vmlinux.o(___ksymtab+seg6_hmac_net_init+0x0): Section mismatch in reference from the variable __ksymtab_seg6_hmac_net_init to the function .init.text:seg6_hmac_net_init() The symbol seg6_hmac_net_init is exported and annotated __init Fix this by removing the __init annotation of seg6_hmac_net_init or drop the export. Fixes: bf355b8d2c30 ("ipv6: sr: add core files for SR HMAC support") Reported-by: Hulk Robot Signed-off-by: YueHaibing Link: https://lore.kernel.org/r/20220628033134.21088-1-yuehaibing@huawei.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- net/ipv6/seg6_hmac.c | 1 - 1 file changed, 1 deletion(-) diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c index b9179708e3c1..552bce1fdfb9 100644 --- a/net/ipv6/seg6_hmac.c +++ b/net/ipv6/seg6_hmac.c @@ -409,7 +409,6 @@ int __net_init seg6_hmac_net_init(struct net *net) return 0; } -EXPORT_SYMBOL(seg6_hmac_net_init); void seg6_hmac_exit(void) { From 208ff7967534329a8a2e1bba1e20e0e29b6d90c6 Mon Sep 17 00:00:00 2001 From: Alexey Khoroshilov Date: Sat, 25 Jun 2022 23:52:43 +0300 Subject: [PATCH 1199/1842] NFSD: restore EINVAL error translation in nfsd_commit() commit 8a9ffb8c857c2c99403bd6483a5a005fed5c0773 upstream. commit 555dbf1a9aac ("nfsd: Replace use of rwsem with errseq_t") incidentally broke translation of -EINVAL to nfserr_notsupp. The patch restores that. Found by Linux Verification Center (linuxtesting.org) with SVACE. 
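The breakage came from translating err2 once more, unconditionally, after the switch, which overwrote the nfserr_notsupp chosen for -EINVAL. Condensed, the corrected flow looks like this (simplified sketch, not the full nfsd_commit() body):

        switch (err2) {
        case 0:
                err2 = filemap_check_wb_err(nf->nf_file->f_mapping, since);
                err = nfserrno(err2);
                break;
        case -EINVAL:
                err = nfserr_notsupp;
                break;
        default:
                err = nfserrno(err2);
        }
        /* no trailing err = nfserrno(err2) after the switch: that is what
         * used to clobber the -EINVAL translation */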
Signed-off-by: Alexey Khoroshilov Fixes: 555dbf1a9aac ("nfsd: Replace use of rwsem with errseq_t") Signed-off-by: Chuck Lever Signed-off-by: Greg Kroah-Hartman --- fs/nfsd/vfs.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c index 548ebc913f92..c852bb5ff212 100644 --- a/fs/nfsd/vfs.c +++ b/fs/nfsd/vfs.c @@ -1156,6 +1156,7 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp, nfsd_net_id)); err2 = filemap_check_wb_err(nf->nf_file->f_mapping, since); + err = nfserrno(err2); break; case -EINVAL: err = nfserr_notsupp; @@ -1163,8 +1164,8 @@ nfsd_commit(struct svc_rqst *rqstp, struct svc_fh *fhp, default: nfsd_reset_boot_verifier(net_generic(nf->nf_net, nfsd_net_id)); + err = nfserrno(err2); } - err = nfserrno(err2); } else nfsd_copy_boot_verifier(verf, net_generic(nf->nf_net, nfsd_net_id)); From 653bdcd833b7505987bbda852144c8b6a4b2ab3a Mon Sep 17 00:00:00 2001 From: Jason Wang Date: Mon, 20 Jun 2022 13:11:14 +0800 Subject: [PATCH 1200/1842] caif_virtio: fix race between virtio_device_ready() and ndo_open() commit 11a37eb66812ce6a06b79223ad530eb0e1d7294d upstream. We currently depend on probe() calling virtio_device_ready() - which happens after netdev registration. Since ndo_open() can be called immediately after register_netdev, this means there exists a race between ndo_open() and virtio_device_ready(): the driver may start to use the device (e.g. TX) before DRIVER_OK which violates the spec. Fix this by switching to use register_netdevice() and protect the virtio_device_ready() with rtnl_lock() to make sure ndo_open() can only be called after virtio_device_ready(). Fixes: 0d2e1a2926b18 ("caif_virtio: Introduce caif over virtio") Signed-off-by: Jason Wang Message-Id: <20220620051115.3142-3-jasowang@redhat.com> Signed-off-by: Michael S. Tsirkin Signed-off-by: Greg Kroah-Hartman --- drivers/net/caif/caif_virtio.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/net/caif/caif_virtio.c b/drivers/net/caif/caif_virtio.c index 47a6d62b7511..a701932f5cc2 100644 --- a/drivers/net/caif/caif_virtio.c +++ b/drivers/net/caif/caif_virtio.c @@ -723,13 +723,21 @@ static int cfv_probe(struct virtio_device *vdev) /* Carrier is off until netdevice is opened */ netif_carrier_off(netdev); + /* serialize netdev register + virtio_device_ready() with ndo_open() */ + rtnl_lock(); + /* register Netdev */ - err = register_netdev(netdev); + err = register_netdevice(netdev); if (err) { + rtnl_unlock(); dev_err(&vdev->dev, "Unable to register netdev (%d)\n", err); goto err; } + virtio_device_ready(vdev); + + rtnl_unlock(); + debugfs_init(cfv); return 0; From e65027fdebbacd40595e96ef7b5d2418f71bddf2 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 26 May 2022 12:28:56 +0400 Subject: [PATCH 1201/1842] PM / devfreq: exynos-ppmu: Fix refcount leak in of_get_devfreq_events commit f44b799603a9b5d2e375b0b2d54dd0b791eddfc2 upstream. of_get_child_by_name() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. This function only calls of_node_put() in normal path, missing it in error paths. Add missing of_node_put() to avoid refcount leak. 
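of_get_child_by_name() hands back the node with its refcount already raised, so every exit path has to drop it. Heavily condensed, the fixed function follows this shape (sketch, not the literal driver code):

        events_np = of_get_child_by_name(np, "events");
        if (!events_np)
                return -EINVAL;

        desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL);
        if (!desc) {
                of_node_put(events_np);   /* error paths must drop the ref too */
                return -ENOMEM;
        }

        /* ... walk the child nodes and fill in desc ... */

        of_node_put(events_np);           /* normal-path release */
        return 0;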
Fixes: f262f28c1470 ("PM / devfreq: event: Add devfreq_event class") Signed-off-by: Miaoqian Lin Signed-off-by: Chanwoo Choi Signed-off-by: Greg Kroah-Hartman --- drivers/devfreq/event/exynos-ppmu.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/devfreq/event/exynos-ppmu.c b/drivers/devfreq/event/exynos-ppmu.c index 17ed980d9099..d6da9c3e3106 100644 --- a/drivers/devfreq/event/exynos-ppmu.c +++ b/drivers/devfreq/event/exynos-ppmu.c @@ -514,15 +514,19 @@ static int of_get_devfreq_events(struct device_node *np, count = of_get_child_count(events_np); desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL); - if (!desc) + if (!desc) { + of_node_put(events_np); return -ENOMEM; + } info->num_events = count; of_id = of_match_device(exynos_ppmu_id_match, dev); if (of_id) info->ppmu_type = (enum exynos_ppmu_type)of_id->data; - else + else { + of_node_put(events_np); return -EINVAL; + } j = 0; for_each_child_of_node(events_np, node) { From 4b480a7940ff85674cde15dacd1764e69e7f680b Mon Sep 17 00:00:00 2001 From: Masahiro Yamada Date: Tue, 14 Jun 2022 02:09:00 +0900 Subject: [PATCH 1202/1842] s390: remove unneeded 'select BUILD_BIN2C' commit 25deecb21c18ee29e3be8ac6177b2a9504c33d2d upstream. Since commit 4c0f032d4963 ("s390/purgatory: Omit use of bin2c"), s390 builds the purgatory without using bin2c. Remove 'select BUILD_BIN2C' to avoid the unneeded build of bin2c. Fixes: 4c0f032d4963 ("s390/purgatory: Omit use of bin2c") Signed-off-by: Masahiro Yamada Link: https://lore.kernel.org/r/20220613170902.1775211-1-masahiroy@kernel.org Signed-off-by: Alexander Gordeev Signed-off-by: Greg Kroah-Hartman --- arch/s390/Kconfig | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index 896b68e541b2..878993982e39 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -507,7 +507,6 @@ config KEXEC config KEXEC_FILE bool "kexec file based system call" select KEXEC_CORE - select BUILD_BIN2C depends on CRYPTO depends on CRYPTO_SHA256 depends on CRYPTO_SHA256_S390 From 91d3bb82c43e8dfb68ef9e0d61d7bf27b0c7b5b0 Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Tue, 21 Jun 2022 14:01:41 +0200 Subject: [PATCH 1203/1842] netfilter: nft_dynset: restore set element counter when failing to update commit 05907f10e235680cc7fb196810e4ad3215d5e648 upstream. This patch fixes a race condition. nft_rhash_update() might fail for two reasons: - Element already exists in the hashtable. - Another packet won race to insert an entry in the hashtable. In both cases, new() has already bumped the counter via atomic_add_unless(), therefore, decrement the set element counter. 
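The invariant being restored is that every element admitted through atomic_add_unless() is matched by exactly one atomic_dec() on any path that throws that element away again. In outline (generic sketch; new_elem(), hash_insert() and destroy_elem() are hypothetical stand-ins, not nf_tables API):

        if (!atomic_add_unless(&set->nelems, 1, set->size))
                return false;                   /* set already full */

        new = new_elem(set, key);
        if (!new) {
                atomic_dec(&set->nelems);       /* nothing was inserted */
                return false;
        }

        prev = hash_insert(set, new);           /* existing element or NULL */
        if (prev) {
                destroy_elem(set, new);         /* keep the race winner instead */
                atomic_dec(&set->nelems);       /* new was counted, prev already is */
        }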
Fixes: 22fe54d5fefc ("netfilter: nf_tables: add support for dynamic set updates") Signed-off-by: Pablo Neira Ayuso Signed-off-by: Greg Kroah-Hartman --- net/netfilter/nft_set_hash.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c index 858c8d4d659a..a5cfb321ae23 100644 --- a/net/netfilter/nft_set_hash.c +++ b/net/netfilter/nft_set_hash.c @@ -142,6 +142,7 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key, /* Another cpu may race to insert the element with the same key */ if (prev) { nft_set_elem_destroy(set, he, true); + atomic_dec(&set->nelems); he = prev; } @@ -151,6 +152,7 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key, err2: nft_set_elem_destroy(set, he, true); + atomic_dec(&set->nelems); err1: return false; } From ac12337229eacd47c8a6ac59d3a026dd3ad3f058 Mon Sep 17 00:00:00 2001 From: Victor Nogueira Date: Thu, 23 Jun 2022 11:07:41 -0300 Subject: [PATCH 1204/1842] net/sched: act_api: Notify user space if any actions were flushed before error commit 76b39b94382f9e0a639e1c70c3253de248cc4c83 upstream. If during an action flush operation one of the actions is still being referenced, the flush operation is aborted and the kernel returns to user space with an error. However, if the kernel was able to flush, for example, 3 actions and failed on the fourth, the kernel will not notify user space that it deleted 3 actions before failing. This patch fixes that behaviour by notifying user space of how many actions were deleted before flush failed and by setting extack with a message describing what happened. Fixes: 55334a5db5cd ("net_sched: act: refuse to remove bound action outside") Signed-off-by: Victor Nogueira Acked-by: Jamal Hadi Salim Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- net/sched/act_api.c | 22 ++++++++++++++-------- 1 file changed, 14 insertions(+), 8 deletions(-) diff --git a/net/sched/act_api.c b/net/sched/act_api.c index 7b29aa1a3ce9..4ab9c2a6f650 100644 --- a/net/sched/act_api.c +++ b/net/sched/act_api.c @@ -302,7 +302,8 @@ static int tcf_idr_release_unsafe(struct tc_action *p) } static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb, - const struct tc_action_ops *ops) + const struct tc_action_ops *ops, + struct netlink_ext_ack *extack) { struct nlattr *nest; int n_i = 0; @@ -318,20 +319,25 @@ static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb, if (nla_put_string(skb, TCA_KIND, ops->kind)) goto nla_put_failure; + ret = 0; mutex_lock(&idrinfo->lock); idr_for_each_entry_ul(idr, p, tmp, id) { if (IS_ERR(p)) continue; ret = tcf_idr_release_unsafe(p); - if (ret == ACT_P_DELETED) { + if (ret == ACT_P_DELETED) module_put(ops->owner); - n_i++; - } else if (ret < 0) { - mutex_unlock(&idrinfo->lock); - goto nla_put_failure; - } + else if (ret < 0) + break; + n_i++; } mutex_unlock(&idrinfo->lock); + if (ret < 0) { + if (n_i) + NL_SET_ERR_MSG(extack, "Unable to flush all TC actions"); + else + goto nla_put_failure; + } ret = nla_put_u32(skb, TCA_FCNT, n_i); if (ret) @@ -352,7 +358,7 @@ int tcf_generic_walker(struct tc_action_net *tn, struct sk_buff *skb, struct tcf_idrinfo *idrinfo = tn->idrinfo; if (type == RTM_DELACTION) { - return tcf_del_walker(idrinfo, skb, ops); + return tcf_del_walker(idrinfo, skb, ops, extack); } else if (type == RTM_GETACTION) { return tcf_dump_walker(idrinfo, skb, cb); } else { From 7597ed348e62ad7e94814bda4c00dad286988123 Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Mon, 27 Jun 2022 
10:28:13 +0000 Subject: [PATCH 1205/1842] net: bonding: fix possible NULL deref in rlb code commit ab84db251c04d38b8dc7ee86e13d4050bedb1c88 upstream. syzbot has two reports involving the same root cause. bond_alb_initialize() must not set bond->alb_info.rlb_enabled if a memory allocation error is detected. Report 1: general protection fault, probably for non-canonical address 0xdffffc0000000002: 0000 [#1] PREEMPT SMP KASAN KASAN: null-ptr-deref in range [0x0000000000000010-0x0000000000000017] CPU: 0 PID: 12276 Comm: kworker/u4:10 Not tainted 5.19.0-rc3-syzkaller-00132-g3b89b511ea0c #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Workqueue: netns cleanup_net RIP: 0010:rlb_clear_slave+0x10e/0x690 drivers/net/bonding/bond_alb.c:393 Code: 8e fc 83 fb ff 0f 84 74 02 00 00 e8 cc 2a 8e fc 48 8b 44 24 08 89 dd 48 c1 e5 06 4c 8d 34 28 49 8d 7e 14 48 89 f8 48 c1 e8 03 <42> 0f b6 14 20 48 89 f8 83 e0 07 83 c0 03 38 d0 7c 08 84 d2 0f 85 RSP: 0018:ffffc90018a8f678 EFLAGS: 00010203 RAX: 0000000000000002 RBX: 0000000000000000 RCX: 0000000000000000 RDX: ffff88803375bb00 RSI: ffffffff84ec4ac4 RDI: 0000000000000014 RBP: 0000000000000000 R08: 0000000000000005 R09: 00000000ffffffff R10: 0000000000000000 R11: 0000000000000000 R12: dffffc0000000000 R13: ffff8880ac889000 R14: 0000000000000000 R15: ffff88815a668c80 FS: 0000000000000000(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00005597077e10b0 CR3: 0000000026668000 CR4: 00000000003506f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: bond_alb_deinit_slave+0x43c/0x6b0 drivers/net/bonding/bond_alb.c:1663 __bond_release_one.cold+0x383/0xd53 drivers/net/bonding/bond_main.c:2370 bond_slave_netdev_event drivers/net/bonding/bond_main.c:3778 [inline] bond_netdev_event+0x993/0xad0 drivers/net/bonding/bond_main.c:3889 notifier_call_chain+0xb5/0x200 kernel/notifier.c:87 call_netdevice_notifiers_info+0xb5/0x130 net/core/dev.c:1945 call_netdevice_notifiers_extack net/core/dev.c:1983 [inline] call_netdevice_notifiers net/core/dev.c:1997 [inline] unregister_netdevice_many+0x948/0x18b0 net/core/dev.c:10839 default_device_exit_batch+0x449/0x590 net/core/dev.c:11333 ops_exit_list+0x125/0x170 net/core/net_namespace.c:167 cleanup_net+0x4ea/0xb00 net/core/net_namespace.c:594 process_one_work+0x996/0x1610 kernel/workqueue.c:2289 worker_thread+0x665/0x1080 kernel/workqueue.c:2436 kthread+0x2e9/0x3a0 kernel/kthread.c:376 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:302 Report 2: general protection fault, probably for non-canonical address 0xdffffc0000000006: 0000 [#1] PREEMPT SMP KASAN KASAN: null-ptr-deref in range [0x0000000000000030-0x0000000000000037] CPU: 1 PID: 5206 Comm: syz-executor.1 Not tainted 5.18.0-syzkaller-12108-g58f9d52ff689 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 RIP: 0010:rlb_req_update_slave_clients+0x109/0x2f0 drivers/net/bonding/bond_alb.c:502 Code: 5d 18 8f fc 41 80 3e 00 0f 85 a5 01 00 00 89 d8 48 c1 e0 06 49 03 84 24 68 01 00 00 48 8d 78 30 49 89 c7 48 89 fa 48 c1 ea 03 <80> 3c 2a 00 0f 85 98 01 00 00 4d 39 6f 30 75 83 e8 22 18 8f fc 49 RSP: 0018:ffffc9000300ee80 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffc90016c11000 RDX: 0000000000000006 RSI: ffffffff84eb6bf3 RDI: 0000000000000030 RBP: dffffc0000000000 R08: 0000000000000005 R09: 00000000ffffffff R10: 0000000000000000 R11: 
0000000000000000 R12: ffff888027c80c80 R13: ffff88807d7ff800 R14: ffffed1004f901bd R15: 0000000000000000 FS: 00007f6f46c58700(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020010000 CR3: 00000000516cc000 CR4: 00000000003506e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: alb_fasten_mac_swap+0x886/0xa80 drivers/net/bonding/bond_alb.c:1070 bond_alb_handle_active_change+0x624/0x1050 drivers/net/bonding/bond_alb.c:1765 bond_change_active_slave+0xfa1/0x29b0 drivers/net/bonding/bond_main.c:1173 bond_select_active_slave+0x23f/0xa50 drivers/net/bonding/bond_main.c:1253 bond_enslave+0x3b34/0x53b0 drivers/net/bonding/bond_main.c:2159 do_set_master+0x1c8/0x220 net/core/rtnetlink.c:2577 rtnl_newlink_create net/core/rtnetlink.c:3380 [inline] __rtnl_newlink+0x13ac/0x17e0 net/core/rtnetlink.c:3580 rtnl_newlink+0x64/0xa0 net/core/rtnetlink.c:3593 rtnetlink_rcv_msg+0x43a/0xc90 net/core/rtnetlink.c:6089 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2501 netlink_unicast_kernel net/netlink/af_netlink.c:1319 [inline] netlink_unicast+0x543/0x7f0 net/netlink/af_netlink.c:1345 netlink_sendmsg+0x917/0xe10 net/netlink/af_netlink.c:1921 sock_sendmsg_nosec net/socket.c:714 [inline] sock_sendmsg+0xcf/0x120 net/socket.c:734 ____sys_sendmsg+0x6eb/0x810 net/socket.c:2492 ___sys_sendmsg+0xf3/0x170 net/socket.c:2546 __sys_sendmsg net/socket.c:2575 [inline] __do_sys_sendmsg net/socket.c:2584 [inline] __se_sys_sendmsg net/socket.c:2582 [inline] __x64_sys_sendmsg+0x132/0x220 net/socket.c:2582 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x46/0xb0 RIP: 0033:0x7f6f45a89109 Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f6f46c58168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e RAX: ffffffffffffffda RBX: 00007f6f45b9c030 RCX: 00007f6f45a89109 RDX: 0000000000000000 RSI: 0000000020000080 RDI: 0000000000000006 RBP: 00007f6f45ae308d R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 00007ffed99029af R14: 00007f6f46c58300 R15: 0000000000022000 Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: syzbot Signed-off-by: Eric Dumazet Cc: Jay Vosburgh Cc: Veaceslav Falico Cc: Andy Gospodarek Acked-by: Jay Vosburgh Link: https://lore.kernel.org/r/20220627102813.126264-1-edumazet@google.com Signed-off-by: Paolo Abeni Signed-off-by: Greg Kroah-Hartman --- drivers/net/bonding/bond_alb.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c index 0436aef9c9ef..152f76f86927 100644 --- a/drivers/net/bonding/bond_alb.c +++ b/drivers/net/bonding/bond_alb.c @@ -1279,12 +1279,12 @@ int bond_alb_initialize(struct bonding *bond, int rlb_enabled) return res; if (rlb_enabled) { - bond->alb_info.rlb_enabled = 1; res = rlb_initialize(bond); if (res) { tlb_deinitialize(bond); return res; } + bond->alb_info.rlb_enabled = 1; } else { bond->alb_info.rlb_enabled = 0; } From 63b2fe509f69b90168a75e04e14573dccf7984e6 Mon Sep 17 00:00:00 2001 From: Yevhen Orlov Date: Wed, 29 Jun 2022 04:29:14 +0300 Subject: [PATCH 1206/1842] net: bonding: fix use-after-free after 802.3ad slave unbind commit 
050133e1aa2cb49bb17be847d48a4431598ef562 upstream. commit 0622cab0341c ("bonding: fix 802.3ad aggregator reselection"), resolve case, when there is several aggregation groups in the same bond. bond_3ad_unbind_slave will invalidate (clear) aggregator when __agg_active_ports return zero. So, ad_clear_agg can be executed even, when num_of_ports!=0. Than bond_3ad_unbind_slave can be executed again for, previously cleared aggregator. NOTE: at this time bond_3ad_unbind_slave will not update slave ports list, because lag_ports==NULL. So, here we got slave ports, pointing to freed aggregator memory. Fix with checking actual number of ports in group (as was before commit 0622cab0341c ("bonding: fix 802.3ad aggregator reselection") ), before ad_clear_agg(). The KASAN logs are as follows: [ 767.617392] ================================================================== [ 767.630776] BUG: KASAN: use-after-free in bond_3ad_state_machine_handler+0x13dc/0x1470 [ 767.638764] Read of size 2 at addr ffff00011ba9d430 by task kworker/u8:7/767 [ 767.647361] CPU: 3 PID: 767 Comm: kworker/u8:7 Tainted: G O 5.15.11 #15 [ 767.655329] Hardware name: DNI AmazonGo1 A7040 board (DT) [ 767.660760] Workqueue: lacp_1 bond_3ad_state_machine_handler [ 767.666468] Call trace: [ 767.668930] dump_backtrace+0x0/0x2d0 [ 767.672625] show_stack+0x24/0x30 [ 767.675965] dump_stack_lvl+0x68/0x84 [ 767.679659] print_address_description.constprop.0+0x74/0x2b8 [ 767.685451] kasan_report+0x1f0/0x260 [ 767.689148] __asan_load2+0x94/0xd0 [ 767.692667] bond_3ad_state_machine_handler+0x13dc/0x1470 Fixes: 0622cab0341c ("bonding: fix 802.3ad aggregator reselection") Co-developed-by: Maksym Glubokiy Signed-off-by: Maksym Glubokiy Signed-off-by: Yevhen Orlov Acked-by: Jay Vosburgh Link: https://lore.kernel.org/r/20220629012914.361-1-yevhen.orlov@plvision.eu Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/bonding/bond_3ad.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c index c2cef7ba2671..325b20729d8b 100644 --- a/drivers/net/bonding/bond_3ad.c +++ b/drivers/net/bonding/bond_3ad.c @@ -2209,7 +2209,8 @@ void bond_3ad_unbind_slave(struct slave *slave) temp_aggregator->num_of_ports--; if (__agg_active_ports(temp_aggregator) == 0) { select_new_active_agg = temp_aggregator->is_active; - ad_clear_agg(temp_aggregator); + if (temp_aggregator->num_of_ports == 0) + ad_clear_agg(temp_aggregator); if (select_new_active_agg) { slave_info(bond->dev, slave->dev, "Removing an active aggregator\n"); /* select new active aggregator */ From 7d363362e006d67e3d086d1b46a23cda15c7dc66 Mon Sep 17 00:00:00 2001 From: Krzysztof Kozlowski Date: Mon, 27 Jun 2022 14:40:48 +0200 Subject: [PATCH 1207/1842] nfc: nfcmrvl: Fix irq_of_parse_and_map() return value commit 5a478a653b4cca148d5c89832f007ec0809d7e6d upstream. The irq_of_parse_and_map() returns 0 on failure, not a negative ERRNO. 
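The two common IRQ lookup helpers use different failure conventions, which is exactly what the old "ret < 0" check missed. Side by side (sketch; node and pdev stand for the usual probe context):

        unsigned int irq;
        int ret;

        /* irq_of_parse_and_map(): 0 means "no mapping", never negative */
        irq = irq_of_parse_and_map(node, 0);
        if (!irq)
                return -EINVAL;

        /* platform_get_irq(), by contrast, reports failure as -errno */
        ret = platform_get_irq(pdev, 0);
        if (ret < 0)
                return ret;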
Reported-by: Lv Ruyi Fixes: caf6e49bf6d0 ("NFC: nfcmrvl: add spi driver") Signed-off-by: Krzysztof Kozlowski Link: https://lore.kernel.org/r/20220627124048.296253-1-krzysztof.kozlowski@linaro.org Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/nfc/nfcmrvl/i2c.c | 6 +++--- drivers/nfc/nfcmrvl/spi.c | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/nfc/nfcmrvl/i2c.c b/drivers/nfc/nfcmrvl/i2c.c index 18cd96284b77..f81f1cae9324 100644 --- a/drivers/nfc/nfcmrvl/i2c.c +++ b/drivers/nfc/nfcmrvl/i2c.c @@ -186,9 +186,9 @@ static int nfcmrvl_i2c_parse_dt(struct device_node *node, pdata->irq_polarity = IRQF_TRIGGER_RISING; ret = irq_of_parse_and_map(node, 0); - if (ret < 0) { - pr_err("Unable to get irq, error: %d\n", ret); - return ret; + if (!ret) { + pr_err("Unable to get irq\n"); + return -EINVAL; } pdata->irq = ret; diff --git a/drivers/nfc/nfcmrvl/spi.c b/drivers/nfc/nfcmrvl/spi.c index 8e0ddb434770..1f4120e3314b 100644 --- a/drivers/nfc/nfcmrvl/spi.c +++ b/drivers/nfc/nfcmrvl/spi.c @@ -129,9 +129,9 @@ static int nfcmrvl_spi_parse_dt(struct device_node *node, } ret = irq_of_parse_and_map(node, 0); - if (ret < 0) { - pr_err("Unable to get irq, error: %d\n", ret); - return ret; + if (!ret) { + pr_err("Unable to get irq\n"); + return -EINVAL; } pdata->irq = ret; From 09f9946235308f03295fe3405a8f03d6f4c446c6 Mon Sep 17 00:00:00 2001 From: Michael Walle Date: Mon, 27 Jun 2022 19:06:42 +0200 Subject: [PATCH 1208/1842] NFC: nxp-nci: Don't issue a zero length i2c_master_read() commit eddd95b9423946aaacb55cac6a9b2cea8ab944fc upstream. There are packets which doesn't have a payload. In that case, the second i2c_master_read() will have a zero length. But because the NFC controller doesn't have any data left, it will NACK the I2C read and -ENXIO will be returned. In case there is no payload, just skip the second i2c master read. Fixes: 6be88670fc59 ("NFC: nxp-nci_i2c: Add I2C support to NXP NCI driver") Signed-off-by: Michael Walle Reviewed-by: Krzysztof Kozlowski Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- drivers/nfc/nxp-nci/i2c.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/nfc/nxp-nci/i2c.c b/drivers/nfc/nxp-nci/i2c.c index 9f60e4dc5a90..3943a30053b3 100644 --- a/drivers/nfc/nxp-nci/i2c.c +++ b/drivers/nfc/nxp-nci/i2c.c @@ -162,6 +162,9 @@ static int nxp_nci_i2c_nci_read(struct nxp_nci_i2c_phy *phy, skb_put_data(*skb, (void *)&header, NCI_CTRL_HDR_SIZE); + if (!header.plen) + return 0; + r = i2c_master_recv(client, skb_put(*skb, header.plen), header.plen); if (r != header.plen) { nfc_err(&client->dev, From 456bc338871c4a52117dd5ef29cce3745456d248 Mon Sep 17 00:00:00 2001 From: Xin Long Date: Fri, 24 Jun 2022 12:24:31 -0400 Subject: [PATCH 1209/1842] tipc: move bc link creation back to tipc_node_create commit cb8092d70a6f5f01ec1490fce4d35efed3ed996c upstream. Shuang Li reported a NULL pointer dereference crash: [] BUG: kernel NULL pointer dereference, address: 0000000000000068 [] RIP: 0010:tipc_link_is_up+0x5/0x10 [tipc] [] Call Trace: [] [] tipc_bcast_rcv+0xa2/0x190 [tipc] [] tipc_node_bc_rcv+0x8b/0x200 [tipc] [] tipc_rcv+0x3af/0x5b0 [tipc] [] tipc_udp_recv+0xc7/0x1e0 [tipc] It was caused by the 'l' passed into tipc_bcast_rcv() is NULL. When it creates a node in tipc_node_check_dest(), after inserting the new node into hashtable in tipc_node_create(), it creates the bc link. 
However, there is a gap between this insert and bc link creation, a bc packet may come in and get the node from the hashtable then try to dereference its bc link, which is NULL. This patch is to fix it by moving the bc link creation before inserting into the hashtable. Note that for a preliminary node becoming "real", the bc link creation should also be called before it's rehashed, as we don't create it for preliminary nodes. Fixes: 4cbf8ac2fe5a ("tipc: enable creating a "preliminary" node") Reported-by: Shuang Li Signed-off-by: Xin Long Acked-by: Jon Maloy Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- net/tipc/node.c | 41 ++++++++++++++++++++++------------------- 1 file changed, 22 insertions(+), 19 deletions(-) diff --git a/net/tipc/node.c b/net/tipc/node.c index e4452d55851f..60059827563a 100644 --- a/net/tipc/node.c +++ b/net/tipc/node.c @@ -456,8 +456,8 @@ struct tipc_node *tipc_node_create(struct net *net, u32 addr, u8 *peer_id, bool preliminary) { struct tipc_net *tn = net_generic(net, tipc_net_id); + struct tipc_link *l, *snd_l = tipc_bc_sndlink(net); struct tipc_node *n, *temp_node; - struct tipc_link *l; unsigned long intv; int bearer_id; int i; @@ -472,6 +472,16 @@ struct tipc_node *tipc_node_create(struct net *net, u32 addr, u8 *peer_id, goto exit; /* A preliminary node becomes "real" now, refresh its data */ tipc_node_write_lock(n); + if (!tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX, + tipc_link_min_win(snd_l), tipc_link_max_win(snd_l), + n->capabilities, &n->bc_entry.inputq1, + &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) { + pr_warn("Broadcast rcv link refresh failed, no memory\n"); + tipc_node_write_unlock_fast(n); + tipc_node_put(n); + n = NULL; + goto exit; + } n->preliminary = false; n->addr = addr; hlist_del_rcu(&n->hash); @@ -551,7 +561,16 @@ struct tipc_node *tipc_node_create(struct net *net, u32 addr, u8 *peer_id, n->signature = INVALID_NODE_SIG; n->active_links[0] = INVALID_BEARER_ID; n->active_links[1] = INVALID_BEARER_ID; - n->bc_entry.link = NULL; + if (!preliminary && + !tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX, + tipc_link_min_win(snd_l), tipc_link_max_win(snd_l), + n->capabilities, &n->bc_entry.inputq1, + &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) { + pr_warn("Broadcast rcv link creation failed, no memory\n"); + kfree(n); + n = NULL; + goto exit; + } tipc_node_get(n); timer_setup(&n->timer, tipc_node_timeout, 0); /* Start a slow timer anyway, crypto needs it */ @@ -1128,7 +1147,7 @@ void tipc_node_check_dest(struct net *net, u32 addr, bool *respond, bool *dupl_addr) { struct tipc_node *n; - struct tipc_link *l, *snd_l; + struct tipc_link *l; struct tipc_link_entry *le; bool addr_match = false; bool sign_match = false; @@ -1148,22 +1167,6 @@ void tipc_node_check_dest(struct net *net, u32 addr, return; tipc_node_write_lock(n); - if (unlikely(!n->bc_entry.link)) { - snd_l = tipc_bc_sndlink(net); - if (!tipc_link_bc_create(net, tipc_own_addr(net), - addr, peer_id, U16_MAX, - tipc_link_min_win(snd_l), - tipc_link_max_win(snd_l), - n->capabilities, - &n->bc_entry.inputq1, - &n->bc_entry.namedq, snd_l, - &n->bc_entry.link)) { - pr_warn("Broadcast rcv link creation failed, no mem\n"); - tipc_node_write_unlock_fast(n); - tipc_node_put(n); - return; - } - } le = &n->links[b->identity]; From b8def021ac7086202f26fdce55471db4794ec76f Mon Sep 17 00:00:00 2001 From: Tong Zhang Date: Sun, 26 Jun 2022 21:33:48 -0700 Subject: [PATCH 1210/1842] epic100: fix use after free on rmmod commit 
8ee9d82cd0a45e7d050ade598c9f33032a0f2891 upstream. epic_close() calls epic_rx() and uses dma buffer, but in epic_remove_one() we already freed the dma buffer. To fix this issue, reorder function calls like in the .probe function. BUG: KASAN: use-after-free in epic_rx+0xa6/0x7e0 [epic100] Call Trace: epic_rx+0xa6/0x7e0 [epic100] epic_close+0xec/0x2f0 [epic100] unregister_netdev+0x18/0x20 epic_remove_one+0xaa/0xf0 [epic100] Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: Yilun Wu Signed-off-by: Tong Zhang Reviewed-by: Francois Romieu Link: https://lore.kernel.org/r/20220627043351.25615-1-ztong0001@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/smsc/epic100.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/smsc/epic100.c b/drivers/net/ethernet/smsc/epic100.c index 51cd7dca91cd..3a7d2baec6c1 100644 --- a/drivers/net/ethernet/smsc/epic100.c +++ b/drivers/net/ethernet/smsc/epic100.c @@ -1513,14 +1513,14 @@ static void epic_remove_one(struct pci_dev *pdev) struct net_device *dev = pci_get_drvdata(pdev); struct epic_private *ep = netdev_priv(dev); + unregister_netdev(dev); dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, ep->tx_ring, ep->tx_ring_dma); dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, ep->rx_ring, ep->rx_ring_dma); - unregister_netdev(dev); pci_iounmap(pdev, ep->ioaddr); - pci_release_regions(pdev); free_netdev(dev); + pci_release_regions(pdev); pci_disable_device(pdev); /* pci_power_off(pdev, -1); */ } From c9fc52c1739e1d3d69660e30498f99b2b525de8f Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Thu, 30 Jun 2022 14:44:16 -0600 Subject: [PATCH 1211/1842] io_uring: ensure that send/sendmsg and recv/recvmsg check sqe->ioprio commit 73911426aaaadbae54fa72359b33a7b6a56947db upstream. All other opcodes correctly check if this is set and -EINVAL if it is and they don't support that field, for some reason the these were forgotten. This was unified a bit differently in the upstream tree, but had the same effect as making sure we error on this field. Rather than have a painful backport of the upstream commit, just fixup the mentioned opcodes. Signed-off-by: Jens Axboe Signed-off-by: Greg Kroah-Hartman --- fs/io_uring.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/fs/io_uring.c b/fs/io_uring.c index 2e12dcbc7b0f..023c3eb34dcc 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -4366,6 +4366,8 @@ static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) return -EINVAL; + if (unlikely(sqe->addr2 || sqe->splice_fd_in || sqe->ioprio)) + return -EINVAL; sr->msg_flags = READ_ONCE(sqe->msg_flags); sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); @@ -4601,6 +4603,8 @@ static int io_recvmsg_prep(struct io_kiocb *req, if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) return -EINVAL; + if (unlikely(sqe->addr2 || sqe->splice_fd_in || sqe->ioprio)) + return -EINVAL; sr->msg_flags = READ_ONCE(sqe->msg_flags); sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); From 59c51c3b545128a92ebfb6dbae990d3abee110e7 Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Fri, 24 Jun 2022 15:30:20 +0000 Subject: [PATCH 1212/1842] tunnels: do not assume mac header is set in skb_tunnel_check_pmtu() commit 853a7614880231747040cada91d2b8d2e995c51a upstream. Recently added debug in commit f9aefd6b2aa3 ("net: warn if mac header was not set") caught a bug in skb_tunnel_check_pmtu(), as shown in this syzbot report [1]. 
In ndo_start_xmit() paths, there is really no need to use skb->mac_header, because skb->data is supposed to point at it. [1] WARNING: CPU: 1 PID: 8604 at include/linux/skbuff.h:2784 skb_mac_header_len include/linux/skbuff.h:2784 [inline] WARNING: CPU: 1 PID: 8604 at include/linux/skbuff.h:2784 skb_tunnel_check_pmtu+0x5de/0x2f90 net/ipv4/ip_tunnel_core.c:413 Modules linked in: CPU: 1 PID: 8604 Comm: syz-executor.3 Not tainted 5.19.0-rc2-syzkaller-00443-g8720bd951b8e #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 RIP: 0010:skb_mac_header_len include/linux/skbuff.h:2784 [inline] RIP: 0010:skb_tunnel_check_pmtu+0x5de/0x2f90 net/ipv4/ip_tunnel_core.c:413 Code: 00 00 00 00 fc ff df 4c 89 fa 48 c1 ea 03 80 3c 02 00 0f 84 b9 fe ff ff 4c 89 ff e8 7c 0f d7 f9 e9 ac fe ff ff e8 c2 13 8a f9 <0f> 0b e9 28 fc ff ff e8 b6 13 8a f9 48 8b 54 24 70 48 b8 00 00 00 RSP: 0018:ffffc90002e4f520 EFLAGS: 00010212 RAX: 0000000000000324 RBX: ffff88804d5fd500 RCX: ffffc90005b52000 RDX: 0000000000040000 RSI: ffffffff87f05e3e RDI: 0000000000000003 RBP: ffffc90002e4f650 R08: 0000000000000003 R09: 000000000000ffff R10: 000000000000ffff R11: 0000000000000000 R12: 000000000000ffff R13: 0000000000000000 R14: 000000000000ffcd R15: 000000000000001f FS: 00007f3babba9700(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000020000080 CR3: 0000000075319000 CR4: 00000000003506e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: geneve_xmit_skb drivers/net/geneve.c:927 [inline] geneve_xmit+0xcf8/0x35d0 drivers/net/geneve.c:1107 __netdev_start_xmit include/linux/netdevice.h:4805 [inline] netdev_start_xmit include/linux/netdevice.h:4819 [inline] __dev_direct_xmit+0x500/0x730 net/core/dev.c:4309 dev_direct_xmit include/linux/netdevice.h:3007 [inline] packet_direct_xmit+0x1b8/0x2c0 net/packet/af_packet.c:282 packet_snd net/packet/af_packet.c:3073 [inline] packet_sendmsg+0x21f4/0x55d0 net/packet/af_packet.c:3104 sock_sendmsg_nosec net/socket.c:714 [inline] sock_sendmsg+0xcf/0x120 net/socket.c:734 ____sys_sendmsg+0x6eb/0x810 net/socket.c:2489 ___sys_sendmsg+0xf3/0x170 net/socket.c:2543 __sys_sendmsg net/socket.c:2572 [inline] __do_sys_sendmsg net/socket.c:2581 [inline] __se_sys_sendmsg net/socket.c:2579 [inline] __x64_sys_sendmsg+0x132/0x220 net/socket.c:2579 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x46/0xb0 RIP: 0033:0x7f3baaa89109 Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 RSP: 002b:00007f3babba9168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e RAX: ffffffffffffffda RBX: 00007f3baab9bf60 RCX: 00007f3baaa89109 RDX: 0000000000000000 RSI: 0000000020000a00 RDI: 0000000000000003 RBP: 00007f3baaae305d R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 R13: 00007ffe74f2543f R14: 00007f3babba9300 R15: 0000000000022000 Fixes: 4cb47a8644cc ("tunnels: PMTU discovery support for directly bridged IP packets") Reported-by: syzbot Signed-off-by: Eric Dumazet Cc: Stefano Brivio Reviewed-by: Stefano Brivio Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/ip_tunnel_core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c index e25be2d01a7a..4b74c67f13c9 100644 --- a/net/ipv4/ip_tunnel_core.c +++ b/net/ipv4/ip_tunnel_core.c @@ -410,7 +410,7 @@ int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst, u32 mtu = dst_mtu(encap_dst) - headroom; if ((skb_is_gso(skb) && skb_gso_validate_network_len(skb, mtu)) || - (!skb_is_gso(skb) && (skb->len - skb_mac_header_len(skb)) <= mtu)) + (!skb_is_gso(skb) && (skb->len - skb_network_offset(skb)) <= mtu)) return 0; skb_dst_update_pmtu_no_confirm(skb, mtu); From c36d41b65e57cc524adb200c0670978cb54c1dec Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Wed, 29 Jun 2022 11:19:10 -0700 Subject: [PATCH 1213/1842] net: tun: avoid disabling NAPI twice commit ff1fa2081d173b01cebe2fbf0a2d0f1cee9ce4b5 upstream. Eric reports that syzbot made short work out of my speculative fix. Indeed when queue gets detached its tfile->tun remains, so we would try to stop NAPI twice with a detach(), close() sequence. Alternative fix would be to move tun_napi_disable() to tun_detach_all() and let the NAPI run after the queue has been detached. Fixes: a8fc8cb5692a ("net: tun: stop NAPI when detaching queues") Reported-by: syzbot Reported-by: Eric Dumazet Reviewed-by: Eric Dumazet Link: https://lore.kernel.org/r/20220629181911.372047-1-kuba@kernel.org Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/tun.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/tun.c b/drivers/net/tun.c index a2715ffc7c9c..be9ff6a74ecc 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -646,7 +646,8 @@ static void __tun_detach(struct tun_file *tfile, bool clean) tun = rtnl_dereference(tfile->tun); if (tun && clean) { - tun_napi_disable(tfile); + if (!tfile->detached) + tun_napi_disable(tfile); tun_napi_del(tfile); } From b261cd005ab980c4018634a849f77e036bfd4f80 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Sun, 3 Jul 2022 08:04:50 +0300 Subject: [PATCH 1214/1842] xfs: use current->journal_info for detecting transaction recursion commit 756b1c343333a5aefcc26b0409f3fd16f72281bf upstream. Because the iomap code using PF_MEMALLOC_NOFS to detect transaction recursion in XFS is just wrong. Remove it from the iomap code and replace it with XFS specific internal checks using current->journal_info instead. [djwong: This change also realigns the lifetime of NOFS flag changes to match the incore transaction, instead of the inconsistent scheme we have now.] Fixes: 9070733b4efa ("xfs: abstract PF_FSTRANS to PF_MEMALLOC_NOFS") Signed-off-by: Dave Chinner Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Signed-off-by: Amir Goldstein Acked-by: Darrick J. 
Wong Signed-off-by: Greg Kroah-Hartman --- fs/iomap/buffered-io.c | 7 ------- fs/xfs/libxfs/xfs_btree.c | 12 ++++++++++-- fs/xfs/xfs_aops.c | 17 +++++++++++++++-- fs/xfs/xfs_trans.c | 20 +++++--------------- fs/xfs/xfs_trans.h | 30 ++++++++++++++++++++++++++++++ 5 files changed, 60 insertions(+), 26 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index dd33b31b0a82..86297f59b43e 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1460,13 +1460,6 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data) PF_MEMALLOC)) goto redirty; - /* - * Given that we do not allow direct reclaim to call us, we should - * never be called in a recursive filesystem reclaim context. - */ - if (WARN_ON_ONCE(current->flags & PF_MEMALLOC_NOFS)) - goto redirty; - /* * Is this page beyond the end of the file? * diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c index 98c82f4935e1..24c7d30e41df 100644 --- a/fs/xfs/libxfs/xfs_btree.c +++ b/fs/xfs/libxfs/xfs_btree.c @@ -2811,7 +2811,7 @@ xfs_btree_split_worker( struct xfs_btree_split_args *args = container_of(work, struct xfs_btree_split_args, work); unsigned long pflags; - unsigned long new_pflags = PF_MEMALLOC_NOFS; + unsigned long new_pflags = 0; /* * we are in a transaction context here, but may also be doing work @@ -2823,12 +2823,20 @@ xfs_btree_split_worker( new_pflags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD; current_set_flags_nested(&pflags, new_pflags); + xfs_trans_set_context(args->cur->bc_tp); args->result = __xfs_btree_split(args->cur, args->level, args->ptrp, args->key, args->curp, args->stat); + + xfs_trans_clear_context(args->cur->bc_tp); + current_restore_flags_nested(&pflags, new_pflags); + + /* + * Do not access args after complete() has run here. We don't own args + * and the owner may run and free args before we return here. + */ complete(args->done); - current_restore_flags_nested(&pflags, new_pflags); } /* diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 4b76a32d2f16..953de843d9c3 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -62,7 +62,7 @@ xfs_setfilesize_trans_alloc( * We hand off the transaction to the completion thread now, so * clear the flag here. */ - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); + xfs_trans_clear_context(tp); return 0; } @@ -125,7 +125,7 @@ xfs_setfilesize_ioend( * thus we need to mark ourselves as being in a transaction manually. * Similarly for freeze protection. */ - current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); + xfs_trans_set_context(tp); __sb_writers_acquired(VFS_I(ip)->i_sb, SB_FREEZE_FS); /* we abort the update if there was an IO error */ @@ -577,6 +577,12 @@ xfs_vm_writepage( { struct xfs_writepage_ctx wpc = { }; + if (WARN_ON_ONCE(current->journal_info)) { + redirty_page_for_writepage(wbc, page); + unlock_page(page); + return 0; + } + return iomap_writepage(page, wbc, &wpc.ctx, &xfs_writeback_ops); } @@ -587,6 +593,13 @@ xfs_vm_writepages( { struct xfs_writepage_ctx wpc = { }; + /* + * Writing back data in a transaction context can result in recursive + * transactions. This is bad, so issue a warning and get out of here. 
+ */ + if (WARN_ON_ONCE(current->journal_info)) + return 0; + xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED); return iomap_writepages(mapping, wbc, &wpc.ctx, &xfs_writeback_ops); } diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c index c94e71f741b6..2d7deacea2cf 100644 --- a/fs/xfs/xfs_trans.c +++ b/fs/xfs/xfs_trans.c @@ -68,6 +68,7 @@ xfs_trans_free( xfs_extent_busy_clear(tp->t_mountp, &tp->t_busy, false); trace_xfs_trans_free(tp, _RET_IP_); + xfs_trans_clear_context(tp); if (!(tp->t_flags & XFS_TRANS_NO_WRITECOUNT)) sb_end_intwrite(tp->t_mountp->m_super); xfs_trans_free_dqinfo(tp); @@ -119,7 +120,8 @@ xfs_trans_dup( ntp->t_rtx_res = tp->t_rtx_res - tp->t_rtx_res_used; tp->t_rtx_res = tp->t_rtx_res_used; - ntp->t_pflags = tp->t_pflags; + + xfs_trans_switch_context(tp, ntp); /* move deferred ops over to the new tp */ xfs_defer_move(ntp, tp); @@ -153,9 +155,6 @@ xfs_trans_reserve( int error = 0; bool rsvd = (tp->t_flags & XFS_TRANS_RESERVE) != 0; - /* Mark this thread as being in a transaction */ - current_set_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - /* * Attempt to reserve the needed disk blocks by decrementing * the number needed from the number available. This will @@ -163,10 +162,8 @@ xfs_trans_reserve( */ if (blocks > 0) { error = xfs_mod_fdblocks(mp, -((int64_t)blocks), rsvd); - if (error != 0) { - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); + if (error != 0) return -ENOSPC; - } tp->t_blk_res += blocks; } @@ -240,9 +237,6 @@ xfs_trans_reserve( xfs_mod_fdblocks(mp, (int64_t)blocks, rsvd); tp->t_blk_res = 0; } - - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - return error; } @@ -266,6 +260,7 @@ xfs_trans_alloc( tp = kmem_cache_zalloc(xfs_trans_zone, GFP_KERNEL | __GFP_NOFAIL); if (!(flags & XFS_TRANS_NO_WRITECOUNT)) sb_start_intwrite(mp->m_super); + xfs_trans_set_context(tp); /* * Zero-reservation ("empty") transactions can't modify anything, so @@ -878,7 +873,6 @@ __xfs_trans_commit( xfs_log_commit_cil(mp, tp, &commit_lsn, regrant); - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); xfs_trans_free(tp); /* @@ -910,7 +904,6 @@ __xfs_trans_commit( xfs_log_ticket_ungrant(mp->m_log, tp->t_ticket); tp->t_ticket = NULL; } - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); xfs_trans_free_items(tp, !!error); xfs_trans_free(tp); @@ -970,9 +963,6 @@ xfs_trans_cancel( tp->t_ticket = NULL; } - /* mark this thread as no longer being in a transaction */ - current_restore_flags_nested(&tp->t_pflags, PF_MEMALLOC_NOFS); - xfs_trans_free_items(tp, dirty); xfs_trans_free(tp); } diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h index 084658946cc8..075eeade4f7d 100644 --- a/fs/xfs/xfs_trans.h +++ b/fs/xfs/xfs_trans.h @@ -268,4 +268,34 @@ xfs_trans_item_relog( return lip->li_ops->iop_relog(lip, tp); } +static inline void +xfs_trans_set_context( + struct xfs_trans *tp) +{ + ASSERT(current->journal_info == NULL); + tp->t_pflags = memalloc_nofs_save(); + current->journal_info = tp; +} + +static inline void +xfs_trans_clear_context( + struct xfs_trans *tp) +{ + if (current->journal_info == tp) { + memalloc_nofs_restore(tp->t_pflags); + current->journal_info = NULL; + } +} + +static inline void +xfs_trans_switch_context( + struct xfs_trans *old_tp, + struct xfs_trans *new_tp) +{ + ASSERT(current->journal_info == old_tp); + new_tp->t_pflags = old_tp->t_pflags; + old_tp->t_pflags = 0; + current->journal_info = new_tp; +} + #endif /* __XFS_TRANS_H__ */ From 6b7dab812cbad9b894dd5985a9b33b55a060afdc Mon Sep 17 00:00:00 2001 From: 
Pavel Reichl Date: Sun, 3 Jul 2022 08:04:51 +0300 Subject: [PATCH 1215/1842] xfs: rename variable mp to parsing_mp commit 0f98b4ece18da9d8287bb4cc4e8f78b8760ea0d0 upstream. Rename mp variable to parsisng_mp so it is easy to distinguish between current mount point handle and handle for mount point which mount options are being parsed. Suggested-by: Eric Sandeen Signed-off-by: Pavel Reichl Reviewed-by: Darrick J. Wong Reviewed-by: Carlos Maiolino Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_super.c | 102 ++++++++++++++++++++++----------------------- 1 file changed, 51 insertions(+), 51 deletions(-) diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 05cea7788d49..cba47adbbfdc 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1165,7 +1165,7 @@ xfs_fc_parse_param( struct fs_context *fc, struct fs_parameter *param) { - struct xfs_mount *mp = fc->s_fs_info; + struct xfs_mount *parsing_mp = fc->s_fs_info; struct fs_parse_result result; int size = 0; int opt; @@ -1176,142 +1176,142 @@ xfs_fc_parse_param( switch (opt) { case Opt_logbufs: - mp->m_logbufs = result.uint_32; + parsing_mp->m_logbufs = result.uint_32; return 0; case Opt_logbsize: - if (suffix_kstrtoint(param->string, 10, &mp->m_logbsize)) + if (suffix_kstrtoint(param->string, 10, &parsing_mp->m_logbsize)) return -EINVAL; return 0; case Opt_logdev: - kfree(mp->m_logname); - mp->m_logname = kstrdup(param->string, GFP_KERNEL); - if (!mp->m_logname) + kfree(parsing_mp->m_logname); + parsing_mp->m_logname = kstrdup(param->string, GFP_KERNEL); + if (!parsing_mp->m_logname) return -ENOMEM; return 0; case Opt_rtdev: - kfree(mp->m_rtname); - mp->m_rtname = kstrdup(param->string, GFP_KERNEL); - if (!mp->m_rtname) + kfree(parsing_mp->m_rtname); + parsing_mp->m_rtname = kstrdup(param->string, GFP_KERNEL); + if (!parsing_mp->m_rtname) return -ENOMEM; return 0; case Opt_allocsize: if (suffix_kstrtoint(param->string, 10, &size)) return -EINVAL; - mp->m_allocsize_log = ffs(size) - 1; - mp->m_flags |= XFS_MOUNT_ALLOCSIZE; + parsing_mp->m_allocsize_log = ffs(size) - 1; + parsing_mp->m_flags |= XFS_MOUNT_ALLOCSIZE; return 0; case Opt_grpid: case Opt_bsdgroups: - mp->m_flags |= XFS_MOUNT_GRPID; + parsing_mp->m_flags |= XFS_MOUNT_GRPID; return 0; case Opt_nogrpid: case Opt_sysvgroups: - mp->m_flags &= ~XFS_MOUNT_GRPID; + parsing_mp->m_flags &= ~XFS_MOUNT_GRPID; return 0; case Opt_wsync: - mp->m_flags |= XFS_MOUNT_WSYNC; + parsing_mp->m_flags |= XFS_MOUNT_WSYNC; return 0; case Opt_norecovery: - mp->m_flags |= XFS_MOUNT_NORECOVERY; + parsing_mp->m_flags |= XFS_MOUNT_NORECOVERY; return 0; case Opt_noalign: - mp->m_flags |= XFS_MOUNT_NOALIGN; + parsing_mp->m_flags |= XFS_MOUNT_NOALIGN; return 0; case Opt_swalloc: - mp->m_flags |= XFS_MOUNT_SWALLOC; + parsing_mp->m_flags |= XFS_MOUNT_SWALLOC; return 0; case Opt_sunit: - mp->m_dalign = result.uint_32; + parsing_mp->m_dalign = result.uint_32; return 0; case Opt_swidth: - mp->m_swidth = result.uint_32; + parsing_mp->m_swidth = result.uint_32; return 0; case Opt_inode32: - mp->m_flags |= XFS_MOUNT_SMALL_INUMS; + parsing_mp->m_flags |= XFS_MOUNT_SMALL_INUMS; return 0; case Opt_inode64: - mp->m_flags &= ~XFS_MOUNT_SMALL_INUMS; + parsing_mp->m_flags &= ~XFS_MOUNT_SMALL_INUMS; return 0; case Opt_nouuid: - mp->m_flags |= XFS_MOUNT_NOUUID; + parsing_mp->m_flags |= XFS_MOUNT_NOUUID; return 0; case Opt_largeio: - mp->m_flags |= XFS_MOUNT_LARGEIO; + parsing_mp->m_flags |= XFS_MOUNT_LARGEIO; return 0; case 
Opt_nolargeio: - mp->m_flags &= ~XFS_MOUNT_LARGEIO; + parsing_mp->m_flags &= ~XFS_MOUNT_LARGEIO; return 0; case Opt_filestreams: - mp->m_flags |= XFS_MOUNT_FILESTREAMS; + parsing_mp->m_flags |= XFS_MOUNT_FILESTREAMS; return 0; case Opt_noquota: - mp->m_qflags &= ~XFS_ALL_QUOTA_ACCT; - mp->m_qflags &= ~XFS_ALL_QUOTA_ENFD; - mp->m_qflags &= ~XFS_ALL_QUOTA_ACTIVE; + parsing_mp->m_qflags &= ~XFS_ALL_QUOTA_ACCT; + parsing_mp->m_qflags &= ~XFS_ALL_QUOTA_ENFD; + parsing_mp->m_qflags &= ~XFS_ALL_QUOTA_ACTIVE; return 0; case Opt_quota: case Opt_uquota: case Opt_usrquota: - mp->m_qflags |= (XFS_UQUOTA_ACCT | XFS_UQUOTA_ACTIVE | + parsing_mp->m_qflags |= (XFS_UQUOTA_ACCT | XFS_UQUOTA_ACTIVE | XFS_UQUOTA_ENFD); return 0; case Opt_qnoenforce: case Opt_uqnoenforce: - mp->m_qflags |= (XFS_UQUOTA_ACCT | XFS_UQUOTA_ACTIVE); - mp->m_qflags &= ~XFS_UQUOTA_ENFD; + parsing_mp->m_qflags |= (XFS_UQUOTA_ACCT | XFS_UQUOTA_ACTIVE); + parsing_mp->m_qflags &= ~XFS_UQUOTA_ENFD; return 0; case Opt_pquota: case Opt_prjquota: - mp->m_qflags |= (XFS_PQUOTA_ACCT | XFS_PQUOTA_ACTIVE | + parsing_mp->m_qflags |= (XFS_PQUOTA_ACCT | XFS_PQUOTA_ACTIVE | XFS_PQUOTA_ENFD); return 0; case Opt_pqnoenforce: - mp->m_qflags |= (XFS_PQUOTA_ACCT | XFS_PQUOTA_ACTIVE); - mp->m_qflags &= ~XFS_PQUOTA_ENFD; + parsing_mp->m_qflags |= (XFS_PQUOTA_ACCT | XFS_PQUOTA_ACTIVE); + parsing_mp->m_qflags &= ~XFS_PQUOTA_ENFD; return 0; case Opt_gquota: case Opt_grpquota: - mp->m_qflags |= (XFS_GQUOTA_ACCT | XFS_GQUOTA_ACTIVE | + parsing_mp->m_qflags |= (XFS_GQUOTA_ACCT | XFS_GQUOTA_ACTIVE | XFS_GQUOTA_ENFD); return 0; case Opt_gqnoenforce: - mp->m_qflags |= (XFS_GQUOTA_ACCT | XFS_GQUOTA_ACTIVE); - mp->m_qflags &= ~XFS_GQUOTA_ENFD; + parsing_mp->m_qflags |= (XFS_GQUOTA_ACCT | XFS_GQUOTA_ACTIVE); + parsing_mp->m_qflags &= ~XFS_GQUOTA_ENFD; return 0; case Opt_discard: - mp->m_flags |= XFS_MOUNT_DISCARD; + parsing_mp->m_flags |= XFS_MOUNT_DISCARD; return 0; case Opt_nodiscard: - mp->m_flags &= ~XFS_MOUNT_DISCARD; + parsing_mp->m_flags &= ~XFS_MOUNT_DISCARD; return 0; #ifdef CONFIG_FS_DAX case Opt_dax: - xfs_mount_set_dax_mode(mp, XFS_DAX_ALWAYS); + xfs_mount_set_dax_mode(parsing_mp, XFS_DAX_ALWAYS); return 0; case Opt_dax_enum: - xfs_mount_set_dax_mode(mp, result.uint_32); + xfs_mount_set_dax_mode(parsing_mp, result.uint_32); return 0; #endif /* Following mount options will be removed in September 2025 */ case Opt_ikeep: - xfs_warn(mp, "%s mount option is deprecated.", param->key); - mp->m_flags |= XFS_MOUNT_IKEEP; + xfs_warn(parsing_mp, "%s mount option is deprecated.", param->key); + parsing_mp->m_flags |= XFS_MOUNT_IKEEP; return 0; case Opt_noikeep: - xfs_warn(mp, "%s mount option is deprecated.", param->key); - mp->m_flags &= ~XFS_MOUNT_IKEEP; + xfs_warn(parsing_mp, "%s mount option is deprecated.", param->key); + parsing_mp->m_flags &= ~XFS_MOUNT_IKEEP; return 0; case Opt_attr2: - xfs_warn(mp, "%s mount option is deprecated.", param->key); - mp->m_flags |= XFS_MOUNT_ATTR2; + xfs_warn(parsing_mp, "%s mount option is deprecated.", param->key); + parsing_mp->m_flags |= XFS_MOUNT_ATTR2; return 0; case Opt_noattr2: - xfs_warn(mp, "%s mount option is deprecated.", param->key); - mp->m_flags &= ~XFS_MOUNT_ATTR2; - mp->m_flags |= XFS_MOUNT_NOATTR2; + xfs_warn(parsing_mp, "%s mount option is deprecated.", param->key); + parsing_mp->m_flags &= ~XFS_MOUNT_ATTR2; + parsing_mp->m_flags |= XFS_MOUNT_NOATTR2; return 0; default: - xfs_warn(mp, "unknown mount option [%s].", param->key); + xfs_warn(parsing_mp, "unknown mount option [%s].", param->key); return -EINVAL; } 
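The rename pays off on the reconfigure (remount) path, where two mount handles exist at the same time: the scratch one hanging off fc->s_fs_info that accumulates the options being parsed, and the live one reachable through the existing superblock. As a rough standalone model of that parse-into-a-staging-copy-then-commit split (plain userspace C with made-up names such as mount_opts and OPT_DISCARD, not code from the patch):

#include <stdio.h>
#include <string.h>

#define OPT_DISCARD 0x1u

/* Options accumulated while parsing; applied to the live state only on commit. */
struct mount_opts {
	unsigned int flags;
};

/* Parse one key into the staging copy, never into the live mount. */
static int parse_param(struct mount_opts *staging_opts, const char *key)
{
	if (strcmp(key, "discard") == 0) {
		staging_opts->flags |= OPT_DISCARD;
		return 0;
	}
	if (strcmp(key, "nodiscard") == 0) {
		staging_opts->flags &= ~OPT_DISCARD;
		return 0;
	}
	return -1;	/* unknown option */
}

int main(void)
{
	struct mount_opts live_opts = { 0 };		/* the mounted filesystem */
	struct mount_opts staging_opts = live_opts;	/* scratch copy for parsing */

	parse_param(&staging_opts, "discard");
	printf("while parsing: live=%#x staging=%#x\n",
	       live_opts.flags, staging_opts.flags);

	live_opts = staging_opts;			/* "commit" the new options */
	printf("after commit:  live=%#x\n", live_opts.flags);
	return 0;
}

The point is only that parse_param never touches live_opts directly; the live state changes in one place, at commit time, which is why a name that says "parsing" helps.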
From da61388f9a750a257563f1d80fd649f7d66b3bd3 Mon Sep 17 00:00:00 2001 From: Pavel Reichl Date: Sun, 3 Jul 2022 08:04:52 +0300 Subject: [PATCH 1216/1842] xfs: Skip repetitive warnings about mount options commit 92cf7d36384b99d5a57bf4422904a3c16dc4527a upstream. Skip the warnings about mount option being deprecated if we are remounting and deprecated option state is not changing. Bug: https://bugzilla.kernel.org/show_bug.cgi?id=211605 Fix-suggested-by: Eric Sandeen Signed-off-by: Pavel Reichl Reviewed-by: Darrick J. Wong Reviewed-by: Carlos Maiolino Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_super.c | 24 ++++++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-) diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index cba47adbbfdc..04af5d17abc7 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1155,6 +1155,22 @@ suffix_kstrtoint( return ret; } +static inline void +xfs_fs_warn_deprecated( + struct fs_context *fc, + struct fs_parameter *param, + uint64_t flag, + bool value) +{ + /* Don't print the warning if reconfiguring and current mount point + * already had the flag set + */ + if ((fc->purpose & FS_CONTEXT_FOR_RECONFIGURE) && + !!(XFS_M(fc->root->d_sb)->m_flags & flag) == value) + return; + xfs_warn(fc->s_fs_info, "%s mount option is deprecated.", param->key); +} + /* * Set mount state from a mount option. * @@ -1294,19 +1310,19 @@ xfs_fc_parse_param( #endif /* Following mount options will be removed in September 2025 */ case Opt_ikeep: - xfs_warn(parsing_mp, "%s mount option is deprecated.", param->key); + xfs_fs_warn_deprecated(fc, param, XFS_MOUNT_IKEEP, true); parsing_mp->m_flags |= XFS_MOUNT_IKEEP; return 0; case Opt_noikeep: - xfs_warn(parsing_mp, "%s mount option is deprecated.", param->key); + xfs_fs_warn_deprecated(fc, param, XFS_MOUNT_IKEEP, false); parsing_mp->m_flags &= ~XFS_MOUNT_IKEEP; return 0; case Opt_attr2: - xfs_warn(parsing_mp, "%s mount option is deprecated.", param->key); + xfs_fs_warn_deprecated(fc, param, XFS_MOUNT_ATTR2, true); parsing_mp->m_flags |= XFS_MOUNT_ATTR2; return 0; case Opt_noattr2: - xfs_warn(parsing_mp, "%s mount option is deprecated.", param->key); + xfs_fs_warn_deprecated(fc, param, XFS_MOUNT_NOATTR2, true); parsing_mp->m_flags &= ~XFS_MOUNT_ATTR2; parsing_mp->m_flags |= XFS_MOUNT_NOATTR2; return 0; From f12968a5a4beb3c47b588042089aaaa4db9fcdde Mon Sep 17 00:00:00 2001 From: Gao Xiang Date: Sun, 3 Jul 2022 08:04:53 +0300 Subject: [PATCH 1217/1842] xfs: ensure xfs_errortag_random_default matches XFS_ERRTAG_MAX commit b2c2974b8cdf1eb3ef90ff845eb27b19e2187b7e upstream. Add the BUILD_BUG_ON to xfs_errortag_add() in order to make sure that the length of xfs_errortag_random_default matches XFS_ERRTAG_MAX when building. Signed-off-by: Gao Xiang Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. 
Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_error.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/fs/xfs/xfs_error.c b/fs/xfs/xfs_error.c index 7f6e20899473..f9e2f606b5b8 100644 --- a/fs/xfs/xfs_error.c +++ b/fs/xfs/xfs_error.c @@ -293,6 +293,8 @@ xfs_errortag_add( struct xfs_mount *mp, unsigned int error_tag) { + BUILD_BUG_ON(ARRAY_SIZE(xfs_errortag_random_default) != XFS_ERRTAG_MAX); + if (error_tag >= XFS_ERRTAG_MAX) return -EINVAL; From 7ab7458d7af7469fb5df70325d4253353e75e376 Mon Sep 17 00:00:00 2001 From: Anthony Iliopoulos Date: Sun, 3 Jul 2022 08:04:54 +0300 Subject: [PATCH 1218/1842] xfs: fix xfs_trans slab cache name commit 25dfa65f814951a33072bcbae795989d817858da upstream. Removal of kmem_zone_init wrappers accidentally changed a slab cache name from "xfs_trans" to "xf_trans". Fix this so that userspace consumers of /proc/slabinfo and /sys/kernel/slab can find it again. Fixes: b1231760e443 ("xfs: Remove slab init wrappers") Signed-off-by: Anthony Iliopoulos Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_super.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 04af5d17abc7..6323974d6b3e 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1934,7 +1934,7 @@ xfs_init_zones(void) if (!xfs_ifork_zone) goto out_destroy_da_state_zone; - xfs_trans_zone = kmem_cache_create("xf_trans", + xfs_trans_zone = kmem_cache_create("xfs_trans", sizeof(struct xfs_trans), 0, 0, NULL); if (!xfs_trans_zone) From f874e16870cc59be2c6444ea3d02665f942e665a Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Sun, 3 Jul 2022 08:04:55 +0300 Subject: [PATCH 1219/1842] xfs: update superblock counters correctly for !lazysbcount commit 6543990a168acf366f4b6174d7bd46ba15a8a2a6 upstream. Keep the mount superblock counters up to date for !lazysbcount filesystems so that when we log the superblock they do not need updating in any way because they are already correct. It's found by what Zorro reported: 1. mkfs.xfs -f -l lazy-count=0 -m crc=0 $dev 2. mount $dev $mnt 3. fsstress -d $mnt -p 100 -n 1000 (maybe need more or less io load) 4. umount $mnt 5. xfs_repair -n $dev and I've seen no problem with this patch. Signed-off-by: Dave Chinner Reported-by: Zorro Lang Reviewed-by: Gao Xiang Signed-off-by: Gao Xiang Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Reviewed-by: Brian Foster Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/libxfs/xfs_sb.c | 16 +++++++++++++--- fs/xfs/xfs_trans.c | 3 +++ 2 files changed, 16 insertions(+), 3 deletions(-) diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c index 5aeafa59ed27..66e8353da2f3 100644 --- a/fs/xfs/libxfs/xfs_sb.c +++ b/fs/xfs/libxfs/xfs_sb.c @@ -956,9 +956,19 @@ xfs_log_sb( struct xfs_mount *mp = tp->t_mountp; struct xfs_buf *bp = xfs_trans_getsb(tp); - mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount); - mp->m_sb.sb_ifree = percpu_counter_sum(&mp->m_ifree); - mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks); + /* + * Lazy sb counters don't update the in-core superblock so do that now. + * If this is at unmount, the counters will be exactly correct, but at + * any other time they will only be ballpark correct because of + * reservations that have been taken out percpu counters. 
If we have an + * unclean shutdown, this will be corrected by log recovery rebuilding + * the counters from the AGF block counts. + */ + if (xfs_sb_version_haslazysbcount(&mp->m_sb)) { + mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount); + mp->m_sb.sb_ifree = percpu_counter_sum(&mp->m_ifree); + mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks); + } xfs_sb_to_disk(bp->b_addr, &mp->m_sb); xfs_trans_buf_set_type(tp, bp, XFS_BLFT_SB_BUF); diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c index 2d7deacea2cf..36166bae24a6 100644 --- a/fs/xfs/xfs_trans.c +++ b/fs/xfs/xfs_trans.c @@ -615,6 +615,9 @@ xfs_trans_unreserve_and_mod_sb( /* apply remaining deltas */ spin_lock(&mp->m_sb_lock); + mp->m_sb.sb_fdblocks += tp->t_fdblocks_delta + tp->t_res_fdblocks_delta; + mp->m_sb.sb_icount += idelta; + mp->m_sb.sb_ifree += ifreedelta; mp->m_sb.sb_frextents += rtxdelta; mp->m_sb.sb_dblocks += tp->t_dblocks_delta; mp->m_sb.sb_agcount += tp->t_agcount_delta; From 9203dfb3ed6b3ab07bc6784c55ba7bc3f8b3d7b9 Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Sun, 3 Jul 2022 08:04:56 +0300 Subject: [PATCH 1220/1842] xfs: fix xfs_reflink_unshare usage of filemap_write_and_wait_range commit d4f74e162d238ce00a640af5f0611c3f51dad70e upstream. The final parameter of filemap_write_and_wait_range is the end of the range to flush, not the length of the range to flush. Fixes: 46afb0628b86 ("xfs: only flush the unshared range in xfs_reflink_unshare") Signed-off-by: Darrick J. Wong Reviewed-by: Chandan Babu R Reviewed-by: Brian Foster Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_reflink.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c index 6fa05fb78189..aa46b75d75af 100644 --- a/fs/xfs/xfs_reflink.c +++ b/fs/xfs/xfs_reflink.c @@ -1503,7 +1503,8 @@ xfs_reflink_unshare( if (error) goto out; - error = filemap_write_and_wait_range(inode->i_mapping, offset, len); + error = filemap_write_and_wait_range(inode->i_mapping, offset, + offset + len - 1); if (error) goto out; From 5f614f5f70bf401a75af60b0adf9e7d63b05ad8a Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Thu, 23 Jun 2022 05:04:36 +0000 Subject: [PATCH 1221/1842] tcp: add a missing nf_reset_ct() in 3WHS handling commit 6f0012e35160cd08a53e46e3b3bbf724b92dfe68 upstream. When the third packet of 3WHS connection establishment contains payload, it is added into socket receive queue without the XFRM check and the drop of connection tracking context. This means that if the data is left unread in the socket receive queue, conntrack module can not be unloaded. As most applications usually reads the incoming data immediately after accept(), bug has been hiding for quite a long time. Commit 68822bdf76f1 ("net: generalize skb freeing deferral to per-cpu lists") exposed this bug because even if the application reads this data, the skb with nfct state could stay in a per-cpu cache for an arbitrary time, if said cpu no longer process RX softirqs. Many thanks to Ilya Maximets for reporting this issue, and for testing various patches: https://lore.kernel.org/netdev/20220619003919.394622-1-i.maximets@ovn.org/ Note that I also added a missing xfrm4_policy_check() call, although this is probably not a big issue, as the SYN packet should have been dropped earlier. 
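The module-pinning effect described above is purely a reference-count artifact: a packet that still carries its conntrack entry holds a reference that is only released when the packet is freed or its metadata is reset, so a single unread skb sitting in a receive queue keeps the module loaded. A toy userspace model of that coupling (invented names, nothing netfilter-specific, only meant to show why resetting before queueing matters):

#include <stdio.h>
#include <stdlib.h>

struct tracker {
	int refcount;		/* stands in for the conntrack module use count */
};

struct packet {
	struct tracker *ct;	/* reference taken when the packet was classified */
};

static void packet_reset_ct(struct packet *p)
{
	if (p->ct) {
		p->ct->refcount--;
		p->ct = NULL;
	}
}

int main(void)
{
	struct tracker conntrack = { .refcount = 0 };
	struct packet *p = calloc(1, sizeof(*p));

	/* classification takes a reference */
	p->ct = &conntrack;
	conntrack.refcount++;

	/* queueing the packet as-is pins the tracker for as long as it sits there */
	printf("queued with ct: refcount=%d (module cannot be unloaded)\n",
	       conntrack.refcount);

	/* dropping the reference before queueing is what the added reset does */
	packet_reset_ct(p);
	printf("after reset:    refcount=%d (module can be unloaded)\n",
	       conntrack.refcount);

	free(p);
	return 0;
}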
Fixes: b59c270104f0 ("[NETFILTER]: Keep conntrack reference until IPsec policy checks are done") Reported-by: Ilya Maximets Signed-off-by: Eric Dumazet Cc: Florian Westphal Cc: Pablo Neira Ayuso Cc: Steffen Klassert Tested-by: Ilya Maximets Reviewed-by: Ilya Maximets Link: https://lore.kernel.org/r/20220623050436.1290307-1-edumazet@google.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_ipv4.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 017cd666387f..32c60122db9c 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -1980,7 +1980,8 @@ int tcp_v4_rcv(struct sk_buff *skb) struct sock *nsk; sk = req->rsk_listener; - if (unlikely(tcp_v4_inbound_md5_hash(sk, skb, dif, sdif))) { + if (unlikely(!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb) || + tcp_v4_inbound_md5_hash(sk, skb, dif, sdif))) { sk_drops_add(sk, skb); reqsk_put(req); goto discard_it; @@ -2019,6 +2020,7 @@ int tcp_v4_rcv(struct sk_buff *skb) } goto discard_and_relse; } + nf_reset_ct(skb); if (nsk == sk) { reqsk_put(req); tcp_v4_restore_cb(skb); From 79963021fd718b74bed4cbc98f5f49d3ba6fb48c Mon Sep 17 00:00:00 2001 From: Demi Marie Obenour Date: Tue, 21 Jun 2022 22:27:26 -0400 Subject: [PATCH 1222/1842] xen/gntdev: Avoid blocking in unmap_grant_pages() commit dbe97cff7dd9f0f75c524afdd55ad46be3d15295 upstream. unmap_grant_pages() currently waits for the pages to no longer be used. In https://github.com/QubesOS/qubes-issues/issues/7481, this lead to a deadlock against i915: i915 was waiting for gntdev's MMU notifier to finish, while gntdev was waiting for i915 to free its pages. I also believe this is responsible for various deadlocks I have experienced in the past. Avoid these problems by making unmap_grant_pages async. This requires making it return void, as any errors will not be available when the function returns. Fortunately, the only use of the return value is a WARN_ON(), which can be replaced by a WARN_ON when the error is detected. Additionally, a failed call will not prevent further calls from being made, but this is harmless. Because unmap_grant_pages is now async, the grant handle will be sent to INVALID_GRANT_HANDLE too late to prevent multiple unmaps of the same handle. Instead, a separate bool array is allocated for this purpose. This wastes memory, but stuffing this information in padding bytes is too fragile. Furthermore, it is necessary to grab a reference to the map before making the asynchronous call, and release the reference when the call returns. It is also necessary to guard against reentrancy in gntdev_map_put(), and to handle the case where userspace tries to map a mapping whose contents have not all been freed yet. 
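The reference-count dance sketched above is the subtle part: before kicking off the asynchronous unmap, the count is forced back to one so the completion callback can drop its own reference without freeing the map mid-operation, and the caller then drops that temporary reference itself and frees only if it really was the last user. A rough userspace sketch of the same pattern, with invented names (struct object, object_put, and finish_async_work standing in for the completion, which here is a direct call rather than truly asynchronous):

#include <stdio.h>
#include <stdlib.h>

struct object {
	int users;		/* reference count, last put frees */
	int work_pending;	/* outstanding work that must finish before free */
};

static void finish_async_work(struct object *obj);

static void object_put(struct object *obj)
{
	if (--obj->users)
		return;

	if (obj->work_pending) {
		/*
		 * Take the count back to 1 so the completion below can do its
		 * own put without freeing obj under us; any recursive put it
		 * triggers sees a count above zero and returns early.
		 */
		obj->users = 1;
		finish_async_work(obj);
		if (--obj->users)
			return;
	}

	printf("freeing object\n");
	free(obj);
}

/* Models the completion: it holds one reference and releases it when done. */
static void finish_async_work(struct object *obj)
{
	obj->work_pending = 0;
	obj->users++;		/* the reference the async path holds */
	object_put(obj);	/* releasing it recurses, bounded at depth one */
}

int main(void)
{
	struct object *obj = calloc(1, sizeof(*obj));

	obj->users = 1;
	obj->work_pending = 1;
	object_put(obj);	/* drop the last external reference */
	return 0;
}

The depth-one recursion is the same bound the paragraph above describes; in the driver itself the extra reference is taken just before gnttab_unmap_refs_async() and dropped in the unmap completion.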
Fixes: 745282256c75 ("xen/gntdev: safely unmap grants in case they are still in use") Cc: stable@vger.kernel.org Signed-off-by: Demi Marie Obenour Reviewed-by: Juergen Gross Link: https://lore.kernel.org/r/20220622022726.2538-1-demi@invisiblethingslab.com Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman --- drivers/xen/gntdev-common.h | 7 ++ drivers/xen/gntdev.c | 152 +++++++++++++++++++++++++----------- 2 files changed, 112 insertions(+), 47 deletions(-) diff --git a/drivers/xen/gntdev-common.h b/drivers/xen/gntdev-common.h index 20d7d059dadb..40ef379c28ab 100644 --- a/drivers/xen/gntdev-common.h +++ b/drivers/xen/gntdev-common.h @@ -16,6 +16,7 @@ #include #include #include +#include struct gntdev_dmabuf_priv; @@ -56,6 +57,7 @@ struct gntdev_grant_map { struct gnttab_unmap_grant_ref *unmap_ops; struct gnttab_map_grant_ref *kmap_ops; struct gnttab_unmap_grant_ref *kunmap_ops; + bool *being_removed; struct page **pages; unsigned long pages_vm_start; @@ -73,6 +75,11 @@ struct gntdev_grant_map { /* Needed to avoid allocation in gnttab_dma_free_pages(). */ xen_pfn_t *frames; #endif + + /* Number of live grants */ + atomic_t live_grants; + /* Needed to avoid allocation in __unmap_grant_pages */ + struct gntab_unmap_queue_data unmap_data; }; struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count, diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c index 54778aadf618..f415c056ff8a 100644 --- a/drivers/xen/gntdev.c +++ b/drivers/xen/gntdev.c @@ -35,6 +35,7 @@ #include #include #include +#include #include #include @@ -60,10 +61,11 @@ module_param(limit, uint, 0644); MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by one mapping request"); +/* True in PV mode, false otherwise */ static int use_ptemod; -static int unmap_grant_pages(struct gntdev_grant_map *map, - int offset, int pages); +static void unmap_grant_pages(struct gntdev_grant_map *map, + int offset, int pages); static struct miscdevice gntdev_miscdev; @@ -120,6 +122,7 @@ static void gntdev_free_map(struct gntdev_grant_map *map) kvfree(map->unmap_ops); kvfree(map->kmap_ops); kvfree(map->kunmap_ops); + kvfree(map->being_removed); kfree(map); } @@ -140,12 +143,15 @@ struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count, add->kunmap_ops = kvcalloc(count, sizeof(add->kunmap_ops[0]), GFP_KERNEL); add->pages = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL); + add->being_removed = + kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL); if (NULL == add->grants || NULL == add->map_ops || NULL == add->unmap_ops || NULL == add->kmap_ops || NULL == add->kunmap_ops || - NULL == add->pages) + NULL == add->pages || + NULL == add->being_removed) goto err; #ifdef CONFIG_XEN_GRANT_DMA_ALLOC @@ -240,9 +246,36 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map) if (!refcount_dec_and_test(&map->users)) return; - if (map->pages && !use_ptemod) + if (map->pages && !use_ptemod) { + /* + * Increment the reference count. This ensures that the + * subsequent call to unmap_grant_pages() will not wind up + * re-entering itself. It *can* wind up calling + * gntdev_put_map() recursively, but such calls will be with a + * reference count greater than 1, so they will return before + * this code is reached. The recursion depth is thus limited to + * 1. Do NOT use refcount_inc() here, as it will detect that + * the reference count is zero and WARN(). + */ + refcount_set(&map->users, 1); + + /* + * Unmap the grants. 
This may or may not be asynchronous, so it + * is possible that the reference count is 1 on return, but it + * could also be greater than 1. + */ unmap_grant_pages(map, 0, map->count); + /* Check if the memory now needs to be freed */ + if (!refcount_dec_and_test(&map->users)) + return; + + /* + * All pages have been returned to the hypervisor, so free the + * map. + */ + } + if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) { notify_remote_via_evtchn(map->notify.event); evtchn_put(map->notify.event); @@ -288,6 +321,7 @@ static int set_grant_ptes_as_special(pte_t *pte, unsigned long addr, void *data) int gntdev_map_grant_pages(struct gntdev_grant_map *map) { + size_t alloced = 0; int i, err = 0; if (!use_ptemod) { @@ -336,87 +370,109 @@ int gntdev_map_grant_pages(struct gntdev_grant_map *map) map->pages, map->count); for (i = 0; i < map->count; i++) { - if (map->map_ops[i].status == GNTST_okay) + if (map->map_ops[i].status == GNTST_okay) { map->unmap_ops[i].handle = map->map_ops[i].handle; - else if (!err) + if (!use_ptemod) + alloced++; + } else if (!err) err = -EINVAL; if (map->flags & GNTMAP_device_map) map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr; if (use_ptemod) { - if (map->kmap_ops[i].status == GNTST_okay) + if (map->kmap_ops[i].status == GNTST_okay) { + if (map->map_ops[i].status == GNTST_okay) + alloced++; map->kunmap_ops[i].handle = map->kmap_ops[i].handle; - else if (!err) + } else if (!err) err = -EINVAL; } } + atomic_add(alloced, &map->live_grants); return err; } -static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset, - int pages) +static void __unmap_grant_pages_done(int result, + struct gntab_unmap_queue_data *data) { - int i, err = 0; - struct gntab_unmap_queue_data unmap_data; + unsigned int i; + struct gntdev_grant_map *map = data->data; + unsigned int offset = data->unmap_ops - map->unmap_ops; - if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) { - int pgno = (map->notify.addr >> PAGE_SHIFT); - if (pgno >= offset && pgno < offset + pages) { - /* No need for kmap, pages are in lowmem */ - uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno])); - tmp[map->notify.addr & (PAGE_SIZE-1)] = 0; - map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE; - } - } - - unmap_data.unmap_ops = map->unmap_ops + offset; - unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL; - unmap_data.pages = map->pages + offset; - unmap_data.count = pages; - - err = gnttab_unmap_refs_sync(&unmap_data); - if (err) - return err; - - for (i = 0; i < pages; i++) { - if (map->unmap_ops[offset+i].status) - err = -EINVAL; + for (i = 0; i < data->count; i++) { + WARN_ON(map->unmap_ops[offset+i].status); pr_debug("unmap handle=%d st=%d\n", map->unmap_ops[offset+i].handle, map->unmap_ops[offset+i].status); map->unmap_ops[offset+i].handle = -1; } - return err; + /* + * Decrease the live-grant counter. This must happen after the loop to + * prevent premature reuse of the grants by gnttab_mmap(). 
+ */ + atomic_sub(data->count, &map->live_grants); + + /* Release reference taken by __unmap_grant_pages */ + gntdev_put_map(NULL, map); } -static int unmap_grant_pages(struct gntdev_grant_map *map, int offset, - int pages) +static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset, + int pages) { - int range, err = 0; + if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) { + int pgno = (map->notify.addr >> PAGE_SHIFT); + + if (pgno >= offset && pgno < offset + pages) { + /* No need for kmap, pages are in lowmem */ + uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno])); + + tmp[map->notify.addr & (PAGE_SIZE-1)] = 0; + map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE; + } + } + + map->unmap_data.unmap_ops = map->unmap_ops + offset; + map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL; + map->unmap_data.pages = map->pages + offset; + map->unmap_data.count = pages; + map->unmap_data.done = __unmap_grant_pages_done; + map->unmap_data.data = map; + refcount_inc(&map->users); /* to keep map alive during async call below */ + + gnttab_unmap_refs_async(&map->unmap_data); +} + +static void unmap_grant_pages(struct gntdev_grant_map *map, int offset, + int pages) +{ + int range; + + if (atomic_read(&map->live_grants) == 0) + return; /* Nothing to do */ pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages); /* It is possible the requested range will have a "hole" where we * already unmapped some of the grants. Only unmap valid ranges. */ - while (pages && !err) { - while (pages && map->unmap_ops[offset].handle == -1) { + while (pages) { + while (pages && map->being_removed[offset]) { offset++; pages--; } range = 0; while (range < pages) { - if (map->unmap_ops[offset+range].handle == -1) + if (map->being_removed[offset + range]) break; + map->being_removed[offset + range] = true; range++; } - err = __unmap_grant_pages(map, offset, range); + if (range) + __unmap_grant_pages(map, offset, range); offset += range; pages -= range; } - - return err; } /* ------------------------------------------------------------------ */ @@ -468,7 +524,6 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn, struct gntdev_grant_map *map = container_of(mn, struct gntdev_grant_map, notifier); unsigned long mstart, mend; - int err; if (!mmu_notifier_range_blockable(range)) return false; @@ -489,10 +544,9 @@ static bool gntdev_invalidate(struct mmu_interval_notifier *mn, map->index, map->count, map->vma->vm_start, map->vma->vm_end, range->start, range->end, mstart, mend); - err = unmap_grant_pages(map, + unmap_grant_pages(map, (mstart - map->vma->vm_start) >> PAGE_SHIFT, (mend - mstart) >> PAGE_SHIFT); - WARN_ON(err); return true; } @@ -980,6 +1034,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma) goto unlock_out; if (use_ptemod && map->vma) goto unlock_out; + if (atomic_read(&map->live_grants)) { + err = -EAGAIN; + goto unlock_out; + } refcount_inc(&map->users); vma->vm_ops = &gntdev_vmops; From 652fd40eb01cc2c9348af293dbc7b1983a91e958 Mon Sep 17 00:00:00 2001 From: Liang He Date: Wed, 15 Jun 2022 17:48:07 +0800 Subject: [PATCH 1223/1842] drivers: cpufreq: Add missing of_node_put() in qoriq-cpufreq.c [ Upstream commit 4ff5a9b6d95f3524bf6d27147df497eb21968300 ] In qoriq_cpufreq_probe(), of_find_matching_node() will return a node pointer with refcount incremented. We should use of_node_put() when it is not used anymore. 
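The pattern being fixed here is generic: a lookup that returns a counted reference must be paired with a put on every exit path, including an early-return path that only uses the result as a yes/no answer. A minimal userspace illustration of the leak and the fix (hypothetical node_find/node_put helpers, not the OF API):

#include <stdio.h>

struct node {
	int refcount;
};

/* One node kept alive by its own base reference, like an entry in a tree. */
static struct node blacklist_node = { .refcount = 1 };

/* A successful lookup returns the node with its refcount already raised. */
static struct node *node_find(int present)
{
	if (!present)
		return NULL;
	blacklist_node.refcount++;
	return &blacklist_node;
}

static void node_put(struct node *n)
{
	if (n)
		n->refcount--;
}

static int probe(int blacklisted)
{
	struct node *np = node_find(blacklisted);

	if (np) {
		node_put(np);	/* without this put, the reference leaks */
		fprintf(stderr, "disabling due to erratum\n");
		return -1;
	}
	return 0;
}

int main(void)
{
	probe(1);
	/* back to the base reference only if probe() balanced its lookup */
	printf("refcount after probe: %d\n", blacklist_node.refcount);
	return 0;
}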
Fixes: 157f527639da ("cpufreq: qoriq: convert to a platform driver") [ Viresh: Fixed Author's name in commit log ] Signed-off-by: Liang He Signed-off-by: Viresh Kumar Signed-off-by: Sasha Levin --- drivers/cpufreq/qoriq-cpufreq.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/cpufreq/qoriq-cpufreq.c b/drivers/cpufreq/qoriq-cpufreq.c index 6b6b20da2bcf..573b417e1483 100644 --- a/drivers/cpufreq/qoriq-cpufreq.c +++ b/drivers/cpufreq/qoriq-cpufreq.c @@ -275,6 +275,7 @@ static int qoriq_cpufreq_probe(struct platform_device *pdev) np = of_find_matching_node(NULL, qoriq_cpufreq_blacklist); if (np) { + of_node_put(np); dev_info(&pdev->dev, "Disabling due to erratum A-008083"); return -ENODEV; } From 25055da22a0f8f8323a7486049c6cc7a17381f2d Mon Sep 17 00:00:00 2001 From: kernel test robot Date: Sat, 27 Mar 2021 10:29:32 +0100 Subject: [PATCH 1224/1842] sit: use min [ Upstream commit 284fda1eff8a8b27d2cafd7dc8fb423d13720f21 ] Opportunity for min() Generated by: scripts/coccinelle/misc/minmax.cocci CC: Denis Efremov Reported-by: kernel test robot Signed-off-by: kernel test robot Reviewed-by: David Ahern Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv6/sit.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c index bab0e99f6e35..0be82586ce32 100644 --- a/net/ipv6/sit.c +++ b/net/ipv6/sit.c @@ -323,7 +323,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ifreq *ifr) rcu_read_lock(); - ca = t->prl_count < cmax ? t->prl_count : cmax; + ca = min(t->prl_count, cmax); if (!kp) { /* We don't try hard to allocate much memory for From f72d410dbf8dbfb6b87b3d45a1cebde7a5de9b7b Mon Sep 17 00:00:00 2001 From: katrinzhou Date: Tue, 28 Jun 2022 11:50:30 +0800 Subject: [PATCH 1225/1842] ipv6/sit: fix ipip6_tunnel_get_prl return value [ Upstream commit adabdd8f6acabc0c3fdbba2e7f5a2edd9c5ef22d ] When kcalloc fails, ipip6_tunnel_get_prl() should return -ENOMEM. Move the position of label "out" to return correctly. 
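In other words, the early allocation failure has to return -ENOMEM straight away instead of falling through the code that unlocks and copies out a table it never built. A rough userspace model of that control flow (calloc standing in for kcalloc, a simulate_oom flag instead of real memory pressure):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int get_entries(unsigned int count, int simulate_oom, int *out_len)
{
	int *table;
	int ret = 0;

	table = simulate_oom ? NULL : calloc(count, sizeof(*table));
	if (!table) {
		ret = -ENOMEM;
		goto out;	/* nothing below applies to this path */
	}

	for (unsigned int i = 0; i < count; i++)
		table[i] = (int)i;

	*out_len = (int)(count * sizeof(*table));
	free(table);
out:
	return ret;
}

int main(void)
{
	int len = 0;
	int ret;

	ret = get_entries(4, 0, &len);
	printf("ok path:  ret=%d len=%d\n", ret, len);
	ret = get_entries(4, 1, &len);
	printf("oom path: ret=%d len=%d\n", ret, len);
	return 0;
}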
Addresses-Coverity: ("Unused value") Fixes: 300aaeeaab5f ("[IPV6] SIT: Add SIOCGETPRL ioctl to get/dump PRL.") Signed-off-by: katrinzhou Reviewed-by: Eric Dumazet Reviewed-by: David Ahern Link: https://lore.kernel.org/r/20220628035030.1039171-1-zys.zljxml@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/ipv6/sit.c | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c index 0be82586ce32..3c92e8cacbba 100644 --- a/net/ipv6/sit.c +++ b/net/ipv6/sit.c @@ -321,8 +321,6 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ifreq *ifr) kcalloc(cmax, sizeof(*kp), GFP_KERNEL | __GFP_NOWARN) : NULL; - rcu_read_lock(); - ca = min(t->prl_count, cmax); if (!kp) { @@ -338,7 +336,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ifreq *ifr) } } - c = 0; + rcu_read_lock(); for_each_prl_rcu(t->prl) { if (c >= cmax) break; @@ -350,7 +348,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ifreq *ifr) if (kprl.addr != htonl(INADDR_ANY)) break; } -out: + rcu_read_unlock(); len = sizeof(*kp) * c; @@ -359,7 +357,7 @@ static int ipip6_tunnel_get_prl(struct net_device *dev, struct ifreq *ifr) ret = -EFAULT; kfree(kp); - +out: return ret; } From 14894cf6925c342047d3c63ca9c815a08b4685dc Mon Sep 17 00:00:00 2001 From: Yang Yingliang Date: Fri, 1 Jul 2022 15:41:53 +0800 Subject: [PATCH 1226/1842] hwmon: (ibmaem) don't call platform_device_del() if platform_device_add() fails [ Upstream commit d0e51022a025ca5350fafb8e413a6fe5d4baf833 ] If platform_device_add() fails, it no need to call platform_device_del(), split platform_device_unregister() into platform_device_del/put(), so platform_device_put() can be called separately. Fixes: 8808a793f052 ("ibmaem: new driver for power/energy/temp meters in IBM System X hardware") Reported-by: Hulk Robot Signed-off-by: Yang Yingliang Link: https://lore.kernel.org/r/20220701074153.4021556-1-yangyingliang@huawei.com Signed-off-by: Guenter Roeck Signed-off-by: Sasha Levin --- drivers/hwmon/ibmaem.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/hwmon/ibmaem.c b/drivers/hwmon/ibmaem.c index a4ec85207782..2e6d6a5cffa1 100644 --- a/drivers/hwmon/ibmaem.c +++ b/drivers/hwmon/ibmaem.c @@ -550,7 +550,7 @@ static int aem_init_aem1_inst(struct aem_ipmi_data *probe, u8 module_handle) res = platform_device_add(data->pdev); if (res) - goto ipmi_err; + goto dev_add_err; platform_set_drvdata(data->pdev, data); @@ -598,7 +598,9 @@ static int aem_init_aem1_inst(struct aem_ipmi_data *probe, u8 module_handle) ipmi_destroy_user(data->ipmi.user); ipmi_err: platform_set_drvdata(data->pdev, NULL); - platform_device_unregister(data->pdev); + platform_device_del(data->pdev); +dev_add_err: + platform_device_put(data->pdev); dev_err: ida_simple_remove(&aem_ida, data->id); id_err: @@ -690,7 +692,7 @@ static int aem_init_aem2_inst(struct aem_ipmi_data *probe, res = platform_device_add(data->pdev); if (res) - goto ipmi_err; + goto dev_add_err; platform_set_drvdata(data->pdev, data); @@ -738,7 +740,9 @@ static int aem_init_aem2_inst(struct aem_ipmi_data *probe, ipmi_destroy_user(data->ipmi.user); ipmi_err: platform_set_drvdata(data->pdev, NULL); - platform_device_unregister(data->pdev); + platform_device_del(data->pdev); +dev_add_err: + platform_device_put(data->pdev); dev_err: ida_simple_remove(&aem_ida, data->id); id_err: From 54cd556487d4972b188d3a8e178317ddd5340471 Mon Sep 17 00:00:00 2001 From: Shuah Khan Date: Thu, 9 Dec 2021 16:19:09 -0700 
Subject: [PATCH 1227/1842] selftests/rseq: remove ARRAY_SIZE define from individual tests commit 07ad4f7629d4802ff0d962b0ac23ea6445964e2a upstream. ARRAY_SIZE is defined in several selftests. Remove definitions from individual test files and include header file for the define instead. ARRAY_SIZE define is added in a separate patch to prepare for this change. Remove ARRAY_SIZE from rseq tests and pickup the one defined in kselftest.h. Signed-off-by: Shuah Khan Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/basic_percpu_ops_test.c | 3 +-- tools/testing/selftests/rseq/rseq.c | 3 +-- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/rseq/basic_percpu_ops_test.c b/tools/testing/selftests/rseq/basic_percpu_ops_test.c index eb3f6db36d36..b953a52ff706 100644 --- a/tools/testing/selftests/rseq/basic_percpu_ops_test.c +++ b/tools/testing/selftests/rseq/basic_percpu_ops_test.c @@ -9,10 +9,9 @@ #include #include +#include "../kselftest.h" #include "rseq.h" -#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0])) - struct percpu_lock_entry { intptr_t v; } __attribute__((aligned(128))); diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c index 7159eb777fd3..fb440dfca158 100644 --- a/tools/testing/selftests/rseq/rseq.c +++ b/tools/testing/selftests/rseq/rseq.c @@ -27,10 +27,9 @@ #include #include +#include "../kselftest.h" #include "rseq.h" -#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0])) - __thread volatile struct rseq __rseq_abi = { .cpu_id = RSEQ_CPU_ID_UNINITIALIZED, }; From 3e77ed4f905264da4988ed7835767f7d9f294487 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:39 -0500 Subject: [PATCH 1228/1842] selftests/rseq: introduce own copy of rseq uapi header commit 5c105d55a9dc9e01535116ccfc26e703168a574f upstream. The Linux kernel rseq uapi header has a broken layout for the rseq_cs.ptr field on 32-bit little endian architectures. The entire rseq_cs.ptr field is planned for removal, leaving only the 64-bit rseq_cs.ptr64 field available. Both glibc and librseq use their own copy of the Linux kernel uapi header, where they introduce proper union fields to access to the 32-bit low order bits of the rseq_cs pointer on 32-bit architectures. Introduce a copy of the Linux kernel uapi headers in the Linux kernel selftests. 
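The awkward part of that ABI, and the reason the copied header exists at all, is the rseq_cs pointer field: 32-bit user code has to store a 32-bit pointer into a slot the kernel reads as a 64-bit value, with the zero padding placed according to endianness. A small standalone program showing how such a union behaves (it mirrors the layout idea using the compiler's __BYTE_ORDER__ macros; it is not the uapi header itself):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A 64-bit slot that 32-bit code fills through an endianness-aware view. */
union cs_ptr {
	uint64_t ptr64;
	struct {
#if defined(__LP64__)
		uint64_t ptr;
#elif defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
		uint32_t padding;	/* high word first on 32-bit big endian */
		uint32_t ptr;
#else
		uint32_t ptr;		/* low word first on 32-bit little endian */
		uint32_t padding;
#endif
	} arch;
};

int main(void)
{
	union cs_ptr cs;
	int target = 42;

	memset(&cs, 0, sizeof(cs));
	cs.arch.ptr = (uintptr_t)&target;	/* what 32-bit code would store */

	/* On every layout above, the kernel-visible 64-bit value matches. */
	printf("ptr64 = 0x%016llx, &target = %p\n",
	       (unsigned long long)cs.ptr64, (void *)&target);
	return 0;
}

Keeping the padding word zeroed is what makes the full 64-bit value well defined when only the 32-bit half is ever written.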
Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-2-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq-abi.h | 151 ++++++++++++++++++++++++ tools/testing/selftests/rseq/rseq.c | 14 +-- tools/testing/selftests/rseq/rseq.h | 10 +- 3 files changed, 161 insertions(+), 14 deletions(-) create mode 100644 tools/testing/selftests/rseq/rseq-abi.h diff --git a/tools/testing/selftests/rseq/rseq-abi.h b/tools/testing/selftests/rseq/rseq-abi.h new file mode 100644 index 000000000000..a8c44d9af71f --- /dev/null +++ b/tools/testing/selftests/rseq/rseq-abi.h @@ -0,0 +1,151 @@ +/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ +#ifndef _RSEQ_ABI_H +#define _RSEQ_ABI_H + +/* + * rseq-abi.h + * + * Restartable sequences system call API + * + * Copyright (c) 2015-2022 Mathieu Desnoyers + */ + +#include +#include + +enum rseq_abi_cpu_id_state { + RSEQ_ABI_CPU_ID_UNINITIALIZED = -1, + RSEQ_ABI_CPU_ID_REGISTRATION_FAILED = -2, +}; + +enum rseq_abi_flags { + RSEQ_ABI_FLAG_UNREGISTER = (1 << 0), +}; + +enum rseq_abi_cs_flags_bit { + RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT = 0, + RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT = 1, + RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT = 2, +}; + +enum rseq_abi_cs_flags { + RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT = + (1U << RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT), + RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL = + (1U << RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT), + RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE = + (1U << RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT), +}; + +/* + * struct rseq_abi_cs is aligned on 4 * 8 bytes to ensure it is always + * contained within a single cache-line. It is usually declared as + * link-time constant data. + */ +struct rseq_abi_cs { + /* Version of this structure. */ + __u32 version; + /* enum rseq_abi_cs_flags */ + __u32 flags; + __u64 start_ip; + /* Offset from start_ip. */ + __u64 post_commit_offset; + __u64 abort_ip; +} __attribute__((aligned(4 * sizeof(__u64)))); + +/* + * struct rseq_abi is aligned on 4 * 8 bytes to ensure it is always + * contained within a single cache-line. + * + * A single struct rseq_abi per thread is allowed. + */ +struct rseq_abi { + /* + * Restartable sequences cpu_id_start field. Updated by the + * kernel. Read by user-space with single-copy atomicity + * semantics. This field should only be read by the thread which + * registered this data structure. Aligned on 32-bit. Always + * contains a value in the range of possible CPUs, although the + * value may not be the actual current CPU (e.g. if rseq is not + * initialized). This CPU number value should always be compared + * against the value of the cpu_id field before performing a rseq + * commit or returning a value read from a data structure indexed + * using the cpu_id_start value. + */ + __u32 cpu_id_start; + /* + * Restartable sequences cpu_id field. Updated by the kernel. + * Read by user-space with single-copy atomicity semantics. This + * field should only be read by the thread which registered this + * data structure. Aligned on 32-bit. Values + * RSEQ_CPU_ID_UNINITIALIZED and RSEQ_CPU_ID_REGISTRATION_FAILED + * have a special semantic: the former means "rseq uninitialized", + * and latter means "rseq initialization failed". 
This value is + * meant to be read within rseq critical sections and compared + * with the cpu_id_start value previously read, before performing + * the commit instruction, or read and compared with the + * cpu_id_start value before returning a value loaded from a data + * structure indexed using the cpu_id_start value. + */ + __u32 cpu_id; + /* + * Restartable sequences rseq_cs field. + * + * Contains NULL when no critical section is active for the current + * thread, or holds a pointer to the currently active struct rseq_cs. + * + * Updated by user-space, which sets the address of the currently + * active rseq_cs at the beginning of assembly instruction sequence + * block, and set to NULL by the kernel when it restarts an assembly + * instruction sequence block, as well as when the kernel detects that + * it is preempting or delivering a signal outside of the range + * targeted by the rseq_cs. Also needs to be set to NULL by user-space + * before reclaiming memory that contains the targeted struct rseq_cs. + * + * Read and set by the kernel. Set by user-space with single-copy + * atomicity semantics. This field should only be updated by the + * thread which registered this data structure. Aligned on 64-bit. + */ + union { + __u64 ptr64; + + /* + * The "arch" field provides architecture accessor for + * the ptr field based on architecture pointer size and + * endianness. + */ + struct { +#ifdef __LP64__ + __u64 ptr; +#elif defined(__BYTE_ORDER) ? (__BYTE_ORDER == __BIG_ENDIAN) : defined(__BIG_ENDIAN) + __u32 padding; /* Initialized to zero. */ + __u32 ptr; +#else + __u32 ptr; + __u32 padding; /* Initialized to zero. */ +#endif + } arch; + } rseq_cs; + + /* + * Restartable sequences flags field. + * + * This field should only be updated by the thread which + * registered this data structure. Read by the kernel. + * Mainly used for single-stepping through rseq critical sections + * with debuggers. + * + * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT + * Inhibit instruction sequence block restart on preemption + * for this thread. + * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL + * Inhibit instruction sequence block restart on signal + * delivery for this thread. + * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE + * Inhibit instruction sequence block restart on migration for + * this thread. 
+ */ + __u32 flags; +} __attribute__((aligned(4 * sizeof(__u64)))); + +#endif /* _RSEQ_ABI_H */ diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c index fb440dfca158..bfe1b2692ffc 100644 --- a/tools/testing/selftests/rseq/rseq.c +++ b/tools/testing/selftests/rseq/rseq.c @@ -30,8 +30,8 @@ #include "../kselftest.h" #include "rseq.h" -__thread volatile struct rseq __rseq_abi = { - .cpu_id = RSEQ_CPU_ID_UNINITIALIZED, +__thread volatile struct rseq_abi __rseq_abi = { + .cpu_id = RSEQ_ABI_CPU_ID_UNINITIALIZED, }; /* @@ -66,7 +66,7 @@ static void signal_restore(sigset_t oldset) abort(); } -static int sys_rseq(volatile struct rseq *rseq_abi, uint32_t rseq_len, +static int sys_rseq(volatile struct rseq_abi *rseq_abi, uint32_t rseq_len, int flags, uint32_t sig) { return syscall(__NR_rseq, rseq_abi, rseq_len, flags, sig); @@ -86,13 +86,13 @@ int rseq_register_current_thread(void) } if (__rseq_refcount++) goto end; - rc = sys_rseq(&__rseq_abi, sizeof(struct rseq), 0, RSEQ_SIG); + rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), 0, RSEQ_SIG); if (!rc) { assert(rseq_current_cpu_raw() >= 0); goto end; } if (errno != EBUSY) - __rseq_abi.cpu_id = RSEQ_CPU_ID_REGISTRATION_FAILED; + __rseq_abi.cpu_id = RSEQ_ABI_CPU_ID_REGISTRATION_FAILED; ret = -1; __rseq_refcount--; end: @@ -114,8 +114,8 @@ int rseq_unregister_current_thread(void) } if (--__rseq_refcount) goto end; - rc = sys_rseq(&__rseq_abi, sizeof(struct rseq), - RSEQ_FLAG_UNREGISTER, RSEQ_SIG); + rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), + RSEQ_ABI_FLAG_UNREGISTER, RSEQ_SIG); if (!rc) goto end; __rseq_refcount = 1; diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h index 3f63eb362b92..cb6bbc53b586 100644 --- a/tools/testing/selftests/rseq/rseq.h +++ b/tools/testing/selftests/rseq/rseq.h @@ -16,7 +16,7 @@ #include #include #include -#include +#include "rseq-abi.h" /* * Empty code injection macros, override when testing. @@ -43,7 +43,7 @@ #define RSEQ_INJECT_FAILED #endif -extern __thread volatile struct rseq __rseq_abi; +extern __thread volatile struct rseq_abi __rseq_abi; extern int __rseq_handled; #define rseq_likely(x) __builtin_expect(!!(x), 1) @@ -139,11 +139,7 @@ static inline uint32_t rseq_current_cpu(void) static inline void rseq_clear_rseq_cs(void) { -#ifdef __LP64__ - __rseq_abi.rseq_cs.ptr = 0; -#else - __rseq_abi.rseq_cs.ptr.ptr32 = 0; -#endif + __rseq_abi.rseq_cs.arch.ptr = 0; } /* From 68e1232c6e931d671db9e949fafbe651b511e8a4 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:41 -0500 Subject: [PATCH 1229/1842] selftests/rseq: Remove useless assignment to cpu variable commit 930378d056eac2c96407b02aafe4938d0ac9cc37 upstream. 
Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-4-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/param_test.c | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c index 384589095864..cff01f04f0df 100644 --- a/tools/testing/selftests/rseq/param_test.c +++ b/tools/testing/selftests/rseq/param_test.c @@ -368,9 +368,7 @@ void *test_percpu_spinlock_thread(void *arg) abort(); reps = thread_data->reps; for (i = 0; i < reps; i++) { - int cpu = rseq_cpu_start(); - - cpu = rseq_this_cpu_lock(&data->lock); + int cpu = rseq_this_cpu_lock(&data->lock); data->c[cpu].count++; rseq_percpu_unlock(&data->lock, cpu); #ifndef BENCHMARK From 3c2a416c80cccba30c39029e6a051823b034f211 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:42 -0500 Subject: [PATCH 1230/1842] selftests/rseq: Remove volatile from __rseq_abi commit 94b80a19ebfe347a01301d750040a61c38200e2b upstream. This is done in preparation for the selftest uplift to become compatible with glibc-2.35. All accesses to the __rseq_abi fields are volatile, but remove the volatile from the TLS variable declaration, otherwise we are stuck with volatile for the upcoming rseq_get_abi() helper. Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-5-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq.c | 4 ++-- tools/testing/selftests/rseq/rseq.h | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c index bfe1b2692ffc..1f905b60728a 100644 --- a/tools/testing/selftests/rseq/rseq.c +++ b/tools/testing/selftests/rseq/rseq.c @@ -30,7 +30,7 @@ #include "../kselftest.h" #include "rseq.h" -__thread volatile struct rseq_abi __rseq_abi = { +__thread struct rseq_abi __rseq_abi = { .cpu_id = RSEQ_ABI_CPU_ID_UNINITIALIZED, }; @@ -92,7 +92,7 @@ int rseq_register_current_thread(void) goto end; } if (errno != EBUSY) - __rseq_abi.cpu_id = RSEQ_ABI_CPU_ID_REGISTRATION_FAILED; + RSEQ_WRITE_ONCE(__rseq_abi.cpu_id, RSEQ_ABI_CPU_ID_REGISTRATION_FAILED); ret = -1; __rseq_refcount--; end: diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h index cb6bbc53b586..d580f8e9c001 100644 --- a/tools/testing/selftests/rseq/rseq.h +++ b/tools/testing/selftests/rseq/rseq.h @@ -43,7 +43,7 @@ #define RSEQ_INJECT_FAILED #endif -extern __thread volatile struct rseq_abi __rseq_abi; +extern __thread struct rseq_abi __rseq_abi; extern int __rseq_handled; #define rseq_likely(x) __builtin_expect(!!(x), 1) @@ -139,7 +139,7 @@ static inline uint32_t rseq_current_cpu(void) static inline void rseq_clear_rseq_cs(void) { - __rseq_abi.rseq_cs.arch.ptr = 0; + RSEQ_WRITE_ONCE(__rseq_abi.rseq_cs.arch.ptr, 0); } /* From 4a78bf83e226142ceceba882a577ade96502e2ba Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:43 -0500 Subject: [PATCH 1231/1842] selftests/rseq: Introduce rseq_get_abi() helper commit e546cd48ccc456074ddb8920732aef4af65d7ca7 upstream. This is done in preparation for the selftest uplift to become compatible with glibc-2.35. 
glibc-2.35 exposes the rseq per-thread data in the TCB, accessible at an offset from the thread pointer, rather than through an actual Thread-Local Storage (TLS) variable, as the kernel selftests initially expected. Introduce a rseq_get_abi() helper, initially using the __rseq_abi TLS variable, in preparation for changing this userspace ABI for one which is compatible with glibc-2.35. Note that the __rseq_abi TLS and glibc-2.35's ABI for per-thread data cannot actively coexist in a process, because the kernel supports only a single rseq registration per thread. Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-6-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq-arm.h | 32 +++++++++++------------ tools/testing/selftests/rseq/rseq-arm64.h | 32 +++++++++++------------ tools/testing/selftests/rseq/rseq-mips.h | 32 +++++++++++------------ tools/testing/selftests/rseq/rseq-ppc.h | 32 +++++++++++------------ tools/testing/selftests/rseq/rseq-s390.h | 24 ++++++++--------- tools/testing/selftests/rseq/rseq-x86.h | 30 ++++++++++----------- tools/testing/selftests/rseq/rseq.h | 11 +++++--- 7 files changed, 99 insertions(+), 94 deletions(-) diff --git a/tools/testing/selftests/rseq/rseq-arm.h b/tools/testing/selftests/rseq/rseq-arm.h index 5943c816c07c..6716540e17c6 100644 --- a/tools/testing/selftests/rseq/rseq-arm.h +++ b/tools/testing/selftests/rseq/rseq-arm.h @@ -185,8 +185,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -255,8 +255,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -316,8 +316,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "m" (*v), [count] "Ir" (count) RSEQ_INJECT_INPUT @@ -381,8 +381,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -457,8 +457,8 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -537,8 +537,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, "5:\n\t" 
: /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -657,8 +657,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, "8:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expect] "r" (expect), @@ -782,8 +782,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, "8:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expect] "r" (expect), diff --git a/tools/testing/selftests/rseq/rseq-arm64.h b/tools/testing/selftests/rseq/rseq-arm64.h index 200dae9e4208..b9d9b3aa6e9b 100644 --- a/tools/testing/selftests/rseq/rseq-arm64.h +++ b/tools/testing/selftests/rseq/rseq-arm64.h @@ -230,8 +230,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "Qo" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "Qo" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -287,8 +287,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "Qo" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "Qo" (*v), [expectnot] "r" (expectnot), [load] "Qo" (*load), @@ -337,8 +337,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "Qo" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "Qo" (*v), [count] "r" (count) RSEQ_INJECT_INPUT @@ -388,8 +388,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "Qo" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [expect] "r" (expect), [v] "Qo" (*v), [newv] "r" (newv), @@ -447,8 +447,8 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "Qo" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [expect] "r" (expect), [v] "Qo" (*v), [newv] "r" (newv), @@ -508,8 +508,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not 
allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "Qo" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "Qo" (*v), [expect] "r" (expect), [v2] "Qo" (*v2), @@ -569,8 +569,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "Qo" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [expect] "r" (expect), [v] "Qo" (*v), [newv] "r" (newv), @@ -629,8 +629,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "Qo" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [expect] "r" (expect), [v] "Qo" (*v), [newv] "r" (newv), diff --git a/tools/testing/selftests/rseq/rseq-mips.h b/tools/testing/selftests/rseq/rseq-mips.h index e989e7c14b09..2b1f5bd95268 100644 --- a/tools/testing/selftests/rseq/rseq-mips.h +++ b/tools/testing/selftests/rseq/rseq-mips.h @@ -190,8 +190,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -258,8 +258,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -319,8 +319,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "m" (*v), [count] "Ir" (count) RSEQ_INJECT_INPUT @@ -382,8 +382,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -456,8 +456,8 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -532,8 +532,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, "5:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + 
[current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -649,8 +649,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, "8:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expect] "r" (expect), @@ -771,8 +771,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, "8:\n\t" : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expect] "r" (expect), diff --git a/tools/testing/selftests/rseq/rseq-ppc.h b/tools/testing/selftests/rseq/rseq-ppc.h index 76be90196fe4..2e6b7572ba08 100644 --- a/tools/testing/selftests/rseq/rseq-ppc.h +++ b/tools/testing/selftests/rseq/rseq-ppc.h @@ -235,8 +235,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -301,8 +301,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -359,8 +359,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [count] "r" (count) @@ -419,8 +419,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -489,8 +489,8 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -560,8 +560,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" 
(rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -635,8 +635,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expect] "r" (expect), @@ -711,8 +711,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expect] "r" (expect), diff --git a/tools/testing/selftests/rseq/rseq-s390.h b/tools/testing/selftests/rseq/rseq-s390.h index 8ef94ad1cbb4..b906e044d2a3 100644 --- a/tools/testing/selftests/rseq/rseq-s390.h +++ b/tools/testing/selftests/rseq/rseq-s390.h @@ -165,8 +165,8 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -233,8 +233,8 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -288,8 +288,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [count] "r" (count) @@ -347,8 +347,8 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -426,8 +426,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" (rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -534,8 +534,8 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, #endif : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [current_cpu_id] "m" (__rseq_abi.cpu_id), - [rseq_cs] "m" (__rseq_abi.rseq_cs), + [current_cpu_id] "m" 
(rseq_get_abi()->cpu_id), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs), /* final store input */ [v] "m" (*v), [expect] "r" (expect), diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h index 640411518e46..1d9fa0516e53 100644 --- a/tools/testing/selftests/rseq/rseq-x86.h +++ b/tools/testing/selftests/rseq/rseq-x86.h @@ -141,7 +141,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -207,7 +207,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -258,7 +258,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* final store input */ [v] "m" (*v), [count] "er" (count) @@ -314,7 +314,7 @@ int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* final store input */ [ptr] "m" (*ptr), [off] "er" (off), @@ -372,7 +372,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -449,7 +449,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -555,7 +555,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, #endif : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* final store input */ [v] "m" (*v), [expect] "r" (expect), @@ -719,7 +719,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -785,7 +785,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -836,7 +836,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* final store input */ [v] "m" (*v), [count] "ir" (count) @@ -894,7 +894,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, 
RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* try store input */ [v2] "m" (*v2), [newv2] "m" (newv2), @@ -962,7 +962,7 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -1032,7 +1032,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -1142,7 +1142,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, #endif : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* final store input */ [v] "m" (*v), [expect] "m" (expect), @@ -1255,7 +1255,7 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, #endif : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (&__rseq_abi), + [rseq_abi] "r" (rseq_get_abi()), /* final store input */ [v] "m" (*v), [expect] "m" (expect), diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h index d580f8e9c001..ca668a2af172 100644 --- a/tools/testing/selftests/rseq/rseq.h +++ b/tools/testing/selftests/rseq/rseq.h @@ -46,6 +46,11 @@ extern __thread struct rseq_abi __rseq_abi; extern int __rseq_handled; +static inline struct rseq_abi *rseq_get_abi(void) +{ + return &__rseq_abi; +} + #define rseq_likely(x) __builtin_expect(!!(x), 1) #define rseq_unlikely(x) __builtin_expect(!!(x), 0) #define rseq_barrier() __asm__ __volatile__("" : : : "memory") @@ -108,7 +113,7 @@ int32_t rseq_fallback_current_cpu(void); */ static inline int32_t rseq_current_cpu_raw(void) { - return RSEQ_ACCESS_ONCE(__rseq_abi.cpu_id); + return RSEQ_ACCESS_ONCE(rseq_get_abi()->cpu_id); } /* @@ -124,7 +129,7 @@ static inline int32_t rseq_current_cpu_raw(void) */ static inline uint32_t rseq_cpu_start(void) { - return RSEQ_ACCESS_ONCE(__rseq_abi.cpu_id_start); + return RSEQ_ACCESS_ONCE(rseq_get_abi()->cpu_id_start); } static inline uint32_t rseq_current_cpu(void) @@ -139,7 +144,7 @@ static inline uint32_t rseq_current_cpu(void) static inline void rseq_clear_rseq_cs(void) { - RSEQ_WRITE_ONCE(__rseq_abi.rseq_cs.arch.ptr, 0); + RSEQ_WRITE_ONCE(rseq_get_abi()->rseq_cs.arch.ptr, 0); } /* From c79e564535c036a16f471d750887afb47b27d48a Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:44 -0500 Subject: [PATCH 1232/1842] selftests/rseq: Introduce thread pointer getters commit 886ddfba933f5ce9d76c278165d834d114ba4ffc upstream. This is done in preparation for the selftest uplift to become compatible with glibc-2.35. glibc-2.35 exposes the rseq per-thread data in the TCB, accessible at an offset from the thread pointer. The toolchains do not implement accessing the thread pointer on all architectures. Provide thread pointer getters for ppc and x86 which lack (or lacked until recently) toolchain support. 
Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-7-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- .../rseq/rseq-generic-thread-pointer.h | 25 ++++++++++++ .../selftests/rseq/rseq-ppc-thread-pointer.h | 30 ++++++++++++++ .../selftests/rseq/rseq-thread-pointer.h | 19 +++++++++ .../selftests/rseq/rseq-x86-thread-pointer.h | 40 +++++++++++++++++++ 4 files changed, 114 insertions(+) create mode 100644 tools/testing/selftests/rseq/rseq-generic-thread-pointer.h create mode 100644 tools/testing/selftests/rseq/rseq-ppc-thread-pointer.h create mode 100644 tools/testing/selftests/rseq/rseq-thread-pointer.h create mode 100644 tools/testing/selftests/rseq/rseq-x86-thread-pointer.h diff --git a/tools/testing/selftests/rseq/rseq-generic-thread-pointer.h b/tools/testing/selftests/rseq/rseq-generic-thread-pointer.h new file mode 100644 index 000000000000..38c584661571 --- /dev/null +++ b/tools/testing/selftests/rseq/rseq-generic-thread-pointer.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */ +/* + * rseq-generic-thread-pointer.h + * + * (C) Copyright 2021 - Mathieu Desnoyers + */ + +#ifndef _RSEQ_GENERIC_THREAD_POINTER +#define _RSEQ_GENERIC_THREAD_POINTER + +#ifdef __cplusplus +extern "C" { +#endif + +/* Use gcc builtin thread pointer. */ +static inline void *rseq_thread_pointer(void) +{ + return __builtin_thread_pointer(); +} + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/tools/testing/selftests/rseq/rseq-ppc-thread-pointer.h b/tools/testing/selftests/rseq/rseq-ppc-thread-pointer.h new file mode 100644 index 000000000000..263eee84fb76 --- /dev/null +++ b/tools/testing/selftests/rseq/rseq-ppc-thread-pointer.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */ +/* + * rseq-ppc-thread-pointer.h + * + * (C) Copyright 2021 - Mathieu Desnoyers + */ + +#ifndef _RSEQ_PPC_THREAD_POINTER +#define _RSEQ_PPC_THREAD_POINTER + +#ifdef __cplusplus +extern "C" { +#endif + +static inline void *rseq_thread_pointer(void) +{ +#ifdef __powerpc64__ + register void *__result asm ("r13"); +#else + register void *__result asm ("r2"); +#endif + asm ("" : "=r" (__result)); + return __result; +} + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/tools/testing/selftests/rseq/rseq-thread-pointer.h b/tools/testing/selftests/rseq/rseq-thread-pointer.h new file mode 100644 index 000000000000..977c25d758b2 --- /dev/null +++ b/tools/testing/selftests/rseq/rseq-thread-pointer.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */ +/* + * rseq-thread-pointer.h + * + * (C) Copyright 2021 - Mathieu Desnoyers + */ + +#ifndef _RSEQ_THREAD_POINTER +#define _RSEQ_THREAD_POINTER + +#if defined(__x86_64__) || defined(__i386__) +#include "rseq-x86-thread-pointer.h" +#elif defined(__PPC__) +#include "rseq-ppc-thread-pointer.h" +#else +#include "rseq-generic-thread-pointer.h" +#endif + +#endif diff --git a/tools/testing/selftests/rseq/rseq-x86-thread-pointer.h b/tools/testing/selftests/rseq/rseq-x86-thread-pointer.h new file mode 100644 index 000000000000..d3133587d996 --- /dev/null +++ b/tools/testing/selftests/rseq/rseq-x86-thread-pointer.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */ +/* + * rseq-x86-thread-pointer.h + * + * (C) Copyright 2021 - Mathieu Desnoyers + */ + +#ifndef _RSEQ_X86_THREAD_POINTER +#define _RSEQ_X86_THREAD_POINTER + +#include + +#ifdef __cplusplus +extern "C" { +#endif + +#if __GNUC_PREREQ (11, 1) +static inline void 
*rseq_thread_pointer(void) +{ + return __builtin_thread_pointer(); +} +#else +static inline void *rseq_thread_pointer(void) +{ + void *__result; + +# ifdef __x86_64__ + __asm__ ("mov %%fs:0, %0" : "=r" (__result)); +# else + __asm__ ("mov %%gs:0, %0" : "=r" (__result)); +# endif + return __result; +} +#endif /* !GCC 11 */ + +#ifdef __cplusplus +} +#endif + +#endif From e85fdae4df7262e73689c809513d8eb3094c6320 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:45 -0500 Subject: [PATCH 1233/1842] selftests/rseq: Uplift rseq selftests for compatibility with glibc-2.35 commit 233e667e1ae3e348686bd9dd0172e62a09d852e1 upstream. glibc-2.35 (upcoming release date 2022-02-01) exposes the rseq per-thread data in the TCB, accessible at an offset from the thread pointer, rather than through an actual Thread-Local Storage (TLS) variable, as the Linux kernel selftests initially expected. The __rseq_abi TLS and glibc-2.35's ABI for per-thread data cannot actively coexist in a process, because the kernel supports only a single rseq registration per thread. Here is the scheme introduced to ensure selftests can work both with an older glibc and with glibc-2.35+: - librseq exposes its own "rseq_offset, rseq_size, rseq_flags" ABI. - librseq queries for glibc rseq ABI (__rseq_offset, __rseq_size, __rseq_flags) using dlsym() in a librseq library constructor. If those are found, copy their values into rseq_offset, rseq_size, and rseq_flags. - Else, if those glibc symbols are not found, handle rseq registration from librseq and use its own IE-model TLS to implement the rseq ABI per-thread storage. Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-8-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/Makefile | 2 +- tools/testing/selftests/rseq/rseq.c | 171 ++++++++++++-------------- tools/testing/selftests/rseq/rseq.h | 13 +- 3 files changed, 93 insertions(+), 93 deletions(-) diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile index 2af9d39a9716..215e1067f037 100644 --- a/tools/testing/selftests/rseq/Makefile +++ b/tools/testing/selftests/rseq/Makefile @@ -6,7 +6,7 @@ endif CFLAGS += -O2 -Wall -g -I./ -I../../../../usr/include/ -L$(OUTPUT) -Wl,-rpath=./ \ $(CLANG_FLAGS) -LDLIBS += -lpthread +LDLIBS += -lpthread -ldl # Own dependencies because we only want to build against 1st prerequisite, but # still track changes to header files and depend on shared object. diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c index 1f905b60728a..07ba0d463a96 100644 --- a/tools/testing/selftests/rseq/rseq.c +++ b/tools/testing/selftests/rseq/rseq.c @@ -26,103 +26,113 @@ #include #include #include +#include #include "../kselftest.h" #include "rseq.h" -__thread struct rseq_abi __rseq_abi = { +static const int *libc_rseq_offset_p; +static const unsigned int *libc_rseq_size_p; +static const unsigned int *libc_rseq_flags_p; + +/* Offset from the thread pointer to the rseq area. */ +int rseq_offset; + +/* Size of the registered rseq area. 0 if the registration was + unsuccessful. */ +unsigned int rseq_size = -1U; + +/* Flags used during rseq registration. */ +unsigned int rseq_flags; + +static int rseq_ownership; + +static +__thread struct rseq_abi __rseq_abi __attribute__((tls_model("initial-exec"))) = { .cpu_id = RSEQ_ABI_CPU_ID_UNINITIALIZED, }; -/* - * Shared with other libraries. 
This library may take rseq ownership if it is - * still 0 when executing the library constructor. Set to 1 by library - * constructor when handling rseq. Set to 0 in destructor if handling rseq. - */ -int __rseq_handled; - -/* Whether this library have ownership of rseq registration. */ -static int rseq_ownership; - -static __thread volatile uint32_t __rseq_refcount; - -static void signal_off_save(sigset_t *oldset) -{ - sigset_t set; - int ret; - - sigfillset(&set); - ret = pthread_sigmask(SIG_BLOCK, &set, oldset); - if (ret) - abort(); -} - -static void signal_restore(sigset_t oldset) -{ - int ret; - - ret = pthread_sigmask(SIG_SETMASK, &oldset, NULL); - if (ret) - abort(); -} - -static int sys_rseq(volatile struct rseq_abi *rseq_abi, uint32_t rseq_len, +static int sys_rseq(struct rseq_abi *rseq_abi, uint32_t rseq_len, int flags, uint32_t sig) { return syscall(__NR_rseq, rseq_abi, rseq_len, flags, sig); } +int rseq_available(void) +{ + int rc; + + rc = sys_rseq(NULL, 0, 0, 0); + if (rc != -1) + abort(); + switch (errno) { + case ENOSYS: + return 0; + case EINVAL: + return 1; + default: + abort(); + } +} + int rseq_register_current_thread(void) { - int rc, ret = 0; - sigset_t oldset; + int rc; - if (!rseq_ownership) + if (!rseq_ownership) { + /* Treat libc's ownership as a successful registration. */ return 0; - signal_off_save(&oldset); - if (__rseq_refcount == UINT_MAX) { - ret = -1; - goto end; } - if (__rseq_refcount++) - goto end; rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), 0, RSEQ_SIG); - if (!rc) { - assert(rseq_current_cpu_raw() >= 0); - goto end; - } - if (errno != EBUSY) - RSEQ_WRITE_ONCE(__rseq_abi.cpu_id, RSEQ_ABI_CPU_ID_REGISTRATION_FAILED); - ret = -1; - __rseq_refcount--; -end: - signal_restore(oldset); - return ret; + if (rc) + return -1; + assert(rseq_current_cpu_raw() >= 0); + return 0; } int rseq_unregister_current_thread(void) { - int rc, ret = 0; - sigset_t oldset; + int rc; - if (!rseq_ownership) + if (!rseq_ownership) { + /* Treat libc's ownership as a successful unregistration. */ return 0; - signal_off_save(&oldset); - if (!__rseq_refcount) { - ret = -1; - goto end; } - if (--__rseq_refcount) - goto end; - rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), - RSEQ_ABI_FLAG_UNREGISTER, RSEQ_SIG); - if (!rc) - goto end; - __rseq_refcount = 1; - ret = -1; -end: - signal_restore(oldset); - return ret; + rc = sys_rseq(&__rseq_abi, sizeof(struct rseq_abi), RSEQ_ABI_FLAG_UNREGISTER, RSEQ_SIG); + if (rc) + return -1; + return 0; +} + +static __attribute__((constructor)) +void rseq_init(void) +{ + libc_rseq_offset_p = dlsym(RTLD_NEXT, "__rseq_offset"); + libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size"); + libc_rseq_flags_p = dlsym(RTLD_NEXT, "__rseq_flags"); + if (libc_rseq_size_p && libc_rseq_offset_p && libc_rseq_flags_p) { + /* rseq registration owned by glibc */ + rseq_offset = *libc_rseq_offset_p; + rseq_size = *libc_rseq_size_p; + rseq_flags = *libc_rseq_flags_p; + return; + } + if (!rseq_available()) + return; + rseq_ownership = 1; + rseq_offset = (void *)&__rseq_abi - rseq_thread_pointer(); + rseq_size = sizeof(struct rseq_abi); + rseq_flags = 0; +} + +static __attribute__((destructor)) +void rseq_exit(void) +{ + if (!rseq_ownership) + return; + rseq_offset = 0; + rseq_size = -1U; + rseq_ownership = 0; } int32_t rseq_fallback_current_cpu(void) @@ -136,20 +146,3 @@ int32_t rseq_fallback_current_cpu(void) } return cpu; } - -void __attribute__((constructor)) rseq_init(void) -{ - /* Check whether rseq is handled by another library. 
*/ - if (__rseq_handled) - return; - __rseq_handled = 1; - rseq_ownership = 1; -} - -void __attribute__((destructor)) rseq_fini(void) -{ - if (!rseq_ownership) - return; - __rseq_handled = 0; - rseq_ownership = 0; -} diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h index ca668a2af172..17531ccd3090 100644 --- a/tools/testing/selftests/rseq/rseq.h +++ b/tools/testing/selftests/rseq/rseq.h @@ -43,12 +43,19 @@ #define RSEQ_INJECT_FAILED #endif -extern __thread struct rseq_abi __rseq_abi; -extern int __rseq_handled; +#include "rseq-thread-pointer.h" + +/* Offset from the thread pointer to the rseq area. */ +extern int rseq_offset; +/* Size of the registered rseq area. 0 if the registration was + unsuccessful. */ +extern unsigned int rseq_size; +/* Flags used during rseq registration. */ +extern unsigned int rseq_flags; static inline struct rseq_abi *rseq_get_abi(void) { - return &__rseq_abi; + return (struct rseq_abi *) ((uintptr_t) rseq_thread_pointer() + rseq_offset); } #define rseq_likely(x) __builtin_expect(!!(x), 1) From d4f631ea2dd681e3b093e9e0a8bce508b9bb7336 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:46 -0500 Subject: [PATCH 1234/1842] selftests/rseq: Fix ppc32: wrong rseq_cs 32-bit field pointer on big endian commit 24d1136a29da5953de5c0cbc6c83eb62a1e0bf14 upstream. ppc32 incorrectly uses padding as rseq_cs pointer field. Fix this by using the rseq_cs.arch.ptr field. Use this field across all architectures. Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-9-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq-arm.h | 16 ++++++++-------- tools/testing/selftests/rseq/rseq-arm64.h | 16 ++++++++-------- tools/testing/selftests/rseq/rseq-mips.h | 16 ++++++++-------- tools/testing/selftests/rseq/rseq-ppc.h | 16 ++++++++-------- tools/testing/selftests/rseq/rseq-s390.h | 12 ++++++------ 5 files changed, 38 insertions(+), 38 deletions(-) diff --git a/tools/testing/selftests/rseq/rseq-arm.h b/tools/testing/selftests/rseq/rseq-arm.h index 6716540e17c6..5f567b3b40f2 100644 --- a/tools/testing/selftests/rseq/rseq-arm.h +++ b/tools/testing/selftests/rseq/rseq-arm.h @@ -186,7 +186,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -256,7 +256,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -317,7 +317,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "m" (*v), [count] "Ir" (count) RSEQ_INJECT_INPUT @@ -382,7 +382,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" 
(rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -458,7 +458,7 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -538,7 +538,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -658,7 +658,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expect] "r" (expect), @@ -783,7 +783,7 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expect] "r" (expect), diff --git a/tools/testing/selftests/rseq/rseq-arm64.h b/tools/testing/selftests/rseq/rseq-arm64.h index b9d9b3aa6e9b..d0f2b7feee94 100644 --- a/tools/testing/selftests/rseq/rseq-arm64.h +++ b/tools/testing/selftests/rseq/rseq-arm64.h @@ -231,7 +231,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "Qo" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -288,7 +288,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "Qo" (*v), [expectnot] "r" (expectnot), [load] "Qo" (*load), @@ -338,7 +338,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "Qo" (*v), [count] "r" (count) RSEQ_INJECT_INPUT @@ -389,7 +389,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [expect] "r" (expect), [v] "Qo" (*v), [newv] "r" (newv), @@ -448,7 +448,7 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [expect] "r" (expect), [v] "Qo" (*v), [newv] "r" (newv), @@ 
-509,7 +509,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "Qo" (*v), [expect] "r" (expect), [v2] "Qo" (*v2), @@ -570,7 +570,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [expect] "r" (expect), [v] "Qo" (*v), [newv] "r" (newv), @@ -630,7 +630,7 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "Qo" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [expect] "r" (expect), [v] "Qo" (*v), [newv] "r" (newv), diff --git a/tools/testing/selftests/rseq/rseq-mips.h b/tools/testing/selftests/rseq/rseq-mips.h index 2b1f5bd95268..6df54273825d 100644 --- a/tools/testing/selftests/rseq/rseq-mips.h +++ b/tools/testing/selftests/rseq/rseq-mips.h @@ -191,7 +191,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -259,7 +259,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -320,7 +320,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "m" (*v), [count] "Ir" (count) RSEQ_INJECT_INPUT @@ -383,7 +383,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -457,7 +457,7 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -533,7 +533,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -650,7 +650,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" 
(rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expect] "r" (expect), @@ -772,7 +772,7 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expect] "r" (expect), diff --git a/tools/testing/selftests/rseq/rseq-ppc.h b/tools/testing/selftests/rseq/rseq-ppc.h index 2e6b7572ba08..c4ba1375285d 100644 --- a/tools/testing/selftests/rseq/rseq-ppc.h +++ b/tools/testing/selftests/rseq/rseq-ppc.h @@ -236,7 +236,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -302,7 +302,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -360,7 +360,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [count] "r" (count) @@ -420,7 +420,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -490,7 +490,7 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -561,7 +561,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -636,7 +636,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expect] "r" (expect), @@ -712,7 +712,7 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ 
[v] "m" (*v), [expect] "r" (expect), diff --git a/tools/testing/selftests/rseq/rseq-s390.h b/tools/testing/selftests/rseq/rseq-s390.h index b906e044d2a3..9927021f8bd0 100644 --- a/tools/testing/selftests/rseq/rseq-s390.h +++ b/tools/testing/selftests/rseq/rseq-s390.h @@ -166,7 +166,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -234,7 +234,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -289,7 +289,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [count] "r" (count) @@ -348,7 +348,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -427,7 +427,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -535,7 +535,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), [current_cpu_id] "m" (rseq_get_abi()->cpu_id), - [rseq_cs] "m" (rseq_get_abi()->rseq_cs), + [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr), /* final store input */ [v] "m" (*v), [expect] "r" (expect), From dbc1f0ee6044d9ad9239c2530e9bd067a5021097 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:47 -0500 Subject: [PATCH 1235/1842] selftests/rseq: Fix ppc32 missing instruction selection "u" and "x" for load/store commit de6b52a21420a18dc8a36438d581efd1313d5fe3 upstream. 
Building the rseq basic test with gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12) Target: powerpc-linux-gnu leads to these errors: /tmp/ccieEWxU.s: Assembler messages: /tmp/ccieEWxU.s:118: Error: syntax error; found `,', expected `(' /tmp/ccieEWxU.s:118: Error: junk at end of line: `,8' /tmp/ccieEWxU.s:121: Error: syntax error; found `,', expected `(' /tmp/ccieEWxU.s:121: Error: junk at end of line: `,8' /tmp/ccieEWxU.s:626: Error: syntax error; found `,', expected `(' /tmp/ccieEWxU.s:626: Error: junk at end of line: `,8' /tmp/ccieEWxU.s:629: Error: syntax error; found `,', expected `(' /tmp/ccieEWxU.s:629: Error: junk at end of line: `,8' /tmp/ccieEWxU.s:735: Error: syntax error; found `,', expected `(' /tmp/ccieEWxU.s:735: Error: junk at end of line: `,8' /tmp/ccieEWxU.s:738: Error: syntax error; found `,', expected `(' /tmp/ccieEWxU.s:738: Error: junk at end of line: `,8' /tmp/ccieEWxU.s:741: Error: syntax error; found `,', expected `(' /tmp/ccieEWxU.s:741: Error: junk at end of line: `,8' Makefile:581: recipe for target 'basic_percpu_ops_test.o' failed Based on discussion with Linux powerpc maintainers and review of the use of the "m" operand in powerpc kernel code, add the missing %Un%Xn (where n is operand number) to the lwz, stw, ld, and std instructions when used with "m" operands. Using "WORD" to mean either a 32-bit or 64-bit type depending on the architecture is misleading. The term "WORD" really means a 32-bit type in both 32-bit and 64-bit powerpc assembler. The intent here is to wrap load/store to intptr_t into common macros for both 32-bit and 64-bit. Rename the macros with a RSEQ_ prefix, and use the terms "INT" for always 32-bit type, and "LONG" for architecture bitness-sized type. Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-10-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq-ppc.h | 55 +++++++++++++------------ 1 file changed, 28 insertions(+), 27 deletions(-) diff --git a/tools/testing/selftests/rseq/rseq-ppc.h b/tools/testing/selftests/rseq/rseq-ppc.h index c4ba1375285d..87befda47ba4 100644 --- a/tools/testing/selftests/rseq/rseq-ppc.h +++ b/tools/testing/selftests/rseq/rseq-ppc.h @@ -47,10 +47,13 @@ do { \ #ifdef __PPC64__ -#define STORE_WORD "std " -#define LOAD_WORD "ld " -#define LOADX_WORD "ldx " -#define CMP_WORD "cmpd " +#define RSEQ_STORE_LONG(arg) "std%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] " /* To memory ("m" constraint) */ +#define RSEQ_STORE_INT(arg) "stw%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] " /* To memory ("m" constraint) */ +#define RSEQ_LOAD_LONG(arg) "ld%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] " /* From memory ("m" constraint) */ +#define RSEQ_LOAD_INT(arg) "lwz%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] " /* From memory ("m" constraint) */ +#define RSEQ_LOADX_LONG "ldx " /* From base register ("b" constraint) */ +#define RSEQ_CMP_LONG "cmpd " +#define RSEQ_CMP_LONG_INT "cmpdi " #define __RSEQ_ASM_DEFINE_TABLE(label, version, flags, \ start_ip, post_commit_offset, abort_ip) \ @@ -89,10 +92,13 @@ do { \ #else /* #ifdef __PPC64__ */ -#define STORE_WORD "stw " -#define LOAD_WORD "lwz " -#define LOADX_WORD "lwzx " -#define CMP_WORD "cmpw " +#define RSEQ_STORE_LONG(arg) "stw%U[" __rseq_str(arg) "]%X[" __rseq_str(arg) "] " /* To memory ("m" constraint) */ +#define RSEQ_STORE_INT(arg) RSEQ_STORE_LONG(arg) /* To memory ("m" constraint) */ +#define RSEQ_LOAD_LONG(arg) "lwz%U[" __rseq_str(arg) 
"]%X[" __rseq_str(arg) "] " /* From memory ("m" constraint) */ +#define RSEQ_LOAD_INT(arg) RSEQ_LOAD_LONG(arg) /* From memory ("m" constraint) */ +#define RSEQ_LOADX_LONG "lwzx " /* From base register ("b" constraint) */ +#define RSEQ_CMP_LONG "cmpw " +#define RSEQ_CMP_LONG_INT "cmpwi " #define __RSEQ_ASM_DEFINE_TABLE(label, version, flags, \ start_ip, post_commit_offset, abort_ip) \ @@ -125,7 +131,7 @@ do { \ RSEQ_INJECT_ASM(1) \ "lis %%r17, (" __rseq_str(cs_label) ")@ha\n\t" \ "addi %%r17, %%r17, (" __rseq_str(cs_label) ")@l\n\t" \ - "stw %%r17, %[" __rseq_str(rseq_cs) "]\n\t" \ + RSEQ_STORE_INT(rseq_cs) "%%r17, %[" __rseq_str(rseq_cs) "]\n\t" \ __rseq_str(label) ":\n\t" #endif /* #ifdef __PPC64__ */ @@ -136,7 +142,7 @@ do { \ #define RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, label) \ RSEQ_INJECT_ASM(2) \ - "lwz %%r17, %[" __rseq_str(current_cpu_id) "]\n\t" \ + RSEQ_LOAD_INT(current_cpu_id) "%%r17, %[" __rseq_str(current_cpu_id) "]\n\t" \ "cmpw cr7, %[" __rseq_str(cpu_id) "], %%r17\n\t" \ "bne- cr7, " __rseq_str(label) "\n\t" @@ -153,25 +159,25 @@ do { \ * RSEQ_ASM_OP_* (else): doesn't have hard-code registers(unless cr7) */ #define RSEQ_ASM_OP_CMPEQ(var, expect, label) \ - LOAD_WORD "%%r17, %[" __rseq_str(var) "]\n\t" \ - CMP_WORD "cr7, %%r17, %[" __rseq_str(expect) "]\n\t" \ + RSEQ_LOAD_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t" \ + RSEQ_CMP_LONG "cr7, %%r17, %[" __rseq_str(expect) "]\n\t" \ "bne- cr7, " __rseq_str(label) "\n\t" #define RSEQ_ASM_OP_CMPNE(var, expectnot, label) \ - LOAD_WORD "%%r17, %[" __rseq_str(var) "]\n\t" \ - CMP_WORD "cr7, %%r17, %[" __rseq_str(expectnot) "]\n\t" \ + RSEQ_LOAD_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t" \ + RSEQ_CMP_LONG "cr7, %%r17, %[" __rseq_str(expectnot) "]\n\t" \ "beq- cr7, " __rseq_str(label) "\n\t" #define RSEQ_ASM_OP_STORE(value, var) \ - STORE_WORD "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t" + RSEQ_STORE_LONG(var) "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t" /* Load @var to r17 */ #define RSEQ_ASM_OP_R_LOAD(var) \ - LOAD_WORD "%%r17, %[" __rseq_str(var) "]\n\t" + RSEQ_LOAD_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t" /* Store r17 to @var */ #define RSEQ_ASM_OP_R_STORE(var) \ - STORE_WORD "%%r17, %[" __rseq_str(var) "]\n\t" + RSEQ_STORE_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t" /* Add @count to r17 */ #define RSEQ_ASM_OP_R_ADD(count) \ @@ -179,11 +185,11 @@ do { \ /* Load (r17 + voffp) to r17 */ #define RSEQ_ASM_OP_R_LOADX(voffp) \ - LOADX_WORD "%%r17, %[" __rseq_str(voffp) "], %%r17\n\t" + RSEQ_LOADX_LONG "%%r17, %[" __rseq_str(voffp) "], %%r17\n\t" /* TODO: implement a faster memcpy. 
*/ #define RSEQ_ASM_OP_R_MEMCPY() \ - "cmpdi %%r19, 0\n\t" \ + RSEQ_CMP_LONG_INT "%%r19, 0\n\t" \ "beq 333f\n\t" \ "addi %%r20, %%r20, -1\n\t" \ "addi %%r21, %%r21, -1\n\t" \ @@ -191,16 +197,16 @@ do { \ "lbzu %%r18, 1(%%r20)\n\t" \ "stbu %%r18, 1(%%r21)\n\t" \ "addi %%r19, %%r19, -1\n\t" \ - "cmpdi %%r19, 0\n\t" \ + RSEQ_CMP_LONG_INT "%%r19, 0\n\t" \ "bne 222b\n\t" \ "333:\n\t" \ #define RSEQ_ASM_OP_R_FINAL_STORE(var, post_commit_label) \ - STORE_WORD "%%r17, %[" __rseq_str(var) "]\n\t" \ + RSEQ_STORE_LONG(var) "%%r17, %[" __rseq_str(var) "]\n\t" \ __rseq_str(post_commit_label) ":\n\t" #define RSEQ_ASM_OP_FINAL_STORE(value, var, post_commit_label) \ - STORE_WORD "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t" \ + RSEQ_STORE_LONG(var) "%[" __rseq_str(value) "], %[" __rseq_str(var) "]\n\t" \ __rseq_str(post_commit_label) ":\n\t" static inline __attribute__((always_inline)) @@ -743,9 +749,4 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, #endif } -#undef STORE_WORD -#undef LOAD_WORD -#undef LOADX_WORD -#undef CMP_WORD - #endif /* !RSEQ_SKIP_FASTPATH */ From 35c6f5047ff3a7bab38c4405aca03c605b5e27dc Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:48 -0500 Subject: [PATCH 1236/1842] selftests/rseq: Fix ppc32 offsets by using long rather than off_t commit 26dc8a6d8e11552f3b797b5aafe01071ca32d692 upstream. The semantic of off_t is for file offsets. We mean to use it as an offset from a pointer. We really expect it to fit in a single register, and not use a 64-bit type on 32-bit architectures. Fix runtime issues on ppc32 where the offset is always 0 due to inconsistency between the argument type (off_t -> 64-bit) and type expected by the inline assembler (32-bit). Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-11-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/basic_percpu_ops_test.c | 2 +- tools/testing/selftests/rseq/param_test.c | 2 +- tools/testing/selftests/rseq/rseq-arm.h | 2 +- tools/testing/selftests/rseq/rseq-arm64.h | 2 +- tools/testing/selftests/rseq/rseq-mips.h | 2 +- tools/testing/selftests/rseq/rseq-ppc.h | 2 +- tools/testing/selftests/rseq/rseq-s390.h | 2 +- tools/testing/selftests/rseq/rseq-skip.h | 2 +- tools/testing/selftests/rseq/rseq-x86.h | 6 +++--- 9 files changed, 11 insertions(+), 11 deletions(-) diff --git a/tools/testing/selftests/rseq/basic_percpu_ops_test.c b/tools/testing/selftests/rseq/basic_percpu_ops_test.c index b953a52ff706..517756afc2a4 100644 --- a/tools/testing/selftests/rseq/basic_percpu_ops_test.c +++ b/tools/testing/selftests/rseq/basic_percpu_ops_test.c @@ -167,7 +167,7 @@ struct percpu_list_node *this_cpu_list_pop(struct percpu_list *list, for (;;) { struct percpu_list_node *head; intptr_t *targetptr, expectnot, *load; - off_t offset; + long offset; int ret, cpu; cpu = rseq_cpu_start(); diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c index cff01f04f0df..2656c9af4239 100644 --- a/tools/testing/selftests/rseq/param_test.c +++ b/tools/testing/selftests/rseq/param_test.c @@ -549,7 +549,7 @@ struct percpu_list_node *this_cpu_list_pop(struct percpu_list *list, for (;;) { struct percpu_list_node *head; intptr_t *targetptr, expectnot, *load; - off_t offset; + long offset; int ret; cpu = rseq_cpu_start(); diff --git a/tools/testing/selftests/rseq/rseq-arm.h b/tools/testing/selftests/rseq/rseq-arm.h index 
5f567b3b40f2..ae476af70152 100644 --- a/tools/testing/selftests/rseq/rseq-arm.h +++ b/tools/testing/selftests/rseq/rseq-arm.h @@ -217,7 +217,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) static inline __attribute__((always_inline)) int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, - off_t voffp, intptr_t *load, int cpu) + long voffp, intptr_t *load, int cpu) { RSEQ_INJECT_C(9) diff --git a/tools/testing/selftests/rseq/rseq-arm64.h b/tools/testing/selftests/rseq/rseq-arm64.h index d0f2b7feee94..7806817062c2 100644 --- a/tools/testing/selftests/rseq/rseq-arm64.h +++ b/tools/testing/selftests/rseq/rseq-arm64.h @@ -259,7 +259,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) static inline __attribute__((always_inline)) int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, - off_t voffp, intptr_t *load, int cpu) + long voffp, intptr_t *load, int cpu) { RSEQ_INJECT_C(9) diff --git a/tools/testing/selftests/rseq/rseq-mips.h b/tools/testing/selftests/rseq/rseq-mips.h index 6df54273825d..0d1d9255da70 100644 --- a/tools/testing/selftests/rseq/rseq-mips.h +++ b/tools/testing/selftests/rseq/rseq-mips.h @@ -222,7 +222,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) static inline __attribute__((always_inline)) int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, - off_t voffp, intptr_t *load, int cpu) + long voffp, intptr_t *load, int cpu) { RSEQ_INJECT_C(9) diff --git a/tools/testing/selftests/rseq/rseq-ppc.h b/tools/testing/selftests/rseq/rseq-ppc.h index 87befda47ba4..aa18c0eabf40 100644 --- a/tools/testing/selftests/rseq/rseq-ppc.h +++ b/tools/testing/selftests/rseq/rseq-ppc.h @@ -270,7 +270,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) static inline __attribute__((always_inline)) int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, - off_t voffp, intptr_t *load, int cpu) + long voffp, intptr_t *load, int cpu) { RSEQ_INJECT_C(9) diff --git a/tools/testing/selftests/rseq/rseq-s390.h b/tools/testing/selftests/rseq/rseq-s390.h index 9927021f8bd0..0f523b3ecdef 100644 --- a/tools/testing/selftests/rseq/rseq-s390.h +++ b/tools/testing/selftests/rseq/rseq-s390.h @@ -198,7 +198,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) */ static inline __attribute__((always_inline)) int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, - off_t voffp, intptr_t *load, int cpu) + long voffp, intptr_t *load, int cpu) { RSEQ_INJECT_C(9) diff --git a/tools/testing/selftests/rseq/rseq-skip.h b/tools/testing/selftests/rseq/rseq-skip.h index 72750b5905a9..7b53dac1fcdd 100644 --- a/tools/testing/selftests/rseq/rseq-skip.h +++ b/tools/testing/selftests/rseq/rseq-skip.h @@ -13,7 +13,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) static inline __attribute__((always_inline)) int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, - off_t voffp, intptr_t *load, int cpu) + long voffp, intptr_t *load, int cpu) { return -1; } diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h index 1d9fa0516e53..0ee6c041d4be 100644 --- a/tools/testing/selftests/rseq/rseq-x86.h +++ b/tools/testing/selftests/rseq/rseq-x86.h @@ -172,7 +172,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) */ static inline __attribute__((always_inline)) int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, - off_t voffp, 
intptr_t *load, int cpu) + long voffp, intptr_t *load, int cpu) { RSEQ_INJECT_C(9) @@ -286,7 +286,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) * *pval += inc; */ static inline __attribute__((always_inline)) -int rseq_offset_deref_addv(intptr_t *ptr, off_t off, intptr_t inc, int cpu) +int rseq_offset_deref_addv(intptr_t *ptr, long off, intptr_t inc, int cpu) { RSEQ_INJECT_C(9) @@ -750,7 +750,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) */ static inline __attribute__((always_inline)) int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, - off_t voffp, intptr_t *load, int cpu) + long voffp, intptr_t *load, int cpu) { RSEQ_INJECT_C(9) From ba4d79af7101b61895aa84cc72aad07464744c54 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:49 -0500 Subject: [PATCH 1237/1842] selftests/rseq: Fix warnings about #if checks of undefined tokens commit d7ed99ade3e62b755584eea07b4e499e79240527 upstream. Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-12-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/param_test.c | 2 +- tools/testing/selftests/rseq/rseq-x86.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c index 2656c9af4239..e29ecc7158e8 100644 --- a/tools/testing/selftests/rseq/param_test.c +++ b/tools/testing/selftests/rseq/param_test.c @@ -161,7 +161,7 @@ unsigned int yield_mod_cnt, nr_abort; " cbnz " INJECT_ASM_REG ", 222b\n" \ "333:\n" -#elif __PPC__ +#elif defined(__PPC__) #define RSEQ_INJECT_INPUT \ , [loop_cnt_1]"m"(loop_cnt[1]) \ diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h index 0ee6c041d4be..e444563c6999 100644 --- a/tools/testing/selftests/rseq/rseq-x86.h +++ b/tools/testing/selftests/rseq/rseq-x86.h @@ -600,7 +600,7 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, #endif /* !RSEQ_SKIP_FASTPATH */ -#elif __i386__ +#elif defined(__i386__) #define rseq_smp_mb() \ __asm__ __volatile__ ("lock; addl $0,-128(%%esp)" ::: "memory", "cc") From 7e1a0a9a44421b75329d44d56517e090b53e43b0 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:50 -0500 Subject: [PATCH 1238/1842] selftests/rseq: Remove arm/mips asm goto compiler work-around commit 94c5cf2a0e193afffef8de48ddc42de6df7cac93 upstream. The arm and mips work-arounds for asm goto size guess issues are not properly documented, and lack reference to specific compiler versions, an upstream compiler bug tracker entry, and a reproducer. I can only find a loosely documented patch in my original LKML rseq post referring to gcc < 7 on ARM, but it does not appear to be sufficient to track the exact issue. Also, I am not sure MIPS really has the same limitation. Therefore, remove the work-around until we can properly document this. 
Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lore.kernel.org/lkml/20171121141900.18471-17-mathieu.desnoyers@efficios.com/ Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq-arm.h | 37 ------------------------ tools/testing/selftests/rseq/rseq-mips.h | 37 ------------------------ 2 files changed, 74 deletions(-) diff --git a/tools/testing/selftests/rseq/rseq-arm.h b/tools/testing/selftests/rseq/rseq-arm.h index ae476af70152..64c3a622107b 100644 --- a/tools/testing/selftests/rseq/rseq-arm.h +++ b/tools/testing/selftests/rseq/rseq-arm.h @@ -147,14 +147,11 @@ do { \ teardown \ "b %l[" __rseq_str(cmpfail_label) "]\n\t" -#define rseq_workaround_gcc_asm_size_guess() __asm__ __volatile__("") - static inline __attribute__((always_inline)) int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -198,14 +195,11 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -221,7 +215,6 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -270,14 +263,11 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -292,7 +282,6 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ #ifdef RSEQ_COMPARE_TWICE @@ -328,10 +317,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) , error1 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; #ifdef RSEQ_COMPARE_TWICE @@ -347,7 +334,6 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -398,14 +384,11 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -422,7 +405,6 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -474,14 +456,11 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - 
rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -498,7 +477,6 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -554,14 +532,11 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, , error1, error2, error3 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -582,7 +557,6 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -678,21 +652,16 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: - rseq_workaround_gcc_asm_size_guess(); rseq_bug("cpu_id comparison failed"); error2: - rseq_workaround_gcc_asm_size_guess(); rseq_bug("expected value comparison failed"); #endif } @@ -706,7 +675,6 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -803,21 +771,16 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: - rseq_workaround_gcc_asm_size_guess(); rseq_bug("cpu_id comparison failed"); error2: - rseq_workaround_gcc_asm_size_guess(); rseq_bug("expected value comparison failed"); #endif } diff --git a/tools/testing/selftests/rseq/rseq-mips.h b/tools/testing/selftests/rseq/rseq-mips.h index 0d1d9255da70..878739fae2fd 100644 --- a/tools/testing/selftests/rseq/rseq-mips.h +++ b/tools/testing/selftests/rseq/rseq-mips.h @@ -154,14 +154,11 @@ do { \ teardown \ "b %l[" __rseq_str(cmpfail_label) "]\n\t" -#define rseq_workaround_gcc_asm_size_guess() __asm__ __volatile__("") - static inline __attribute__((always_inline)) int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -203,14 +200,11 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -226,7 +220,6 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t 
expectnot, { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -273,14 +266,11 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -295,7 +285,6 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ #ifdef RSEQ_COMPARE_TWICE @@ -331,10 +320,8 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) , error1 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; #ifdef RSEQ_COMPARE_TWICE @@ -350,7 +337,6 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -399,14 +385,11 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -423,7 +406,6 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -473,14 +455,11 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -497,7 +476,6 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, { RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -549,14 +527,11 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, , error1, error2, error3 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: @@ -577,7 +552,6 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -670,21 +644,16 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: - rseq_workaround_gcc_asm_size_guess(); rseq_bug("cpu_id 
comparison failed"); error2: - rseq_workaround_gcc_asm_size_guess(); rseq_bug("expected value comparison failed"); #endif } @@ -698,7 +667,6 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, RSEQ_INJECT_C(9) - rseq_workaround_gcc_asm_size_guess(); __asm__ __volatile__ goto ( RSEQ_ASM_DEFINE_TABLE(9, 1f, 2f, 4f) /* start, commit, abort */ RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[cmpfail]) @@ -792,21 +760,16 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - rseq_workaround_gcc_asm_size_guess(); return 0; abort: - rseq_workaround_gcc_asm_size_guess(); RSEQ_INJECT_FAILED return -1; cmpfail: - rseq_workaround_gcc_asm_size_guess(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: - rseq_workaround_gcc_asm_size_guess(); rseq_bug("cpu_id comparison failed"); error2: - rseq_workaround_gcc_asm_size_guess(); rseq_bug("expected value comparison failed"); #endif } From a4312e2d8192a1f38697a39b928fea6d89161800 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:51 -0500 Subject: [PATCH 1239/1842] selftests/rseq: Fix: work-around asm goto compiler bugs commit b53823fb2ef854222853be164f3b1e815f315144 upstream. gcc and clang each have their own compiler bugs with respect to asm goto. Implement a work-around for compiler versions known to have those bugs. gcc prior to 4.8.2 miscompiles asm goto. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 gcc prior to 8.1.0 miscompiles asm goto at O1. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103908 clang prior to version 13.0.1 miscompiles asm goto at O2. https://github.com/llvm/llvm-project/issues/52735 Work around these issues by adding a volatile inline asm with memory clobber in the fallthrough after the asm goto and at each label target. Emit this for all compilers in case other similar issues are found in the future. Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-14-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/compiler.h | 30 ++++++++++ tools/testing/selftests/rseq/rseq-arm.h | 39 +++++++++++++ tools/testing/selftests/rseq/rseq-arm64.h | 45 +++++++++++++-- tools/testing/selftests/rseq/rseq-ppc.h | 39 +++++++++++++ tools/testing/selftests/rseq/rseq-s390.h | 29 ++++++++++ tools/testing/selftests/rseq/rseq-x86.h | 68 +++++++++++++++++++++++ tools/testing/selftests/rseq/rseq.h | 1 + 7 files changed, 245 insertions(+), 6 deletions(-) create mode 100644 tools/testing/selftests/rseq/compiler.h diff --git a/tools/testing/selftests/rseq/compiler.h b/tools/testing/selftests/rseq/compiler.h new file mode 100644 index 000000000000..876eb6a7f75b --- /dev/null +++ b/tools/testing/selftests/rseq/compiler.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: LGPL-2.1-only OR MIT */ +/* + * rseq/compiler.h + * + * Work-around asm goto compiler bugs. + * + * (C) Copyright 2021 - Mathieu Desnoyers + */ + +#ifndef RSEQ_COMPILER_H +#define RSEQ_COMPILER_H + +/* + * gcc prior to 4.8.2 miscompiles asm goto. + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 + * + * gcc prior to 8.1.0 miscompiles asm goto at O1. + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103908 + * + * clang prior to version 13.0.1 miscompiles asm goto at O2. + * https://github.com/llvm/llvm-project/issues/52735 + * + * Work around these issues by adding a volatile inline asm with + * memory clobber in the fallthrough after the asm goto and at each + * label target. 
Emit this for all compilers in case other similar + * issues are found in the future. + */ +#define rseq_after_asm_goto() asm volatile ("" : : : "memory") + +#endif /* RSEQ_COMPILER_H_ */ diff --git a/tools/testing/selftests/rseq/rseq-arm.h b/tools/testing/selftests/rseq/rseq-arm.h index 64c3a622107b..893a11eca9d5 100644 --- a/tools/testing/selftests/rseq/rseq-arm.h +++ b/tools/testing/selftests/rseq/rseq-arm.h @@ -195,16 +195,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -263,16 +268,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -317,12 +327,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) , error1 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); #endif } @@ -384,16 +397,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -456,16 +474,21 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -532,18 +555,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, , error1, error2, error3 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("1st expected value comparison failed"); error3: + rseq_after_asm_goto(); rseq_bug("2nd expected value comparison failed"); #endif } @@ -652,16 +681,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -771,16 +805,21 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, , 
error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } diff --git a/tools/testing/selftests/rseq/rseq-arm64.h b/tools/testing/selftests/rseq/rseq-arm64.h index 7806817062c2..cbe190a4d005 100644 --- a/tools/testing/selftests/rseq/rseq-arm64.h +++ b/tools/testing/selftests/rseq/rseq-arm64.h @@ -242,17 +242,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) , error1, error2 #endif ); - + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -300,16 +304,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -348,12 +357,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) , error1 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); #endif } @@ -402,17 +414,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -461,17 +477,21 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -522,19 +542,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, , error1, error2, error3 #endif ); - + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); error3: + rseq_after_asm_goto(); rseq_bug("2nd expected value comparison failed"); #endif } @@ -584,17 +609,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + 
rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -644,17 +673,21 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); - + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } diff --git a/tools/testing/selftests/rseq/rseq-ppc.h b/tools/testing/selftests/rseq/rseq-ppc.h index aa18c0eabf40..bab8e0b9fb11 100644 --- a/tools/testing/selftests/rseq/rseq-ppc.h +++ b/tools/testing/selftests/rseq/rseq-ppc.h @@ -254,16 +254,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -322,16 +327,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -378,12 +388,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) , error1 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); #endif } @@ -442,16 +455,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -512,16 +530,21 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -583,18 +606,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, , error1, error2, error3 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("1st expected value comparison failed"); error3: + rseq_after_asm_goto(); rseq_bug("2nd expected value comparison failed"); #endif } @@ -659,16 +688,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED 
return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -735,16 +769,21 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } diff --git a/tools/testing/selftests/rseq/rseq-s390.h b/tools/testing/selftests/rseq/rseq-s390.h index 0f523b3ecdef..4e6dc5f0cb42 100644 --- a/tools/testing/selftests/rseq/rseq-s390.h +++ b/tools/testing/selftests/rseq/rseq-s390.h @@ -178,16 +178,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -248,16 +253,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -301,12 +311,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) , error1 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); #endif } @@ -364,16 +377,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -443,18 +461,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, , error1, error2, error3 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("1st expected value comparison failed"); error3: + rseq_after_asm_goto(); rseq_bug("2nd expected value comparison failed"); #endif } @@ -555,16 +579,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } diff --git 
a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h index e444563c6999..4ab2e74811ab 100644 --- a/tools/testing/selftests/rseq/rseq-x86.h +++ b/tools/testing/selftests/rseq/rseq-x86.h @@ -152,16 +152,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -220,16 +225,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -269,12 +279,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) , error1 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); #endif } @@ -387,16 +400,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -464,18 +482,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, , error1, error2, error3 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("1st expected value comparison failed"); error3: + rseq_after_asm_goto(); rseq_bug("2nd expected value comparison failed"); #endif } @@ -574,16 +598,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -730,16 +759,21 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -798,16 +832,21 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + 
rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -847,12 +886,15 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) , error1 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); #endif } @@ -909,16 +951,21 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -977,16 +1024,21 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif @@ -1047,18 +1099,24 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, , error1, error2, error3 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("1st expected value comparison failed"); error3: + rseq_after_asm_goto(); rseq_bug("2nd expected value comparison failed"); #endif } @@ -1161,16 +1219,21 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } @@ -1274,16 +1337,21 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, , error1, error2 #endif ); + rseq_after_asm_goto(); return 0; abort: + rseq_after_asm_goto(); RSEQ_INJECT_FAILED return -1; cmpfail: + rseq_after_asm_goto(); return 1; #ifdef RSEQ_COMPARE_TWICE error1: + rseq_after_asm_goto(); rseq_bug("cpu_id comparison failed"); error2: + rseq_after_asm_goto(); rseq_bug("expected value comparison failed"); #endif } diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h index 17531ccd3090..6bd0ac466b4a 100644 --- a/tools/testing/selftests/rseq/rseq.h +++ b/tools/testing/selftests/rseq/rseq.h @@ -17,6 +17,7 @@ #include #include #include "rseq-abi.h" +#include "compiler.h" /* * Empty code injection macros, override when testing. From 27f6361cb4151de69f756480bd9264893bbc1ad1 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:52 -0500 Subject: [PATCH 1240/1842] selftests/rseq: x86-64: use %fs segment selector for accessing rseq thread area commit 4e15bb766b6c6e963a4d33629034d0ec3b7637df upstream. 
Rather than use rseq_get_abi() and pass its result through a register to the inline assembler, directly access the per-thread rseq area through a memory reference combining the %fs segment selector, the constant offset of the field in struct rseq, and the rseq_offset value (in a register). Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-15-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq-x86.h | 58 +++++++++++++------------ 1 file changed, 30 insertions(+), 28 deletions(-) diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h index 4ab2e74811ab..29769664edaa 100644 --- a/tools/testing/selftests/rseq/rseq-x86.h +++ b/tools/testing/selftests/rseq/rseq-x86.h @@ -28,6 +28,8 @@ #ifdef __x86_64__ +#define RSEQ_ASM_TP_SEGMENT %%fs + #define rseq_smp_mb() \ __asm__ __volatile__ ("lock; addl $0,-128(%%rsp)" ::: "memory", "cc") #define rseq_smp_rmb() rseq_barrier() @@ -123,14 +125,14 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2]) #endif /* Start rseq by storing table entry pointer into rseq_cs. */ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "cmpq %[v], %[expect]\n\t" "jnz %l[cmpfail]\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "cmpq %[v], %[expect]\n\t" "jnz %l[error2]\n\t" #endif @@ -141,7 +143,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" ((long)rseq_offset), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -189,15 +191,15 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2]) #endif /* Start rseq by storing table entry pointer into rseq_cs. 
*/ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "movq %[v], %%rbx\n\t" "cmpq %%rbx, %[expectnot]\n\t" "je %l[cmpfail]\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "movq %[v], %%rbx\n\t" "cmpq %%rbx, %[expectnot]\n\t" "je %l[error2]\n\t" @@ -212,7 +214,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" ((long)rseq_offset), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -255,11 +257,11 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1]) #endif /* Start rseq by storing table entry pointer into rseq_cs. */ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) #endif /* final store */ "addq %[count], %[v]\n\t" @@ -268,7 +270,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" ((long)rseq_offset), /* final store input */ [v] "m" (*v), [count] "er" (count) @@ -309,11 +311,11 @@ int rseq_offset_deref_addv(intptr_t *ptr, long off, intptr_t inc, int cpu) RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1]) #endif /* Start rseq by storing table entry pointer into rseq_cs. */ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) #endif /* get p+v */ "movq %[ptr], %%rbx\n\t" @@ -327,7 +329,7 @@ int rseq_offset_deref_addv(intptr_t *ptr, long off, intptr_t inc, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" ((long)rseq_offset), /* final store input */ [ptr] "m" (*ptr), [off] "er" (off), @@ -364,14 +366,14 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2]) #endif /* Start rseq by storing table entry pointer into rseq_cs. 
*/ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "cmpq %[v], %[expect]\n\t" "jnz %l[cmpfail]\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "cmpq %[v], %[expect]\n\t" "jnz %l[error2]\n\t" #endif @@ -385,7 +387,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" ((long)rseq_offset), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -444,8 +446,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3]) #endif /* Start rseq by storing table entry pointer into rseq_cs. */ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "cmpq %[v], %[expect]\n\t" "jnz %l[cmpfail]\n\t" @@ -454,7 +456,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, "jnz %l[cmpfail]\n\t" RSEQ_INJECT_ASM(5) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "cmpq %[v], %[expect]\n\t" "jnz %l[error2]\n\t" "cmpq %[v2], %[expect2]\n\t" @@ -467,7 +469,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" ((long)rseq_offset), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -524,14 +526,14 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, "movq %[dst], %[rseq_scratch1]\n\t" "movq %[len], %[rseq_scratch2]\n\t" /* Start rseq by storing table entry pointer into rseq_cs. 
*/ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "cmpq %[v], %[expect]\n\t" "jnz 5f\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 6f) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f) "cmpq %[v], %[expect]\n\t" "jnz 7f\n\t" #endif @@ -579,7 +581,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, #endif : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" ((long)rseq_offset), /* final store input */ [v] "m" (*v), [expect] "r" (expect), From 7e617278bf3afd4ed81c04a75e4a32ce8fb5bf44 Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Mon, 24 Jan 2022 12:12:53 -0500 Subject: [PATCH 1241/1842] selftests/rseq: x86-32: use %gs segment selector for accessing rseq thread area commit 127b6429d235ab7c358223bbfd8a8b8d8cc799b6 upstream. Rather than use rseq_get_abi() and pass its result through a register to the inline assembler, directly access the per-thread rseq area through a memory reference combining the %gs segment selector, the constant offset of the field in struct rseq, and the rseq_offset value (in a register). Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220124171253.22072-16-mathieu.desnoyers@efficios.com Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq-x86.h | 66 +++++++++++++------------ 1 file changed, 34 insertions(+), 32 deletions(-) diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h index 29769664edaa..f704d3664327 100644 --- a/tools/testing/selftests/rseq/rseq-x86.h +++ b/tools/testing/selftests/rseq/rseq-x86.h @@ -633,6 +633,8 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, #elif defined(__i386__) +#define RSEQ_ASM_TP_SEGMENT %%gs + #define rseq_smp_mb() \ __asm__ __volatile__ ("lock; addl $0,-128(%%esp)" ::: "memory", "cc") #define rseq_smp_rmb() \ @@ -732,14 +734,14 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2]) #endif /* Start rseq by storing table entry pointer into rseq_cs. 
*/ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "cmpl %[v], %[expect]\n\t" "jnz %l[cmpfail]\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "cmpl %[v], %[expect]\n\t" "jnz %l[error2]\n\t" #endif @@ -750,7 +752,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" (rseq_offset), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -798,15 +800,15 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2]) #endif /* Start rseq by storing table entry pointer into rseq_cs. */ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "movl %[v], %%ebx\n\t" "cmpl %%ebx, %[expectnot]\n\t" "je %l[cmpfail]\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "movl %[v], %%ebx\n\t" "cmpl %%ebx, %[expectnot]\n\t" "je %l[error2]\n\t" @@ -821,7 +823,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" (rseq_offset), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -864,11 +866,11 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error1]) #endif /* Start rseq by storing table entry pointer into rseq_cs. */ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) #endif /* final store */ "addl %[count], %[v]\n\t" @@ -877,7 +879,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" (rseq_offset), /* final store input */ [v] "m" (*v), [count] "ir" (count) @@ -916,14 +918,14 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2]) #endif /* Start rseq by storing table entry pointer into rseq_cs. 
*/ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "cmpl %[v], %[expect]\n\t" "jnz %l[cmpfail]\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "cmpl %[v], %[expect]\n\t" "jnz %l[error2]\n\t" #endif @@ -938,7 +940,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" (rseq_offset), /* try store input */ [v2] "m" (*v2), [newv2] "m" (newv2), @@ -987,15 +989,15 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error2]) #endif /* Start rseq by storing table entry pointer into rseq_cs. */ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "movl %[expect], %%eax\n\t" "cmpl %[v], %%eax\n\t" "jnz %l[cmpfail]\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "movl %[expect], %%eax\n\t" "cmpl %[v], %%eax\n\t" "jnz %l[error2]\n\t" @@ -1011,7 +1013,7 @@ int rseq_cmpeqv_trystorev_storev_release(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" (rseq_offset), /* try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -1062,8 +1064,8 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_EXIT_POINT(1f, %l[error3]) #endif /* Start rseq by storing table entry pointer into rseq_cs. 
*/ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "cmpl %[v], %[expect]\n\t" "jnz %l[cmpfail]\n\t" @@ -1072,7 +1074,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, "jnz %l[cmpfail]\n\t" RSEQ_INJECT_ASM(5) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), %l[error1]) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), %l[error1]) "cmpl %[v], %[expect]\n\t" "jnz %l[error2]\n\t" "cmpl %[expect2], %[v2]\n\t" @@ -1086,7 +1088,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" (rseq_offset), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -1144,15 +1146,15 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, "movl %[dst], %[rseq_scratch1]\n\t" "movl %[len], %[rseq_scratch2]\n\t" /* Start rseq by storing table entry pointer into rseq_cs. */ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "movl %[expect], %%eax\n\t" "cmpl %%eax, %[v]\n\t" "jnz 5f\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 6f) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f) "movl %[expect], %%eax\n\t" "cmpl %%eax, %[v]\n\t" "jnz 7f\n\t" @@ -1202,7 +1204,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, #endif : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" (rseq_offset), /* final store input */ [v] "m" (*v), [expect] "m" (expect), @@ -1261,15 +1263,15 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, "movl %[dst], %[rseq_scratch1]\n\t" "movl %[len], %[rseq_scratch2]\n\t" /* Start rseq by storing table entry pointer into rseq_cs. 
*/ - RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_CS_OFFSET(%[rseq_abi])) - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 4f) + RSEQ_ASM_STORE_RSEQ_CS(1, 3b, RSEQ_ASM_TP_SEGMENT:RSEQ_CS_OFFSET(%[rseq_offset])) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 4f) RSEQ_INJECT_ASM(3) "movl %[expect], %%eax\n\t" "cmpl %%eax, %[v]\n\t" "jnz 5f\n\t" RSEQ_INJECT_ASM(4) #ifdef RSEQ_COMPARE_TWICE - RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_CPU_ID_OFFSET(%[rseq_abi]), 6f) + RSEQ_ASM_CMP_CPU_ID(cpu_id, RSEQ_ASM_TP_SEGMENT:RSEQ_CPU_ID_OFFSET(%[rseq_offset]), 6f) "movl %[expect], %%eax\n\t" "cmpl %%eax, %[v]\n\t" "jnz 7f\n\t" @@ -1320,7 +1322,7 @@ int rseq_cmpeqv_trymemcpy_storev_release(intptr_t *v, intptr_t expect, #endif : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_abi] "r" (rseq_get_abi()), + [rseq_offset] "r" (rseq_offset), /* final store input */ [v] "m" (*v), [expect] "m" (expect), From d341e5a754803188fa12e1b679b10f53bcd0030a Mon Sep 17 00:00:00 2001 From: Mathieu Desnoyers Date: Thu, 3 Feb 2022 10:05:32 -0500 Subject: [PATCH 1242/1842] selftests/rseq: Change type of rseq_offset to ptrdiff_t commit 889c5d60fbcf332c8b6ab7054d45f2768914a375 upstream. Just before the 2.35 release of glibc, the __rseq_offset userspace ABI was changed from int to ptrdiff_t. Adapt to this change in the kernel selftests. Signed-off-by: Mathieu Desnoyers Signed-off-by: Peter Zijlstra (Intel) Link: https://sourceware.org/pipermail/libc-alpha/2022-February/136024.html Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/rseq/rseq-x86.h | 14 +++++++------- tools/testing/selftests/rseq/rseq.c | 5 +++-- tools/testing/selftests/rseq/rseq.h | 3 ++- 3 files changed, 12 insertions(+), 10 deletions(-) diff --git a/tools/testing/selftests/rseq/rseq-x86.h b/tools/testing/selftests/rseq/rseq-x86.h index f704d3664327..bd01dc41ca13 100644 --- a/tools/testing/selftests/rseq/rseq-x86.h +++ b/tools/testing/selftests/rseq/rseq-x86.h @@ -143,7 +143,7 @@ int rseq_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_offset] "r" ((long)rseq_offset), + [rseq_offset] "r" (rseq_offset), [v] "m" (*v), [expect] "r" (expect), [newv] "r" (newv) @@ -214,7 +214,7 @@ int rseq_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_offset] "r" ((long)rseq_offset), + [rseq_offset] "r" (rseq_offset), /* final store input */ [v] "m" (*v), [expectnot] "r" (expectnot), @@ -270,7 +270,7 @@ int rseq_addv(intptr_t *v, intptr_t count, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_offset] "r" ((long)rseq_offset), + [rseq_offset] "r" (rseq_offset), /* final store input */ [v] "m" (*v), [count] "er" (count) @@ -329,7 +329,7 @@ int rseq_offset_deref_addv(intptr_t *ptr, long off, intptr_t inc, int cpu) RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_offset] "r" ((long)rseq_offset), + [rseq_offset] "r" (rseq_offset), /* final store input */ [ptr] "m" (*ptr), [off] "er" (off), @@ -387,7 +387,7 @@ int rseq_cmpeqv_trystorev_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_offset] "r" ((long)rseq_offset), + [rseq_offset] "r" (rseq_offset), /* 
try store input */ [v2] "m" (*v2), [newv2] "r" (newv2), @@ -469,7 +469,7 @@ int rseq_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect, RSEQ_ASM_DEFINE_ABORT(4, "", abort) : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_offset] "r" ((long)rseq_offset), + [rseq_offset] "r" (rseq_offset), /* cmp2 input */ [v2] "m" (*v2), [expect2] "r" (expect2), @@ -581,7 +581,7 @@ int rseq_cmpeqv_trymemcpy_storev(intptr_t *v, intptr_t expect, #endif : /* gcc asm goto does not allow outputs */ : [cpu_id] "r" (cpu), - [rseq_offset] "r" ((long)rseq_offset), + [rseq_offset] "r" (rseq_offset), /* final store input */ [v] "m" (*v), [expect] "r" (expect), diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c index 07ba0d463a96..986b9458efb2 100644 --- a/tools/testing/selftests/rseq/rseq.c +++ b/tools/testing/selftests/rseq/rseq.c @@ -27,16 +27,17 @@ #include #include #include +#include #include "../kselftest.h" #include "rseq.h" -static const int *libc_rseq_offset_p; +static const ptrdiff_t *libc_rseq_offset_p; static const unsigned int *libc_rseq_size_p; static const unsigned int *libc_rseq_flags_p; /* Offset from the thread pointer to the rseq area. */ -int rseq_offset; +ptrdiff_t rseq_offset; /* Size of the registered rseq area. 0 if the registration was unsuccessful. */ diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h index 6bd0ac466b4a..9d850b290c2e 100644 --- a/tools/testing/selftests/rseq/rseq.h +++ b/tools/testing/selftests/rseq/rseq.h @@ -16,6 +16,7 @@ #include #include #include +#include #include "rseq-abi.h" #include "compiler.h" @@ -47,7 +48,7 @@ #include "rseq-thread-pointer.h" /* Offset from the thread pointer to the rseq area. */ -extern int rseq_offset; +extern ptrdiff_t rseq_offset; /* Size of the registered rseq area. 0 if the registration was unsuccessful. */ extern unsigned int rseq_size; From cfea428030be836d79a7690968232bb7fa4410f1 Mon Sep 17 00:00:00 2001 From: Roger Pau Monne Date: Wed, 30 Mar 2022 09:03:48 +0200 Subject: [PATCH 1243/1842] xen/blkfront: fix leaking data in shared pages MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 2f446ffe9d737e9a844b97887919c4fda18246e7 upstream. When allocating pages to be used for shared communication with the backend always zero them, this avoids leaking unintended data present on the pages. This is CVE-2022-26365, part of XSA-403. 
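As a rough sketch (assuming the standard page-allocation helpers; this is an illustration of the pattern, not code lifted from the hunks below), the fix amounts to OR-ing __GFP_ZERO into every allocation whose page ends up granted to the backend:

  /* Pages shared with the backend must never carry stale kernel data. */
  granted_page = alloc_page(GFP_NOIO | __GFP_ZERO);
  if (!granted_page)
          return -ENOMEM;

  sring = alloc_pages_exact(ring_size, GFP_NOIO | __GFP_ZERO);
  if (!sring)
          return -ENOMEM;

The diff below applies this same change to the persistent-grant pages, the shared ring, and the indirect-descriptor pages.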
Signed-off-by: Roger Pau Monné Reviewed-by: Jan Beulich Reviewed-by: Juergen Gross Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman --- drivers/block/xen-blkfront.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index 47d4bb23d6f3..fffb7c3118b1 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -311,7 +311,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num) goto out_of_memory; if (info->feature_persistent) { - granted_page = alloc_page(GFP_NOIO); + granted_page = alloc_page(GFP_NOIO | __GFP_ZERO); if (!granted_page) { kfree(gnt_list_entry); goto out_of_memory; @@ -1753,7 +1753,7 @@ static int setup_blkring(struct xenbus_device *dev, for (i = 0; i < info->nr_ring_pages; i++) rinfo->ring_ref[i] = GRANT_INVALID_REF; - sring = alloc_pages_exact(ring_size, GFP_NOIO); + sring = alloc_pages_exact(ring_size, GFP_NOIO | __GFP_ZERO); if (!sring) { xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring"); return -ENOMEM; @@ -2293,7 +2293,8 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo) BUG_ON(!list_empty(&rinfo->indirect_pages)); for (i = 0; i < num; i++) { - struct page *indirect_page = alloc_page(GFP_KERNEL); + struct page *indirect_page = alloc_page(GFP_KERNEL | + __GFP_ZERO); if (!indirect_page) goto out_of_memory; list_add(&indirect_page->lru, &rinfo->indirect_pages); From 728d68bfe68d92eae1407b8a9edc7817d6227404 Mon Sep 17 00:00:00 2001 From: Roger Pau Monne Date: Wed, 6 Apr 2022 17:38:04 +0200 Subject: [PATCH 1244/1842] xen/netfront: fix leaking data in shared pages MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 307c8de2b02344805ebead3440d8feed28f2f010 upstream. When allocating pages to be used for shared communication with the backend always zero them, this avoids leaking unintended data present on the pages. This is CVE-2022-33740, part of XSA-403. Signed-off-by: Roger Pau Monné Reviewed-by: Jan Beulich Reviewed-by: Juergen Gross Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman --- drivers/net/xen-netfront.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index 1a69b5246133..e5157126057c 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -273,7 +273,8 @@ static struct sk_buff *xennet_alloc_one_rx_buffer(struct netfront_queue *queue) if (unlikely(!skb)) return NULL; - page = page_pool_dev_alloc_pages(queue->page_pool); + page = page_pool_alloc_pages(queue->page_pool, + GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO); if (unlikely(!page)) { kfree_skb(skb); return NULL; From 4923217af5742a796821272ee03f8d6de15c0cca Mon Sep 17 00:00:00 2001 From: Roger Pau Monne Date: Thu, 7 Apr 2022 12:20:06 +0200 Subject: [PATCH 1245/1842] xen/netfront: force data bouncing when backend is untrusted MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 4491001c2e0fa69efbb748c96ec96b100a5cdb7e upstream. Bounce all data on the skbs to be transmitted into zeroed pages if the backend is untrusted. This avoids leaking data present in the pages shared with the backend but not part of the skb fragments. This requires introducing a new helper in order to allocate skbs with a size multiple of XEN_PAGE_SIZE so we don't leak contiguous data on the granted pages. 
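As a worked example (illustration only, assuming the usual 4 KiB XEN_PAGE_SIZE): if the copied skb data works out to 5000 bytes, the bounce buffer is allocated as ALIGN(5000, 4096) = 8192 bytes of zeroed memory, so the two granted pages contain nothing beyond the skb contents and zero padding.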
Reporting whether the backend is to be trusted can be done using a module parameter, or from the xenstore frontend path as set by the toolstack when adding the device. This is CVE-2022-33741, part of XSA-403. Signed-off-by: Roger Pau Monné Reviewed-by: Juergen Gross Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman --- drivers/net/xen-netfront.c | 49 ++++++++++++++++++++++++++++++++++++-- 1 file changed, 47 insertions(+), 2 deletions(-) diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index e5157126057c..bae3ad139f06 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -66,6 +66,10 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644); MODULE_PARM_DESC(max_queues, "Maximum number of queues per virtual interface"); +static bool __read_mostly xennet_trusted = true; +module_param_named(trusted, xennet_trusted, bool, 0644); +MODULE_PARM_DESC(trusted, "Is the backend trusted"); + #define XENNET_TIMEOUT (5 * HZ) static const struct ethtool_ops xennet_ethtool_ops; @@ -175,6 +179,9 @@ struct netfront_info { /* Is device behaving sane? */ bool broken; + /* Should skbs be bounced into a zeroed buffer? */ + bool bounce; + atomic_t rx_gso_checksum_fixup; }; @@ -670,6 +677,33 @@ static int xennet_xdp_xmit(struct net_device *dev, int n, return n - drops; } +struct sk_buff *bounce_skb(const struct sk_buff *skb) +{ + unsigned int headerlen = skb_headroom(skb); + /* Align size to allocate full pages and avoid contiguous data leaks */ + unsigned int size = ALIGN(skb_end_offset(skb) + skb->data_len, + XEN_PAGE_SIZE); + struct sk_buff *n = alloc_skb(size, GFP_ATOMIC | __GFP_ZERO); + + if (!n) + return NULL; + + if (!IS_ALIGNED((uintptr_t)n->head, XEN_PAGE_SIZE)) { + WARN_ONCE(1, "misaligned skb allocated\n"); + kfree_skb(n); + return NULL; + } + + /* Set the data pointer */ + skb_reserve(n, headerlen); + /* Set the tail pointer and length */ + skb_put(n, skb->len); + + BUG_ON(skb_copy_bits(skb, -headerlen, n->head, headerlen + skb->len)); + + skb_copy_header(n, skb); + return n; +} #define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1) @@ -723,9 +757,13 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev /* The first req should be at least ETH_HLEN size or the packet will be * dropped by netback. + * + * If the backend is not trusted bounce all data to zeroed pages to + * avoid exposing contiguous data on the granted page not belonging to + * the skb. */ - if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) { - nskb = skb_copy(skb, GFP_ATOMIC); + if (np->bounce || unlikely(PAGE_SIZE - offset < ETH_HLEN)) { + nskb = bounce_skb(skb); if (!nskb) goto drop; dev_consume_skb_any(skb); @@ -2249,6 +2287,10 @@ static int talk_to_netback(struct xenbus_device *dev, info->netdev->irq = 0; + /* Check if backend is trusted. 
*/ + info->bounce = !xennet_trusted || + !xenbus_read_unsigned(dev->nodename, "trusted", 1); + /* Check if backend supports multiple queues */ max_queues = xenbus_read_unsigned(info->xbdev->otherend, "multi-queue-max-queues", 1); @@ -2415,6 +2457,9 @@ static int xennet_connect(struct net_device *dev) return err; if (np->netback_has_xdp_headroom) pr_info("backend supports XDP headroom\n"); + if (np->bounce) + dev_info(&np->xbdev->dev, + "bouncing transmitted data to zeroed pages\n"); /* talk_to_netback() sets the correct number of queues */ num_queues = dev->real_num_tx_queues; From cbbd2d2531539212ff090aecbea9877c996e6ce6 Mon Sep 17 00:00:00 2001 From: Roger Pau Monne Date: Thu, 7 Apr 2022 13:04:24 +0200 Subject: [PATCH 1246/1842] xen/blkfront: force data bouncing when backend is untrusted MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 2400617da7eebf9167d71a46122828bc479d64c9 upstream. Split the current bounce buffering logic used with persistent grants into it's own option, and allow enabling it independently of persistent grants. This allows to reuse the same code paths to perform the bounce buffering required to avoid leaking contiguous data in shared pages not part of the request fragments. Reporting whether the backend is to be trusted can be done using a module parameter, or from the xenstore frontend path as set by the toolstack when adding the device. This is CVE-2022-33742, part of XSA-403. Signed-off-by: Roger Pau Monné Reviewed-by: Juergen Gross Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman --- drivers/block/xen-blkfront.c | 49 +++++++++++++++++++++++++----------- 1 file changed, 34 insertions(+), 15 deletions(-) diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c index fffb7c3118b1..abbb68b6d9bd 100644 --- a/drivers/block/xen-blkfront.c +++ b/drivers/block/xen-blkfront.c @@ -151,6 +151,10 @@ static unsigned int xen_blkif_max_ring_order; module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, 0444); MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring"); +static bool __read_mostly xen_blkif_trusted = true; +module_param_named(trusted, xen_blkif_trusted, bool, 0644); +MODULE_PARM_DESC(trusted, "Is the backend trusted"); + #define BLK_RING_SIZE(info) \ __CONST_RING_SIZE(blkif, XEN_PAGE_SIZE * (info)->nr_ring_pages) @@ -208,6 +212,7 @@ struct blkfront_info unsigned int feature_discard:1; unsigned int feature_secdiscard:1; unsigned int feature_persistent:1; + unsigned int bounce:1; unsigned int discard_granularity; unsigned int discard_alignment; /* Number of 4KB segments handled */ @@ -310,7 +315,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num) if (!gnt_list_entry) goto out_of_memory; - if (info->feature_persistent) { + if (info->bounce) { granted_page = alloc_page(GFP_NOIO | __GFP_ZERO); if (!granted_page) { kfree(gnt_list_entry); @@ -330,7 +335,7 @@ static int fill_grant_buffer(struct blkfront_ring_info *rinfo, int num) list_for_each_entry_safe(gnt_list_entry, n, &rinfo->grants, node) { list_del(&gnt_list_entry->node); - if (info->feature_persistent) + if (info->bounce) __free_page(gnt_list_entry->page); kfree(gnt_list_entry); i--; @@ -376,7 +381,7 @@ static struct grant *get_grant(grant_ref_t *gref_head, /* Assign a gref to this page */ gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head); BUG_ON(gnt_list_entry->gref == -ENOSPC); - if (info->feature_persistent) + if (info->bounce) 
grant_foreign_access(gnt_list_entry, info); else { /* Grant access to the GFN passed by the caller */ @@ -400,7 +405,7 @@ static struct grant *get_indirect_grant(grant_ref_t *gref_head, /* Assign a gref to this page */ gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head); BUG_ON(gnt_list_entry->gref == -ENOSPC); - if (!info->feature_persistent) { + if (!info->bounce) { struct page *indirect_page; /* Fetch a pre-allocated page to use for indirect grefs */ @@ -715,7 +720,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri .grant_idx = 0, .segments = NULL, .rinfo = rinfo, - .need_copy = rq_data_dir(req) && info->feature_persistent, + .need_copy = rq_data_dir(req) && info->bounce, }; /* @@ -1035,11 +1040,12 @@ static void xlvbd_flush(struct blkfront_info *info) { blk_queue_write_cache(info->rq, info->feature_flush ? true : false, info->feature_fua ? true : false); - pr_info("blkfront: %s: %s %s %s %s %s\n", + pr_info("blkfront: %s: %s %s %s %s %s %s %s\n", info->gd->disk_name, flush_info(info), "persistent grants:", info->feature_persistent ? "enabled;" : "disabled;", "indirect descriptors:", - info->max_indirect_segments ? "enabled;" : "disabled;"); + info->max_indirect_segments ? "enabled;" : "disabled;", + "bounce buffer:", info->bounce ? "enabled" : "disabled;"); } static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset) @@ -1273,7 +1279,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo) if (!list_empty(&rinfo->indirect_pages)) { struct page *indirect_page, *n; - BUG_ON(info->feature_persistent); + BUG_ON(info->bounce); list_for_each_entry_safe(indirect_page, n, &rinfo->indirect_pages, lru) { list_del(&indirect_page->lru); __free_page(indirect_page); @@ -1290,7 +1296,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo) 0, 0UL); rinfo->persistent_gnts_c--; } - if (info->feature_persistent) + if (info->bounce) __free_page(persistent_gnt->page); kfree(persistent_gnt); } @@ -1311,7 +1317,7 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo) for (j = 0; j < segs; j++) { persistent_gnt = rinfo->shadow[i].grants_used[j]; gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL); - if (info->feature_persistent) + if (info->bounce) __free_page(persistent_gnt->page); kfree(persistent_gnt); } @@ -1501,7 +1507,7 @@ static int blkif_completion(unsigned long *id, data.s = s; num_sg = s->num_sg; - if (bret->operation == BLKIF_OP_READ && info->feature_persistent) { + if (bret->operation == BLKIF_OP_READ && info->bounce) { for_each_sg(s->sg, sg, num_sg, i) { BUG_ON(sg->offset + sg->length > PAGE_SIZE); @@ -1560,7 +1566,7 @@ static int blkif_completion(unsigned long *id, * Add the used indirect page back to the list of * available pages for indirect grefs. */ - if (!info->feature_persistent) { + if (!info->bounce) { indirect_page = s->indirect_grants[i]->page; list_add(&indirect_page->lru, &rinfo->indirect_pages); } @@ -1857,6 +1863,10 @@ static int talk_to_blkback(struct xenbus_device *dev, if (!info) return -ENODEV; + /* Check if backend is trusted. 
*/ + info->bounce = !xen_blkif_trusted || + !xenbus_read_unsigned(dev->nodename, "trusted", 1); + max_page_order = xenbus_read_unsigned(info->xbdev->otherend, "max-ring-page-order", 0); ring_page_order = min(xen_blkif_max_ring_order, max_page_order); @@ -2283,10 +2293,10 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo) if (err) goto out_of_memory; - if (!info->feature_persistent && info->max_indirect_segments) { + if (!info->bounce && info->max_indirect_segments) { /* - * We are using indirect descriptors but not persistent - * grants, we need to allocate a set of pages that can be + * We are using indirect descriptors but don't have a bounce + * buffer, we need to allocate a set of pages that can be * used for mapping indirect grefs */ int num = INDIRECT_GREFS(grants) * BLK_RING_SIZE(info); @@ -2387,6 +2397,8 @@ static void blkfront_gather_backend_features(struct blkfront_info *info) info->feature_persistent = !!xenbus_read_unsigned(info->xbdev->otherend, "feature-persistent", 0); + if (info->feature_persistent) + info->bounce = true; indirect_segments = xenbus_read_unsigned(info->xbdev->otherend, "feature-max-indirect-segments", 0); @@ -2760,6 +2772,13 @@ static void blkfront_delay_work(struct work_struct *work) struct blkfront_info *info; bool need_schedule_work = false; + /* + * Note that when using bounce buffers but not persistent grants + * there's no need to run blkfront_delay_work because grants are + * revoked in blkif_completion or else an error is reported and the + * connection is closed. + */ + mutex_lock(&blkfront_mutex); list_for_each_entry(info, &info_list, info_list) { From 547b7c640df545a344358ede93e491a89194cdfa Mon Sep 17 00:00:00 2001 From: Jan Beulich Date: Fri, 1 Jul 2022 09:57:19 +0200 Subject: [PATCH 1247/1842] xen-netfront: restore __skb_queue_tail() positioning in xennet_get_responses() commit f63c2c2032c2e3caad9add3b82cc6e91c376fd26 upstream. The commit referenced below moved the invocation past the "next" label, without any explanation. In fact this allows misbehaving backends undue control over the domain the frontend runs in, as earlier detected errors require the skb to not be freed (it may be retained for later processing via xennet_move_rx_slot(), or it may simply be unsafe to have it freed). This is CVE-2022-33743 / XSA-405. Fixes: 6c5aa6fc4def ("xen networking: add basic XDP support for xen-netfront") Signed-off-by: Jan Beulich Reviewed-by: Juergen Gross Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman --- drivers/net/xen-netfront.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index bae3ad139f06..569f3c8e7b75 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -1096,8 +1096,10 @@ static int xennet_get_responses(struct netfront_queue *queue, } } rcu_read_unlock(); -next: + __skb_queue_tail(list, skb); + +next: if (!(rx->flags & XEN_NETRXF_more_data)) break; From 43c8d33ce353091f15312cb6de3531517d7bba90 Mon Sep 17 00:00:00 2001 From: Oleksandr Tyshchenko Date: Fri, 1 Jul 2022 09:57:42 +0200 Subject: [PATCH 1248/1842] xen/arm: Fix race in RB-tree based P2M accounting commit b75cd218274e01d026dc5240e86fdeb44bbed0c8 upstream. During the PV driver life cycle the mappings are added to the RB-tree by set_foreign_p2m_mapping(), which is called from gnttab_map_refs() and are removed by clear_foreign_p2m_mapping() which is called from gnttab_unmap_refs(). 
As both functions end up calling __set_phys_to_machine_multi() which updates the RB-tree, this function can be called concurrently. There is already a "p2m_lock" to protect against concurrent accesses, but the problem is that the first read of "phys_to_mach.rb_node" in __set_phys_to_machine_multi() is not covered by it, so this might lead to the incorrect mappings update (removing in our case) in RB-tree. In my environment the related issue happens rarely and only when PV net backend is running, the xen_add_phys_to_mach_entry() claims that it cannot add new pfn <-> mfn mapping to the tree since it is already exists which results in a failure when mapping foreign pages. But there might be other bad consequences related to the non-protected root reads such use-after-free, etc. While at it, also fix the similar usage in __pfn_to_mfn(), so initialize "struct rb_node *n" with the "p2m_lock" held in both functions to avoid possible bad consequences. This is CVE-2022-33744 / XSA-406. Signed-off-by: Oleksandr Tyshchenko Reviewed-by: Stefano Stabellini Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman --- arch/arm/xen/p2m.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c index acb464547a54..4a1991a103ea 100644 --- a/arch/arm/xen/p2m.c +++ b/arch/arm/xen/p2m.c @@ -62,11 +62,12 @@ static int xen_add_phys_to_mach_entry(struct xen_p2m_entry *new) unsigned long __pfn_to_mfn(unsigned long pfn) { - struct rb_node *n = phys_to_mach.rb_node; + struct rb_node *n; struct xen_p2m_entry *entry; unsigned long irqflags; read_lock_irqsave(&p2m_lock, irqflags); + n = phys_to_mach.rb_node; while (n) { entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys); if (entry->pfn <= pfn && @@ -153,10 +154,11 @@ bool __set_phys_to_machine_multi(unsigned long pfn, int rc; unsigned long irqflags; struct xen_p2m_entry *p2m_entry; - struct rb_node *n = phys_to_mach.rb_node; + struct rb_node *n; if (mfn == INVALID_P2M_ENTRY) { write_lock_irqsave(&p2m_lock, irqflags); + n = phys_to_mach.rb_node; while (n) { p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys); if (p2m_entry->pfn <= pfn && From f1a53bb27f17b26ba4080cd567c784457a64dd19 Mon Sep 17 00:00:00 2001 From: Carlo Lobrano Date: Fri, 3 Sep 2021 14:09:53 +0200 Subject: [PATCH 1249/1842] net: usb: qmi_wwan: add Telit 0x1060 composition commit 8d17a33b076d24aa4861f336a125c888fb918605 upstream. This patch adds support for Telit LN920 0x1060 composition 0x1060: tty, adb, rmnet, tty, tty, tty, tty Signed-off-by: Carlo Lobrano Signed-off-by: David S. 
Miller Cc: Fabio Porcedda Signed-off-by: Greg Kroah-Hartman --- drivers/net/usb/qmi_wwan.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index 597766d14563..62cc4677c4c9 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -1293,6 +1293,7 @@ static const struct usb_device_id products[] = { {QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */ {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ From 7055e34462445f88c7de5a72be68ba655fa5d700 Mon Sep 17 00:00:00 2001 From: Daniele Palmas Date: Fri, 10 Dec 2021 10:57:22 +0100 Subject: [PATCH 1250/1842] net: usb: qmi_wwan: add Telit 0x1070 composition MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 94f2a444f28a649926c410eb9a38afb13a83ebe0 upstream. Add the following Telit FN990 composition: 0x1070: tty, adb, rmnet, tty, tty, tty, tty Signed-off-by: Daniele Palmas Acked-by: Bjørn Mork Link: https://lore.kernel.org/r/20211210095722.22269-1-dnlplm@gmail.com Signed-off-by: Jakub Kicinski Cc: Fabio Porcedda Signed-off-by: Greg Kroah-Hartman --- drivers/net/usb/qmi_wwan.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c index 62cc4677c4c9..48e8b94e4a7c 100644 --- a/drivers/net/usb/qmi_wwan.c +++ b/drivers/net/usb/qmi_wwan.c @@ -1294,6 +1294,7 @@ static const struct usb_device_id products[] = { {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)}, /* Telit FN990 */ {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ From 0e21ef18019c5f3c83935f27e13995f37e1982f4 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 30 Jun 2022 11:55:42 +0200 Subject: [PATCH 1251/1842] clocksource/drivers/ixp4xx: remove EXPORT_SYMBOL_GPL from ixp4xx_timer_setup() ixp4xx_timer_setup is exported, and so can not be an __init function. But it does not need to be exported as it is only called from one in-kernel function, so just remove the EXPORT_SYMBOL_GPL() marking to resolve the build warning. This is fixed "properly" in commit 41929c9f628b ("clocksource/drivers/ixp4xx: Drop boardfile probe path") but that can not be backported to older kernels as the reworking of the IXP4xx codebase is not suitable for stable releases. 
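(The underlying conflict is that __init code is discarded once boot completes, while an exported symbol may be referenced by a module loaded at any later time; exporting an __init function is therefore flagged at build time, and dropping the export is the smaller fix for a stable branch.)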
Cc: Linus Walleij Cc: Daniel Lezcano Signed-off-by: Greg Kroah-Hartman --- drivers/clocksource/timer-ixp4xx.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/clocksource/timer-ixp4xx.c b/drivers/clocksource/timer-ixp4xx.c index 9396745e1c17..ad904bbbac6f 100644 --- a/drivers/clocksource/timer-ixp4xx.c +++ b/drivers/clocksource/timer-ixp4xx.c @@ -258,7 +258,6 @@ void __init ixp4xx_timer_setup(resource_size_t timerbase, } ixp4xx_timer_register(base, timer_irq, timer_freq); } -EXPORT_SYMBOL_GPL(ixp4xx_timer_setup); #ifdef CONFIG_OF static __init int ixp4xx_of_timer_init(struct device_node *np) From 7208d1236f72085a3290c41463ee3ee246d8afa5 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 7 Jul 2022 17:52:23 +0200 Subject: [PATCH 1252/1842] Linux 5.10.129 Link: https://lore.kernel.org/r/20220705115615.323395630@linuxfoundation.org Tested-by: Jon Hunter Tested-by: Florian Fainelli Tested-by: Sudip Mukherjee Tested-by: Guenter Roeck Tested-by: Shuah Khan Tested-by: Hulk Robot Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index b89ad8a987db..7d52cee37488 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 128 +SUBLEVEL = 129 EXTRAVERSION = NAME = Dare mighty things From 6c32496964da0dc230cea763a0e934b2e02dabd5 Mon Sep 17 00:00:00 2001 From: Jann Horn Date: Wed, 8 Jun 2022 20:22:05 +0200 Subject: [PATCH 1253/1842] mm/slub: add missing TID updates on slab deactivation commit eeaa345e128515135ccb864c04482180c08e3259 upstream. The fastpath in slab_alloc_node() assumes that c->slab is stable as long as the TID stays the same. However, two places in __slab_alloc() currently don't update the TID when deactivating the CPU slab. If multiple operations race the right way, this could lead to an object getting lost; or, in an even more unlikely situation, it could even lead to an object being freed onto the wrong slab's freelist, messing up the `inuse` counter and eventually causing a page to be freed to the page allocator while it still contains slab objects. (I haven't actually tested these cases though, this is just based on looking at the code. Writing testcases for this stuff seems like it'd be a pain...) The race leading to state inconsistency is (all operations on the same CPU and kmem_cache): - task A: begin do_slab_free(): - read TID - read pcpu freelist (==NULL) - check `slab == c->slab` (true) - [PREEMPT A->B] - task B: begin slab_alloc_node(): - fastpath fails (`c->freelist` is NULL) - enter __slab_alloc() - slub_get_cpu_ptr() (disables preemption) - enter ___slab_alloc() - take local_lock_irqsave() - read c->freelist as NULL - get_freelist() returns NULL - write `c->slab = NULL` - drop local_unlock_irqrestore() - goto new_slab - slub_percpu_partial() is NULL - get_partial() returns NULL - slub_put_cpu_ptr() (enables preemption) - [PREEMPT B->A] - task A: finish do_slab_free(): - this_cpu_cmpxchg_double() succeeds() - [CORRUPT STATE: c->slab==NULL, c->freelist!=NULL] From there, the object on c->freelist will get lost if task B is allowed to continue from here: It will proceed to the retry_load_slab label, set c->slab, then jump to load_freelist, which clobbers c->freelist. 
But if we instead continue as follows, we get worse corruption: - task A: run __slab_free() on object from other struct slab: - CPU_PARTIAL_FREE case (slab was on no list, is now on pcpu partial) - task A: run slab_alloc_node() with NUMA node constraint: - fastpath fails (c->slab is NULL) - call __slab_alloc() - slub_get_cpu_ptr() (disables preemption) - enter ___slab_alloc() - c->slab is NULL: goto new_slab - slub_percpu_partial() is non-NULL - set c->slab to slub_percpu_partial(c) - [CORRUPT STATE: c->slab points to slab-1, c->freelist has objects from slab-2] - goto redo - node_match() fails - goto deactivate_slab - existing c->freelist is passed into deactivate_slab() - inuse count of slab-1 is decremented to account for object from slab-2 At this point, the inuse count of slab-1 is 1 lower than it should be. This means that if we free all allocated objects in slab-1 except for one, SLUB will think that slab-1 is completely unused, and may free its page, leading to use-after-free. Fixes: c17dda40a6a4e ("slub: Separate out kmem_cache_cpu processing from deactivate_slab") Fixes: 03e404af26dc2 ("slub: fast release on full slab") Cc: stable@vger.kernel.org Signed-off-by: Jann Horn Acked-by: Christoph Lameter Acked-by: David Rientjes Reviewed-by: Muchun Song Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Vlastimil Babka Link: https://lore.kernel.org/r/20220608182205.2945720-1-jannh@google.com Signed-off-by: Greg Kroah-Hartman --- mm/slub.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 1384dc906833..b395ef064544 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2297,6 +2297,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page, c->page = NULL; c->freelist = NULL; + c->tid = next_tid(c->tid); } /* @@ -2430,8 +2431,6 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c) { stat(s, CPUSLAB_FLUSH); deactivate_slab(s, c->page, c->freelist, c); - - c->tid = next_tid(c->tid); } /* @@ -2717,6 +2716,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, if (!freelist) { c->page = NULL; + c->tid = next_tid(c->tid); stat(s, DEACTIVATE_BYPASS); goto new_slab; } From b75d4bec85b81109726c0a7d982067e766714509 Mon Sep 17 00:00:00 2001 From: Tim Crawford Date: Fri, 24 Jun 2022 08:41:09 -0600 Subject: [PATCH 1254/1842] ALSA: hda/realtek: Add quirk for Clevo L140PU commit 11bea26929a1a3a9dd1a287b60c2f471701bf706 upstream. Fixes headset detection on Clevo L140PU. 
Signed-off-by: Tim Crawford Cc: Link: https://lore.kernel.org/r/20220624144109.3957-1-tcrawford@system76.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 604f55ec7944..f7645720d29c 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -8934,6 +8934,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), From 85cd41070df992d3c0dfd828866fdd243d3b774a Mon Sep 17 00:00:00 2001 From: Oliver Hartkopp Date: Fri, 20 May 2022 20:32:39 +0200 Subject: [PATCH 1255/1842] can: bcm: use call_rcu() instead of costly synchronize_rcu() commit f1b4e32aca0811aa011c76e5d6cf2fa19224b386 upstream. In commit d5f9023fa61e ("can: bcm: delay release of struct bcm_op after synchronize_rcu()") Thadeu Lima de Souza Cascardo introduced two synchronize_rcu() calls in bcm_release() (only once at socket close) and in bcm_delete_rx_op() (called on removal of each single bcm_op). Unfortunately this slow removal of the bcm_op's affects user space applications like cansniffer where the modification of a filter removes 2048 bcm_op's which blocks the cansniffer application for 40(!) seconds. In commit 181d4447905d ("can: gw: use call_rcu() instead of costly synchronize_rcu()") Eric Dumazet replaced the synchronize_rcu() calls with several call_rcu()'s to safely remove the data structures after the removal of CAN ID subscriptions with can_rx_unregister() calls. This patch adopts Erics approach for the can-bcm which should be applicable since the removal of tasklet_kill() in bcm_remove_op() and the introduction of the HRTIMER_MODE_SOFT timer handling in Linux 5.4. 
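The difference between the two approaches can be sketched as follows (simplified, surrounding locking omitted; a generic illustration of the pattern rather than the exact hunks below):

  /* synchronize_rcu(): the caller blocks for a full grace period per
   * removed bcm_op -- with 2048 ops this adds up to the reported ~40s.
   */
  list_del(&op->list);
  synchronize_rcu();
  bcm_remove_op(op);

  /* call_rcu(): unlink now, defer the actual freeing to a callback that
   * RCU invokes once all readers are guaranteed to be gone.
   */
  list_del(&op->list);
  call_rcu(&op->rcu, bcm_free_op_rcu);

This keeps the removal path non-blocking while still guaranteeing that no RCU reader can observe a freed bcm_op.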
Fixes: d5f9023fa61e ("can: bcm: delay release of struct bcm_op after synchronize_rcu()") # >= 5.4 Link: https://lore.kernel.org/all/20220520183239.19111-1-socketcan@hartkopp.net Cc: stable@vger.kernel.org Cc: Eric Dumazet Cc: Norbert Slusarek Cc: Thadeu Lima de Souza Cascardo Signed-off-by: Oliver Hartkopp Signed-off-by: Marc Kleine-Budde Signed-off-by: Greg Kroah-Hartman --- net/can/bcm.c | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-) diff --git a/net/can/bcm.c b/net/can/bcm.c index 0928a39c4423..e918a0f3cda2 100644 --- a/net/can/bcm.c +++ b/net/can/bcm.c @@ -100,6 +100,7 @@ static inline u64 get_u64(const struct canfd_frame *cp, int offset) struct bcm_op { struct list_head list; + struct rcu_head rcu; int ifindex; canid_t can_id; u32 flags; @@ -718,10 +719,9 @@ static struct bcm_op *bcm_find_op(struct list_head *ops, return NULL; } -static void bcm_remove_op(struct bcm_op *op) +static void bcm_free_op_rcu(struct rcu_head *rcu_head) { - hrtimer_cancel(&op->timer); - hrtimer_cancel(&op->thrtimer); + struct bcm_op *op = container_of(rcu_head, struct bcm_op, rcu); if ((op->frames) && (op->frames != &op->sframe)) kfree(op->frames); @@ -732,6 +732,14 @@ static void bcm_remove_op(struct bcm_op *op) kfree(op); } +static void bcm_remove_op(struct bcm_op *op) +{ + hrtimer_cancel(&op->timer); + hrtimer_cancel(&op->thrtimer); + + call_rcu(&op->rcu, bcm_free_op_rcu); +} + static void bcm_rx_unreg(struct net_device *dev, struct bcm_op *op) { if (op->rx_reg_dev == dev) { @@ -757,6 +765,9 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh, if ((op->can_id == mh->can_id) && (op->ifindex == ifindex) && (op->flags & CAN_FD_FRAME) == (mh->flags & CAN_FD_FRAME)) { + /* disable automatic timer on frame reception */ + op->flags |= RX_NO_AUTOTIMER; + /* * Don't care if we're bound or not (due to netdev * problems) can_rx_unregister() is always a save @@ -785,7 +796,6 @@ static int bcm_delete_rx_op(struct list_head *ops, struct bcm_msg_head *mh, bcm_rx_handler, op); list_del(&op->list); - synchronize_rcu(); bcm_remove_op(op); return 1; /* done */ } From b6f4b347a1fb4ddf217ca3ed15ce04c7d4d8cbeb Mon Sep 17 00:00:00 2001 From: Liang He Date: Sun, 19 Jun 2022 15:02:57 +0800 Subject: [PATCH 1256/1842] can: grcan: grcan_probe(): remove extra of_node_get() commit 562fed945ea482833667f85496eeda766d511386 upstream. In grcan_probe(), of_find_node_by_path() has already increased the refcount. There is no need to call of_node_get() again, so remove it. 
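For context, a generic sketch of the expected OF reference-counting pattern (names are illustrative, not the driver's exact code): lookup helpers such as of_find_node_by_path() return the node with its refcount already elevated, so the caller needs one matching of_node_put() and no extra of_node_get():

  np = of_find_node_by_path("/ambapp0");
  if (np) {
          /* reference already held by the lookup above */
          err = of_property_read_u32(np, "systemid", &sysid);
          of_node_put(np);        /* drop that single reference */
  }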
Link: https://lore.kernel.org/all/20220619070257.4067022-1-windhl@126.com Fixes: 1e93ed26acf0 ("can: grcan: grcan_probe(): fix broken system id check for errata workaround needs") Cc: stable@vger.kernel.org # v5.18 Cc: Andreas Larsson Signed-off-by: Liang He Signed-off-by: Marc Kleine-Budde Signed-off-by: Greg Kroah-Hartman --- drivers/net/can/grcan.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/net/can/grcan.c b/drivers/net/can/grcan.c index 4923f9ac6a43..3794a7b2c64e 100644 --- a/drivers/net/can/grcan.c +++ b/drivers/net/can/grcan.c @@ -1659,7 +1659,6 @@ static int grcan_probe(struct platform_device *ofdev) */ sysid_parent = of_find_node_by_path("/ambapp0"); if (sysid_parent) { - of_node_get(sysid_parent); err = of_property_read_u32(sysid_parent, "systemid", &sysid); if (!err && ((sysid & GRLIB_VERSION_MASK) >= GRCAN_TXBUG_SAFE_GRLIB_VERSION)) From d0b8e223998866b3e7b2895927d4e9689b0a80d8 Mon Sep 17 00:00:00 2001 From: Rhett Aultman Date: Sun, 3 Jul 2022 19:33:06 +0200 Subject: [PATCH 1257/1842] can: gs_usb: gs_usb_open/close(): fix memory leak commit 2bda24ef95c0311ab93bda00db40486acf30bd0a upstream. The gs_usb driver appears to suffer from a malady common to many USB CAN adapter drivers in that it performs usb_alloc_coherent() to allocate a number of USB request blocks (URBs) for RX, and then later relies on usb_kill_anchored_urbs() to free them, but this doesn't actually free them. As a result, this may be leaking DMA memory that's been used by the driver. This commit is an adaptation of the techniques found in the esd_usb2 driver where a similar design pattern led to a memory leak. It explicitly frees the RX URBs and their DMA memory via a call to usb_free_coherent(). Since the RX URBs were allocated in the gs_can_open(), we remove them in gs_can_close() rather than in the disconnect function as was done in esd_usb2. For more information, see the 928150fad41b ("can: esd_usb2: fix memory leak"). 
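The underlying rule, sketched generically below (variable names are illustrative), is that usb_kill_anchored_urbs() only cancels outstanding URBs; a buffer obtained with usb_alloc_coherent() stays allocated until an explicit usb_free_coherent() with the same size and DMA handle:

  /* open path: keep both the CPU pointer and the DMA handle */
  buf = usb_alloc_coherent(udev, sizeof(struct gs_host_frame),
                           GFP_KERNEL, &buf_dma);

  /* close path: cancelling the URBs does not release the buffer */
  usb_kill_anchored_urbs(&rx_submitted);
  usb_free_coherent(udev, sizeof(struct gs_host_frame), buf, buf_dma);

Hence the patch records the buffer pointers and DMA addresses at open time and frees them explicitly in gs_can_close().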
Link: https://lore.kernel.org/all/alpine.DEB.2.22.394.2206031547001.1630869@thelappy Fixes: d08e973a77d1 ("can: gs_usb: Added support for the GS_USB CAN devices") Cc: stable@vger.kernel.org Signed-off-by: Rhett Aultman Signed-off-by: Marc Kleine-Budde Signed-off-by: Greg Kroah-Hartman --- drivers/net/can/usb/gs_usb.c | 23 +++++++++++++++++++++-- 1 file changed, 21 insertions(+), 2 deletions(-) diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c index e023c401f4f7..1bfc497da9ac 100644 --- a/drivers/net/can/usb/gs_usb.c +++ b/drivers/net/can/usb/gs_usb.c @@ -184,6 +184,8 @@ struct gs_can { struct usb_anchor tx_submitted; atomic_t active_tx_urbs; + void *rxbuf[GS_MAX_RX_URBS]; + dma_addr_t rxbuf_dma[GS_MAX_RX_URBS]; }; /* usb interface struct */ @@ -592,6 +594,7 @@ static int gs_can_open(struct net_device *netdev) for (i = 0; i < GS_MAX_RX_URBS; i++) { struct urb *urb; u8 *buf; + dma_addr_t buf_dma; /* alloc rx urb */ urb = usb_alloc_urb(0, GFP_KERNEL); @@ -602,7 +605,7 @@ static int gs_can_open(struct net_device *netdev) buf = usb_alloc_coherent(dev->udev, sizeof(struct gs_host_frame), GFP_KERNEL, - &urb->transfer_dma); + &buf_dma); if (!buf) { netdev_err(netdev, "No memory left for USB buffer\n"); @@ -610,6 +613,8 @@ static int gs_can_open(struct net_device *netdev) return -ENOMEM; } + urb->transfer_dma = buf_dma; + /* fill, anchor, and submit rx urb */ usb_fill_bulk_urb(urb, dev->udev, @@ -633,10 +638,17 @@ static int gs_can_open(struct net_device *netdev) rc); usb_unanchor_urb(urb); + usb_free_coherent(dev->udev, + sizeof(struct gs_host_frame), + buf, + buf_dma); usb_free_urb(urb); break; } + dev->rxbuf[i] = buf; + dev->rxbuf_dma[i] = buf_dma; + /* Drop reference, * USB core will take care of freeing it */ @@ -701,13 +713,20 @@ static int gs_can_close(struct net_device *netdev) int rc; struct gs_can *dev = netdev_priv(netdev); struct gs_usb *parent = dev->parent; + unsigned int i; netif_stop_queue(netdev); /* Stop polling */ parent->active_channels--; - if (!parent->active_channels) + if (!parent->active_channels) { usb_kill_anchored_urbs(&parent->rx_submitted); + for (i = 0; i < GS_MAX_RX_URBS; i++) + usb_free_coherent(dev->udev, + sizeof(struct gs_host_frame), + dev->rxbuf[i], + dev->rxbuf_dma[i]); + } /* Stop sending URBs */ usb_kill_anchored_urbs(&dev->tx_submitted); From 9adec7334969d9a20b9359ee745d17efc8f9890f Mon Sep 17 00:00:00 2001 From: Daniel Borkmann Date: Fri, 1 Jul 2022 14:47:24 +0200 Subject: [PATCH 1258/1842] bpf: Fix incorrect verifier simulation around jmp32's jeq/jne commit a12ca6277eca6aeeccf66e840c23a2b520e24c8f upstream. Kuee reported a quirk in the jmp32's jeq/jne simulation, namely that the register value does not match expectations for the fall-through path. 
For example: Before fix: 0: R1=ctx(off=0,imm=0) R10=fp0 0: (b7) r2 = 0 ; R2_w=P0 1: (b7) r6 = 563 ; R6_w=P563 2: (87) r2 = -r2 ; R2_w=Pscalar() 3: (87) r2 = -r2 ; R2_w=Pscalar() 4: (4c) w2 |= w6 ; R2_w=Pscalar(umin=563,umax=4294967295,var_off=(0x233; 0xfffffdcc),s32_min=-2147483085) R6_w=P563 5: (56) if w2 != 0x8 goto pc+1 ; R2_w=P571 <--- [*] 6: (95) exit R0 !read_ok After fix: 0: R1=ctx(off=0,imm=0) R10=fp0 0: (b7) r2 = 0 ; R2_w=P0 1: (b7) r6 = 563 ; R6_w=P563 2: (87) r2 = -r2 ; R2_w=Pscalar() 3: (87) r2 = -r2 ; R2_w=Pscalar() 4: (4c) w2 |= w6 ; R2_w=Pscalar(umin=563,umax=4294967295,var_off=(0x233; 0xfffffdcc),s32_min=-2147483085) R6_w=P563 5: (56) if w2 != 0x8 goto pc+1 ; R2_w=P8 <--- [*] 6: (95) exit R0 !read_ok As can be seen on line 5 for the branch fall-through path in R2 [*] is that given condition w2 != 0x8 is false, verifier should conclude that r2 = 8 as upper 32 bit are known to be zero. However, verifier incorrectly concludes that r2 = 571 which is far off. The problem is it only marks false{true}_reg as known in the switch for JE/NE case, but at the end of the function, it uses {false,true}_{64,32}off to update {false,true}_reg->var_off and they still hold the prior value of {false,true}_reg->var_off before it got marked as known. The subsequent __reg_combine_32_into_64() then propagates this old var_off and derives new bounds. The information between min/max bounds on {false,true}_reg from setting the register to known const combined with the {false,true}_reg->var_off based on the old information then derives wrong register data. Fix it by detangling the BPF_JEQ/BPF_JNE cases and updating relevant {false,true}_{64,32}off tnums along with the register marking to known constant. Fixes: 3f50f132d840 ("bpf: Verifier, do explicit ALU32 bounds tracking") Reported-by: Kuee K1r0a Signed-off-by: Daniel Borkmann Signed-off-by: Andrii Nakryiko Acked-by: John Fastabend Link: https://lore.kernel.org/bpf/20220701124727.11153-1-daniel@iogearbox.net Signed-off-by: Greg Kroah-Hartman --- kernel/bpf/verifier.c | 43 +++++++++++++++++++++++++------------------ 1 file changed, 25 insertions(+), 18 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 015bf2ba4a0b..20ffc609259c 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7512,26 +7512,33 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg, return; switch (opcode) { + /* JEQ/JNE comparison doesn't change the register equivalence. + * + * r1 = r2; + * if (r1 == 42) goto label; + * ... + * label: // here both r1 and r2 are known to be 42. + * + * Hence when marking register as known preserve it's ID. + */ case BPF_JEQ: - case BPF_JNE: - { - struct bpf_reg_state *reg = - opcode == BPF_JEQ ? true_reg : false_reg; - - /* JEQ/JNE comparison doesn't change the register equivalence. - * r1 = r2; - * if (r1 == 42) goto label; - * ... - * label: // here both r1 and r2 are known to be 42. - * - * Hence when marking register as known preserve it's ID. 
- */ - if (is_jmp32) - __mark_reg32_known(reg, val32); - else - ___mark_reg_known(reg, val); + if (is_jmp32) { + __mark_reg32_known(true_reg, val32); + true_32off = tnum_subreg(true_reg->var_off); + } else { + ___mark_reg_known(true_reg, val); + true_64off = true_reg->var_off; + } + break; + case BPF_JNE: + if (is_jmp32) { + __mark_reg32_known(false_reg, val32); + false_32off = tnum_subreg(false_reg->var_off); + } else { + ___mark_reg_known(false_reg, val); + false_64off = false_reg->var_off; + } break; - } case BPF_JSET: if (is_jmp32) { false_32off = tnum_and(false_32off, tnum_const(~val32)); From e917be1f83ea14a68b3cf64d3da9968eaf991dae Mon Sep 17 00:00:00 2001 From: Daniel Borkmann Date: Fri, 1 Jul 2022 14:47:25 +0200 Subject: [PATCH 1259/1842] bpf: Fix insufficient bounds propagation from adjust_scalar_min_max_vals commit 3844d153a41adea718202c10ae91dc96b37453b5 upstream. Kuee reported a corner case where the tnum becomes constant after the call to __reg_bound_offset(), but the register's bounds are not, that is, its min bounds are still not equal to the register's max bounds. This in turn allows to leak pointers through turning a pointer register as is into an unknown scalar via adjust_ptr_min_max_vals(). Before: func#0 @0 0: R1=ctx(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) R10=fp(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) 0: (b7) r0 = 1 ; R0_w=scalar(imm=1,umin=1,umax=1,var_off=(0x1; 0x0)) 1: (b7) r3 = 0 ; R3_w=scalar(imm=0,umax=0,var_off=(0x0; 0x0)) 2: (87) r3 = -r3 ; R3_w=scalar() 3: (87) r3 = -r3 ; R3_w=scalar() 4: (47) r3 |= 32767 ; R3_w=scalar(smin=-9223372036854743041,umin=32767,var_off=(0x7fff; 0xffffffffffff8000),s32_min=-2147450881) 5: (75) if r3 s>= 0x0 goto pc+1 ; R3_w=scalar(umin=9223372036854808575,var_off=(0x8000000000007fff; 0x7fffffffffff8000),s32_min=-2147450881,u32_min=32767) 6: (95) exit from 5 to 7: R0=scalar(imm=1,umin=1,umax=1,var_off=(0x1; 0x0)) R1=ctx(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) R3=scalar(umin=32767,umax=9223372036854775807,var_off=(0x7fff; 0x7fffffffffff8000),s32_min=-2147450881) R10=fp(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) 7: (d5) if r3 s<= 0x8000 goto pc+1 ; R3=scalar(umin=32769,umax=9223372036854775807,var_off=(0x7fff; 0x7fffffffffff8000),s32_min=-2147450881,u32_min=32767) 8: (95) exit from 7 to 9: R0=scalar(imm=1,umin=1,umax=1,var_off=(0x1; 0x0)) R1=ctx(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) R3=scalar(umin=32767,umax=32768,var_off=(0x7fff; 0x8000)) R10=fp(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) 9: (07) r3 += -32767 ; R3_w=scalar(imm=0,umax=1,var_off=(0x0; 0x0)) <--- [*] 10: (95) exit What can be seen here is that R3=scalar(umin=32767,umax=32768,var_off=(0x7fff; 0x8000)) after the operation R3 += -32767 results in a 'malformed' constant, that is, R3_w=scalar(imm=0,umax=1,var_off=(0x0; 0x0)). Intersecting with var_off has not been done at that point via __update_reg_bounds(), which would have improved the umax to be equal to umin. Refactor the tnum <> min/max bounds information flow into a reg_bounds_sync() helper and use it consistently everywhere. After the fix, bounds have been corrected to R3_w=scalar(imm=0,umax=0,var_off=(0x0; 0x0)) and thus the register is regarded as a 'proper' constant scalar of 0. 
After: func#0 @0 0: R1=ctx(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) R10=fp(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) 0: (b7) r0 = 1 ; R0_w=scalar(imm=1,umin=1,umax=1,var_off=(0x1; 0x0)) 1: (b7) r3 = 0 ; R3_w=scalar(imm=0,umax=0,var_off=(0x0; 0x0)) 2: (87) r3 = -r3 ; R3_w=scalar() 3: (87) r3 = -r3 ; R3_w=scalar() 4: (47) r3 |= 32767 ; R3_w=scalar(smin=-9223372036854743041,umin=32767,var_off=(0x7fff; 0xffffffffffff8000),s32_min=-2147450881) 5: (75) if r3 s>= 0x0 goto pc+1 ; R3_w=scalar(umin=9223372036854808575,var_off=(0x8000000000007fff; 0x7fffffffffff8000),s32_min=-2147450881,u32_min=32767) 6: (95) exit from 5 to 7: R0=scalar(imm=1,umin=1,umax=1,var_off=(0x1; 0x0)) R1=ctx(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) R3=scalar(umin=32767,umax=9223372036854775807,var_off=(0x7fff; 0x7fffffffffff8000),s32_min=-2147450881) R10=fp(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) 7: (d5) if r3 s<= 0x8000 goto pc+1 ; R3=scalar(umin=32769,umax=9223372036854775807,var_off=(0x7fff; 0x7fffffffffff8000),s32_min=-2147450881,u32_min=32767) 8: (95) exit from 7 to 9: R0=scalar(imm=1,umin=1,umax=1,var_off=(0x1; 0x0)) R1=ctx(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) R3=scalar(umin=32767,umax=32768,var_off=(0x7fff; 0x8000)) R10=fp(off=0,imm=0,umax=0,var_off=(0x0; 0x0)) 9: (07) r3 += -32767 ; R3_w=scalar(imm=0,umax=0,var_off=(0x0; 0x0)) <--- [*] 10: (95) exit Fixes: b03c9f9fdc37 ("bpf/verifier: track signed and unsigned min/max values") Reported-by: Kuee K1r0a Signed-off-by: Daniel Borkmann Signed-off-by: Andrii Nakryiko Acked-by: John Fastabend Link: https://lore.kernel.org/bpf/20220701124727.11153-2-daniel@iogearbox.net Signed-off-by: Greg Kroah-Hartman --- kernel/bpf/verifier.c | 72 ++++++++++++++----------------------------- 1 file changed, 23 insertions(+), 49 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 20ffc609259c..15ddc4292bc0 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1249,6 +1249,21 @@ static void __reg_bound_offset(struct bpf_reg_state *reg) reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off); } +static void reg_bounds_sync(struct bpf_reg_state *reg) +{ + /* We might have learned new bounds from the var_off. */ + __update_reg_bounds(reg); + /* We might have learned something about the sign bit. */ + __reg_deduce_bounds(reg); + /* We might have learned some bits from the bounds. */ + __reg_bound_offset(reg); + /* Intersecting with the old var_off might have improved our bounds + * slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), + * then new var_off is (0; 0x7f...fc) which improves our umax. + */ + __update_reg_bounds(reg); +} + static bool __reg32_bound_s64(s32 a) { return a >= 0 && a <= S32_MAX; @@ -1290,16 +1305,8 @@ static void __reg_combine_32_into_64(struct bpf_reg_state *reg) * so they do not impact tnum bounds calculation. */ __mark_reg64_unbounded(reg); - __update_reg_bounds(reg); } - - /* Intersecting with the old var_off might have improved our bounds - * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), - * then new var_off is (0; 0x7f...fc) which improves our umax. 
- */ - __reg_deduce_bounds(reg); - __reg_bound_offset(reg); - __update_reg_bounds(reg); + reg_bounds_sync(reg); } static bool __reg64_bound_s32(s64 a) @@ -1315,7 +1322,6 @@ static bool __reg64_bound_u32(u64 a) static void __reg_combine_64_into_32(struct bpf_reg_state *reg) { __mark_reg32_unbounded(reg); - if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) { reg->s32_min_value = (s32)reg->smin_value; reg->s32_max_value = (s32)reg->smax_value; @@ -1324,14 +1330,7 @@ static void __reg_combine_64_into_32(struct bpf_reg_state *reg) reg->u32_min_value = (u32)reg->umin_value; reg->u32_max_value = (u32)reg->umax_value; } - - /* Intersecting with the old var_off might have improved our bounds - * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), - * then new var_off is (0; 0x7f...fc) which improves our umax. - */ - __reg_deduce_bounds(reg); - __reg_bound_offset(reg); - __update_reg_bounds(reg); + reg_bounds_sync(reg); } /* Mark a register as having a completely unknown (scalar) value. */ @@ -5230,9 +5229,7 @@ static void do_refine_retval_range(struct bpf_reg_state *regs, int ret_type, ret_reg->s32_max_value = meta->msize_max_value; ret_reg->smin_value = -MAX_ERRNO; ret_reg->s32_min_value = -MAX_ERRNO; - __reg_deduce_bounds(ret_reg); - __reg_bound_offset(ret_reg); - __update_reg_bounds(ret_reg); + reg_bounds_sync(ret_reg); } static int @@ -6197,11 +6194,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env, if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type)) return -EINVAL; - - __update_reg_bounds(dst_reg); - __reg_deduce_bounds(dst_reg); - __reg_bound_offset(dst_reg); - + reg_bounds_sync(dst_reg); if (sanitize_check_bounds(env, insn, dst_reg) < 0) return -EACCES; if (sanitize_needed(opcode)) { @@ -6939,10 +6932,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env, /* ALU32 ops are zero extended into 64bit register */ if (alu32) zext_32_to_64(dst_reg); - - __update_reg_bounds(dst_reg); - __reg_deduce_bounds(dst_reg); - __reg_bound_offset(dst_reg); + reg_bounds_sync(dst_reg); return 0; } @@ -7131,10 +7121,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) insn->dst_reg); } zext_32_to_64(dst_reg); - - __update_reg_bounds(dst_reg); - __reg_deduce_bounds(dst_reg); - __reg_bound_offset(dst_reg); + reg_bounds_sync(dst_reg); } } else { /* case: R = imm @@ -7693,21 +7680,8 @@ static void __reg_combine_min_max(struct bpf_reg_state *src_reg, dst_reg->smax_value); src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off, dst_reg->var_off); - /* We might have learned new bounds from the var_off. */ - __update_reg_bounds(src_reg); - __update_reg_bounds(dst_reg); - /* We might have learned something about the sign bit. */ - __reg_deduce_bounds(src_reg); - __reg_deduce_bounds(dst_reg); - /* We might have learned some bits from the bounds. */ - __reg_bound_offset(src_reg); - __reg_bound_offset(dst_reg); - /* Intersecting with the old var_off might have improved our bounds - * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), - * then new var_off is (0; 0x7f...fc) which improves our umax. 
- */ - __update_reg_bounds(src_reg); - __update_reg_bounds(dst_reg); + reg_bounds_sync(src_reg); + reg_bounds_sync(dst_reg); } static void reg_combine_min_max(struct bpf_reg_state *true_src, From 0085da9df3dced730027923a6b48f58e9016af91 Mon Sep 17 00:00:00 2001 From: Oliver Neukum Date: Tue, 5 Jul 2022 14:53:51 +0200 Subject: [PATCH 1260/1842] usbnet: fix memory leak in error case commit b55a21b764c1e182014630fa5486d717484ac58f upstream. usbnet_write_cmd_async() mixed up which buffers need to be freed in which error case. v2: add Fixes tag v3: fix uninitialized buf pointer Fixes: 877bd862f32b8 ("usbnet: introduce usbnet 3 command helpers") Signed-off-by: Oliver Neukum Link: https://lore.kernel.org/r/20220705125351.17309-1-oneukum@suse.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/usb/usbnet.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c index 74a833ad7aa9..58dd77efcaad 100644 --- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -2102,7 +2102,7 @@ static void usbnet_async_cmd_cb(struct urb *urb) int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype, u16 value, u16 index, const void *data, u16 size) { - struct usb_ctrlrequest *req = NULL; + struct usb_ctrlrequest *req; struct urb *urb; int err = -ENOMEM; void *buf = NULL; @@ -2120,7 +2120,7 @@ int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype, if (!buf) { netdev_err(dev->net, "Error allocating buffer" " in %s!\n", __func__); - goto fail_free; + goto fail_free_urb; } } @@ -2144,14 +2144,21 @@ int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype, if (err < 0) { netdev_err(dev->net, "Error submitting the control" " message: status=%d\n", err); - goto fail_free; + goto fail_free_all; } return 0; +fail_free_all: + kfree(req); fail_free_buf: kfree(buf); -fail_free: - kfree(req); + /* + * avoid a double free + * needed because the flag can be set only + * after filling the URB + */ + urb->transfer_flags = 0; +fail_free_urb: usb_free_urb(urb); fail: return err; From 4f59d12efe3067661a66be13166d790341831948 Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Tue, 5 Jul 2022 20:56:10 +0800 Subject: [PATCH 1261/1842] net: rose: fix UAF bug caused by rose_t0timer_expiry commit 148ca04518070910739dfc4eeda765057856403d upstream. There are UAF bugs caused by rose_t0timer_expiry(). The root cause is that del_timer() could not stop the timer handler that is running and there is no synchronization. One of the race conditions is shown below: (thread 1) | (thread 2) | rose_device_event | rose_rt_device_down | rose_remove_neigh rose_t0timer_expiry | rose_stop_t0timer(rose_neigh) ... | del_timer(&neigh->t0timer) | kfree(rose_neigh) //[1]FREE neigh->dce_mode //[2]USE | The rose_neigh is deallocated in position [1] and use in position [2]. The crash trace triggered by POC is like below: BUG: KASAN: use-after-free in expire_timers+0x144/0x320 Write of size 8 at addr ffff888009b19658 by task swapper/0/0 ... Call Trace: dump_stack_lvl+0xbf/0xee print_address_description+0x7b/0x440 print_report+0x101/0x230 ? expire_timers+0x144/0x320 kasan_report+0xed/0x120 ? expire_timers+0x144/0x320 expire_timers+0x144/0x320 __run_timers+0x3ff/0x4d0 run_timer_softirq+0x41/0x80 __do_softirq+0x233/0x544 ... 
This patch changes rose_stop_ftimer() and rose_stop_t0timer() in rose_remove_neigh() to del_timer_sync() in order that the timer handler could be finished before the resources such as rose_neigh and so on are deallocated. As a result, the UAF bugs could be mitigated. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Duoming Zhou Link: https://lore.kernel.org/r/20220705125610.77971-1-duoming@zju.edu.cn Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- net/rose/rose_route.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c index 6e35703ff353..95b198f84a3a 100644 --- a/net/rose/rose_route.c +++ b/net/rose/rose_route.c @@ -227,8 +227,8 @@ static void rose_remove_neigh(struct rose_neigh *rose_neigh) { struct rose_neigh *s; - rose_stop_ftimer(rose_neigh); - rose_stop_t0timer(rose_neigh); + del_timer_sync(&rose_neigh->ftimer); + del_timer_sync(&rose_neigh->t0timer); skb_queue_purge(&rose_neigh->queue); From 4a6430b99f67842617c7208ca55a411e903ba03a Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Sat, 2 Jul 2022 04:16:31 +0200 Subject: [PATCH 1262/1842] netfilter: nft_set_pipapo: release elements in clone from abort path commit 9827a0e6e23bf43003cd3d5b7fb11baf59a35e1e upstream. New elements that reside in the clone are not released in case that the transaction is aborted. [16302.231754] ------------[ cut here ]------------ [16302.231756] WARNING: CPU: 0 PID: 100509 at net/netfilter/nf_tables_api.c:1864 nf_tables_chain_destroy+0x26/0x127 [nf_tables] [...] [16302.231882] CPU: 0 PID: 100509 Comm: nft Tainted: G W 5.19.0-rc3+ #155 [...] [16302.231887] RIP: 0010:nf_tables_chain_destroy+0x26/0x127 [nf_tables] [16302.231899] Code: f3 fe ff ff 41 55 41 54 55 53 48 8b 6f 10 48 89 fb 48 c7 c7 82 96 d9 a0 8b 55 50 48 8b 75 58 e8 de f5 92 e0 83 7d 50 00 74 09 <0f> 0b 5b 5d 41 5c 41 5d c3 4c 8b 65 00 48 8b 7d 08 49 39 fc 74 05 [...] [16302.231917] Call Trace: [16302.231919] [16302.231921] __nf_tables_abort.cold+0x23/0x28 [nf_tables] [16302.231934] nf_tables_abort+0x30/0x50 [nf_tables] [16302.231946] nfnetlink_rcv_batch+0x41a/0x840 [nfnetlink] [16302.231952] ? __nla_validate_parse+0x48/0x190 [16302.231959] nfnetlink_rcv+0x110/0x129 [nfnetlink] [16302.231963] netlink_unicast+0x211/0x340 [16302.231969] netlink_sendmsg+0x21e/0x460 Add nft_set_pipapo_match_destroy() helper function to release the elements in the lookup tables. Stefano Brivio says: "We additionally look for elements pointers in the cloned matching data if priv->dirty is set, because that means that cloned data might point to additional elements we did not commit to the working copy yet (such as the abort path case, but perhaps not limited to it)." 
Fixes: 3c4287f62044 ("nf_tables: Add set type for arbitrary concatenation of ranges") Reviewed-by: Stefano Brivio Signed-off-by: Pablo Neira Ayuso Signed-off-by: Greg Kroah-Hartman --- net/netfilter/nft_set_pipapo.c | 48 +++++++++++++++++++++++----------- 1 file changed, 33 insertions(+), 15 deletions(-) diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c index f67c4436c5d3..949da87dbb06 100644 --- a/net/netfilter/nft_set_pipapo.c +++ b/net/netfilter/nft_set_pipapo.c @@ -2120,6 +2120,32 @@ static int nft_pipapo_init(const struct nft_set *set, return err; } +/** + * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array + * @set: nftables API set representation + * @m: matching data pointing to key mapping array + */ +static void nft_set_pipapo_match_destroy(const struct nft_set *set, + struct nft_pipapo_match *m) +{ + struct nft_pipapo_field *f; + int i, r; + + for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) + ; + + for (r = 0; r < f->rules; r++) { + struct nft_pipapo_elem *e; + + if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e) + continue; + + e = f->mt[r].e; + + nft_set_elem_destroy(set, e, true); + } +} + /** * nft_pipapo_destroy() - Free private data for set and all committed elements * @set: nftables API set representation @@ -2128,26 +2154,13 @@ static void nft_pipapo_destroy(const struct nft_set *set) { struct nft_pipapo *priv = nft_set_priv(set); struct nft_pipapo_match *m; - struct nft_pipapo_field *f; - int i, r, cpu; + int cpu; m = rcu_dereference_protected(priv->match, true); if (m) { rcu_barrier(); - for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) - ; - - for (r = 0; r < f->rules; r++) { - struct nft_pipapo_elem *e; - - if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e) - continue; - - e = f->mt[r].e; - - nft_set_elem_destroy(set, e, true); - } + nft_set_pipapo_match_destroy(set, m); #ifdef NFT_PIPAPO_ALIGN free_percpu(m->scratch_aligned); @@ -2161,6 +2174,11 @@ static void nft_pipapo_destroy(const struct nft_set *set) } if (priv->clone) { + m = priv->clone; + + if (priv->dirty) + nft_set_pipapo_match_destroy(set, m); + #ifdef NFT_PIPAPO_ALIGN free_percpu(priv->clone->scratch_aligned); #endif From 0a5e36dbcb448a7a8ba63d1d4b6ade2c9d3cc8bf Mon Sep 17 00:00:00 2001 From: Pablo Neira Ayuso Date: Sat, 2 Jul 2022 04:16:30 +0200 Subject: [PATCH 1263/1842] netfilter: nf_tables: stricter validation of element data commit 7e6bc1f6cabcd30aba0b11219d8e01b952eacbb6 upstream. Make sure element data type and length do not mismatch the one specified by the set declaration. 
Fixes: 7d7402642eaf ("netfilter: nf_tables: variable sized set element keys / data") Reported-by: Hugues ANGUELKOV Signed-off-by: Pablo Neira Ayuso Signed-off-by: Greg Kroah-Hartman --- net/netfilter/nf_tables_api.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c index 3c17fadaab5f..e5622e925ea9 100644 --- a/net/netfilter/nf_tables_api.c +++ b/net/netfilter/nf_tables_api.c @@ -4886,13 +4886,20 @@ static int nft_setelem_parse_data(struct nft_ctx *ctx, struct nft_set *set, struct nft_data *data, struct nlattr *attr) { + u32 dtype; int err; err = nft_data_init(ctx, data, NFT_DATA_VALUE_MAXLEN, desc, attr); if (err < 0) return err; - if (desc->type != NFT_DATA_VERDICT && desc->len != set->dlen) { + if (set->dtype == NFT_DATA_VERDICT) + dtype = NFT_DATA_VERDICT; + else + dtype = NFT_DATA_VALUE; + + if (dtype != desc->type || + set->dlen != desc->len) { nft_data_release(data, desc->type); return -EINVAL; } From 963c80f070ed513b9c6ac29c9236801fba6eefc4 Mon Sep 17 00:00:00 2001 From: Yian Chen Date: Fri, 20 May 2022 17:21:15 -0700 Subject: [PATCH 1264/1842] iommu/vt-d: Fix PCI bus rescan device hot add commit 316f92a705a4c2bf4712135180d56f3cca09243a upstream. Notifier calling chain uses priority to determine the execution order of the notifiers or listeners registered to the chain. PCI bus device hot add utilizes the notification mechanism. The current code sets low priority (INT_MIN) to Intel dmar_pci_bus_notifier and postpones DMAR decoding after adding new device into IOMMU. The result is that struct device pointer cannot be found in DRHD search for the new device's DMAR/IOMMU. Subsequently, the device is put under the "catch-all" IOMMU instead of the correct one. This could cause system hang when device TLB invalidation is sent to the wrong IOMMU. Invalidation timeout error and hard lockup have been observed and data inconsistency/crush may occur as well. This patch fixes the issue by setting a positive priority(1) for dmar_pci_bus_notifier while the priority of IOMMU bus notifier uses the default value(0), therefore DMAR decoding will be in advance of DRHD search for a new device to find the correct IOMMU. Following is a 2-step example that triggers the bug by simulating PCI device hot add behavior in Intel Sapphire Rapids server. echo 1 > /sys/bus/pci/devices/0000:6a:01.0/remove echo 1 > /sys/bus/pci/rescan Fixes: 59ce0515cdaf ("iommu/vt-d: Update DRHD/RMRR/ATSR device scope") Cc: stable@vger.kernel.org # v3.15+ Reported-by: Zhang, Bernice Signed-off-by: Jacob Pan Signed-off-by: Yian Chen Link: https://lore.kernel.org/r/20220521002115.1624069-1-yian.chen@intel.com Signed-off-by: Joerg Roedel Signed-off-by: Greg Kroah-Hartman --- drivers/iommu/intel/dmar.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c index b8d0b56a7575..70d569b80ecf 100644 --- a/drivers/iommu/intel/dmar.c +++ b/drivers/iommu/intel/dmar.c @@ -385,7 +385,7 @@ static int dmar_pci_bus_notifier(struct notifier_block *nb, static struct notifier_block dmar_pci_bus_nb = { .notifier_call = dmar_pci_bus_notifier, - .priority = INT_MIN, + .priority = 1, }; static struct dmar_drhd_unit * From d03e8ed72d7dae6874ee6cf7ae02029a727e7559 Mon Sep 17 00:00:00 2001 From: Guiling Deng Date: Tue, 28 Jun 2022 09:36:41 -0700 Subject: [PATCH 1265/1842] fbdev: fbmem: Fix logo center image dx issue commit 955f04766d4e6eb94bf3baa539e096808c74ebfb upstream. 
Image.dx gets wrong value because of missing '()'. If xres == logo->width and n == 1, image.dx = -16. Signed-off-by: Guiling Deng Fixes: 3d8b1933eb1c ("fbdev: fbmem: add config option to center the bootup logo") Cc: stable@vger.kernel.org # v5.0+ Signed-off-by: Helge Deller Signed-off-by: Greg Kroah-Hartman --- drivers/video/fbdev/core/fbmem.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c index 00939ca2065a..bf61770f6f6b 100644 --- a/drivers/video/fbdev/core/fbmem.c +++ b/drivers/video/fbdev/core/fbmem.c @@ -513,7 +513,7 @@ static int fb_show_logo_line(struct fb_info *info, int rotate, while (n && (n * (logo->width + 8) - 8 > xres)) --n; - image.dx = (xres - n * (logo->width + 8) - 8) / 2; + image.dx = (xres - (n * (logo->width + 8) - 8)) / 2; image.dy = y ?: (yres - logo->height) / 2; } else { image.dx = 0; From b81212828ad19ab3eccf00626cd04099215060bf Mon Sep 17 00:00:00 2001 From: Helge Deller Date: Wed, 29 Jun 2022 15:53:55 +0200 Subject: [PATCH 1266/1842] fbmem: Check virtual screen sizes in fb_set_var() commit 6c11df58fd1ac0aefcb3b227f72769272b939e56 upstream. Verify that the fbdev or drm driver correctly adjusted the virtual screen sizes. On failure report the failing driver and reject the screen size change. Signed-off-by: Helge Deller Reviewed-by: Geert Uytterhoeven Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Greg Kroah-Hartman --- drivers/video/fbdev/core/fbmem.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c index bf61770f6f6b..213e738d9eb7 100644 --- a/drivers/video/fbdev/core/fbmem.c +++ b/drivers/video/fbdev/core/fbmem.c @@ -1019,6 +1019,16 @@ fb_set_var(struct fb_info *info, struct fb_var_screeninfo *var) if (ret) return ret; + /* verify that virtual resolution >= physical resolution */ + if (var->xres_virtual < var->xres || + var->yres_virtual < var->yres) { + pr_warn("WARNING: fbcon: Driver '%s' missed to adjust virtual screen size (%ux%u vs. %ux%u)\n", + info->fix.id, + var->xres_virtual, var->yres_virtual, + var->xres, var->yres); + return -EINVAL; + } + if ((var->activate & FB_ACTIVATE_MASK) != FB_ACTIVATE_NOW) return 0; From b727561ddc9360de9631af2d970d8ffed676a750 Mon Sep 17 00:00:00 2001 From: Helge Deller Date: Sat, 25 Jun 2022 12:56:49 +0200 Subject: [PATCH 1267/1842] fbcon: Disallow setting font bigger than screen size commit 65a01e601dbba8b7a51a2677811f70f783766682 upstream. Prevent that users set a font size which is bigger than the physical screen. It's unlikely this may happen (because screens are usually much larger than the fonts and each font char is limited to 32x32 pixels), but it may happen on smaller screens/LCD displays. Signed-off-by: Helge Deller Reviewed-by: Daniel Vetter Reviewed-by: Geert Uytterhoeven Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Greg Kroah-Hartman --- drivers/video/fbdev/core/fbcon.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c index 13de2bebb09a..5bf68050a26e 100644 --- a/drivers/video/fbdev/core/fbcon.c +++ b/drivers/video/fbdev/core/fbcon.c @@ -2510,6 +2510,11 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font, if (charcount != 256 && charcount != 512) return -EINVAL; + /* font bigger than screen resolution ? 
*/ + if (w > FBCON_SWAP(info->var.rotate, info->var.xres, info->var.yres) || + h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres)) + return -EINVAL; + /* Make sure drawing engine can handle the font */ if (!(info->pixmap.blit_x & (1 << (font->width - 1))) || !(info->pixmap.blit_y & (1 << (font->height - 1)))) From cecb806c766c78e1be62b6b7b1483ef59bbaeabe Mon Sep 17 00:00:00 2001 From: Helge Deller Date: Sat, 25 Jun 2022 13:00:34 +0200 Subject: [PATCH 1268/1842] fbcon: Prevent that screen size is smaller than font size commit e64242caef18b4a5840b0e7a9bff37abd4f4f933 upstream. We need to prevent that users configure a screen size which is smaller than the currently selected font size. Otherwise rendering chars on the screen will access memory outside the graphics memory region. This patch adds a new function fbcon_modechange_possible() which implements this check and which later may be extended with other checks if necessary. The new function is called from the FBIOPUT_VSCREENINFO ioctl handler in fbmem.c, which will return -EINVAL if userspace asked for a too small screen size. Signed-off-by: Helge Deller Reviewed-by: Geert Uytterhoeven Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Greg Kroah-Hartman --- drivers/video/fbdev/core/fbcon.c | 28 ++++++++++++++++++++++++++++ drivers/video/fbdev/core/fbmem.c | 4 +++- include/linux/fbcon.h | 4 ++++ 3 files changed, 35 insertions(+), 1 deletion(-) diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c index 5bf68050a26e..76fedfd1b1b0 100644 --- a/drivers/video/fbdev/core/fbcon.c +++ b/drivers/video/fbdev/core/fbcon.c @@ -2776,6 +2776,34 @@ void fbcon_update_vcs(struct fb_info *info, bool all) } EXPORT_SYMBOL(fbcon_update_vcs); +/* let fbcon check if it supports a new screen resolution */ +int fbcon_modechange_possible(struct fb_info *info, struct fb_var_screeninfo *var) +{ + struct fbcon_ops *ops = info->fbcon_par; + struct vc_data *vc; + unsigned int i; + + WARN_CONSOLE_UNLOCKED(); + + if (!ops) + return 0; + + /* prevent setting a screen size which is smaller than font size */ + for (i = first_fb_vc; i <= last_fb_vc; i++) { + vc = vc_cons[i].d; + if (!vc || vc->vc_mode != KD_TEXT || + registered_fb[con2fb_map[i]] != info) + continue; + + if (vc->vc_font.width > FBCON_SWAP(var->rotate, var->xres, var->yres) || + vc->vc_font.height > FBCON_SWAP(var->rotate, var->yres, var->xres)) + return -EINVAL; + } + + return 0; +} +EXPORT_SYMBOL_GPL(fbcon_modechange_possible); + int fbcon_mode_deleted(struct fb_info *info, struct fb_videomode *mode) { diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c index 213e738d9eb7..d787a344b3b8 100644 --- a/drivers/video/fbdev/core/fbmem.c +++ b/drivers/video/fbdev/core/fbmem.c @@ -1119,7 +1119,9 @@ static long do_fb_ioctl(struct fb_info *info, unsigned int cmd, return -EFAULT; console_lock(); lock_fb_info(info); - ret = fb_set_var(info, &var); + ret = fbcon_modechange_possible(info, &var); + if (!ret) + ret = fb_set_var(info, &var); if (!ret) fbcon_update_vcs(info, var.activate & FB_ACTIVATE_ALL); unlock_fb_info(info); diff --git a/include/linux/fbcon.h b/include/linux/fbcon.h index ff5596dd30f8..2382dec6d6ab 100644 --- a/include/linux/fbcon.h +++ b/include/linux/fbcon.h @@ -15,6 +15,8 @@ void fbcon_new_modelist(struct fb_info *info); void fbcon_get_requirement(struct fb_info *info, struct fb_blit_caps *caps); void fbcon_fb_blanked(struct fb_info *info, int blank); +int fbcon_modechange_possible(struct fb_info *info, + struct fb_var_screeninfo 
*var); void fbcon_update_vcs(struct fb_info *info, bool all); void fbcon_remap_all(struct fb_info *info); int fbcon_set_con2fb_map_ioctl(void __user *argp); @@ -33,6 +35,8 @@ static inline void fbcon_new_modelist(struct fb_info *info) {} static inline void fbcon_get_requirement(struct fb_info *info, struct fb_blit_caps *caps) {} static inline void fbcon_fb_blanked(struct fb_info *info, int blank) {} +static inline int fbcon_modechange_possible(struct fb_info *info, + struct fb_var_screeninfo *var) { return 0; } static inline void fbcon_update_vcs(struct fb_info *info, bool all) {} static inline void fbcon_remap_all(struct fb_info *info) {} static inline int fbcon_set_con2fb_map_ioctl(void __user *argp) { return 0; } From d6931bff1cc19bfbea7becd1c7be4296a86f737f Mon Sep 17 00:00:00 2001 From: "Rafael J. Wysocki" Date: Mon, 27 Jun 2022 20:42:18 +0200 Subject: [PATCH 1269/1842] PM: runtime: Redefine pm_runtime_release_supplier() commit 07358194badf73e267289b40b761f5dc56928eab upstream. Instead of passing an extra bool argument to pm_runtime_release_supplier(), make its callers take care of triggering a runtime-suspend of the supplier device as needed. No expected functional impact. Suggested-by: Greg Kroah-Hartman Signed-off-by: Rafael J. Wysocki Reviewed-by: Greg Kroah-Hartman Cc: 5.1+ # 5.1+ Signed-off-by: Greg Kroah-Hartman --- drivers/base/core.c | 3 ++- drivers/base/power/runtime.c | 20 +++++++++----------- include/linux/pm_runtime.h | 5 ++--- 3 files changed, 13 insertions(+), 15 deletions(-) diff --git a/drivers/base/core.c b/drivers/base/core.c index c0566aff5355..9a874a58d690 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -348,7 +348,8 @@ static void device_link_release_fn(struct work_struct *work) /* Ensure that all references to the link object have been dropped. */ device_link_synchronize_removal(); - pm_runtime_release_supplier(link, true); + pm_runtime_release_supplier(link); + pm_request_idle(link->supplier); put_device(link->consumer); put_device(link->supplier); diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c index 157331940488..835a39e84c1d 100644 --- a/drivers/base/power/runtime.c +++ b/drivers/base/power/runtime.c @@ -308,13 +308,10 @@ static int rpm_get_suppliers(struct device *dev) /** * pm_runtime_release_supplier - Drop references to device link's supplier. * @link: Target device link. - * @check_idle: Whether or not to check if the supplier device is idle. * - * Drop all runtime PM references associated with @link to its supplier device - * and if @check_idle is set, check if that device is idle (and so it can be - * suspended). + * Drop all runtime PM references associated with @link to its supplier device. 
*/ -void pm_runtime_release_supplier(struct device_link *link, bool check_idle) +void pm_runtime_release_supplier(struct device_link *link) { struct device *supplier = link->supplier; @@ -327,9 +324,6 @@ void pm_runtime_release_supplier(struct device_link *link, bool check_idle) while (refcount_dec_not_one(&link->rpm_active) && atomic_read(&supplier->power.usage_count) > 0) pm_runtime_put_noidle(supplier); - - if (check_idle) - pm_request_idle(supplier); } static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend) @@ -337,8 +331,11 @@ static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend) struct device_link *link; list_for_each_entry_rcu(link, &dev->links.suppliers, c_node, - device_links_read_lock_held()) - pm_runtime_release_supplier(link, try_to_suspend); + device_links_read_lock_held()) { + pm_runtime_release_supplier(link); + if (try_to_suspend) + pm_request_idle(link->supplier); + } } static void rpm_put_suppliers(struct device *dev) @@ -1776,7 +1773,8 @@ void pm_runtime_drop_link(struct device_link *link) return; pm_runtime_drop_link_count(link->consumer); - pm_runtime_release_supplier(link, true); + pm_runtime_release_supplier(link); + pm_request_idle(link->supplier); } static bool pm_runtime_need_not_resume(struct device *dev) diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h index 30091ab5de28..718600e83020 100644 --- a/include/linux/pm_runtime.h +++ b/include/linux/pm_runtime.h @@ -58,7 +58,7 @@ extern void pm_runtime_get_suppliers(struct device *dev); extern void pm_runtime_put_suppliers(struct device *dev); extern void pm_runtime_new_link(struct device *dev); extern void pm_runtime_drop_link(struct device_link *link); -extern void pm_runtime_release_supplier(struct device_link *link, bool check_idle); +extern void pm_runtime_release_supplier(struct device_link *link); /** * pm_runtime_get_if_in_use - Conditionally bump up runtime PM usage counter. @@ -280,8 +280,7 @@ static inline void pm_runtime_get_suppliers(struct device *dev) {} static inline void pm_runtime_put_suppliers(struct device *dev) {} static inline void pm_runtime_new_link(struct device *dev) {} static inline void pm_runtime_drop_link(struct device_link *link) {} -static inline void pm_runtime_release_supplier(struct device_link *link, - bool check_idle) {} +static inline void pm_runtime_release_supplier(struct device_link *link) {} #endif /* !CONFIG_PM */ From 96fa24eb1a383f6473c591cec953fcddfa950fdb Mon Sep 17 00:00:00 2001 From: Dan Williams Date: Thu, 23 Jun 2022 13:02:31 -0700 Subject: [PATCH 1270/1842] memregion: Fix memregion_free() fallback definition commit f50974eee5c4a5de1e4f1a3d873099f170df25f8 upstream. In the CONFIG_MEMREGION=n case, memregion_free() is meant to be a static inline. 0day reports: In file included from drivers/cxl/core/port.c:4: include/linux/memregion.h:19:6: warning: no previous prototype for function 'memregion_free' [-Wmissing-prototypes] Mark memregion_free() static. 
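For reference, the convention at stake is the usual config-gated stub pattern: when a feature is compiled out, the header fallback must be static inline so that every includer gets a local, warning-free no-op rather than an unprototyped global definition. A minimal sketch of that pattern, using hypothetical CONFIG_FOO/foo_* names rather than the real memregion API:

/* foo.h -- illustrative sketch only; all names are hypothetical */
#ifdef CONFIG_FOO
int foo_alloc(void);			/* real implementations live in foo.c */
void foo_free(int id);
#else
static inline int foo_alloc(void)
{
	return -ENOMEM;			/* feature compiled out */
}
static inline void foo_free(int id)
{
	/* no-op; without "static inline" each includer would emit a
	 * global definition and -Wmissing-prototypes would trigger
	 */
}
#endif

The one-line change below applies exactly this rule to memregion_free().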
Fixes: 33dd70752cd7 ("lib: Uplevel the pmem "region" ida to a global allocator") Reported-by: kernel test robot Reviewed-by: Alison Schofield Link: https://lore.kernel.org/r/165601455171.4042645.3350844271068713515.stgit@dwillia2-xfh Signed-off-by: Dan Williams Signed-off-by: Greg Kroah-Hartman --- include/linux/memregion.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/memregion.h b/include/linux/memregion.h index e11595256cac..c04c4fd2e209 100644 --- a/include/linux/memregion.h +++ b/include/linux/memregion.h @@ -16,7 +16,7 @@ static inline int memregion_alloc(gfp_t gfp) { return -ENOMEM; } -void memregion_free(int id) +static inline void memregion_free(int id) { } #endif From 19104425c962e0b4cbdaa94cecec7be1b5197a83 Mon Sep 17 00:00:00 2001 From: Hsin-Yi Wang Date: Fri, 1 Jul 2022 01:33:29 +0800 Subject: [PATCH 1271/1842] video: of_display_timing.h: include errno.h commit 3663a2fb325b8782524f3edb0ae32d6faa615109 upstream. If CONFIG_OF is not enabled, default of_get_display_timing() returns an errno, so include the header. Fixes: 422b67e0b31a ("videomode: provide dummy inline functions for !CONFIG_OF") Suggested-by: Stephen Boyd Signed-off-by: Hsin-Yi Wang Reviewed-by: Stephen Boyd Signed-off-by: Helge Deller Signed-off-by: Greg Kroah-Hartman --- include/video/of_display_timing.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/include/video/of_display_timing.h b/include/video/of_display_timing.h index e1126a74882a..eff166fdd81b 100644 --- a/include/video/of_display_timing.h +++ b/include/video/of_display_timing.h @@ -8,6 +8,8 @@ #ifndef __LINUX_OF_DISPLAY_TIMING_H #define __LINUX_OF_DISPLAY_TIMING_H +#include <linux/errno.h> + struct device_node; struct display_timing; struct display_timings; From 79af7be44ccb14e09ed13f56fc0cd1523f453b4f Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 30 Jun 2022 14:16:54 +0200 Subject: [PATCH 1272/1842] powerpc/powernv: delay rng platform device creation until later in boot commit 887502826549caa7e4215fd9e628f48f14c0825a upstream. The platform device for the rng must be created much later in boot. Otherwise it tries to connect to a parent that doesn't yet exist, resulting in this splat: [ 0.000478] kobject: '(null)' ((____ptrval____)): is not initialized, yet kobject_get() is being called. [ 0.002925] [c000000002a0fb30] [c00000000073b0bc] kobject_get+0x8c/0x100 (unreliable) [ 0.003071] [c000000002a0fba0] [c00000000087e464] device_add+0xf4/0xb00 [ 0.003194] [c000000002a0fc80] [c000000000a7f6e4] of_device_add+0x64/0x80 [ 0.003321] [c000000002a0fcb0] [c000000000a800d0] of_platform_device_create_pdata+0xd0/0x1b0 [ 0.003476] [c000000002a0fd00] [c00000000201fa44] pnv_get_random_long_early+0x240/0x2e4 [ 0.003623] [c000000002a0fe20] [c000000002060c38] random_init+0xc0/0x214 This patch fixes the issue by doing the platform device creation inside of machine_subsys_initcall. Fixes: f3eac426657d ("powerpc/powernv: wire up rng during setup_arch") Cc: stable@vger.kernel.org Reported-by: Sachin Sant Signed-off-by: Jason A.
Donenfeld Tested-by: Sachin Sant [mpe: Change "of node" to "platform device" in change log] Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220630121654.1939181-1-Jason@zx2c4.com Signed-off-by: Greg Kroah-Hartman --- arch/powerpc/platforms/powernv/rng.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/arch/powerpc/platforms/powernv/rng.c b/arch/powerpc/platforms/powernv/rng.c index 2b5a1a41234c..236bd2ba51b9 100644 --- a/arch/powerpc/platforms/powernv/rng.c +++ b/arch/powerpc/platforms/powernv/rng.c @@ -176,12 +176,8 @@ static int __init pnv_get_random_long_early(unsigned long *v) NULL) != pnv_get_random_long_early) return 0; - for_each_compatible_node(dn, NULL, "ibm,power-rng") { - if (rng_create(dn)) - continue; - /* Create devices for hwrng driver */ - of_platform_device_create(dn, NULL, NULL); - } + for_each_compatible_node(dn, NULL, "ibm,power-rng") + rng_create(dn); if (!ppc_md.get_random_seed) return 0; @@ -205,10 +201,18 @@ void __init pnv_rng_init(void) static int __init pnv_rng_late_init(void) { + struct device_node *dn; unsigned long v; + /* In case it wasn't called during init for some other reason. */ if (ppc_md.get_random_seed == pnv_get_random_long_early) pnv_get_random_long_early(&v); + + if (ppc_md.get_random_seed == powernv_get_random_long) { + for_each_compatible_node(dn, NULL, "ibm,power-rng") + of_platform_device_create(dn, NULL, NULL); + } + return 0; } machine_subsys_initcall(powernv, pnv_rng_late_init); From f439d08ef1a2d469f36dce3fb2767acab5903124 Mon Sep 17 00:00:00 2001 From: Jimmy Assarsson Date: Fri, 8 Jul 2022 20:48:17 +0200 Subject: [PATCH 1273/1842] can: kvaser_usb: replace run-time checks with struct kvaser_usb_driver_info commit 49f274c72357d2d74cba70b172cf369768909707 upstream. Unify and move compile-time known information into new struct kvaser_usb_driver_info, in favor of run-time checks. All Kvaser USBcanII supports listen-only mode and error counter reporting. 
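Reduced to a sketch, the pattern is: each usb_device_id entry carries a pointer to a const, compile-time device description in its driver_info field, and probe() dereferences that instead of re-deriving properties from idProduct ranges at run time. The names below are made up for illustration and are not the Kvaser structures:

#include <linux/module.h>
#include <linux/usb.h>

#define MY_QUIRK_SILENT_MODE	0x01	/* hypothetical quirk flag */

struct my_driver_info {
	u32 quirks;
};

static const struct my_driver_info my_info_silent = {
	.quirks = MY_QUIRK_SILENT_MODE,
};

static const struct usb_device_id my_table[] = {
	{ USB_DEVICE(0x1234, 0x0001),	/* hypothetical VID/PID */
	  .driver_info = (kernel_ulong_t)&my_info_silent },
	{ }
};
MODULE_DEVICE_TABLE(usb, my_table);

static int my_probe(struct usb_interface *intf,
		    const struct usb_device_id *id)
{
	const struct my_driver_info *info =
		(const struct my_driver_info *)id->driver_info;

	if (!info)
		return -ENODEV;
	/* use info->quirks directly; no product-ID range checks needed */
	return 0;
}

The large table rewrite below is this same move applied to every Kvaser variant, so each entry documents its own quirks and ops.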
Link: https://lore.kernel.org/all/20220603083820.800246-2-extja@kvaser.com Suggested-by: Marc Kleine-Budde Cc: stable@vger.kernel.org Signed-off-by: Jimmy Assarsson [mkl: move struct kvaser_usb_driver_info into kvaser_usb_core.c] Signed-off-by: Marc Kleine-Budde Signed-off-by: Greg Kroah-Hartman --- drivers/net/can/usb/kvaser_usb/kvaser_usb.h | 22 +- .../net/can/usb/kvaser_usb/kvaser_usb_core.c | 244 ++++++++++-------- .../net/can/usb/kvaser_usb/kvaser_usb_leaf.c | 24 +- 3 files changed, 155 insertions(+), 135 deletions(-) diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h index 390b6bde883c..2e358e20d3d9 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h @@ -35,9 +35,9 @@ #define KVASER_USB_RX_BUFFER_SIZE 3072 #define KVASER_USB_MAX_NET_DEVICES 5 -/* USB devices features */ -#define KVASER_USB_HAS_SILENT_MODE BIT(0) -#define KVASER_USB_HAS_TXRX_ERRORS BIT(1) +/* Kvaser USB device quirks */ +#define KVASER_USB_QUIRK_HAS_SILENT_MODE BIT(0) +#define KVASER_USB_QUIRK_HAS_TXRX_ERRORS BIT(1) /* Device capabilities */ #define KVASER_USB_CAP_BERR_CAP 0x01 @@ -65,12 +65,7 @@ struct kvaser_usb_dev_card_data_hydra { struct kvaser_usb_dev_card_data { u32 ctrlmode_supported; u32 capabilities; - union { - struct { - enum kvaser_usb_leaf_family family; - } leaf; - struct kvaser_usb_dev_card_data_hydra hydra; - }; + struct kvaser_usb_dev_card_data_hydra hydra; }; /* Context for an outstanding, not yet ACKed, transmission */ @@ -84,7 +79,7 @@ struct kvaser_usb { struct usb_device *udev; struct usb_interface *intf; struct kvaser_usb_net_priv *nets[KVASER_USB_MAX_NET_DEVICES]; - const struct kvaser_usb_dev_ops *ops; + const struct kvaser_usb_driver_info *driver_info; const struct kvaser_usb_dev_cfg *cfg; struct usb_endpoint_descriptor *bulk_in, *bulk_out; @@ -166,6 +161,12 @@ struct kvaser_usb_dev_ops { int *cmd_len, u16 transid); }; +struct kvaser_usb_driver_info { + u32 quirks; + enum kvaser_usb_leaf_family family; + const struct kvaser_usb_dev_ops *ops; +}; + struct kvaser_usb_dev_cfg { const struct can_clock clock; const unsigned int timestamp_freq; @@ -185,4 +186,5 @@ int kvaser_usb_send_cmd_async(struct kvaser_usb_net_priv *priv, void *cmd, int len); int kvaser_usb_can_rx_over_error(struct net_device *netdev); + #endif /* KVASER_USB_H */ diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c index 0f1d3e807d63..ae43c423a8ce 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c @@ -79,104 +79,125 @@ #define USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID 269 #define USB_HYBRID_PRO_CANLIN_PRODUCT_ID 270 -static inline bool kvaser_is_leaf(const struct usb_device_id *id) -{ - return (id->idProduct >= USB_LEAF_DEVEL_PRODUCT_ID && - id->idProduct <= USB_CAN_R_PRODUCT_ID) || - (id->idProduct >= USB_LEAF_LITE_V2_PRODUCT_ID && - id->idProduct <= USB_MINI_PCIE_2HS_PRODUCT_ID); -} +static const struct kvaser_usb_driver_info kvaser_usb_driver_info_hydra = { + .quirks = 0, + .ops = &kvaser_usb_hydra_dev_ops, +}; -static inline bool kvaser_is_usbcan(const struct usb_device_id *id) -{ - return id->idProduct >= USB_USBCAN_REVB_PRODUCT_ID && - id->idProduct <= USB_MEMORATOR_PRODUCT_ID; -} +static const struct kvaser_usb_driver_info kvaser_usb_driver_info_usbcan = { + .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS | + KVASER_USB_QUIRK_HAS_SILENT_MODE, + .family = KVASER_USBCAN, + .ops = 
&kvaser_usb_leaf_dev_ops, +}; -static inline bool kvaser_is_hydra(const struct usb_device_id *id) -{ - return id->idProduct >= USB_BLACKBIRD_V2_PRODUCT_ID && - id->idProduct <= USB_HYBRID_PRO_CANLIN_PRODUCT_ID; -} +static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf = { + .quirks = 0, + .family = KVASER_LEAF, + .ops = &kvaser_usb_leaf_dev_ops, +}; + +static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err = { + .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS, + .family = KVASER_LEAF, + .ops = &kvaser_usb_leaf_dev_ops, +}; + +static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err_listen = { + .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS | + KVASER_USB_QUIRK_HAS_SILENT_MODE, + .family = KVASER_LEAF, + .ops = &kvaser_usb_leaf_dev_ops, +}; static const struct usb_device_id kvaser_usb_table[] = { /* Leaf USB product IDs */ - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID) }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LS_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_SWC_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LIN_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_LS_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_SWC_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_DEVEL_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSHS_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_UPRO_HSHS_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID) }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, { 
USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_OBDII_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | - KVASER_USB_HAS_SILENT_MODE }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSLS_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_CH_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_SPRO_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, { USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_MERCURY_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, { USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_LEAF_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, { USB_DEVICE(KVASER_VENDOR_ID, USB_CAN_R_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID) }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, /* USBCANII USB product IDs */ { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN2_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_REVB_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMORATOR_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, { USB_DEVICE(KVASER_VENDOR_ID, USB_VCI2_PRODUCT_ID), - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, /* Minihydra USB product IDs */ - { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID) }, - { 
USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID) }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID) }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_2CANLIN_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID), + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, { } }; MODULE_DEVICE_TABLE(usb, kvaser_usb_table); @@ -267,6 +288,7 @@ int kvaser_usb_can_rx_over_error(struct net_device *netdev) static void kvaser_usb_read_bulk_callback(struct urb *urb) { struct kvaser_usb *dev = urb->context; + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; int err; unsigned int i; @@ -283,8 +305,8 @@ static void kvaser_usb_read_bulk_callback(struct urb *urb) goto resubmit_urb; } - dev->ops->dev_read_bulk_callback(dev, urb->transfer_buffer, - urb->actual_length); + ops->dev_read_bulk_callback(dev, urb->transfer_buffer, + urb->actual_length); resubmit_urb: usb_fill_bulk_urb(urb, dev->udev, @@ -378,6 +400,7 @@ static int kvaser_usb_open(struct net_device *netdev) { struct kvaser_usb_net_priv *priv = netdev_priv(netdev); struct kvaser_usb *dev = priv->dev; + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; int err; err = open_candev(netdev); @@ -388,11 +411,11 @@ static int kvaser_usb_open(struct net_device *netdev) if (err) goto error; - err = dev->ops->dev_set_opt_mode(priv); + err = ops->dev_set_opt_mode(priv); if (err) goto error; - err = dev->ops->dev_start_chip(priv); + err = ops->dev_start_chip(priv); if (err) { netdev_warn(netdev, "Cannot start device, error %d\n", err); goto error; @@ -449,22 +472,23 @@ static int kvaser_usb_close(struct net_device *netdev) { struct kvaser_usb_net_priv *priv = netdev_priv(netdev); struct kvaser_usb *dev = priv->dev; + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; int err; netif_stop_queue(netdev); - err = dev->ops->dev_flush_queue(priv); + err = ops->dev_flush_queue(priv); if (err) netdev_warn(netdev, "Cannot flush queue, error %d\n", err); - if 
(dev->ops->dev_reset_chip) { - err = dev->ops->dev_reset_chip(dev, priv->channel); + if (ops->dev_reset_chip) { + err = ops->dev_reset_chip(dev, priv->channel); if (err) netdev_warn(netdev, "Cannot reset card, error %d\n", err); } - err = dev->ops->dev_stop_chip(priv); + err = ops->dev_stop_chip(priv); if (err) netdev_warn(netdev, "Cannot stop device, error %d\n", err); @@ -503,6 +527,7 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb, { struct kvaser_usb_net_priv *priv = netdev_priv(netdev); struct kvaser_usb *dev = priv->dev; + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; struct net_device_stats *stats = &netdev->stats; struct kvaser_usb_tx_urb_context *context = NULL; struct urb *urb; @@ -545,8 +570,8 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb, goto freeurb; } - buf = dev->ops->dev_frame_to_cmd(priv, skb, &context->dlc, &cmd_len, - context->echo_index); + buf = ops->dev_frame_to_cmd(priv, skb, &context->dlc, &cmd_len, + context->echo_index); if (!buf) { stats->tx_dropped++; dev_kfree_skb(skb); @@ -630,15 +655,16 @@ static void kvaser_usb_remove_interfaces(struct kvaser_usb *dev) } } -static int kvaser_usb_init_one(struct kvaser_usb *dev, - const struct usb_device_id *id, int channel) +static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel) { struct net_device *netdev; struct kvaser_usb_net_priv *priv; + const struct kvaser_usb_driver_info *driver_info = dev->driver_info; + const struct kvaser_usb_dev_ops *ops = driver_info->ops; int err; - if (dev->ops->dev_reset_chip) { - err = dev->ops->dev_reset_chip(dev, channel); + if (ops->dev_reset_chip) { + err = ops->dev_reset_chip(dev, channel); if (err) return err; } @@ -667,20 +693,19 @@ static int kvaser_usb_init_one(struct kvaser_usb *dev, priv->can.state = CAN_STATE_STOPPED; priv->can.clock.freq = dev->cfg->clock.freq; priv->can.bittiming_const = dev->cfg->bittiming_const; - priv->can.do_set_bittiming = dev->ops->dev_set_bittiming; - priv->can.do_set_mode = dev->ops->dev_set_mode; - if ((id->driver_info & KVASER_USB_HAS_TXRX_ERRORS) || + priv->can.do_set_bittiming = ops->dev_set_bittiming; + priv->can.do_set_mode = ops->dev_set_mode; + if ((driver_info->quirks & KVASER_USB_QUIRK_HAS_TXRX_ERRORS) || (priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP)) - priv->can.do_get_berr_counter = dev->ops->dev_get_berr_counter; - if (id->driver_info & KVASER_USB_HAS_SILENT_MODE) + priv->can.do_get_berr_counter = ops->dev_get_berr_counter; + if (driver_info->quirks & KVASER_USB_QUIRK_HAS_SILENT_MODE) priv->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY; priv->can.ctrlmode_supported |= dev->card_data.ctrlmode_supported; if (priv->can.ctrlmode_supported & CAN_CTRLMODE_FD) { priv->can.data_bittiming_const = dev->cfg->data_bittiming_const; - priv->can.do_set_data_bittiming = - dev->ops->dev_set_data_bittiming; + priv->can.do_set_data_bittiming = ops->dev_set_data_bittiming; } netdev->flags |= IFF_ECHO; @@ -711,29 +736,22 @@ static int kvaser_usb_probe(struct usb_interface *intf, struct kvaser_usb *dev; int err; int i; + const struct kvaser_usb_driver_info *driver_info; + const struct kvaser_usb_dev_ops *ops; + + driver_info = (const struct kvaser_usb_driver_info *)id->driver_info; + if (!driver_info) + return -ENODEV; dev = devm_kzalloc(&intf->dev, sizeof(*dev), GFP_KERNEL); if (!dev) return -ENOMEM; - if (kvaser_is_leaf(id)) { - dev->card_data.leaf.family = KVASER_LEAF; - dev->ops = &kvaser_usb_leaf_dev_ops; - } else if (kvaser_is_usbcan(id)) { - dev->card_data.leaf.family = 
KVASER_USBCAN; - dev->ops = &kvaser_usb_leaf_dev_ops; - } else if (kvaser_is_hydra(id)) { - dev->ops = &kvaser_usb_hydra_dev_ops; - } else { - dev_err(&intf->dev, - "Product ID (%d) is not a supported Kvaser USB device\n", - id->idProduct); - return -ENODEV; - } - dev->intf = intf; + dev->driver_info = driver_info; + ops = driver_info->ops; - err = dev->ops->dev_setup_endpoints(dev); + err = ops->dev_setup_endpoints(dev); if (err) { dev_err(&intf->dev, "Cannot get usb endpoint(s)"); return err; @@ -747,22 +765,22 @@ static int kvaser_usb_probe(struct usb_interface *intf, dev->card_data.ctrlmode_supported = 0; dev->card_data.capabilities = 0; - err = dev->ops->dev_init_card(dev); + err = ops->dev_init_card(dev); if (err) { dev_err(&intf->dev, "Failed to initialize card, error %d\n", err); return err; } - err = dev->ops->dev_get_software_info(dev); + err = ops->dev_get_software_info(dev); if (err) { dev_err(&intf->dev, "Cannot get software info, error %d\n", err); return err; } - if (dev->ops->dev_get_software_details) { - err = dev->ops->dev_get_software_details(dev); + if (ops->dev_get_software_details) { + err = ops->dev_get_software_details(dev); if (err) { dev_err(&intf->dev, "Cannot get software details, error %d\n", err); @@ -780,14 +798,14 @@ static int kvaser_usb_probe(struct usb_interface *intf, dev_dbg(&intf->dev, "Max outstanding tx = %d URBs\n", dev->max_tx_urbs); - err = dev->ops->dev_get_card_info(dev); + err = ops->dev_get_card_info(dev); if (err) { dev_err(&intf->dev, "Cannot get card info, error %d\n", err); return err; } - if (dev->ops->dev_get_capabilities) { - err = dev->ops->dev_get_capabilities(dev); + if (ops->dev_get_capabilities) { + err = ops->dev_get_capabilities(dev); if (err) { dev_err(&intf->dev, "Cannot get capabilities, error %d\n", err); @@ -797,7 +815,7 @@ static int kvaser_usb_probe(struct usb_interface *intf, } for (i = 0; i < dev->nchannels; i++) { - err = kvaser_usb_init_one(dev, id, i); + err = kvaser_usb_init_one(dev, i); if (err) { kvaser_usb_remove_interfaces(dev); return err; diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c index 8b5d1add899a..fb51f80012a0 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c @@ -405,7 +405,7 @@ kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv, sizeof(struct kvaser_cmd_tx_can); cmd->u.tx_can.channel = priv->channel; - switch (dev->card_data.leaf.family) { + switch (dev->driver_info->family) { case KVASER_LEAF: cmd_tx_can_flags = &cmd->u.tx_can.leaf.flags; break; @@ -551,7 +551,7 @@ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev) if (err) return err; - switch (dev->card_data.leaf.family) { + switch (dev->driver_info->family) { case KVASER_LEAF: kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo); break; @@ -598,7 +598,7 @@ static int kvaser_usb_leaf_get_card_info(struct kvaser_usb *dev) dev->nchannels = cmd.u.cardinfo.nchannels; if (dev->nchannels > KVASER_USB_MAX_NET_DEVICES || - (dev->card_data.leaf.family == KVASER_USBCAN && + (dev->driver_info->family == KVASER_USBCAN && dev->nchannels > MAX_USBCAN_NET_DEVICES)) return -EINVAL; @@ -734,7 +734,7 @@ kvaser_usb_leaf_rx_error_update_can_state(struct kvaser_usb_net_priv *priv, new_state < CAN_STATE_BUS_OFF) priv->can.can_stats.restarts++; - switch (dev->card_data.leaf.family) { + switch (dev->driver_info->family) { case KVASER_LEAF: if (es->leaf.error_factor) { 
priv->can.can_stats.bus_error++; @@ -813,7 +813,7 @@ static void kvaser_usb_leaf_rx_error(const struct kvaser_usb *dev, } } - switch (dev->card_data.leaf.family) { + switch (dev->driver_info->family) { case KVASER_LEAF: if (es->leaf.error_factor) { cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT; @@ -1005,7 +1005,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev, stats = &priv->netdev->stats; if ((cmd->u.rx_can_header.flag & MSG_FLAG_ERROR_FRAME) && - (dev->card_data.leaf.family == KVASER_LEAF && + (dev->driver_info->family == KVASER_LEAF && cmd->id == CMD_LEAF_LOG_MESSAGE)) { kvaser_usb_leaf_leaf_rx_error(dev, cmd); return; @@ -1021,7 +1021,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev, return; } - switch (dev->card_data.leaf.family) { + switch (dev->driver_info->family) { case KVASER_LEAF: rx_data = cmd->u.leaf.rx_can.data; break; @@ -1036,7 +1036,7 @@ static void kvaser_usb_leaf_rx_can_msg(const struct kvaser_usb *dev, return; } - if (dev->card_data.leaf.family == KVASER_LEAF && cmd->id == + if (dev->driver_info->family == KVASER_LEAF && cmd->id == CMD_LEAF_LOG_MESSAGE) { cf->can_id = le32_to_cpu(cmd->u.leaf.log_message.id); if (cf->can_id & KVASER_EXTENDED_FRAME) @@ -1133,14 +1133,14 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev, break; case CMD_LEAF_LOG_MESSAGE: - if (dev->card_data.leaf.family != KVASER_LEAF) + if (dev->driver_info->family != KVASER_LEAF) goto warn; kvaser_usb_leaf_rx_can_msg(dev, cmd); break; case CMD_CHIP_STATE_EVENT: case CMD_CAN_ERROR_EVENT: - if (dev->card_data.leaf.family == KVASER_LEAF) + if (dev->driver_info->family == KVASER_LEAF) kvaser_usb_leaf_leaf_rx_error(dev, cmd); else kvaser_usb_leaf_usbcan_rx_error(dev, cmd); @@ -1152,12 +1152,12 @@ static void kvaser_usb_leaf_handle_command(const struct kvaser_usb *dev, /* Ignored commands */ case CMD_USBCAN_CLOCK_OVERFLOW_EVENT: - if (dev->card_data.leaf.family != KVASER_USBCAN) + if (dev->driver_info->family != KVASER_USBCAN) goto warn; break; case CMD_FLUSH_QUEUE_REPLY: - if (dev->card_data.leaf.family != KVASER_LEAF) + if (dev->driver_info->family != KVASER_LEAF) goto warn; break; From a741e762e1995096572774a1bf93243f8c44b913 Mon Sep 17 00:00:00 2001 From: Jimmy Assarsson Date: Fri, 8 Jul 2022 20:48:18 +0200 Subject: [PATCH 1274/1842] can: kvaser_usb: kvaser_usb_leaf: fix CAN clock frequency regression commit e6c80e601053ffdac5709f11ff3ec1e19ed05f7b upstream. The firmware of M32C based Leaf devices expects bittiming parameters calculated for 16MHz clock. Since we use the actual clock frequency of the device, the device may end up with wrong bittiming parameters, depending on user requested parameters. This regression affects M32C based Leaf devices with non-16MHz clock. 
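To quantify the regression: a classic CAN bit is (1 + tseg1 + tseg2) time quanta long and each quantum lasts brp clock cycles, so bitrate = clock / (brp * (1 + tseg1 + tseg2)). The numbers below are purely illustrative (not taken from the driver), but show how parameters solved against the advertised device clock land at the wrong wire speed once the firmware applies them against its fixed 16 MHz reference:

#include <stdio.h>

/* bitrate = clock / (brp * (1 + tseg1 + tseg2)); illustrative numbers only */
static unsigned int can_bitrate(unsigned int clock_hz, unsigned int brp,
				unsigned int tseg1, unsigned int tseg2)
{
	return clock_hz / (brp * (1 + tseg1 + tseg2));
}

int main(void)
{
	unsigned int brp = 3, tseg1 = 11, tseg2 = 4;	/* 16 tq per bit */

	/* solved for 500 kbit/s against a 24 MHz device clock */
	printf("intended: %u bit/s\n", can_bitrate(24000000, brp, tseg1, tseg2));
	/* same parameters interpreted by the firmware against 16 MHz */
	printf("on wire : %u bit/s\n", can_bitrate(16000000, brp, tseg1, tseg2));
	return 0;
}

This prints 500000 versus 333333, which is why the fix below adds KVASER_USB_QUIRK_IGNORE_CLK_FREQ and keeps calculating against the 16 MHz configuration for M32C based Leafs regardless of the clock the device reports.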
Fixes: 854a2bede1f0 ("can: kvaser_usb: get CAN clock frequency from device") Link: https://lore.kernel.org/all/20220603083820.800246-3-extja@kvaser.com Cc: stable@vger.kernel.org Signed-off-by: Jimmy Assarsson Signed-off-by: Marc Kleine-Budde Signed-off-by: Greg Kroah-Hartman --- drivers/net/can/usb/kvaser_usb/kvaser_usb.h | 1 + .../net/can/usb/kvaser_usb/kvaser_usb_core.c | 29 ++++++++++++------- .../net/can/usb/kvaser_usb/kvaser_usb_leaf.c | 25 ++++++++++------ 3 files changed, 36 insertions(+), 19 deletions(-) diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h index 2e358e20d3d9..478e2eeec136 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h @@ -38,6 +38,7 @@ /* Kvaser USB device quirks */ #define KVASER_USB_QUIRK_HAS_SILENT_MODE BIT(0) #define KVASER_USB_QUIRK_HAS_TXRX_ERRORS BIT(1) +#define KVASER_USB_QUIRK_IGNORE_CLK_FREQ BIT(2) /* Device capabilities */ #define KVASER_USB_CAP_BERR_CAP 0x01 diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c index ae43c423a8ce..416763fd1f11 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c @@ -92,26 +92,33 @@ static const struct kvaser_usb_driver_info kvaser_usb_driver_info_usbcan = { }; static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf = { - .quirks = 0, + .quirks = KVASER_USB_QUIRK_IGNORE_CLK_FREQ, .family = KVASER_LEAF, .ops = &kvaser_usb_leaf_dev_ops, }; static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err = { - .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS, + .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS | + KVASER_USB_QUIRK_IGNORE_CLK_FREQ, .family = KVASER_LEAF, .ops = &kvaser_usb_leaf_dev_ops, }; static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err_listen = { .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS | - KVASER_USB_QUIRK_HAS_SILENT_MODE, + KVASER_USB_QUIRK_HAS_SILENT_MODE | + KVASER_USB_QUIRK_IGNORE_CLK_FREQ, .family = KVASER_LEAF, .ops = &kvaser_usb_leaf_dev_ops, }; +static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leafimx = { + .quirks = 0, + .ops = &kvaser_usb_leaf_dev_ops, +}; + static const struct usb_device_id kvaser_usb_table[] = { - /* Leaf USB product IDs */ + /* Leaf M32C USB product IDs */ { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID), .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID), @@ -152,16 +159,18 @@ static const struct usb_device_id kvaser_usb_table[] = { .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, { USB_DEVICE(KVASER_VENDOR_ID, USB_CAN_R_PRODUCT_ID), .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, + + /* Leaf i.MX28 USB product IDs */ { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID), - .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID), - .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID), - .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID), - .driver_info = 
(kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID), - .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, /* USBCANII USB product IDs */ { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN2_PRODUCT_ID), @@ -190,7 +199,7 @@ static const struct usb_device_id kvaser_usb_table[] = { .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID), .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_2CANLIN_PRODUCT_ID), + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID), .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID), .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c index fb51f80012a0..b8c2f2c30050 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c @@ -525,16 +525,23 @@ static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev, dev->fw_version = le32_to_cpu(softinfo->fw_version); dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx); - switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) { - case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK: + if (dev->driver_info->quirks & KVASER_USB_QUIRK_IGNORE_CLK_FREQ) { + /* Firmware expects bittiming parameters calculated for 16MHz + * clock, regardless of the actual clock + */ dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz; - break; - case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK: - dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz; - break; - case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK: - dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz; - break; + } else { + switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) { + case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK: + dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz; + break; + case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK: + dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz; + break; + case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK: + dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz; + break; + } } } From 852952ea0e15728d8bfbe34dc224b53a030372d4 Mon Sep 17 00:00:00 2001 From: Jimmy Assarsson Date: Fri, 8 Jul 2022 20:48:19 +0200 Subject: [PATCH 1275/1842] can: kvaser_usb: kvaser_usb_leaf: fix bittiming limits commit b3b6df2c56d80b8c6740433cff5f016668b8de70 upstream. Use correct bittiming limits depending on device. For devices based on USBcanII, Leaf M32C or Leaf i.MX28. 
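The limits matter because the CAN core only searches for bit timings inside the window a driver advertises through struct can_bittiming_const; an overly generous table lets the core hand the device a prescaler or segment length it cannot actually program. A simplified sketch of the kind of range check involved (not the in-kernel fitting code):

#include <stdbool.h>

/* simplified stand-in for can_bittiming_const-style limits */
struct bt_limits {
	unsigned int tseg1_min, tseg1_max;
	unsigned int tseg2_min, tseg2_max;
	unsigned int brp_min, brp_max, brp_inc;
};

static bool bt_params_ok(const struct bt_limits *c, unsigned int brp,
			 unsigned int tseg1, unsigned int tseg2)
{
	return brp >= c->brp_min && brp <= c->brp_max &&
	       (brp % c->brp_inc) == 0 &&
	       tseg1 >= c->tseg1_min && tseg1 <= c->tseg1_max &&
	       tseg2 >= c->tseg2_min && tseg2 <= c->tseg2_max;
}

Under the old shared table (brp 1..64 in steps of 1, tseg1 from 1) a value such as brp = 5 was accepted for every Leaf; the per-device tables added below instead advertise only even prescalers from 2 to 128 for the M32C variant and give the USBcanII and i.MX28 based devices their own ranges.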
Fixes: 080f40a6fa28 ("can: kvaser_usb: Add support for Kvaser CAN/USB devices") Fixes: b4f20130af23 ("can: kvaser_usb: add support for Kvaser Leaf v2 and usb mini PCIe") Fixes: f5d4abea3ce0 ("can: kvaser_usb: Add support for the USBcan-II family") Link: https://lore.kernel.org/all/20220603083820.800246-4-extja@kvaser.com Cc: stable@vger.kernel.org Signed-off-by: Jimmy Assarsson [mkl: remove stray netlink.h include] [mkl: keep struct can_bittiming_const kvaser_usb_flexc_bittiming_const in kvaser_usb_hydra.c] Signed-off-by: Marc Kleine-Budde Signed-off-by: Greg Kroah-Hartman --- drivers/net/can/usb/kvaser_usb/kvaser_usb.h | 2 + .../net/can/usb/kvaser_usb/kvaser_usb_hydra.c | 4 +- .../net/can/usb/kvaser_usb/kvaser_usb_leaf.c | 76 +++++++++++-------- 3 files changed, 47 insertions(+), 35 deletions(-) diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h index 478e2eeec136..61e67986b625 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb.h +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb.h @@ -188,4 +188,6 @@ int kvaser_usb_send_cmd_async(struct kvaser_usb_net_priv *priv, void *cmd, int kvaser_usb_can_rx_over_error(struct net_device *netdev); +extern const struct can_bittiming_const kvaser_usb_flexc_bittiming_const; + #endif /* KVASER_USB_H */ diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c index 218fadc91155..a7c408acb0c0 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c @@ -371,7 +371,7 @@ static const struct can_bittiming_const kvaser_usb_hydra_kcan_bittiming_c = { .brp_inc = 1, }; -static const struct can_bittiming_const kvaser_usb_hydra_flexc_bittiming_c = { +const struct can_bittiming_const kvaser_usb_flexc_bittiming_const = { .name = "kvaser_usb_flex", .tseg1_min = 4, .tseg1_max = 16, @@ -2024,5 +2024,5 @@ static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_flexc = { .freq = 24000000, }, .timestamp_freq = 1, - .bittiming_const = &kvaser_usb_hydra_flexc_bittiming_c, + .bittiming_const = &kvaser_usb_flexc_bittiming_const, }; diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c index b8c2f2c30050..0e0403dd0550 100644 --- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c +++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c @@ -100,16 +100,6 @@ #define USBCAN_ERROR_STATE_RX_ERROR BIT(1) #define USBCAN_ERROR_STATE_BUSERROR BIT(2) -/* bittiming parameters */ -#define KVASER_USB_TSEG1_MIN 1 -#define KVASER_USB_TSEG1_MAX 16 -#define KVASER_USB_TSEG2_MIN 1 -#define KVASER_USB_TSEG2_MAX 8 -#define KVASER_USB_SJW_MAX 4 -#define KVASER_USB_BRP_MIN 1 -#define KVASER_USB_BRP_MAX 64 -#define KVASER_USB_BRP_INC 1 - /* ctrl modes */ #define KVASER_CTRL_MODE_NORMAL 1 #define KVASER_CTRL_MODE_SILENT 2 @@ -342,48 +332,68 @@ struct kvaser_usb_err_summary { }; }; -static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = { - .name = "kvaser_usb", - .tseg1_min = KVASER_USB_TSEG1_MIN, - .tseg1_max = KVASER_USB_TSEG1_MAX, - .tseg2_min = KVASER_USB_TSEG2_MIN, - .tseg2_max = KVASER_USB_TSEG2_MAX, - .sjw_max = KVASER_USB_SJW_MAX, - .brp_min = KVASER_USB_BRP_MIN, - .brp_max = KVASER_USB_BRP_MAX, - .brp_inc = KVASER_USB_BRP_INC, +static const struct can_bittiming_const kvaser_usb_leaf_m16c_bittiming_const = { + .name = "kvaser_usb_ucii", + .tseg1_min = 4, + .tseg1_max = 16, + .tseg2_min = 2, + .tseg2_max = 8, + .sjw_max = 4, + .brp_min = 1, + 
.brp_max = 16, + .brp_inc = 1, }; -static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = { +static const struct can_bittiming_const kvaser_usb_leaf_m32c_bittiming_const = { + .name = "kvaser_usb_leaf", + .tseg1_min = 3, + .tseg1_max = 16, + .tseg2_min = 2, + .tseg2_max = 8, + .sjw_max = 4, + .brp_min = 2, + .brp_max = 128, + .brp_inc = 2, +}; + +static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_usbcan_dev_cfg = { .clock = { .freq = 8000000, }, .timestamp_freq = 1, - .bittiming_const = &kvaser_usb_leaf_bittiming_const, + .bittiming_const = &kvaser_usb_leaf_m16c_bittiming_const, }; -static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = { +static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_m32c_dev_cfg = { .clock = { .freq = 16000000, }, .timestamp_freq = 1, - .bittiming_const = &kvaser_usb_leaf_bittiming_const, + .bittiming_const = &kvaser_usb_leaf_m32c_bittiming_const, }; -static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = { +static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_16mhz = { + .clock = { + .freq = 16000000, + }, + .timestamp_freq = 1, + .bittiming_const = &kvaser_usb_flexc_bittiming_const, +}; + +static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_24mhz = { .clock = { .freq = 24000000, }, .timestamp_freq = 1, - .bittiming_const = &kvaser_usb_leaf_bittiming_const, + .bittiming_const = &kvaser_usb_flexc_bittiming_const, }; -static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = { +static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_32mhz = { .clock = { .freq = 32000000, }, .timestamp_freq = 1, - .bittiming_const = &kvaser_usb_leaf_bittiming_const, + .bittiming_const = &kvaser_usb_flexc_bittiming_const, }; static void * @@ -529,17 +539,17 @@ static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev, /* Firmware expects bittiming parameters calculated for 16MHz * clock, regardless of the actual clock */ - dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz; + dev->cfg = &kvaser_usb_leaf_m32c_dev_cfg; } else { switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) { case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK: - dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz; + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_16mhz; break; case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK: - dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz; + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_24mhz; break; case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK: - dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz; + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_32mhz; break; } } @@ -566,7 +576,7 @@ static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev) dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version); dev->max_tx_urbs = le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx); - dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz; + dev->cfg = &kvaser_usb_leaf_usbcan_dev_cfg; break; } From e14930e9f9c657cf109326752871d3c701b12326 Mon Sep 17 00:00:00 2001 From: Eric Sandeen Date: Thu, 7 Jul 2022 16:07:53 -0700 Subject: [PATCH 1276/1842] xfs: remove incorrect ASSERT in xfs_rename commit e445976537ad139162980bee015b7364e5b64fff upstream. This ASSERT in xfs_rename is a) incorrect, because (RENAME_WHITEOUT|RENAME_NOREPLACE) is a valid combination, and b) unnecessary, because actual invalid flag combinations are already handled at the vfs level in do_renameat2() before we get called. So, remove it. Reported-by: Paolo Bonzini Signed-off-by: Eric Sandeen Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. 
Wong Fixes: 7dcf5c3e4527 ("xfs: add RENAME_WHITEOUT support") Signed-off-by: Kuniyuki Iwashima Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_inode.c | 1 - 1 file changed, 1 deletion(-) diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index e958b1c74561..03497741aef7 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -3170,7 +3170,6 @@ xfs_rename( * appropriately. */ if (flags & RENAME_WHITEOUT) { - ASSERT(!(flags & (RENAME_NOREPLACE | RENAME_EXCHANGE))); error = xfs_rename_alloc_whiteout(target_dp, &wip); if (error) return error; From 3d90607e7e6afa89768b0aaa915b58bd2b849276 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Thu, 12 May 2022 06:16:10 +0400 Subject: [PATCH 1277/1842] ARM: meson: Fix refcount leak in meson_smp_prepare_cpus [ Upstream commit 34d2cd3fccced12b958b8848e3eff0ee4296764c ] of_find_compatible_node() returns a node pointer with refcount incremented, we should use of_node_put() on it when done. Add missing of_node_put() to avoid refcount leak. Fixes: d850f3e5d296 ("ARM: meson: Add SMP bringup code for Meson8 and Meson8b") Signed-off-by: Miaoqian Lin Reviewed-by: Martin Blumenstingl Signed-off-by: Neil Armstrong Link: https://lore.kernel.org/r/20220512021611.47921-1-linmq006@gmail.com Signed-off-by: Sasha Levin --- arch/arm/mach-meson/platsmp.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm/mach-meson/platsmp.c b/arch/arm/mach-meson/platsmp.c index 4b8ad728bb42..32ac60b89fdc 100644 --- a/arch/arm/mach-meson/platsmp.c +++ b/arch/arm/mach-meson/platsmp.c @@ -71,6 +71,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible, } sram_base = of_iomap(node, 0); + of_node_put(node); if (!sram_base) { pr_err("Couldn't map SRAM registers\n"); return; @@ -91,6 +92,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible, } scu_base = of_iomap(node, 0); + of_node_put(node); if (!scu_base) { pr_err("Couldn't map SCU registers\n"); return; From 2c0d10ce002aca369605d88898623c0bece1576a Mon Sep 17 00:00:00 2001 From: Samuel Holland Date: Wed, 25 May 2022 21:49:56 -0500 Subject: [PATCH 1278/1842] pinctrl: sunxi: a83t: Fix NAND function name for some pins [ Upstream commit aaefa29270d9551b604165a08406543efa9d16f5 ] The other NAND pins on Port C use the "nand0" function name. "nand0" also matches all of the other Allwinner SoCs. 
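For illustration only (not part of the patch): pinmux function selection is an exact string match, so "nand" and "nand0" are two distinct functions even though they refer to the same controller, and mixing the two spellings across pins of one group breaks the lookup. A minimal standalone sketch with invented data:

/* Why the function name string must match exactly; names and data
 * below are invented for the example, this is not pinctrl code.
 */
#include <stdio.h>
#include <string.h>

static const char *pc14_funcs[] = { "gpio_in", "gpio_out", "nand", "mmc2" };

static int find_func(const char *want, const char **funcs, int n)
{
        for (int i = 0; i < n; i++)
                if (!strcmp(funcs[i], want))
                        return i;
        return -1;
}

int main(void)
{
        printf("lookup \"nand0\": %d\n",
               find_func("nand0", pc14_funcs, 4)); /* -1: no match */
        return 0;
}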
Fixes: 4730f33f0d82 ("pinctrl: sunxi: add allwinner A83T PIO controller support") Signed-off-by: Samuel Holland Acked-by: Jernej Skrabec Link: https://lore.kernel.org/r/20220526024956.49500-1-samuel@sholland.org Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin --- drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c b/drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c index 4ada80317a3b..b5c1a8f363f3 100644 --- a/drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c +++ b/drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c @@ -158,26 +158,26 @@ static const struct sunxi_desc_pin sun8i_a83t_pins[] = { SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 14), SUNXI_FUNCTION(0x0, "gpio_in"), SUNXI_FUNCTION(0x1, "gpio_out"), - SUNXI_FUNCTION(0x2, "nand"), /* DQ6 */ + SUNXI_FUNCTION(0x2, "nand0"), /* DQ6 */ SUNXI_FUNCTION(0x3, "mmc2")), /* D6 */ SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 15), SUNXI_FUNCTION(0x0, "gpio_in"), SUNXI_FUNCTION(0x1, "gpio_out"), - SUNXI_FUNCTION(0x2, "nand"), /* DQ7 */ + SUNXI_FUNCTION(0x2, "nand0"), /* DQ7 */ SUNXI_FUNCTION(0x3, "mmc2")), /* D7 */ SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 16), SUNXI_FUNCTION(0x0, "gpio_in"), SUNXI_FUNCTION(0x1, "gpio_out"), - SUNXI_FUNCTION(0x2, "nand"), /* DQS */ + SUNXI_FUNCTION(0x2, "nand0"), /* DQS */ SUNXI_FUNCTION(0x3, "mmc2")), /* RST */ SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 17), SUNXI_FUNCTION(0x0, "gpio_in"), SUNXI_FUNCTION(0x1, "gpio_out"), - SUNXI_FUNCTION(0x2, "nand")), /* CE2 */ + SUNXI_FUNCTION(0x2, "nand0")), /* CE2 */ SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 18), SUNXI_FUNCTION(0x0, "gpio_in"), SUNXI_FUNCTION(0x1, "gpio_out"), - SUNXI_FUNCTION(0x2, "nand")), /* CE3 */ + SUNXI_FUNCTION(0x2, "nand0")), /* CE3 */ /* Hole */ SUNXI_PIN(SUNXI_PINCTRL_PIN(D, 2), SUNXI_FUNCTION(0x0, "gpio_in"), From 6bf74a1e748fff9413a330775454871ba0936643 Mon Sep 17 00:00:00 2001 From: Konrad Dybcio Date: Sun, 1 May 2022 20:40:16 +0200 Subject: [PATCH 1279/1842] arm64: dts: qcom: msm8994: Fix CPU6/7 reg values [ Upstream commit 47bf59c4755930f616dd90c8c6a85f40a6d347ea ] CPU6 and CPU7 were mistakengly pointing to CPU5 reg. Fix it. Fixes: 02d8091bbca0 ("arm64: dts: qcom: msm8994: Add a proper CPU map") Signed-off-by: Konrad Dybcio Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220501184016.64138-1-konrad.dybcio@somainline.org Signed-off-by: Sasha Levin --- arch/arm64/boot/dts/qcom/msm8994.dtsi | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi index 297408b947ff..aeb5762566e9 100644 --- a/arch/arm64/boot/dts/qcom/msm8994.dtsi +++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi @@ -92,7 +92,7 @@ CPU5: cpu@101 { CPU6: cpu@102 { device_type = "cpu"; compatible = "arm,cortex-a57"; - reg = <0x0 0x101>; + reg = <0x0 0x102>; enable-method = "psci"; next-level-cache = <&L2_1>; }; @@ -100,7 +100,7 @@ CPU6: cpu@102 { CPU7: cpu@103 { device_type = "cpu"; compatible = "arm,cortex-a57"; - reg = <0x0 0x101>; + reg = <0x0 0x103>; enable-method = "psci"; next-level-cache = <&L2_1>; }; From 43319ee6a07516afc5e6926441dbd8da1fb4ab98 Mon Sep 17 00:00:00 2001 From: Peng Fan Date: Wed, 22 Jun 2022 14:13:57 +0800 Subject: [PATCH 1280/1842] arm64: dts: imx8mp-evk: correct mmc pad settings [ Upstream commit 01785f1f156511c4f285786b4192245d4f476bf1 ] According to RM bit layout, BIT3 and BIT0 are reserved. 8 7 6 5 4 3 2 1 0 PE HYS PUE ODE FSEL X DSE X Not set reserved bit. 
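A quick standalone arithmetic check of the new values, assuming the bit layout quoted above from the i.MX8MP reference manual (bits 3 and 0 reserved); this is illustration, not kernel code:

/* Clearing the two reserved bits turns the old pad values into the
 * new ones.  Standalone sketch only.
 */
#include <stdio.h>

#define PAD_RESERVED ((1u << 3) | (1u << 0))

int main(void)
{
        unsigned old_vals[] = { 0x41, 0xc1 };

        for (int i = 0; i < 2; i++)
                printf("0x%02x -> 0x%02x\n", old_vals[i],
                       old_vals[i] & ~PAD_RESERVED);  /* 0x40, 0xc0 */
        return 0;
}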
Fixes: 9e847693c6f3 ("arm64: dts: freescale: Add i.MX8MP EVK board support") Signed-off-by: Peng Fan Reviewed-by: Rasmus Villemoes Signed-off-by: Shawn Guo Signed-off-by: Sasha Levin --- arch/arm64/boot/dts/freescale/imx8mp-evk.dts | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts index c13b4a02d12f..64f0455e14f8 100644 --- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts +++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts @@ -161,7 +161,7 @@ MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c3 pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp { fsl,pins = < - MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41 + MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40 >; }; @@ -180,7 +180,7 @@ MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d0 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 >; }; @@ -192,7 +192,7 @@ MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d4 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 >; }; @@ -204,7 +204,7 @@ MX8MP_IOMUXC_SD2_DATA0__USDHC2_DATA0 0x1d6 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d6 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d6 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d6 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 >; }; From 17b3883ba55f1e4f76bc676dfb284055e0f4d4df Mon Sep 17 00:00:00 2001 From: Sherry Sun Date: Wed, 22 Jun 2022 14:13:58 +0800 Subject: [PATCH 1281/1842] arm64: dts: imx8mp-evk: correct the uart2 pinctl value [ Upstream commit 2d4fb72b681205eed4553d8802632bd3270be3ba ] According to the IOMUXC_SW_PAD_CTL_PAD_UART2_RXD/TXD register define in imx8mp RM, bit0 and bit3 are reserved, and the uart2 rx/tx pin should enable the pull up, so need to set bit8 to 1. The original pinctl value 0x49 is incorrect and needs to be changed to 0x140, same as uart1 and uart3. Fixes: 9e847693c6f3 ("arm64: dts: freescale: Add i.MX8MP EVK board support") Reviewed-by: Haibo Chen Signed-off-by: Sherry Sun Signed-off-by: Peng Fan Reviewed-by: Rasmus Villemoes Signed-off-by: Shawn Guo Signed-off-by: Sasha Levin --- arch/arm64/boot/dts/freescale/imx8mp-evk.dts | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts index 64f0455e14f8..5011adb5ff1f 100644 --- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts +++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts @@ -167,8 +167,8 @@ MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40 pinctrl_uart2: uart2grp { fsl,pins = < - MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x49 - MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x49 + MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x140 + MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x140 >; }; From 2ade1b1d92f674063ec5e4008bb6353392d771a4 Mon Sep 17 00:00:00 2001 From: Peng Fan Date: Wed, 22 Jun 2022 14:13:59 +0800 Subject: [PATCH 1282/1842] arm64: dts: imx8mp-evk: correct gpio-led pad settings [ Upstream commit b838582ab8d5fb11b2c0275056a9f34e1d94fece ] 0x19 is not a valid setting. According to RM bit layout, BIT3 and BIT0 are reserved. 8 7 6 5 4 3 2 1 0 PE HYS PUE ODE FSEL X DSE X Correct setting with PE PUE set, DSE set to 0. 
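For illustration, composing the replacement value from the named fields under the same assumed layout (PE at bit 8, PUE at bit 6); 0x140 is also the value the previous patch uses for the UART2 pads. Standalone sketch only:

/* Build the new pad value from the named fields; bit positions are
 * taken from the layout quoted above and are an assumption of this
 * sketch, not kernel definitions.
 */
#include <stdio.h>

#define PAD_PE  (1u << 8)
#define PAD_PUE (1u << 6)

int main(void)
{
        unsigned val = PAD_PE | PAD_PUE;  /* pull enabled, pull-up, DSE = 0 */

        printf("new pad value: 0x%x\n", val);  /* 0x140 */
        return 0;
}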
Fixes: 50d336b12f34 ("arm64: dts: imx8mp-evk: Add GPIO LED support") Signed-off-by: Peng Fan Reviewed-by: Rasmus Villemoes Signed-off-by: Shawn Guo Signed-off-by: Sasha Levin --- arch/arm64/boot/dts/freescale/imx8mp-evk.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts index 5011adb5ff1f..c0663a6c8376 100644 --- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts +++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts @@ -148,7 +148,7 @@ MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02 0x19 pinctrl_gpio_led: gpioledgrp { fsl,pins = < - MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16 0x19 + MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16 0x140 >; }; From e1cda2a03d81560ac0badb5fce3aae66bc8a4ae3 Mon Sep 17 00:00:00 2001 From: Peng Fan Date: Wed, 22 Jun 2022 14:14:05 +0800 Subject: [PATCH 1283/1842] arm64: dts: imx8mp-evk: correct I2C3 pad settings [ Upstream commit 0836de513ebaae5f03014641eac996290d67493d ] According to RM bit layout, BIT3 and BIT0 are reserved. 8 7 6 5 4 3 2 1 0 PE HYS PUE ODE FSEL X DSE X Although function is not broken, we should not set reserved bit. Fixes: 5e4a67ff7f69 ("arm64: dts: imx8mp-evk: Add i2c3 support") Signed-off-by: Peng Fan Reviewed-by: Rasmus Villemoes Signed-off-by: Shawn Guo Signed-off-by: Sasha Levin --- arch/arm64/boot/dts/freescale/imx8mp-evk.dts | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts index c0663a6c8376..c016f5b7d24a 100644 --- a/arch/arm64/boot/dts/freescale/imx8mp-evk.dts +++ b/arch/arm64/boot/dts/freescale/imx8mp-evk.dts @@ -154,8 +154,8 @@ MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16 0x140 pinctrl_i2c3: i2c3grp { fsl,pins = < - MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c3 - MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c3 + MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c2 + MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c2 >; }; From 1d0c3ced2d1c1c0858ee042d29d4c9ed07d987bf Mon Sep 17 00:00:00 2001 From: Andrei Lalaev Date: Wed, 25 May 2022 22:04:25 +0300 Subject: [PATCH 1284/1842] pinctrl: sunxi: sunxi_pconf_set: use correct offset [ Upstream commit cd4c1e65a32afd003b08ad4aafe1e4d3e4e8e61b ] Some Allwinner SoCs have 2 pinctrls (PIO and R_PIO). Previous implementation used absolute pin numbering and it was incorrect for R_PIO pinctrl. It's necessary to take into account the base pin number. 
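A minimal standalone sketch of the rebasing the fix performs (the controller name and numbers below are invented for illustration): a global pin number must be turned into an index local to the controller that owns it before it is used to look up per-pin data.

/* Why the base must be subtracted: the second controller (R_PIO) only
 * describes its own pins, so a global pin number has to be rebased
 * before it is used as an index.  Standalone sketch, invented values.
 */
#include <stdio.h>

struct pinctrl { const char *name; unsigned pin_base; unsigned npins; };

static const struct pinctrl r_pio = { "R_PIO", 352, 24 };

int main(void)
{
        unsigned global_pin = 354;  /* some pin owned by the second block */
        unsigned local = global_pin - r_pio.pin_base;

        printf("%s: global %u -> local index %u (of %u)\n",
               r_pio.name, global_pin, local, r_pio.npins);
        return 0;
}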
Fixes: 90be64e27621 ("pinctrl: sunxi: implement pin_config_set") Signed-off-by: Andrei Lalaev Reviewed-by: Samuel Holland Link: https://lore.kernel.org/r/20220525190423.410609-1-andrey.lalaev@gmail.com Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin --- drivers/pinctrl/sunxi/pinctrl-sunxi.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/pinctrl/sunxi/pinctrl-sunxi.c b/drivers/pinctrl/sunxi/pinctrl-sunxi.c index be7f4f95f455..24c861434bf1 100644 --- a/drivers/pinctrl/sunxi/pinctrl-sunxi.c +++ b/drivers/pinctrl/sunxi/pinctrl-sunxi.c @@ -544,6 +544,8 @@ static int sunxi_pconf_set(struct pinctrl_dev *pctldev, unsigned pin, struct sunxi_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev); int i; + pin -= pctl->desc->pin_base; + for (i = 0; i < num_configs; i++) { enum pin_config_param param; unsigned long flags; From 4c3e73a66a2775723e1c5960424894d8a3394d09 Mon Sep 17 00:00:00 2001 From: Stephan Gerhold Date: Mon, 27 Jun 2022 15:59:38 +0200 Subject: [PATCH 1285/1842] arm64: dts: qcom: msm8992-*: Fix vdd_lvs1_2-supply typo [ Upstream commit 5fb779558f1c97e2bf2794cb59553e569c38e2f9 ] "make dtbs_check" complains about the missing "-supply" suffix for vdd_lvs1_2 which is clearly a typo, originally introduced in the msm8994-smd-rpm.dtsi file and apparently later copied to msm8992-xiaomi-libra.dts: msm8992-lg-bullhead-rev-10/101.dtb: pm8994-regulators: 'vdd_lvs1_2' does not match any of the regexes: '.*-supply$', '^((s|l|lvs|5vs)[0-9]*)|(boost-bypass)|(bob)$', 'pinctrl-[0-9]+' From schema: regulator/qcom,smd-rpm-regulator.yaml msm8992-xiaomi-libra.dtb: pm8994-regulators: 'vdd_lvs1_2' does not match any of the regexes: '.*-supply$', '^((s|l|lvs|5vs)[0-9]*)|(boost-bypass)|(bob)$', 'pinctrl-[0-9]+' From schema: regulator/qcom,smd-rpm-regulator.yaml Reported-by: Rob Herring Cc: Konrad Dybcio Fixes: f3b2c99e73be ("arm64: dts: Enable onboard SDHCI on msm8992") Fixes: 0f5cdb31e850 ("arm64: dts: qcom: Add Xiaomi Libra (Mi 4C) device tree") Signed-off-by: Stephan Gerhold Reviewed-by: Konrad Dybcio Signed-off-by: Bjorn Andersson Link: https://lore.kernel.org/r/20220627135938.2901871-1-stephan.gerhold@kernkonzept.com Signed-off-by: Sasha Levin --- arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts | 2 +- arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts index cb82864a90ef..42f2b235011f 100644 --- a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts +++ b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts @@ -64,7 +64,7 @@ pm8994-regulators { vdd_l17_29-supply = <&vreg_vph_pwr>; vdd_l20_21-supply = <&vreg_vph_pwr>; vdd_l25-supply = <&pm8994_s5>; - vdd_lvs1_2 = <&pm8994_s4>; + vdd_lvs1_2-supply = <&pm8994_s4>; pm8994_s1: s1 { regulator-min-microvolt = <800000>; diff --git a/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts b/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts index 4f64ca3ea1ef..6ed2a9c01e8c 100644 --- a/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts +++ b/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts @@ -151,7 +151,7 @@ pm8994-regulators { vdd_l17_29-supply = <&vreg_vph_pwr>; vdd_l20_21-supply = <&vreg_vph_pwr>; vdd_l25-supply = <&pm8994_s5>; - vdd_lvs1_2 = <&pm8994_s4>; + vdd_lvs1_2-supply = <&pm8994_s4>; pm8994_s1: s1 { /* unused */ From b40ac801cbb11c161f2901bfab39aa42f0e73d4d Mon Sep 17 00:00:00 2001 From: Claudiu Beznea Date: Mon, 23 May 2022 12:24:19 +0300 Subject: [PATCH 1286/1842] 
ARM: at91: pm: use proper compatible for sama5d2's rtc [ Upstream commit ddc980da8043779119acaca106c6d9b445c9b65b ] Use proper compatible strings for SAMA5D2's RTC IPs. This is necessary for configuring wakeup sources for ULP1 PM mode. Fixes: d7484f5c6b3b ("ARM: at91: pm: configure wakeup sources for ULP1 mode") Signed-off-by: Claudiu Beznea Link: https://lore.kernel.org/r/20220523092421.317345-2-claudiu.beznea@microchip.com Signed-off-by: Sasha Levin --- arch/arm/mach-at91/pm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c index 3f015cb6ec2b..6a68ff0466e0 100644 --- a/arch/arm/mach-at91/pm.c +++ b/arch/arm/mach-at91/pm.c @@ -104,7 +104,7 @@ static const struct wakeup_source_info ws_info[] = { static const struct of_device_id sama5d2_ws_ids[] = { { .compatible = "atmel,sama5d2-gem", .data = &ws_info[0] }, - { .compatible = "atmel,at91rm9200-rtc", .data = &ws_info[1] }, + { .compatible = "atmel,sama5d2-rtc", .data = &ws_info[1] }, { .compatible = "atmel,sama5d3-udc", .data = &ws_info[2] }, { .compatible = "atmel,at91rm9200-ohci", .data = &ws_info[2] }, { .compatible = "usb-ohci", .data = &ws_info[2] }, From ade03e5ea7782915b917bd36cb28737cdcfa1946 Mon Sep 17 00:00:00 2001 From: Claudiu Beznea Date: Mon, 23 May 2022 12:24:20 +0300 Subject: [PATCH 1287/1842] ARM: at91: pm: use proper compatibles for sam9x60's rtc and rtt [ Upstream commit 641522665dbb25ce117c78746df1aad8b58c80e5 ] Use proper compatible strings for SAM9X60's RTC and RTT IPs. These are necessary for configuring wakeup sources for ULP1 PM mode. Fixes: eaedc0d379da ("ARM: at91: pm: add ULP1 support for SAM9X60") Signed-off-by: Claudiu Beznea Link: https://lore.kernel.org/r/20220523092421.317345-3-claudiu.beznea@microchip.com Signed-off-by: Sasha Levin --- arch/arm/mach-at91/pm.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c index 6a68ff0466e0..f2ce2d094925 100644 --- a/arch/arm/mach-at91/pm.c +++ b/arch/arm/mach-at91/pm.c @@ -115,12 +115,12 @@ static const struct of_device_id sama5d2_ws_ids[] = { }; static const struct of_device_id sam9x60_ws_ids[] = { - { .compatible = "atmel,at91sam9x5-rtc", .data = &ws_info[1] }, + { .compatible = "microchip,sam9x60-rtc", .data = &ws_info[1] }, { .compatible = "atmel,at91rm9200-ohci", .data = &ws_info[2] }, { .compatible = "usb-ohci", .data = &ws_info[2] }, { .compatible = "atmel,at91sam9g45-ehci", .data = &ws_info[2] }, { .compatible = "usb-ehci", .data = &ws_info[2] }, - { .compatible = "atmel,at91sam9260-rtt", .data = &ws_info[4] }, + { .compatible = "microchip,sam9x60-rtt", .data = &ws_info[4] }, { .compatible = "cdns,sam9x60-macb", .data = &ws_info[5] }, { /* sentinel */ } }; From 9b7d8e28b686d57e734c3f4877a7acd6f037cf59 Mon Sep 17 00:00:00 2001 From: Eugen Hristev Date: Tue, 7 Jun 2022 12:04:54 +0300 Subject: [PATCH 1288/1842] ARM: dts: at91: sam9x60ek: fix eeprom compatible and size [ Upstream commit f2cbbc3f926316ccf8ef9363d8a60c1110afc1c7 ] The board has a microchip 24aa025e48 eeprom, which is a 2 Kbits memory, so it's compatible with at24c02 not at24c32. Also the size property is wrong, it's not 128 bytes, but 256 bytes. Thus removing and leaving it to the default (256). 
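The size arithmetic behind the compatible change, as a standalone check (illustrative only): the "xx" in at24cXX is the capacity in kilobits, so a 2 Kbit part lines up with at24c02, and 2 Kbit is 256 bytes.

/* Standalone sketch of the capacity math, not kernel code. */
#include <stdio.h>

int main(void)
{
        unsigned kbits = 2;                 /* 24c02 */
        unsigned bytes = kbits * 1024 / 8;  /* 256 */

        printf("at24c%02u = %u Kbit = %u bytes\n", kbits, kbits, bytes);
        return 0;
}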
Fixes: 1e5f532c27371 ("ARM: dts: at91: sam9x60: add device tree for soc and board") Signed-off-by: Eugen Hristev Reviewed-by: Claudiu Beznea Signed-off-by: Claudiu Beznea Link: https://lore.kernel.org/r/20220607090455.80433-1-eugen.hristev@microchip.com Signed-off-by: Sasha Levin --- arch/arm/boot/dts/at91-sam9x60ek.dts | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/arm/boot/dts/at91-sam9x60ek.dts b/arch/arm/boot/dts/at91-sam9x60ek.dts index b1068cca4228..fd8dc1183b3e 100644 --- a/arch/arm/boot/dts/at91-sam9x60ek.dts +++ b/arch/arm/boot/dts/at91-sam9x60ek.dts @@ -233,10 +233,9 @@ i2c0: i2c@600 { status = "okay"; eeprom@53 { - compatible = "atmel,24c32"; + compatible = "atmel,24c02"; reg = <0x53>; pagesize = <16>; - size = <128>; status = "okay"; }; }; From 87d2bb888259936125106e0e071b883f940658b4 Mon Sep 17 00:00:00 2001 From: Eugen Hristev Date: Tue, 7 Jun 2022 12:04:55 +0300 Subject: [PATCH 1289/1842] ARM: dts: at91: sama5d2_icp: fix eeprom compatibles [ Upstream commit 416ce193d73a734ded6d09fe141017b38af1c567 ] The eeprom memories on the board are microchip 24aa025e48, which are 2 Kbits and are compatible with at24c02 not at24c32. Fixes: 68a95ef72cefe ("ARM: dts: at91: sama5d2-icp: add SAMA5D2-ICP") Signed-off-by: Eugen Hristev Reviewed-by: Claudiu Beznea Signed-off-by: Claudiu Beznea Link: https://lore.kernel.org/r/20220607090455.80433-2-eugen.hristev@microchip.com Signed-off-by: Sasha Levin --- arch/arm/boot/dts/at91-sama5d2_icp.dts | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/arm/boot/dts/at91-sama5d2_icp.dts b/arch/arm/boot/dts/at91-sama5d2_icp.dts index 308d472bd104..634411d13b4a 100644 --- a/arch/arm/boot/dts/at91-sama5d2_icp.dts +++ b/arch/arm/boot/dts/at91-sama5d2_icp.dts @@ -317,21 +317,21 @@ &i2c1 { status = "okay"; eeprom@50 { - compatible = "atmel,24c32"; + compatible = "atmel,24c02"; reg = <0x50>; pagesize = <16>; status = "okay"; }; eeprom@52 { - compatible = "atmel,24c32"; + compatible = "atmel,24c02"; reg = <0x52>; pagesize = <16>; status = "disabled"; }; eeprom@53 { - compatible = "atmel,24c32"; + compatible = "atmel,24c02"; reg = <0x53>; pagesize = <16>; status = "disabled"; From 5561bddd0599aa94a20036db7d7c495556771757 Mon Sep 17 00:00:00 2001 From: Ivan Malov Date: Tue, 28 Jun 2022 12:18:48 +0300 Subject: [PATCH 1290/1842] xsk: Clear page contiguity bit when unmapping pool [ Upstream commit 512d1999b8e94a5d43fba3afc73e774849674742 ] When a XSK pool gets mapped, xp_check_dma_contiguity() adds bit 0x1 to pages' DMA addresses that go in ascending order and at 4K stride. The problem is that the bit does not get cleared before doing unmap. As a result, a lot of warnings from iommu_dma_unmap_page() are seen in dmesg, which indicates that lookups by iommu_iova_to_phys() fail. 
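The general pattern being fixed, shown in isolation (standalone sketch, not the xsk code): a spare low bit of a page-aligned DMA address is borrowed as a flag, so the flag has to be masked off before the value is handed back to the unmap path as an address.

/* Borrowing bit 0 of a page-aligned address as a flag; the mask name
 * only mirrors XSK_NEXT_PG_CONTIG_MASK conceptually.
 */
#include <stdio.h>
#include <stdint.h>

#define CONTIG_FLAG 0x1ull

int main(void)
{
        uint64_t dma = 0x100000000ull;  /* page aligned, so bit 0 is free */

        dma |= CONTIG_FLAG;             /* mark "next page is contiguous" */
        /* ... later, before unmapping, the flag must be stripped: */
        uint64_t addr_for_unmap = dma & ~CONTIG_FLAG;

        printf("stored 0x%llx, unmap with 0x%llx\n",
               (unsigned long long)dma, (unsigned long long)addr_for_unmap);
        return 0;
}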
Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API") Signed-off-by: Ivan Malov Signed-off-by: Daniel Borkmann Acked-by: Magnus Karlsson Link: https://lore.kernel.org/bpf/20220628091848.534803-1-ivan.malov@oktetlabs.ru Signed-off-by: Sasha Levin --- net/xdp/xsk_buff_pool.c | 1 + 1 file changed, 1 insertion(+) diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c index 2ef6f926610e..e63a285a9856 100644 --- a/net/xdp/xsk_buff_pool.c +++ b/net/xdp/xsk_buff_pool.c @@ -318,6 +318,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs) for (i = 0; i < dma_map->dma_pages_cnt; i++) { dma = &dma_map->dma_pages[i]; if (*dma) { + *dma &= ~XSK_NEXT_PG_CONTIG_MASK; dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE, DMA_BIDIRECTIONAL, attrs); *dma = 0; From 2b4659c145bafe3cdaf2a2cde08fd34b7ecbbba5 Mon Sep 17 00:00:00 2001 From: Lukasz Cieplicki Date: Tue, 31 May 2022 12:54:20 +0200 Subject: [PATCH 1291/1842] i40e: Fix dropped jumbo frames statistics [ Upstream commit 1adb1563e7b7ec659379a18e607e8bc3522d8a78 ] Dropped packets caused by too large frames were not included in dropped RX packets statistics. Issue was caused by not reading the GL_RXERR1 register. That register stores count of packet which was have been dropped due to too large size. Fix it by reading GL_RXERR1 register for each interface. Repro steps: Send a packet larger than the set MTU to SUT Observe rx statists: ethtool -S | grep rx | grep -v ": 0" Fixes: 41a9e55c89be ("i40e: add missing VSI statistics") Signed-off-by: Lukasz Cieplicki Signed-off-by: Jedrzej Jagielski Tested-by: Gurucharan (A Contingent worker at Intel) Signed-off-by: Tony Nguyen Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/i40e/i40e.h | 16 ++++ drivers/net/ethernet/intel/i40e/i40e_main.c | 73 +++++++++++++++++++ .../net/ethernet/intel/i40e/i40e_register.h | 13 ++++ drivers/net/ethernet/intel/i40e/i40e_type.h | 1 + 4 files changed, 103 insertions(+) diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h index effdc3361266..dd630b6bc74b 100644 --- a/drivers/net/ethernet/intel/i40e/i40e.h +++ b/drivers/net/ethernet/intel/i40e/i40e.h @@ -37,6 +37,7 @@ #include #include #include +#include #include "i40e_type.h" #include "i40e_prototype.h" #include @@ -991,6 +992,21 @@ static inline void i40e_write_fd_input_set(struct i40e_pf *pf, (u32)(val & 0xFFFFFFFFULL)); } +/** + * i40e_get_pf_count - get PCI PF count. + * @hw: pointer to a hw. + * + * Reports the function number of the highest PCI physical + * function plus 1 as it is loaded from the NVM. + * + * Return: PCI PF count. + **/ +static inline u32 i40e_get_pf_count(struct i40e_hw *hw) +{ + return FIELD_GET(I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK, + rd32(hw, I40E_GLGEN_PCIFCNCNT)); +} + /* needed by i40e_ethtool.c */ int i40e_up(struct i40e_vsi *vsi); void i40e_down(struct i40e_vsi *vsi); diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 614f3e995100..58453f7958df 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -548,6 +548,47 @@ void i40e_pf_reset_stats(struct i40e_pf *pf) pf->hw_csum_rx_error = 0; } +/** + * i40e_compute_pci_to_hw_id - compute index form PCI function. + * @vsi: ptr to the VSI to read from. + * @hw: ptr to the hardware info. 
+ **/ +static u32 i40e_compute_pci_to_hw_id(struct i40e_vsi *vsi, struct i40e_hw *hw) +{ + int pf_count = i40e_get_pf_count(hw); + + if (vsi->type == I40E_VSI_SRIOV) + return (hw->port * BIT(7)) / pf_count + vsi->vf_id; + + return hw->port + BIT(7); +} + +/** + * i40e_stat_update64 - read and update a 64 bit stat from the chip. + * @hw: ptr to the hardware info. + * @hireg: the high 32 bit reg to read. + * @loreg: the low 32 bit reg to read. + * @offset_loaded: has the initial offset been loaded yet. + * @offset: ptr to current offset value. + * @stat: ptr to the stat. + * + * Since the device stats are not reset at PFReset, they will not + * be zeroed when the driver starts. We'll save the first values read + * and use them as offsets to be subtracted from the raw values in order + * to report stats that count from zero. + **/ +static void i40e_stat_update64(struct i40e_hw *hw, u32 hireg, u32 loreg, + bool offset_loaded, u64 *offset, u64 *stat) +{ + u64 new_data; + + new_data = rd64(hw, loreg); + + if (!offset_loaded || new_data < *offset) + *offset = new_data; + *stat = new_data - *offset; +} + /** * i40e_stat_update48 - read and update a 48 bit stat from the chip * @hw: ptr to the hardware info @@ -619,6 +660,34 @@ static void i40e_stat_update_and_clear32(struct i40e_hw *hw, u32 reg, u64 *stat) *stat += new_data; } +/** + * i40e_stats_update_rx_discards - update rx_discards. + * @vsi: ptr to the VSI to be updated. + * @hw: ptr to the hardware info. + * @stat_idx: VSI's stat_counter_idx. + * @offset_loaded: ptr to the VSI's stat_offsets_loaded. + * @stat_offset: ptr to stat_offset to store first read of specific register. + * @stat: ptr to VSI's stat to be updated. + **/ +static void +i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw, + int stat_idx, bool offset_loaded, + struct i40e_eth_stats *stat_offset, + struct i40e_eth_stats *stat) +{ + u64 rx_rdpc, rx_rxerr; + + i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded, + &stat_offset->rx_discards, &rx_rdpc); + i40e_stat_update64(hw, + I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)), + I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)), + offset_loaded, &stat_offset->rx_discards_other, + &rx_rxerr); + + stat->rx_discards = rx_rdpc + rx_rxerr; +} + /** * i40e_update_eth_stats - Update VSI-specific ethernet statistics counters. 
* @vsi: the VSI to be updated @@ -678,6 +747,10 @@ void i40e_update_eth_stats(struct i40e_vsi *vsi) I40E_GLV_BPTCL(stat_idx), vsi->stat_offsets_loaded, &oes->tx_broadcast, &es->tx_broadcast); + + i40e_stats_update_rx_discards(vsi, hw, stat_idx, + vsi->stat_offsets_loaded, oes, es); + vsi->stat_offsets_loaded = true; } diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h index 8335f151ceef..117fd9674d7f 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_register.h +++ b/drivers/net/ethernet/intel/i40e/i40e_register.h @@ -77,6 +77,11 @@ #define I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT 0 #define I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT 16 #define I40E_GLGEN_MSRWD_MDIRDDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT) +#define I40E_GLGEN_PCIFCNCNT 0x001C0AB4 /* Reset: PCIR */ +#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT 0 +#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK I40E_MASK(0x1F, I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT) +#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT 16 +#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_MASK I40E_MASK(0xFF, I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT) #define I40E_GLGEN_RSTAT 0x000B8188 /* Reset: POR */ #define I40E_GLGEN_RSTAT_DEVSTATE_SHIFT 0 #define I40E_GLGEN_RSTAT_DEVSTATE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_DEVSTATE_SHIFT) @@ -461,6 +466,14 @@ #define I40E_VFQF_HKEY1_MAX_INDEX 12 #define I40E_VFQF_HLUT1(_i, _VF) (0x00220000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: CORER */ #define I40E_VFQF_HLUT1_MAX_INDEX 15 +#define I40E_GL_RXERR1H(_i) (0x00318004 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_RXERR1H_MAX_INDEX 143 +#define I40E_GL_RXERR1H_RXERR1H_SHIFT 0 +#define I40E_GL_RXERR1H_RXERR1H_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1H_RXERR1H_SHIFT) +#define I40E_GL_RXERR1L(_i) (0x00318000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_RXERR1L_MAX_INDEX 143 +#define I40E_GL_RXERR1L_RXERR1L_SHIFT 0 +#define I40E_GL_RXERR1L_RXERR1L_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1L_RXERR1L_SHIFT) #define I40E_GLPRT_BPRCH(_i) (0x003005E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ #define I40E_GLPRT_BPRCL(_i) (0x003005E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ #define I40E_GLPRT_BPTCH(_i) (0x00300A04 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h index add67f7b73e8..446672a7e39f 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_type.h +++ b/drivers/net/ethernet/intel/i40e/i40e_type.h @@ -1172,6 +1172,7 @@ struct i40e_eth_stats { u64 tx_broadcast; /* bptc */ u64 tx_discards; /* tdpc */ u64 tx_errors; /* tepc */ + u64 rx_discards_other; /* rxerr1 */ }; /* Statistics collected per VEB per TC */ From 23cdc57d88d14d76f2bf0c9d747ef2508858d7e5 Mon Sep 17 00:00:00 2001 From: Rick Lindsley Date: Sat, 2 Jul 2022 03:37:12 -0700 Subject: [PATCH 1292/1842] ibmvnic: Properly dispose of all skbs during a failover. [ Upstream commit 1b18f09d31cfa7148df15a7d5c5e0e86f105f7d1 ] During a reset, there may have been transmits in flight that are no longer valid and cannot be fulfilled. Resetting and clearing the queues is insufficient; each skb also needs to be explicitly freed so that upper levels are not left waiting for confirmation of a transmit that will never happen. If this happens frequently enough, the apparent backlog will cause TCP to begin "congestion control" unnecessarily, culminating in permanently decreased throughput. 
Fixes: d7c0ef36bde03 ("ibmvnic: Free and re-allocate scrqs when tx/rx scrqs change") Tested-by: Nick Child Reviewed-by: Brian King Signed-off-by: Rick Lindsley Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/ibm/ibmvnic.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c index 61fb2a092451..7fe2e47dc83d 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.c +++ b/drivers/net/ethernet/ibm/ibmvnic.c @@ -5228,6 +5228,15 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter, bool reset) release_sub_crqs(adapter, 0); rc = init_sub_crqs(adapter); } else { + /* no need to reinitialize completely, but we do + * need to clean up transmits that were in flight + * when we processed the reset. Failure to do so + * will confound the upper layer, usually TCP, by + * creating the illusion of transmits that are + * awaiting completion. + */ + clean_tx_pools(adapter); + rc = reset_sub_crq_queues(adapter); } } else { From 859b889029fc55261003f4eea2e8527c91b9b7aa Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Sun, 3 Jul 2022 10:36:24 +0300 Subject: [PATCH 1293/1842] selftests: forwarding: fix flood_unicast_test when h2 supports IFF_UNICAST_FLT [ Upstream commit b8e629b05f5d23f9649c901bef09fab8b0c2e4b9 ] As mentioned in the blamed commit, flood_unicast_test() works by checking the match count on a tc filter placed on the receiving interface. But the second host interface (host2_if) has no interest in receiving a packet with MAC DA de:ad:be:ef:13:37, so its RX filter drops it even before the ingress tc filter gets to be executed. So we will incorrectly get the message "Packet was not flooded when should", when in fact, the packet was flooded as expected but dropped due to an unrelated reason, at some other layer on the receiving side. Force h2 to accept this packet by temporarily placing it in promiscuous mode. Alternatively we could either deliver to its MAC address or use tcpdump_start, but this has the fewest complications. This fixes the "flooding" test from bridge_vlan_aware.sh and bridge_vlan_unaware.sh, which calls flood_test from the lib. Fixes: 236dd50bf67a ("selftests: forwarding: Add a test for flooded traffic") Signed-off-by: Vladimir Oltean Reviewed-by: Ido Schimmel Tested-by: Ido Schimmel Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- tools/testing/selftests/net/forwarding/lib.sh | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh index be6fa808d219..094a1104e49d 100644 --- a/tools/testing/selftests/net/forwarding/lib.sh +++ b/tools/testing/selftests/net/forwarding/lib.sh @@ -1129,6 +1129,7 @@ flood_test_do() # Add an ACL on `host2_if` which will tell us whether the packet # was flooded to it or not. 
+ ip link set $host2_if promisc on tc qdisc add dev $host2_if ingress tc filter add dev $host2_if ingress protocol ip pref 1 handle 101 \ flower dst_mac $mac action drop @@ -1146,6 +1147,7 @@ flood_test_do() tc filter del dev $host2_if ingress protocol ip pref 1 handle 101 flower tc qdisc del dev $host2_if ingress + ip link set $host2_if promisc off return $err } From 9906c223400fbad19c73605b7b0cf7b14b3ac82f Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Sun, 3 Jul 2022 10:36:25 +0300 Subject: [PATCH 1294/1842] selftests: forwarding: fix learning_test when h1 supports IFF_UNICAST_FLT [ Upstream commit 1a635d3e1c80626237fdae47a5545b6655d8d81c ] The first host interface has by default no interest in receiving packets MAC DA de:ad:be:ef:13:37, so it might drop them before they hit the tc filter and this might confuse the selftest. Enable promiscuous mode such that the filter properly counts received packets. Fixes: d4deb01467ec ("selftests: forwarding: Add a test for FDB learning") Signed-off-by: Vladimir Oltean Reviewed-by: Ido Schimmel Tested-by: Ido Schimmel Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- tools/testing/selftests/net/forwarding/lib.sh | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh index 094a1104e49d..fbda7603f3b3 100644 --- a/tools/testing/selftests/net/forwarding/lib.sh +++ b/tools/testing/selftests/net/forwarding/lib.sh @@ -1063,6 +1063,7 @@ learning_test() # FDB entry was installed. bridge link set dev $br_port1 flood off + ip link set $host1_if promisc on tc qdisc add dev $host1_if ingress tc filter add dev $host1_if ingress protocol ip pref 1 handle 101 \ flower dst_mac $mac action drop @@ -1112,6 +1113,7 @@ learning_test() tc filter del dev $host1_if ingress protocol ip pref 1 handle 101 flower tc qdisc del dev $host1_if ingress + ip link set $host1_if promisc off bridge link set dev $br_port1 flood on From 904f622ec78e0c59d36554f0524956fb43b18dc7 Mon Sep 17 00:00:00 2001 From: Vladimir Oltean Date: Sun, 3 Jul 2022 10:36:26 +0300 Subject: [PATCH 1295/1842] selftests: forwarding: fix error message in learning_test [ Upstream commit 83844aacab2015da1dba1df0cc61fc4b4c4e8076 ] When packets are not received, they aren't received on $host1_if, so the message talking about the second host not receiving them is incorrect. Fix it. Fixes: d4deb01467ec ("selftests: forwarding: Add a test for FDB learning") Signed-off-by: Vladimir Oltean Reviewed-by: Ido Schimmel Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- tools/testing/selftests/net/forwarding/lib.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh index fbda7603f3b3..54020d05a62b 100644 --- a/tools/testing/selftests/net/forwarding/lib.sh +++ b/tools/testing/selftests/net/forwarding/lib.sh @@ -1074,7 +1074,7 @@ learning_test() tc -j -s filter show dev $host1_if ingress \ | jq -e ".[] | select(.options.handle == 101) \ | select(.options.actions[0].stats.packets == 1)" &> /dev/null - check_fail $? "Packet reached second host when should not" + check_fail $? 
"Packet reached first host when should not" $MZ $host1_if -c 1 -p 64 -a $mac -t ip -q sleep 1 From 26938bd28c0c1d177e85db83659766968a8a59d9 Mon Sep 17 00:00:00 2001 From: Heiner Kallweit Date: Tue, 5 Jul 2022 21:15:22 +0200 Subject: [PATCH 1296/1842] r8169: fix accessing unset transport header [ Upstream commit faa4e04e5e140a6d02260289a8fba8fd8d7a3003 ] 66e4c8d95008 ("net: warn if transport header was not set") added a check that triggers a warning in r8169, see [0]. The commit referenced in the Fixes tag refers to the change from which the patch applies cleanly, there's nothing wrong with this commit. It seems the actual issue (not bug, because the warning is harmless here) was introduced with bdfa4ed68187 ("r8169: use Giant Send"). [0] https://bugzilla.kernel.org/show_bug.cgi?id=216157 Fixes: 8d520b4de3ed ("r8169: work around RTL8125 UDP hw bug") Reported-by: Erhard F. Tested-by: Erhard F. Signed-off-by: Heiner Kallweit Link: https://lore.kernel.org/r/1b2c2b29-3dc0-f7b6-5694-97ec526d51a0@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/realtek/r8169_main.c | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c index 5eac3f494d9e..c025dadcce28 100644 --- a/drivers/net/ethernet/realtek/r8169_main.c +++ b/drivers/net/ethernet/realtek/r8169_main.c @@ -4183,7 +4183,6 @@ static void rtl8169_tso_csum_v1(struct sk_buff *skb, u32 *opts) static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp, struct sk_buff *skb, u32 *opts) { - u32 transport_offset = (u32)skb_transport_offset(skb); struct skb_shared_info *shinfo = skb_shinfo(skb); u32 mss = shinfo->gso_size; @@ -4200,7 +4199,7 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp, WARN_ON_ONCE(1); } - opts[0] |= transport_offset << GTTCPHO_SHIFT; + opts[0] |= skb_transport_offset(skb) << GTTCPHO_SHIFT; opts[1] |= mss << TD1_MSS_SHIFT; } else if (skb->ip_summed == CHECKSUM_PARTIAL) { u8 ip_protocol; @@ -4228,7 +4227,7 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp, else WARN_ON_ONCE(1); - opts[1] |= transport_offset << TCPHO_SHIFT; + opts[1] |= skb_transport_offset(skb) << TCPHO_SHIFT; } else { unsigned int padto = rtl_quirk_packet_padto(tp, skb); @@ -4401,14 +4400,13 @@ static netdev_features_t rtl8169_features_check(struct sk_buff *skb, struct net_device *dev, netdev_features_t features) { - int transport_offset = skb_transport_offset(skb); struct rtl8169_private *tp = netdev_priv(dev); if (skb_is_gso(skb)) { if (tp->mac_version == RTL_GIGA_MAC_VER_34) features = rtl8168evl_fix_tso(skb, features); - if (transport_offset > GTTCPHO_MAX && + if (skb_transport_offset(skb) > GTTCPHO_MAX && rtl_chip_supports_csum_v2(tp)) features &= ~NETIF_F_ALL_TSO; } else if (skb->ip_summed == CHECKSUM_PARTIAL) { @@ -4419,7 +4417,7 @@ static netdev_features_t rtl8169_features_check(struct sk_buff *skb, if (rtl_quirk_packet_padto(tp, skb)) features &= ~NETIF_F_CSUM_MASK; - if (transport_offset > TCPHO_MAX && + if (skb_transport_offset(skb) > TCPHO_MAX && rtl_chip_supports_csum_v2(tp)) features &= ~NETIF_F_CSUM_MASK; } From 9b329edd77cae63cdc59aaa87b0cb81c8a471291 Mon Sep 17 00:00:00 2001 From: Satish Nagireddy Date: Tue, 28 Jun 2022 12:12:16 -0700 Subject: [PATCH 1297/1842] i2c: cadence: Unregister the clk notifier in error path [ Upstream commit 3501f0c663063513ad604fb1b3f06af637d3396d ] This patch ensures that the clock notifier is unregistered when driver probe is returning error. 
Fixes: df8eb5691c48 ("i2c: Add driver for Cadence I2C controller") Signed-off-by: Satish Nagireddy Tested-by: Lars-Peter Clausen Reviewed-by: Michal Simek Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-cadence.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c index 50e3ddba52ba..01564bd96c62 100644 --- a/drivers/i2c/busses/i2c-cadence.c +++ b/drivers/i2c/busses/i2c-cadence.c @@ -1289,6 +1289,7 @@ static int cdns_i2c_probe(struct platform_device *pdev) return 0; err_clk_dis: + clk_notifier_unregister(id->clk, &id->clk_rate_change_nb); clk_disable_unprepare(id->clk); pm_runtime_disable(&pdev->dev); pm_runtime_set_suspended(&pdev->dev); From 86884017bb634070ffe8be5c35bb09d1da27d98c Mon Sep 17 00:00:00 2001 From: Peter Robinson Date: Mon, 6 Jun 2022 17:10:34 +0100 Subject: [PATCH 1298/1842] dmaengine: imx-sdma: Allow imx8m for imx7 FW revs commit a7cd3cf0b2e5aaacfe5e02c472bd28e98e640be7 upstream. The revision of the imx-sdma IP that is in the i.MX8M series is the same is that as that in the i.MX7 series but the imx7d MODULE_FIRMWARE directive is wrapped in a condiditional which means it's not defined when built for aarch64 SOC_IMX8M platforms and hence you get the following errors when the driver loads on imx8m devices: imx-sdma 302c0000.dma-controller: Direct firmware load for imx/sdma/sdma-imx7d.bin failed with error -2 imx-sdma 302c0000.dma-controller: external firmware not found, using ROM firmware Add the SOC_IMX8M into the check so the firmware can load on i.MX8. Fixes: 1474d48bd639 ("arm64: dts: imx8mq: Add SDMA nodes") Fixes: 941acd566b18 ("dmaengine: imx-sdma: Only check ratio on parts that support 1:1") Signed-off-by: Peter Robinson Cc: stable@vger.kernel.org # v5.2+ Reviewed-by: Fabio Estevam Link: https://lore.kernel.org/r/20220606161034.3544803-1-pbrobinson@gmail.com Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- drivers/dma/imx-sdma.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c index 792c91cd1608..2283dcd8bf91 100644 --- a/drivers/dma/imx-sdma.c +++ b/drivers/dma/imx-sdma.c @@ -2212,7 +2212,7 @@ MODULE_DESCRIPTION("i.MX SDMA driver"); #if IS_ENABLED(CONFIG_SOC_IMX6Q) MODULE_FIRMWARE("imx/sdma/sdma-imx6q.bin"); #endif -#if IS_ENABLED(CONFIG_SOC_IMX7D) +#if IS_ENABLED(CONFIG_SOC_IMX7D) || IS_ENABLED(CONFIG_SOC_IMX8M) MODULE_FIRMWARE("imx/sdma/sdma-imx7d.bin"); #endif MODULE_LICENSE("GPL"); From af7d9d4abe848d5cb0143d4535b93aa1916086b0 Mon Sep 17 00:00:00 2001 From: Shuah Khan Date: Thu, 30 Jun 2022 20:32:55 -0600 Subject: [PATCH 1299/1842] misc: rtsx_usb: fix use of dma mapped buffer for usb bulk transfer commit eb7f8e28420372787933eec079735c35034bda7d upstream. rtsx_usb driver allocates coherent dma buffer for urb transfers. This buffer is passed to usb_bulk_msg() and usb core tries to map already mapped buffer running into a dma mapping error. xhci_hcd 0000:01:00.0: rejecting DMA map of vmalloc memory WARNING: CPU: 1 PID: 279 at include/linux/dma-mapping.h:326 usb_ hcd_map_urb_for_dma+0x7d6/0x820 ... xhci_map_urb_for_dma+0x291/0x4e0 usb_hcd_submit_urb+0x199/0x12b0 ... usb_submit_urb+0x3b8/0x9e0 usb_start_wait_urb+0xe3/0x2d0 usb_bulk_msg+0x115/0x240 rtsx_usb_transfer_data+0x185/0x1a8 [rtsx_usb] rtsx_usb_send_cmd+0xbb/0x123 [rtsx_usb] rtsx_usb_write_register+0x12c/0x143 [rtsx_usb] rtsx_usb_probe+0x226/0x4b2 [rtsx_usb] Fix it to use kmalloc() to get DMA-able memory region instead. 
Signed-off-by: Shuah Khan Cc: stable Link: https://lore.kernel.org/r/667d627d502e1ba9ff4f9b94966df3299d2d3c0d.1656642167.git.skhan@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman --- drivers/misc/cardreader/rtsx_usb.c | 13 +++++++------ include/linux/rtsx_usb.h | 1 - 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c index 1ef9b61077c4..e147cc8ab0fd 100644 --- a/drivers/misc/cardreader/rtsx_usb.c +++ b/drivers/misc/cardreader/rtsx_usb.c @@ -631,8 +631,7 @@ static int rtsx_usb_probe(struct usb_interface *intf, ucr->pusb_dev = usb_dev; - ucr->iobuf = usb_alloc_coherent(ucr->pusb_dev, IOBUF_SIZE, - GFP_KERNEL, &ucr->iobuf_dma); + ucr->iobuf = kmalloc(IOBUF_SIZE, GFP_KERNEL); if (!ucr->iobuf) return -ENOMEM; @@ -668,8 +667,9 @@ static int rtsx_usb_probe(struct usb_interface *intf, out_init_fail: usb_set_intfdata(ucr->pusb_intf, NULL); - usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf, - ucr->iobuf_dma); + kfree(ucr->iobuf); + ucr->iobuf = NULL; + ucr->cmd_buf = ucr->rsp_buf = NULL; return ret; } @@ -682,8 +682,9 @@ static void rtsx_usb_disconnect(struct usb_interface *intf) mfd_remove_devices(&intf->dev); usb_set_intfdata(ucr->pusb_intf, NULL); - usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf, - ucr->iobuf_dma); + kfree(ucr->iobuf); + ucr->iobuf = NULL; + ucr->cmd_buf = ucr->rsp_buf = NULL; } #ifdef CONFIG_PM diff --git a/include/linux/rtsx_usb.h b/include/linux/rtsx_usb.h index 159729cffd8e..a07f7341ebc2 100644 --- a/include/linux/rtsx_usb.h +++ b/include/linux/rtsx_usb.h @@ -55,7 +55,6 @@ struct rtsx_ucr { struct usb_interface *pusb_intf; struct usb_sg_request current_sg; unsigned char *iobuf; - dma_addr_t iobuf_dma; struct timer_list sg_timer; struct mutex dev_mutex; From ff79e0ca2bea386b3ca2df7fcb8cde2378e03786 Mon Sep 17 00:00:00 2001 From: Shuah Khan Date: Thu, 30 Jun 2022 20:32:56 -0600 Subject: [PATCH 1300/1842] misc: rtsx_usb: use separate command and response buffers commit 3776c78559853fd151be7c41e369fd076fb679d5 upstream. rtsx_usb uses same buffer for command and response. There could be a potential conflict using the same buffer for both especially if retries and timeouts are involved. Use separate command and response buffers to avoid conflicts. 
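Why a shared buffer is fragile once retries and timeouts enter the picture, as a standalone illustration with invented contents: once a late response lands in the shared buffer, the original command bytes are gone and a retry would retransmit garbage, whereas split buffers keep the command intact.

/* Standalone sketch, invented message contents; not the rtsx_usb code. */
#include <stdio.h>
#include <string.h>

int main(void)
{
        char shared[16];

        strcpy(shared, "CMD:READ_REG");   /* command goes out...            */
        strcpy(shared, "RSP:TIMEOUT!");   /* ...late response overwrites it */
        printf("retry would send: \"%s\"\n", shared);

        char cmd[16] = "CMD:READ_REG";
        char rsp[16] = "RSP:TIMEOUT!";
        printf("with split buffers, retry sends: \"%s\" (rsp: \"%s\")\n",
               cmd, rsp);
        return 0;
}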
Signed-off-by: Shuah Khan Cc: stable Link: https://lore.kernel.org/r/07e3721804ff07aaab9ef5b39a5691d0718b9ade.1656642167.git.skhan@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman --- drivers/misc/cardreader/rtsx_usb.c | 26 +++++++++++++++++--------- include/linux/rtsx_usb.h | 1 - 2 files changed, 17 insertions(+), 10 deletions(-) diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c index e147cc8ab0fd..4e2108052509 100644 --- a/drivers/misc/cardreader/rtsx_usb.c +++ b/drivers/misc/cardreader/rtsx_usb.c @@ -631,15 +631,18 @@ static int rtsx_usb_probe(struct usb_interface *intf, ucr->pusb_dev = usb_dev; - ucr->iobuf = kmalloc(IOBUF_SIZE, GFP_KERNEL); - if (!ucr->iobuf) + ucr->cmd_buf = kmalloc(IOBUF_SIZE, GFP_KERNEL); + if (!ucr->cmd_buf) return -ENOMEM; + ucr->rsp_buf = kmalloc(IOBUF_SIZE, GFP_KERNEL); + if (!ucr->rsp_buf) + goto out_free_cmd_buf; + usb_set_intfdata(intf, ucr); ucr->vendor_id = id->idVendor; ucr->product_id = id->idProduct; - ucr->cmd_buf = ucr->rsp_buf = ucr->iobuf; mutex_init(&ucr->dev_mutex); @@ -667,9 +670,11 @@ static int rtsx_usb_probe(struct usb_interface *intf, out_init_fail: usb_set_intfdata(ucr->pusb_intf, NULL); - kfree(ucr->iobuf); - ucr->iobuf = NULL; - ucr->cmd_buf = ucr->rsp_buf = NULL; + kfree(ucr->rsp_buf); + ucr->rsp_buf = NULL; +out_free_cmd_buf: + kfree(ucr->cmd_buf); + ucr->cmd_buf = NULL; return ret; } @@ -682,9 +687,12 @@ static void rtsx_usb_disconnect(struct usb_interface *intf) mfd_remove_devices(&intf->dev); usb_set_intfdata(ucr->pusb_intf, NULL); - kfree(ucr->iobuf); - ucr->iobuf = NULL; - ucr->cmd_buf = ucr->rsp_buf = NULL; + + kfree(ucr->cmd_buf); + ucr->cmd_buf = NULL; + + kfree(ucr->rsp_buf); + ucr->rsp_buf = NULL; } #ifdef CONFIG_PM diff --git a/include/linux/rtsx_usb.h b/include/linux/rtsx_usb.h index a07f7341ebc2..3247ed8e9ff0 100644 --- a/include/linux/rtsx_usb.h +++ b/include/linux/rtsx_usb.h @@ -54,7 +54,6 @@ struct rtsx_ucr { struct usb_device *pusb_dev; struct usb_interface *pusb_intf; struct usb_sg_request current_sg; - unsigned char *iobuf; struct timer_list sg_timer; struct mutex dev_mutex; From ca4a91958466243d50a28d7029394e5e71ef87a8 Mon Sep 17 00:00:00 2001 From: Shuah Khan Date: Fri, 1 Jul 2022 10:53:52 -0600 Subject: [PATCH 1301/1842] misc: rtsx_usb: set return value in rsp_buf alloc err path commit 2cd37c2e72449a7add6da1183d20a6247d6db111 upstream. Set return value in rsp_buf alloc error path before going to error handling. 
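The shape of the problem and of the fix, as a standalone sketch with hypothetical names: a newly added error branch jumps to the common cleanup label, so the return variable has to be assigned before the jump or the function returns an uninitialized value.

/* Standalone sketch; names are invented, only the control-flow shape
 * matches the driver change.
 */
#include <stdio.h>
#include <stdlib.h>

static int probe_sketch(int fail_second_alloc)
{
        int ret;                       /* note: not initialised up front */
        void *a, *b;

        a = malloc(16);
        if (!a)
                return -1;

        b = fail_second_alloc ? NULL : malloc(16);
        if (!b) {
                ret = -1;              /* the fix: set ret before the jump */
                goto out_free_a;
        }

        free(b);
        ret = 0;
out_free_a:
        free(a);
        return ret;
}

int main(void)
{
        printf("probe_sketch -> %d\n", probe_sketch(1));
        return 0;
}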
drivers/misc/cardreader/rtsx_usb.c:639:6: warning: variable 'ret' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] if (!ucr->rsp_buf) ^~~~~~~~~~~~~ drivers/misc/cardreader/rtsx_usb.c:678:9: note: uninitialized use occurs here return ret; ^~~ drivers/misc/cardreader/rtsx_usb.c:639:2: note: remove the 'if' if its condition is always false if (!ucr->rsp_buf) ^~~~~~~~~~~~~~~~~~ drivers/misc/cardreader/rtsx_usb.c:622:9: note: initialize the variable 'ret' to silence this warning int ret; ^ = 0 Fixes: 3776c7855985 ("misc: rtsx_usb: use separate command and response buffers") Reported-by: kernel test robot Cc: stable Signed-off-by: Shuah Khan Link: https://lore.kernel.org/r/20220701165352.15687-1-skhan@linuxfoundation.org Signed-off-by: Greg Kroah-Hartman --- drivers/misc/cardreader/rtsx_usb.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c index 4e2108052509..f150d8769f19 100644 --- a/drivers/misc/cardreader/rtsx_usb.c +++ b/drivers/misc/cardreader/rtsx_usb.c @@ -636,8 +636,10 @@ static int rtsx_usb_probe(struct usb_interface *intf, return -ENOMEM; ucr->rsp_buf = kmalloc(IOBUF_SIZE, GFP_KERNEL); - if (!ucr->rsp_buf) + if (!ucr->rsp_buf) { + ret = -ENOMEM; goto out_free_cmd_buf; + } usb_set_intfdata(intf, ucr); From 37995f034ff20933f1da12203753a5e6957813d7 Mon Sep 17 00:00:00 2001 From: Samuel Holland Date: Fri, 1 Jul 2022 22:19:02 -0500 Subject: [PATCH 1302/1842] dt-bindings: dma: allwinner,sun50i-a64-dma: Fix min/max typo commit 607a48c78e6b427b0b684d24e61c19e846ad65d6 upstream. The conditional block for variants with a second clock should have set minItems, not maxItems, which was already 2. Since clock-names requires two items, this typo should not have caused any problems. Fixes: edd14218bd66 ("dt-bindings: dmaengine: Convert Allwinner A31 and A64 DMA to a schema") Signed-off-by: Samuel Holland Reviewed-by: Rob Herring Link: https://lore.kernel.org/r/20220702031903.21703-1-samuel@sholland.org Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- .../devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml b/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml index 372679dbd216..7e250ce136ee 100644 --- a/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml +++ b/Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml @@ -61,7 +61,7 @@ if: then: properties: clocks: - maxItems: 2 + minItems: 2 required: - clock-names From e23cfb3fdcbbc5eb11eeebb97fa2607cd740bd97 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Sun, 10 Jul 2022 13:55:49 -0700 Subject: [PATCH 1303/1842] ida: don't use BUG_ON() for debugging commit fc82bbf4dede758007763867d0282353c06d1121 upstream. This is another old BUG_ON() that just shouldn't exist (see also commit a382f8fee42c: "signal handling: don't use BUG_ON() for debugging"). In fact, as Matthew Wilcox points out, this condition shouldn't really even result in a warning, since a negative id allocation result is just a normal allocation failure: "I wonder if we should even warn here -- sure, the caller is trying to free something that wasn't allocated, but we don't warn for kfree(NULL)" and goes on to point out how that current error check is only causing people to unnecessarily do their own index range checking before freeing it. 
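The free(alloc()) convention referred to above, modelled in a few lines of standalone C (not the ida implementation): the allocator's failure value is something the matching free routine simply ignores, the same way kfree(NULL) is a no-op.

/* Minimal standalone model of an id allocator whose free tolerates
 * the failure value.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_IDS 4
static bool used[MAX_IDS];

static int id_alloc(void)
{
        for (int i = 0; i < MAX_IDS; i++)
                if (!used[i]) {
                        used[i] = true;
                        return i;
                }
        return -1;                      /* normal allocation failure */
}

static void id_free(int id)
{
        if (id < 0)                     /* tolerate the failure value */
                return;
        used[id] = false;
}

int main(void)
{
        int ids[MAX_IDS + 1];

        for (int i = 0; i < MAX_IDS + 1; i++)
                ids[i] = id_alloc();    /* the last call fails with -1 */
        for (int i = 0; i < MAX_IDS + 1; i++)
                id_free(ids[i]);        /* freeing -1 is a harmless no-op */
        puts("no crash, no warning");
        return 0;
}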
This was noted by Itay Iellin, because the bluetooth HCI socket cookie code does *not* do that range checking, and ends up just freeing the error case too, triggering the BUG_ON(). The HCI code requires CAP_NET_RAW, and seems to just result in an ugly splat, but there really is no reason to BUG_ON() here, and we have generally striven for allocation models where it's always ok to just do free(alloc()); even if the allocation were to fail for some random reason (usually obviously that "random" reason being some resource limit). Fixes: 88eca0207cf1 ("ida: simplified functions for id allocation") Reported-by: Itay Iellin Suggested-by: Matthew Wilcox Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- lib/idr.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/lib/idr.c b/lib/idr.c index f4ab4f4aa3c7..7ecdfdb5309e 100644 --- a/lib/idr.c +++ b/lib/idr.c @@ -491,7 +491,8 @@ void ida_free(struct ida *ida, unsigned int id) struct ida_bitmap *bitmap; unsigned long flags; - BUG_ON((int)id < 0); + if ((int)id < 0) + return; xas_lock_irqsave(&xas, flags); bitmap = xas_load(&xas); From 7b721f5aec92593404e1139fb2f50c266eb24e48 Mon Sep 17 00:00:00 2001 From: Dmitry Osipenko Date: Fri, 20 May 2022 21:14:32 +0300 Subject: [PATCH 1304/1842] dmaengine: pl330: Fix lockdep warning about non-static key commit b64b3b2f1d81f83519582e1feee87d77f51f5f17 upstream. The DEFINE_SPINLOCK() macro shouldn't be used for dynamically allocated spinlocks. The lockdep warns about this and disables locking validator. Fix the warning by making lock static. INFO: trying to register non-static key. The code is fine but needs lockdep annotation, or maybe you didn't initialize this object before use? turning off the locking correctness validator. Hardware name: Radxa ROCK Pi 4C (DT) Call trace: dump_backtrace.part.0+0xcc/0xe0 show_stack+0x18/0x6c dump_stack_lvl+0x8c/0xb8 dump_stack+0x18/0x34 register_lock_class+0x4a8/0x4cc __lock_acquire+0x78/0x20cc lock_acquire.part.0+0xe0/0x230 lock_acquire+0x68/0x84 _raw_spin_lock_irqsave+0x84/0xc4 add_desc+0x44/0xc0 pl330_get_desc+0x15c/0x1d0 pl330_prep_dma_cyclic+0x100/0x270 snd_dmaengine_pcm_trigger+0xec/0x1c0 dmaengine_pcm_trigger+0x18/0x24 ... Fixes: e588710311ee ("dmaengine: pl330: fix descriptor allocation fail") Signed-off-by: Dmitry Osipenko Link: https://lore.kernel.org/r/20220520181432.149904-1-dmitry.osipenko@collabora.com Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- drivers/dma/pl330.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c index 6dca548f4dab..5bbae99f2d34 100644 --- a/drivers/dma/pl330.c +++ b/drivers/dma/pl330.c @@ -2591,7 +2591,7 @@ static struct dma_pl330_desc *pl330_get_desc(struct dma_pl330_chan *pch) /* If the DMAC pool is empty, alloc new */ if (!desc) { - DEFINE_SPINLOCK(lock); + static DEFINE_SPINLOCK(lock); LIST_HEAD(pool); if (!add_desc(&pool, &lock, GFP_ATOMIC, 1)) From 1be247db203ed5cdd1bbd2c2588cef2185600531 Mon Sep 17 00:00:00 2001 From: Michael Walle Date: Thu, 26 May 2022 15:51:11 +0200 Subject: [PATCH 1305/1842] dmaengine: at_xdma: handle errors of at_xdmac_alloc_desc() correctly commit 3770d92bd5237d686e49da7b2fb86f53ee6ed259 upstream. It seems that it is valid to have less than the requested number of descriptors. But what is not valid and leads to subsequent errors is to have zero descriptors. In that case, abort the probing. 
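The allocation policy described above, in isolation (standalone sketch, invented names): take as many descriptors as can be allocated, warn when falling short of the request, and treat only a count of zero as a fatal probe error.

/* Standalone sketch of "partial allocation is fine, zero is not". */
#include <stdio.h>
#include <stdlib.h>

static void *pool[16];

static void *alloc_desc(int i)
{
        return i < 3 ? malloc(64) : NULL;  /* pretend memory runs out at 3 */
}

static int alloc_pool(int requested)
{
        int i;

        for (i = 0; i < requested; i++) {
                void *d = alloc_desc(i);

                if (!d) {
                        if (i == 0) {
                                fprintf(stderr, "can't allocate any descriptors\n");
                                return -1;  /* abort probe */
                        }
                        fprintf(stderr, "only %d descriptors allocated\n", i);
                        break;              /* fewer than requested is usable */
                }
                pool[i] = d;
        }
        return 0;
}

int main(void)
{
        return alloc_pool(8) ? 1 : 0;
}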
Fixes: e1f7c9eee707 ("dmaengine: at_xdmac: creation of the atmel eXtended DMA Controller driver") Signed-off-by: Michael Walle Link: https://lore.kernel.org/r/20220526135111.1470926-1-michael@walle.cc Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- drivers/dma/at_xdmac.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c index 47552db6b8dc..b5d691ae45dc 100644 --- a/drivers/dma/at_xdmac.c +++ b/drivers/dma/at_xdmac.c @@ -1838,6 +1838,11 @@ static int at_xdmac_alloc_chan_resources(struct dma_chan *chan) for (i = 0; i < init_nr_desc_per_channel; i++) { desc = at_xdmac_alloc_desc(chan, GFP_KERNEL); if (!desc) { + if (i == 0) { + dev_warn(chan2dev(chan), + "can't allocate any descriptors\n"); + return -EIO; + } dev_warn(chan2dev(chan), "only %d descriptors have been allocated\n", i); break; From 37147e22cd8dfc0412495cb361708836157a4486 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Sun, 5 Jun 2022 08:27:23 +0400 Subject: [PATCH 1306/1842] dmaengine: ti: Fix refcount leak in ti_dra7_xbar_route_allocate commit c132fe78ad7b4ce8b5d49a501a15c29d08eeb23a upstream. of_parse_phandle() returns a node pointer with refcount incremented, we should use of_node_put() on it when not needed anymore. Add missing of_node_put() in to fix this. Fixes: ec9bfa1e1a79 ("dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr") Signed-off-by: Miaoqian Lin Link: https://lore.kernel.org/r/20220605042723.17668-2-linmq006@gmail.com Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- drivers/dma/ti/dma-crossbar.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/dma/ti/dma-crossbar.c b/drivers/dma/ti/dma-crossbar.c index 4ba8fa5d9c36..d0720190eb3c 100644 --- a/drivers/dma/ti/dma-crossbar.c +++ b/drivers/dma/ti/dma-crossbar.c @@ -268,6 +268,7 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec, mutex_unlock(&xbar->mutex); dev_err(&pdev->dev, "Run out of free DMA requests\n"); kfree(map); + of_node_put(dma_spec->np); return ERR_PTR(-ENOMEM); } set_bit(map->xbar_out, xbar->dma_inuse); From 8365b151fd50160aeb7d56551d958e4852522b7f Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Sun, 5 Jun 2022 08:27:22 +0400 Subject: [PATCH 1307/1842] dmaengine: ti: Add missing put_device in ti_dra7_xbar_route_allocate commit 615a4bfc426e11dba05c2cf343f9ac752fb381d2 upstream. of_find_device_by_node() takes reference, we should use put_device() to release it when not need anymore. 
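The underlying rule is the usual reference-counting contract: every successful get must be matched by exactly one put on every exit path, including the error paths that this patch covers. A tiny self-contained analogue of that contract (plain C with a hypothetical refcounted object, deliberately not the kernel device API):

    #include <stdio.h>

    struct dev { int refs; };

    static struct dev *dev_get(struct dev *d) { d->refs++; return d; }
    static void dev_put(struct dev *d) { d->refs--; }

    /* Every path that leaves this function after dev_get() must call
     * dev_put(), otherwise the reference (and eventually the object)
     * leaks — exactly the bug class fixed in the error paths below. */
    static int use_device(struct dev *d, int arg)
    {
        dev_get(d);

        if (arg < 0) {
            dev_put(d);      /* error path: drop the reference too */
            return -1;
        }

        printf("using device, refs=%d\n", d->refs);
        dev_put(d);
        return 0;
    }

    int main(void)
    {
        struct dev d = { .refs = 1 };

        use_device(&d, 5);
        use_device(&d, -1);
        printf("refs at exit: %d (expected 1)\n", d.refs);
        return 0;
    }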
Fixes: a074ae38f859 ("dmaengine: Add driver for TI DMA crossbar on DRA7x") Signed-off-by: Miaoqian Lin Acked-by: Peter Ujfalusi Link: https://lore.kernel.org/r/20220605042723.17668-1-linmq006@gmail.com Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman --- drivers/dma/ti/dma-crossbar.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/dma/ti/dma-crossbar.c b/drivers/dma/ti/dma-crossbar.c index d0720190eb3c..86ced7f2d771 100644 --- a/drivers/dma/ti/dma-crossbar.c +++ b/drivers/dma/ti/dma-crossbar.c @@ -245,6 +245,7 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec, if (dma_spec->args[0] >= xbar->xbar_requests) { dev_err(&pdev->dev, "Invalid XBAR request number: %d\n", dma_spec->args[0]); + put_device(&pdev->dev); return ERR_PTR(-EINVAL); } @@ -252,12 +253,14 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec, dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0); if (!dma_spec->np) { dev_err(&pdev->dev, "Can't get DMA master\n"); + put_device(&pdev->dev); return ERR_PTR(-EINVAL); } map = kzalloc(sizeof(*map), GFP_KERNEL); if (!map) { of_node_put(dma_spec->np); + put_device(&pdev->dev); return ERR_PTR(-ENOMEM); } @@ -269,6 +272,7 @@ static void *ti_dra7_xbar_route_allocate(struct of_phandle_args *dma_spec, dev_err(&pdev->dev, "Run out of free DMA requests\n"); kfree(map); of_node_put(dma_spec->np); + put_device(&pdev->dev); return ERR_PTR(-ENOMEM); } set_bit(map->xbar_out, xbar->dma_inuse); From 26ae9c361414418ed02d6e97b3d0c8eaa93be355 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Tue, 12 Jul 2022 16:32:23 +0200 Subject: [PATCH 1308/1842] Linux 5.10.130 Link: https://lore.kernel.org/r/20220711090541.764895984@linuxfoundation.org Tested-by: Pavel Machek (CIP) Tested-by: Florian Fainelli Tested-by: Guenter Roeck Tested-by: Shuah Khan Tested-by: Linux Kernel Functional Testing Tested-by: Salvatore Bonaccorso Tested-by: Jon Hunter Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 7d52cee37488..b0a35378803d 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 129 +SUBLEVEL = 130 EXTRAVERSION = NAME = Dare mighty things From 05982f0cbb4c85d0b52acefec769ff11813ac2a7 Mon Sep 17 00:00:00 2001 From: "Justin M. Forbes" Date: Thu, 2 Jun 2022 22:22:28 +0200 Subject: [PATCH 1309/1842] UPSTREAM: lib/crypto: add prompts back to crypto libraries commit e56e18985596617ae426ed5997fb2e737cffb58b upstream. Commit 6048fdcc5f269 ("lib/crypto: blake2s: include as built-in") took away a number of prompt texts from other crypto libraries. This makes values flip from built-in to module when oldconfig runs, and causes problems when these crypto libs need to be built in for thingslike BIG_KEYS. Fixes: 6048fdcc5f269 ("lib/crypto: blake2s: include as built-in") Cc: Herbert Xu Cc: linux-crypto@vger.kernel.org Signed-off-by: Justin M. Forbes [Jason: - moved menu into submenu of lib/ instead of root menu - fixed chacha sub-dependencies for CONFIG_CRYPTO] Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman (cherry picked from commit 630192aa45233acfe3d17952862ca215f6d31f09) Signed-off-by: Greg Kroah-Hartman Change-Id: I9f92399d94f5b00408203ce4a9237925e7435ec1 --- crypto/Kconfig | 2 -- lib/Kconfig | 2 ++ lib/crypto/Kconfig | 17 ++++++++++++----- 3 files changed, 14 insertions(+), 7 deletions(-) diff --git a/crypto/Kconfig b/crypto/Kconfig index 9ca6d0fb2090..185e9f129ef6 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -1971,5 +1971,3 @@ source "crypto/asymmetric_keys/Kconfig" source "certs/Kconfig" endif # if CRYPTO - -source "lib/crypto/Kconfig" diff --git a/lib/Kconfig b/lib/Kconfig index 0abb7975586c..c5d48d64c923 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -101,6 +101,8 @@ config INDIRECT_PIO When in doubt, say N. +source "lib/crypto/Kconfig" + config CRC_CCITT tristate "CRC-CCITT functions" help diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index af3da5a8bde8..9856e291f414 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -1,5 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 +menu "Crypto library routines" + config CRYPTO_LIB_AES tristate @@ -31,7 +33,7 @@ config CRYPTO_ARCH_HAVE_LIB_CHACHA config CRYPTO_LIB_CHACHA_GENERIC tristate - select CRYPTO_ALGAPI + select XOR_BLOCKS help This symbol can be depended upon by arch implementations of the ChaCha library interface that require the generic code as a @@ -40,7 +42,8 @@ config CRYPTO_LIB_CHACHA_GENERIC of CRYPTO_LIB_CHACHA. config CRYPTO_LIB_CHACHA - tristate + tristate "ChaCha library interface" + depends on CRYPTO depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA select CRYPTO_LIB_CHACHA_GENERIC if CRYPTO_ARCH_HAVE_LIB_CHACHA=n help @@ -65,7 +68,7 @@ config CRYPTO_LIB_CURVE25519_GENERIC of CRYPTO_LIB_CURVE25519. config CRYPTO_LIB_CURVE25519 - tristate + tristate "Curve25519 scalar multiplication library" depends on CRYPTO_ARCH_HAVE_LIB_CURVE25519 || !CRYPTO_ARCH_HAVE_LIB_CURVE25519 select CRYPTO_LIB_CURVE25519_GENERIC if CRYPTO_ARCH_HAVE_LIB_CURVE25519=n help @@ -100,7 +103,7 @@ config CRYPTO_LIB_POLY1305_GENERIC of CRYPTO_LIB_POLY1305. config CRYPTO_LIB_POLY1305 - tristate + tristate "Poly1305 library interface" depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305 select CRYPTO_LIB_POLY1305_GENERIC if CRYPTO_ARCH_HAVE_LIB_POLY1305=n help @@ -109,11 +112,15 @@ config CRYPTO_LIB_POLY1305 is available and enabled. config CRYPTO_LIB_CHACHA20POLY1305 - tristate + tristate "ChaCha20-Poly1305 AEAD support (8-byte nonce library version)" depends on CRYPTO_ARCH_HAVE_LIB_CHACHA || !CRYPTO_ARCH_HAVE_LIB_CHACHA depends on CRYPTO_ARCH_HAVE_LIB_POLY1305 || !CRYPTO_ARCH_HAVE_LIB_POLY1305 + depends on CRYPTO select CRYPTO_LIB_CHACHA select CRYPTO_LIB_POLY1305 + select CRYPTO_ALGAPI config CRYPTO_LIB_SHA256 tristate + +endmenu From 6cc2db3cdea6330d1af6742a0e4440fbe38689f3 Mon Sep 17 00:00:00 2001 From: Michal Kubecek Date: Mon, 23 May 2022 22:05:24 +0200 Subject: [PATCH 1310/1842] UPSTREAM: Revert "net: af_key: add check for pfkey_broadcast in function pfkey_process" This reverts commit 4dc2a5a8f6754492180741facf2a8787f2c415d7. A non-zero return value from pfkey_broadcast() does not necessarily mean an error occurred as this function returns -ESRCH when no registered listener received the message. In particular, a call with BROADCAST_PROMISC_ONLY flag and null one_sk argument can never return zero so that this commit in fact prevents processing any PF_KEY message. 
One visible effect is that racoon daemon fails to find encryption algorithms like aes and refuses to start. Excluding -ESRCH return value would fix this but it's not obvious that we really want to bail out here and most other callers of pfkey_broadcast() also ignore the return value. Also, as pointed out by Steffen Klassert, PF_KEY is kind of deprecated and newer userspace code should use netlink instead so that we should only disturb the code for really important fixes. v2: add a comment explaining why is the return value ignored Signed-off-by: Michal Kubecek Signed-off-by: Steffen Klassert (cherry picked from commit 9c90c9b3e50e16d03c7f87d63e9db373974781e0) Reported-by: Carlos Llamas Signed-off-by: Greg Kroah-Hartman Change-Id: Ia2df29e35828d87edfc45fa3fe359e0a5a0fcb71 --- net/key/af_key.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/net/key/af_key.c b/net/key/af_key.c index 61505b0df57d..9083dc5c9073 100644 --- a/net/key/af_key.c +++ b/net/key/af_key.c @@ -2830,10 +2830,12 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb void *ext_hdrs[SADB_EXT_MAX]; int err; - err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, - BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); - if (err) - return err; + /* Non-zero return value of pfkey_broadcast() does not always signal + * an error and even on an actual error we may still want to process + * the message so rather ignore the return value. + */ + pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, + BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); memset(ext_hdrs, 0, sizeof(ext_hdrs)); err = parse_exthdrs(skb, hdr, ext_hdrs); From fee299e72ee2867ed391569de829a10eab481387 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 24 Jun 2022 11:42:55 +0200 Subject: [PATCH 1311/1842] ANDROID: random: add back removed callback functions Commit 6da877d2d46b ("random: replace custom notifier chain with standard one") in 5.15.44 removed the add_random_ready_callback() and del_random_ready_callback() functions, which are part of the current Android ABI in this branch. To handle this properly, add them back in order for out-of-tree code to continue working. Bug: 161946584 Fixes: 6da877d2d46b ("random: replace custom notifier chain with standard one") Signed-off-by: Greg Kroah-Hartman Change-Id: I422f3af1a28c5cf2e7f7806d6fb37d66901e3a33 --- drivers/char/random.c | 76 ++++++++++++++++++++++++++++++++++++++++++ include/linux/random.h | 13 ++++++++ 2 files changed, 89 insertions(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index 00b50ccc9fae..e77c9233c50b 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -180,6 +180,7 @@ int __cold unregister_random_ready_notifier(struct notifier_block *nb) } EXPORT_SYMBOL(unregister_random_ready_notifier); +static void process_oldschool_random_ready_list(void); static void __cold process_random_ready_list(void) { unsigned long flags; @@ -187,6 +188,8 @@ static void __cold process_random_ready_list(void) spin_lock_irqsave(&random_ready_chain_lock, flags); raw_notifier_call_chain(&random_ready_chain, 0, NULL); spin_unlock_irqrestore(&random_ready_chain_lock, flags); + + process_oldschool_random_ready_list(); } #define warn_unseeded_randomness() \ @@ -1533,3 +1536,76 @@ struct ctl_table random_table[] = { { } }; #endif /* CONFIG_SYSCTL */ + +/* + * Android KABI fixups + * + * Add back two functions that were being used by out-of-tree drivers. + * + * Yes, horrible hack, the things we do for FIPS "compliance"... 
+ */ +static DEFINE_SPINLOCK(random_ready_list_lock); +static LIST_HEAD(random_ready_list); + +int add_random_ready_callback(struct random_ready_callback *rdy) +{ + struct module *owner; + unsigned long flags; + int err = -EALREADY; + + if (crng_ready()) + return err; + + owner = rdy->owner; + if (!try_module_get(owner)) + return -ENOENT; + + spin_lock_irqsave(&random_ready_list_lock, flags); + if (crng_ready()) + goto out; + + owner = NULL; + + list_add(&rdy->list, &random_ready_list); + err = 0; + +out: + spin_unlock_irqrestore(&random_ready_list_lock, flags); + + module_put(owner); + + return err; +} +EXPORT_SYMBOL(add_random_ready_callback); + +void del_random_ready_callback(struct random_ready_callback *rdy) +{ + unsigned long flags; + struct module *owner = NULL; + + spin_lock_irqsave(&random_ready_list_lock, flags); + if (!list_empty(&rdy->list)) { + list_del_init(&rdy->list); + owner = rdy->owner; + } + spin_unlock_irqrestore(&random_ready_list_lock, flags); + + module_put(owner); +} +EXPORT_SYMBOL(del_random_ready_callback); + +static void process_oldschool_random_ready_list(void) +{ + unsigned long flags; + struct random_ready_callback *rdy, *tmp; + + spin_lock_irqsave(&random_ready_list_lock, flags); + list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) { + struct module *owner = rdy->owner; + + list_del_init(&rdy->list); + rdy->func(rdy); + module_put(owner); + } + spin_unlock_irqrestore(&random_ready_list_lock, flags); +} diff --git a/include/linux/random.h b/include/linux/random.h index 917470c4490a..5d04058c494c 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -138,4 +138,17 @@ int random_online_cpu(unsigned int cpu); extern const struct file_operations random_fops, urandom_fops; #endif +/* + * Android KABI fixups + * Added back the following structure and calls to preserve the ABI for + * out-of-tree drivers that were using them. + */ +struct random_ready_callback { + struct list_head list; + void (*func)(struct random_ready_callback *rdy); + struct module *owner; +}; +extern int add_random_ready_callback(struct random_ready_callback *rdy); +extern void del_random_ready_callback(struct random_ready_callback *rdy); + #endif /* _LINUX_RANDOM_H */ From 830f0202d737dc6d9301975038ecab58446cca65 Mon Sep 17 00:00:00 2001 From: "Jason A. Donenfeld" Date: Thu, 23 Jun 2022 15:17:14 +0200 Subject: [PATCH 1312/1842] ANDROID: cpu/hotplug: avoid breaking Android ABI by fusing cpuhp steps We can't add more values to the cpuhp_state enum, lest we break some vendor module. So instead break out existing steps into helper functions which call both steps. Fortunately none of these steps return anything other than 0, so we don't need to worry about rollback. Bug: 161946584 Fixes: 5064550d422d ("random: clear fast pool, crng, and batches in cpuhp bring up") Signed-off-by: Jason A. 
Donenfeld Signed-off-by: Greg Kroah-Hartman Change-Id: Icaa61291207d799bab741be7bab2b636cda09422 --- include/linux/cpuhotplug.h | 2 -- kernel/cpu.c | 28 ++++++++++++++++------------ 2 files changed, 16 insertions(+), 14 deletions(-) diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index 4efdc501bcab..843bb056b8c6 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -61,7 +61,6 @@ enum cpuhp_state { CPUHP_LUSTRE_CFS_DEAD, CPUHP_AP_ARM_CACHE_B15_RAC_DEAD, CPUHP_PADATA_DEAD, - CPUHP_RANDOM_PREPARE, CPUHP_WORKQUEUE_PREP, CPUHP_POWER_NUMA_PREPARE, CPUHP_HRTIMERS_PREPARE, @@ -188,7 +187,6 @@ enum cpuhp_state { CPUHP_AP_PERF_POWERPC_HV_GPCI_ONLINE, CPUHP_AP_WATCHDOG_ONLINE, CPUHP_AP_WORKQUEUE_ONLINE, - CPUHP_AP_RANDOM_ONLINE, CPUHP_AP_RCUTREE_ONLINE, CPUHP_AP_BASE_CACHEINFO_ONLINE, CPUHP_AP_ONLINE_DYN, diff --git a/kernel/cpu.c b/kernel/cpu.c index f9e503ecc363..90d09bafecf6 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -1862,6 +1862,20 @@ core_initcall(cpu_hotplug_pm_sync_init); int __boot_cpu_id; +/* Horrific hacks because we can't add more to cpuhp_hp_states. */ +static int random_and_perf_prepare_fusion(unsigned int cpu) +{ + perf_event_init_cpu(cpu); + random_prepare_cpu(cpu); + return 0; +} +static int random_and_workqueue_online_fusion(unsigned int cpu) +{ + workqueue_online_cpu(cpu); + random_online_cpu(cpu); + return 0; +} + #endif /* CONFIG_SMP */ /* Boot processor state steps */ @@ -1880,14 +1894,9 @@ static struct cpuhp_step cpuhp_hp_states[] = { }, [CPUHP_PERF_PREPARE] = { .name = "perf:prepare", - .startup.single = perf_event_init_cpu, + .startup.single = random_and_perf_prepare_fusion, .teardown.single = perf_event_exit_cpu, }, - [CPUHP_RANDOM_PREPARE] = { - .name = "random:prepare", - .startup.single = random_prepare_cpu, - .teardown.single = NULL, - }, [CPUHP_WORKQUEUE_PREP] = { .name = "workqueue:prepare", .startup.single = workqueue_prepare_cpu, @@ -2001,14 +2010,9 @@ static struct cpuhp_step cpuhp_hp_states[] = { }, [CPUHP_AP_WORKQUEUE_ONLINE] = { .name = "workqueue:online", - .startup.single = workqueue_online_cpu, + .startup.single = random_and_workqueue_online_fusion, .teardown.single = workqueue_offline_cpu, }, - [CPUHP_AP_RANDOM_ONLINE] = { - .name = "random:online", - .startup.single = random_online_cpu, - .teardown.single = NULL, - }, [CPUHP_AP_RCUTREE_ONLINE] = { .name = "RCU/tree:online", .startup.single = rcutree_online_cpu, From e61ebc6383dd131b2d54b17cc30ed04baaea61e9 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 23 Jun 2022 15:14:02 +0200 Subject: [PATCH 1313/1842] ANDROID: change function signatures for some random functions. In 5.10.119 the functions add_device_randomness() and get_random_bytes() want from: void get_random_bytes(void *buf, int len); -> void get_random_bytes(void *buf, size_t len); void add_device_randomness(const void *buf, unsigned int len); -> void add_device_randomness(const void *buf, size_t len); int __must_check get_random_bytes_arch(void *buf, int len); -> size_t __must_check get_random_bytes_arch(void *buf, size_t len); As these functions are part of the GKI abi to be preserved, move them back. As these are just lengths of the buffer, all is fine with the size, they should never overflow. 
Bug: 161946584 Fixes: 07280d2c3f33 ("random: make more consistent use of integer types") Signed-off-by: Greg Kroah-Hartman Change-Id: I5430259958b9a904e6e8b950e11526332c919c0c --- drivers/char/random.c | 6 +++--- include/linux/random.h | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/char/random.c b/drivers/char/random.c index e77c9233c50b..550cf8894ee6 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -445,7 +445,7 @@ static void _get_random_bytes(void *buf, size_t len) * wait_for_random_bytes() should be called and return 0 at least once * at any point prior. */ -void get_random_bytes(void *buf, size_t len) +void get_random_bytes(void *buf, int len) { warn_unseeded_randomness(); _get_random_bytes(buf, len); @@ -589,7 +589,7 @@ int __cold random_prepare_cpu(unsigned int cpu) * use. Use get_random_bytes() instead. It returns the number of * bytes filled in. */ -size_t __must_check get_random_bytes_arch(void *buf, size_t len) +int __must_check get_random_bytes_arch(void *buf, int len) { size_t left = len; u8 *p = buf; @@ -864,7 +864,7 @@ int __init random_init(const char *command_line) * the entropy pool having similar initial state across largely * identical devices. */ -void add_device_randomness(const void *buf, size_t len) +void add_device_randomness(const void *buf, unsigned int len) { unsigned long entropy = random_get_entropy(); unsigned long flags; diff --git a/include/linux/random.h b/include/linux/random.h index 5d04058c494c..9246d87767fd 100644 --- a/include/linux/random.h +++ b/include/linux/random.h @@ -12,7 +12,7 @@ struct notifier_block; -void add_device_randomness(const void *buf, size_t len); +void add_device_randomness(const void *buf, unsigned int len); void add_bootloader_randomness(const void *buf, size_t len); void add_input_randomness(unsigned int type, unsigned int code, unsigned int value) __latent_entropy; @@ -28,8 +28,8 @@ static inline void add_latent_entropy(void) static inline void add_latent_entropy(void) { } #endif -void get_random_bytes(void *buf, size_t len); -size_t __must_check get_random_bytes_arch(void *buf, size_t len); +void get_random_bytes(void *buf, int len); +int __must_check get_random_bytes_arch(void *buf, int len); u32 get_random_u32(void); u64 get_random_u64(void); static inline unsigned int get_random_int(void) From ebc9fb07d294cc27587d08866f0c0f2321bded54 Mon Sep 17 00:00:00 2001 From: Will McVicker Date: Thu, 14 Jul 2022 17:08:07 -0700 Subject: [PATCH 1314/1842] ANDROID: random: fix CRC issues with the merge MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The upstream commit 9342656c01 ("random: remove unused tracepoints") removed the random.h tracepoints which lead to CRC differences for the following symbols: add_random_ready_callback() del_random_ready_callback() If I add back #includes which is what was included by the random.h tracepoint header, then the CRCs match the original CRC values for those two functions above. This is necessary to retain GKI compatibility. 
Fixes: 9342656c013d ("random: remove unused tracepoints") Change-Id: I52422404ec46273cc686d1930102e066cef05eb0 Signed-off-by: Will McVicker Signed-off-by: Greg Kroah-Hartman --- drivers/char/random.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index 550cf8894ee6..144134197ac3 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -60,6 +60,10 @@ #include #include +// GKI: Keep this header to retain the original CRC that previously used the +// random.h tracepoints. +#include + /********************************************************************* * * Initialization and readiness waiting. From cc5ee0e0eed0bec2b7cc1d0feb9405e884eace7d Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 15 Jul 2022 09:21:15 +0200 Subject: [PATCH 1315/1842] Revert "mtd: rawnand: gpmi: Fix setting busy timeout setting" MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This reverts commit 156427b3123c2c1f0987a544d0b005b188a75393 which is commit 06781a5026350cde699d2d10c9914a25c1524f45 upstream. It is reported to cause data loss, so revert it to prevent that from happening for users of this driver. Reported-by: Tomasz Moń Reported-by: Sascha Hauer Cc: Miquel Raynal Link: https://lore.kernel.org/all/20220701110341.3094023-1-s.hauer@pengutronix.de/ Signed-off-by: Greg Kroah-Hartman --- drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c index 8d096ca770b0..92e8ca56f566 100644 --- a/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c +++ b/drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c @@ -683,7 +683,7 @@ static void gpmi_nfc_compute_timings(struct gpmi_nand_data *this, hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) | BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) | BF_GPMI_TIMING0_DATA_SETUP(data_setup_cycles); - hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(DIV_ROUND_UP(busy_timeout_cycles, 4096)); + hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(busy_timeout_cycles * 4096); /* * Derive NFC ideal delay from {3}: From 8f95261a006489c828f1d909355669875649668b Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 15 Jul 2022 10:14:00 +0200 Subject: [PATCH 1316/1842] Linux 5.10.131 Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index b0a35378803d..53f1a45ae69b 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 130 +SUBLEVEL = 131 EXTRAVERSION = NAME = Dare mighty things From 4486bbe92840d315c427393725ddfb670b7c0051 Mon Sep 17 00:00:00 2001 From: Meng Tang Date: Tue, 12 Jul 2022 14:00:05 +0800 Subject: [PATCH 1317/1842] ALSA: hda - Add fixup for Dell Latitidue E5430 commit 841bdf85c226803a78a9319af9b2caa9bf3e2eda upstream. Another Dell model, another fixup entry: Latitude E5430 needs the same fixup as other Latitude E series as workaround for noise problems. 
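This and the following HDA fixups all follow the same pattern: a static table maps a PCI subsystem vendor/device ID to a fixup identifier, and the first matching entry wins, roughly as the ALSA quirk lookup does. A minimal self-contained sketch of that lookup — the two IDs are taken from the surrounding patches, the enum names are simplified, and this is not the actual ALSA implementation:

    #include <stdio.h>

    enum { FIXUP_NONE, FIXUP_DELL_E7X, FIXUP_HP_MIC };

    struct quirk {
        unsigned short vendor, device;
        const char *name;
        int fixup;
    };

    /* First match wins; a zeroed entry terminates the table. */
    static const struct quirk quirks[] = {
        { 0x1028, 0x053c, "Dell Latitude E5430", FIXUP_DELL_E7X },
        { 0x103c, 0x8299, "HP 800 G3 SFF",       FIXUP_HP_MIC },
        { 0, 0, NULL, FIXUP_NONE },
    };

    static int lookup_fixup(unsigned short vendor, unsigned short device)
    {
        const struct quirk *q;

        for (q = quirks; q->name; q++)
            if (q->vendor == vendor && q->device == device)
                return q->fixup;
        return FIXUP_NONE;
    }

    int main(void)
    {
        printf("fixup for 1028:053c -> %d\n", lookup_fixup(0x1028, 0x053c));
        printf("fixup for ffff:ffff -> %d\n", lookup_fixup(0xffff, 0xffff));
        return 0;
    }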
Signed-off-by: Meng Tang Cc: Link: https://lore.kernel.org/r/20220712060005.20176-1-tangmeng@uniontech.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index f7645720d29c..80bf3bd66cdc 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -8642,6 +8642,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC), SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), + SND_PCI_QUIRK(0x1028, 0x053c, "Dell Latitude E5430", ALC292_FIXUP_DELL_E7X), SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS), SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X), SND_PCI_QUIRK(0x1028, 0x05be, "Dell Latitude E6540", ALC292_FIXUP_DELL_E7X), From b642a3476a348cc9fbc71fadb8fd2a6f9872bbbe Mon Sep 17 00:00:00 2001 From: Meng Tang Date: Mon, 11 Jul 2022 18:17:44 +0800 Subject: [PATCH 1318/1842] ALSA: hda/conexant: Apply quirk for another HP ProDesk 600 G3 model commit d16d69bf5a25d91c6d8f3e29711be12551bf56cd upstream. There is another HP ProDesk 600 G3 model with the PCI SSID 103c:82b4 that requires the quirk HP_MIC_NO_PRESENCE. Add the corresponding entry to the quirk table. Signed-off-by: Meng Tang Cc: Link: https://lore.kernel.org/r/20220711101744.25189-1-tangmeng@uniontech.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_conexant.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c index 53b7ea86f3f8..6b5d7b4760ed 100644 --- a/sound/pci/hda/patch_conexant.c +++ b/sound/pci/hda/patch_conexant.c @@ -937,6 +937,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = { SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK), SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x103c, 0x82b4, "HP ProDesk 600 G3", CXT_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x836e, "HP ProBook 455 G5", CXT_FIXUP_MUTE_LED_GPIO), SND_PCI_QUIRK(0x103c, 0x837f, "HP ProBook 470 G5", CXT_FIXUP_MUTE_LED_GPIO), SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK), From 4d0d15d1846784388a5444435f7ab8208ea7cfef Mon Sep 17 00:00:00 2001 From: Meng Tang Date: Mon, 11 Jul 2022 16:15:27 +0800 Subject: [PATCH 1319/1842] ALSA: hda/realtek: Fix headset mic for Acer SF313-51 commit 5f3fe25e70559fa3b096ab17e13316c93ddb7020 upstream. The issue on Acer SWIFT SF313-51 is that headset microphone doesn't work. The following quirk fixed headset microphone issue. Note that the fixup of SF314-54/55 (ALC256_FIXUP_ACER_HEADSET_MIC) was not successful on my SF313-51. 
Signed-off-by: Meng Tang Cc: Link: https://lore.kernel.org/r/20220711081527.6254-1-tangmeng@uniontech.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 80bf3bd66cdc..6ead726df89f 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -8633,6 +8633,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), SND_PCI_QUIRK(0x1025, 0x129c, "Acer SWIFT SF314-55", ALC256_FIXUP_ACER_HEADSET_MIC), + SND_PCI_QUIRK(0x1025, 0x129d, "Acer SWIFT SF313-51", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1025, 0x1300, "Acer SWIFT SF314-56", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC), SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC), From 08ab39027a88279bec8e71ff6b4de13ddc7dd588 Mon Sep 17 00:00:00 2001 From: Meng Tang Date: Tue, 12 Jul 2022 17:22:22 +0800 Subject: [PATCH 1320/1842] ALSA: hda/realtek - Fix headset mic problem for a HP machine with alc671 commit dbe75d314748e08fc6e4576d153d8a69621ee5ca upstream. On a HP 288 Pro G6, the front mic could not be detected.In order to get it working, the pin configuration needs to be set correctly, and the ALC671_FIXUP_HP_HEADSET_MIC2 fixup needs to be applied. Signed-off-by: Meng Tang Cc: Link: https://lore.kernel.org/r/20220712092222.21738-1-tangmeng@uniontech.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 6ead726df89f..83c681021fc0 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -10928,6 +10928,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB), SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2), + SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2), SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2), SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE), SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50), From cacac3e13a816c917ee35d6ac4c9c0a8536c9df2 Mon Sep 17 00:00:00 2001 From: Meng Tang Date: Wed, 13 Jul 2022 14:33:32 +0800 Subject: [PATCH 1321/1842] ALSA: hda/realtek - Fix headset mic problem for a HP machine with alc221 commit 4ba5c853d7945b3855c3dcb293f7f9f019db641e upstream. On a HP 288 Pro G2 MT (X9W02AV), the front mic could not be detected. In order to get it working, the pin configuration needs to be set correctly, and the ALC221_FIXUP_HP_288PRO_MIC_NO_PRESENCE fixup needs to be applied. 
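The pin defaults added in the hunk below are packed 32-bit HDA "default configuration" values. As an illustration only, here is a small decoder that splits such a value into its fields using the HD Audio default-configuration layout; treat the field interpretation as a sketch rather than an authoritative reference:

    #include <stdio.h>

    /* Assumed field layout of an HDA pin default configuration:
     *   31:30 port connectivity, 29:24 location, 23:20 default device,
     *   19:16 connection type, 15:12 color, 11:8 misc, 7:4 association,
     *   3:0 sequence. */
    static void decode_pincfg(unsigned int cfg)
    {
        printf("0x%08x: conn=%u loc=0x%02x dev=0x%x type=0x%x "
               "color=0x%x misc=0x%x assoc=0x%x seq=0x%x\n",
               cfg,
               (cfg >> 30) & 0x3, (cfg >> 24) & 0x3f, (cfg >> 20) & 0xf,
               (cfg >> 16) & 0xf, (cfg >> 12) & 0xf, (cfg >> 8) & 0xf,
               (cfg >> 4) & 0xf, cfg & 0xf);
    }

    int main(void)
    {
        decode_pincfg(0x01a1913c);  /* headset mic pin value from the patch */
        decode_pincfg(0x01813030);  /* headphone mic pin value from the patch */
        return 0;
    }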
Signed-off-by: Meng Tang Cc: Link: https://lore.kernel.org/r/20220713063332.30095-1-tangmeng@uniontech.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index 83c681021fc0..bfe18027b014 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -6725,6 +6725,7 @@ enum { ALC298_FIXUP_LENOVO_SPK_VOLUME, ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER, ALC269_FIXUP_ATIV_BOOK_8, + ALC221_FIXUP_HP_288PRO_MIC_NO_PRESENCE, ALC221_FIXUP_HP_MIC_NO_PRESENCE, ALC256_FIXUP_ASUS_HEADSET_MODE, ALC256_FIXUP_ASUS_MIC, @@ -7651,6 +7652,16 @@ static const struct hda_fixup alc269_fixups[] = { .chained = true, .chain_id = ALC269_FIXUP_NO_SHUTUP }, + [ALC221_FIXUP_HP_288PRO_MIC_NO_PRESENCE] = { + .type = HDA_FIXUP_PINS, + .v.pins = (const struct hda_pintbl[]) { + { 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */ + { 0x1a, 0x01813030 }, /* use as headphone mic, without its own jack detect */ + { } + }, + .chained = true, + .chain_id = ALC269_FIXUP_HEADSET_MODE + }, [ALC221_FIXUP_HP_MIC_NO_PRESENCE] = { .type = HDA_FIXUP_PINS, .v.pins = (const struct hda_pintbl[]) { @@ -8758,6 +8769,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x2335, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), + SND_PCI_QUIRK(0x103c, 0x2b5e, "HP 288 Pro G2 MT", ALC221_FIXUP_HP_288PRO_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x103c, 0x8077, "HP", ALC256_FIXUP_HP_HEADSET_MIC), From 782a6b07b12738f651d916ef62e67b4c2536142a Mon Sep 17 00:00:00 2001 From: Meng Tang Date: Wed, 13 Jul 2022 17:41:33 +0800 Subject: [PATCH 1322/1842] ALSA: hda/realtek - Enable the headset-mic on a Xiaomi's laptop commit 9b043a8f386485c74c0f8eea2c287d5bdbdf3279 upstream. 
The headset on this machine is not defined, after applying the quirk ALC256_FIXUP_ASUS_HEADSET_MIC, the headset-mic works well Signed-off-by: Meng Tang Cc: Link: https://lore.kernel.org/r/20220713094133.9894-1-tangmeng@uniontech.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/pci/hda/patch_realtek.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index bfe18027b014..615526126408 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -9087,6 +9087,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE), SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC), + SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC), SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC), SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC), SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED), From b9c32a6886af79d6e0ad87a7b01800ed079cdd02 Mon Sep 17 00:00:00 2001 From: Juergen Gross Date: Wed, 13 Jul 2022 15:53:22 +0200 Subject: [PATCH 1323/1842] xen/netback: avoid entering xenvif_rx_next_skb() with an empty rx queue commit 94e8100678889ab428e68acadf042de723f094b9 upstream. xenvif_rx_next_skb() is expecting the rx queue not being empty, but in case the loop in xenvif_rx_action() is doing multiple iterations, the availability of another skb in the rx queue is not being checked. This can lead to crashes: [40072.537261] BUG: unable to handle kernel NULL pointer dereference at 0000000000000080 [40072.537407] IP: xenvif_rx_skb+0x23/0x590 [xen_netback] [40072.537534] PGD 0 P4D 0 [40072.537644] Oops: 0000 [#1] SMP NOPTI [40072.537749] CPU: 0 PID: 12505 Comm: v1-c40247-q2-gu Not tainted 4.12.14-122.121-default #1 SLE12-SP5 [40072.537867] Hardware name: HP ProLiant DL580 Gen9/ProLiant DL580 Gen9, BIOS U17 11/23/2021 [40072.537999] task: ffff880433b38100 task.stack: ffffc90043d40000 [40072.538112] RIP: e030:xenvif_rx_skb+0x23/0x590 [xen_netback] [40072.538217] RSP: e02b:ffffc90043d43de0 EFLAGS: 00010246 [40072.538319] RAX: 0000000000000000 RBX: ffffc90043cd7cd0 RCX: 00000000000000f7 [40072.538430] RDX: 0000000000000000 RSI: 0000000000000006 RDI: ffffc90043d43df8 [40072.538531] RBP: 000000000000003f R08: 000077ff80000000 R09: 0000000000000008 [40072.538644] R10: 0000000000007ff0 R11: 00000000000008f6 R12: ffffc90043ce2708 [40072.538745] R13: 0000000000000000 R14: ffffc90043d43ed0 R15: ffff88043ea748c0 [40072.538861] FS: 0000000000000000(0000) GS:ffff880484600000(0000) knlGS:0000000000000000 [40072.538988] CS: e033 DS: 0000 ES: 0000 CR0: 0000000080050033 [40072.539088] CR2: 0000000000000080 CR3: 0000000407ac8000 CR4: 0000000000040660 [40072.539211] Call Trace: [40072.539319] xenvif_rx_action+0x71/0x90 [xen_netback] [40072.539429] xenvif_kthread_guest_rx+0x14a/0x29c [xen_netback] Fix that by stopping the loop in case the rx queue becomes empty. 
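The general lesson of the fix is that a batched work loop must re-check every precondition on each iteration, not only before entering the loop. A toy, self-contained version of that shape (counters stand in for the ring and the skb queue; nothing here is the xen-netback code):

    #include <stdio.h>

    #define RX_BATCH_SIZE 4

    /* Toy state: pending packets and available ring slots. */
    static int rx_pending = 2;
    static int ring_slots = 10;

    static void process_one(void)
    {
        rx_pending--;
        ring_slots--;
        printf("processed one, pending=%d slots=%d\n", rx_pending, ring_slots);
    }

    int main(void)
    {
        int work_done = 0;

        /* Every condition the body relies on is re-checked per iteration:
         * free ring slots, a non-empty queue (the check the patch adds),
         * and the batch budget. */
        while (ring_slots > 0 && rx_pending > 0 && work_done < RX_BATCH_SIZE) {
            process_one();
            work_done++;
        }

        printf("work_done=%d\n", work_done);
        return 0;
    }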
Cc: stable@vger.kernel.org Fixes: 98f6d57ced73 ("xen-netback: process guest rx packets in batches") Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich Reviewed-by: Paul Durrant Link: https://lore.kernel.org/r/20220713135322.19616-1-jgross@suse.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- drivers/net/xen-netback/rx.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c index dbac4c03d21a..a0335407be42 100644 --- a/drivers/net/xen-netback/rx.c +++ b/drivers/net/xen-netback/rx.c @@ -495,6 +495,7 @@ void xenvif_rx_action(struct xenvif_queue *queue) queue->rx_copy.completed = &completed_skbs; while (xenvif_rx_ring_slots_available(queue) && + !skb_queue_empty(&queue->rx_queue) && work_done < RX_BATCH_SIZE) { xenvif_rx_skb(queue); work_done++; From 91530f675e88c6b27306c35ca4e482fb956794d1 Mon Sep 17 00:00:00 2001 From: Oleg Nesterov Date: Mon, 11 Jul 2022 18:16:25 +0200 Subject: [PATCH 1324/1842] fix race between exit_itimers() and /proc/pid/timers commit d5b36a4dbd06c5e8e36ca8ccc552f679069e2946 upstream. As Chris explains, the comment above exit_itimers() is not correct, we can race with proc_timers_seq_ops. Change exit_itimers() to clear signal->posix_timers with ->siglock held. Cc: Reported-by: chris@accessvector.net Signed-off-by: Oleg Nesterov Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- fs/exec.c | 2 +- include/linux/sched/task.h | 2 +- kernel/exit.c | 2 +- kernel/time/posix-timers.c | 19 ++++++++++++++----- 4 files changed, 17 insertions(+), 8 deletions(-) diff --git a/fs/exec.c b/fs/exec.c index bcd86f2d176c..d37a82206fa3 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -1286,7 +1286,7 @@ int begin_new_exec(struct linux_binprm * bprm) bprm->mm = NULL; #ifdef CONFIG_POSIX_TIMERS - exit_itimers(me->signal); + exit_itimers(me); flush_itimer_signals(); #endif diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h index fa75f325dad5..eeacb4a16fe3 100644 --- a/include/linux/sched/task.h +++ b/include/linux/sched/task.h @@ -82,7 +82,7 @@ static inline void exit_thread(struct task_struct *tsk) extern void do_group_exit(int); extern void exit_files(struct task_struct *); -extern void exit_itimers(struct signal_struct *); +extern void exit_itimers(struct task_struct *); extern pid_t kernel_clone(struct kernel_clone_args *kargs); struct task_struct *fork_idle(int); diff --git a/kernel/exit.c b/kernel/exit.c index d13d67fc5f4e..ab900b661867 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -782,7 +782,7 @@ void __noreturn do_exit(long code) #ifdef CONFIG_POSIX_TIMERS hrtimer_cancel(&tsk->signal->real_timer); - exit_itimers(tsk->signal); + exit_itimers(tsk); #endif if (tsk->mm) setmax_mm_hiwater_rss(&tsk->signal->maxrss, tsk->mm); diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c index dd5697d7347b..b624788023d8 100644 --- a/kernel/time/posix-timers.c +++ b/kernel/time/posix-timers.c @@ -1051,15 +1051,24 @@ static void itimer_delete(struct k_itimer *timer) } /* - * This is called by do_exit or de_thread, only when there are no more - * references to the shared signal_struct. + * This is called by do_exit or de_thread, only when nobody else can + * modify the signal->posix_timers list. Yet we need sighand->siglock + * to prevent the race with /proc/pid/timers. 
*/ -void exit_itimers(struct signal_struct *sig) +void exit_itimers(struct task_struct *tsk) { + struct list_head timers; struct k_itimer *tmr; - while (!list_empty(&sig->posix_timers)) { - tmr = list_entry(sig->posix_timers.next, struct k_itimer, list); + if (list_empty(&tsk->signal->posix_timers)) + return; + + spin_lock_irq(&tsk->sighand->siglock); + list_replace_init(&tsk->signal->posix_timers, &timers); + spin_unlock_irq(&tsk->sighand->siglock); + + while (!list_empty(&timers)) { + tmr = list_first_entry(&timers, struct k_itimer, list); itimer_delete(tmr); } } From 931dbcc2e02f0409c095b11e35490cade9ac14af Mon Sep 17 00:00:00 2001 From: "Gowans, James" Date: Thu, 23 Jun 2022 05:24:03 +0000 Subject: [PATCH 1325/1842] mm: split huge PUD on wp_huge_pud fallback MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 14c99d65941538aa33edd8dc7b1bbbb593c324a2 upstream. Currently the implementation will split the PUD when a fallback is taken inside the create_huge_pud function. This isn't where it should be done: the splitting should be done in wp_huge_pud, just like it's done for PMDs. Reason being that if a callback is taken during create, there is no PUD yet so nothing to split, whereas if a fallback is taken when encountering a write protection fault there is something to split. It looks like this was the original intention with the commit where the splitting was introduced, but somehow it got moved to the wrong place between v1 and v2 of the patch series. Rebase mistake perhaps. Link: https://lkml.kernel.org/r/6f48d622eb8bce1ae5dd75327b0b73894a2ec407.camel@amazon.com Fixes: 327e9fd48972 ("mm: Split huge pages on write-notify or COW") Signed-off-by: James Gowans Reviewed-by: Thomas Hellström Cc: Christian König Cc: Jan H. 
Schönherr Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- mm/memory.c | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index 72236b1ce590..cc50fa0f4590 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4365,6 +4365,19 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd) static vm_fault_t create_huge_pud(struct vm_fault *vmf) { +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \ + defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) + /* No support for anonymous transparent PUD pages yet */ + if (vma_is_anonymous(vmf->vma)) + return VM_FAULT_FALLBACK; + if (vmf->vma->vm_ops->huge_fault) + return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD); +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ + return VM_FAULT_FALLBACK; +} + +static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud) +{ #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \ defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) /* No support for anonymous transparent PUD pages yet */ @@ -4379,19 +4392,7 @@ static vm_fault_t create_huge_pud(struct vm_fault *vmf) split: /* COW or write-notify not handled on PUD level: split pud.*/ __split_huge_pud(vmf->vma, vmf->pud, vmf->address); -#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ - return VM_FAULT_FALLBACK; -} - -static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud) -{ -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - /* No support for anonymous transparent PUD pages yet */ - if (vma_is_anonymous(vmf->vma)) - return VM_FAULT_FALLBACK; - if (vmf->vma->vm_ops->huge_fault) - return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PUD); -#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ +#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ return VM_FAULT_FALLBACK; } From 78a1400c42ee11197eb1f0f85ba51df9a4fdfff0 Mon Sep 17 00:00:00 2001 From: Zheng Yejian Date: Mon, 11 Jul 2022 09:47:31 +0800 Subject: [PATCH 1326/1842] tracing/histograms: Fix memory leak problem commit 7edc3945bdce9c39198a10d6129377a5c53559c2 upstream. This reverts commit 46bbe5c671e06f070428b9be142cc4ee5cedebac. As commit 46bbe5c671e0 ("tracing: fix double free") said, the "double free" problem reported by clang static analyzer is: > In parse_var_defs() if there is a problem allocating > var_defs.expr, the earlier var_defs.name is freed. > This free is duplicated by free_var_defs() which frees > the rest of the list. However, if there is a problem allocating N-th var_defs.expr: + in parse_var_defs(), the freed 'earlier var_defs.name' is actually the N-th var_defs.name; + then in free_var_defs(), the names from 0th to (N-1)-th are freed; IF ALLOCATING PROBLEM HAPPENED HERE!!! -+ \ | 0th 1th (N-1)-th N-th V +-------------+-------------+-----+-------------+----------- var_defs: | name | expr | name | expr | ... | name | expr | name | /// +-------------+-------------+-----+-------------+----------- These two frees don't act on same name, so there was no "double free" problem before. Conversely, after that commit, we get a "memory leak" problem because the above "N-th var_defs.name" is not freed. 
If enable CONFIG_DEBUG_KMEMLEAK and inject a fault at where the N-th var_defs.expr allocated, then execute on shell like: $ echo 'hist:key=call_site:val=$v1,$v2:v1=bytes_req,v2=bytes_alloc' > \ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger Then kmemleak reports: unreferenced object 0xffff8fb100ef3518 (size 8): comm "bash", pid 196, jiffies 4295681690 (age 28.538s) hex dump (first 8 bytes): 76 31 00 00 b1 8f ff ff v1...... backtrace: [<0000000038fe4895>] kstrdup+0x2d/0x60 [<00000000c99c049a>] event_hist_trigger_parse+0x206f/0x20e0 [<00000000ae70d2cc>] trigger_process_regex+0xc0/0x110 [<0000000066737a4c>] event_trigger_write+0x75/0xd0 [<000000007341e40c>] vfs_write+0xbb/0x2a0 [<0000000087fde4c2>] ksys_write+0x59/0xd0 [<00000000581e9cdf>] do_syscall_64+0x3a/0x80 [<00000000cf3b065c>] entry_SYSCALL_64_after_hwframe+0x46/0xb0 Link: https://lkml.kernel.org/r/20220711014731.69520-1-zhengyejian1@huawei.com Cc: stable@vger.kernel.org Fixes: 46bbe5c671e0 ("tracing: fix double free") Reported-by: Hulk Robot Suggested-by: Steven Rostedt Reviewed-by: Tom Zanussi Signed-off-by: Zheng Yejian Signed-off-by: Steven Rostedt (Google) Signed-off-by: Greg Kroah-Hartman --- kernel/trace/trace_events_hist.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c index 3ed1723b68d5..fd5416829445 100644 --- a/kernel/trace/trace_events_hist.c +++ b/kernel/trace/trace_events_hist.c @@ -3943,6 +3943,8 @@ static int parse_var_defs(struct hist_trigger_data *hist_data) s = kstrdup(field_str, GFP_KERNEL); if (!s) { + kfree(hist_data->attrs->var_defs.name[n_vars]); + hist_data->attrs->var_defs.name[n_vars] = NULL; ret = -ENOMEM; goto free; } From 4d3e0fb05eec645a791c278a9c82e0a15a916aa8 Mon Sep 17 00:00:00 2001 From: "Steven Rostedt (Google)" Date: Wed, 6 Jul 2022 10:50:40 -0400 Subject: [PATCH 1327/1842] net: sock: tracing: Fix sock_exceed_buf_limit not to dereference stale pointer commit 820b8963adaea34a87abbecb906d1f54c0aabfb7 upstream. The trace event sock_exceed_buf_limit saves the prot->sysctl_mem pointer and then dereferences it in the TP_printk() portion. This is unsafe as the TP_printk() portion is executed at the time the buffer is read. That is, it can be seconds, minutes, days, months, even years later. If the proto is freed, then this dereference will can also lead to a kernel crash. Instead, save the sysctl_mem array into the ring buffer and have the TP_printk() reference that instead. This is the proper and safe way to read pointers in trace events. Link: https://lore.kernel.org/all/20220706052130.16368-12-kuniyu@amazon.com/ Cc: stable@vger.kernel.org Fixes: 3847ce32aea9f ("core: add tracepoints for queueing skb to rcvbuf") Signed-off-by: Steven Rostedt (Google) Acked-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- include/trace/events/sock.h | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/include/trace/events/sock.h b/include/trace/events/sock.h index a966d4b5ab37..905b151bc3dd 100644 --- a/include/trace/events/sock.h +++ b/include/trace/events/sock.h @@ -98,7 +98,7 @@ TRACE_EVENT(sock_exceed_buf_limit, TP_STRUCT__entry( __array(char, name, 32) - __field(long *, sysctl_mem) + __array(long, sysctl_mem, 3) __field(long, allocated) __field(int, sysctl_rmem) __field(int, rmem_alloc) @@ -110,7 +110,9 @@ TRACE_EVENT(sock_exceed_buf_limit, TP_fast_assign( strncpy(__entry->name, prot->name, 32); - __entry->sysctl_mem = prot->sysctl_mem; + __entry->sysctl_mem[0] = READ_ONCE(prot->sysctl_mem[0]); + __entry->sysctl_mem[1] = READ_ONCE(prot->sysctl_mem[1]); + __entry->sysctl_mem[2] = READ_ONCE(prot->sysctl_mem[2]); __entry->allocated = allocated; __entry->sysctl_rmem = sk_get_rmem0(sk, prot); __entry->rmem_alloc = atomic_read(&sk->sk_rmem_alloc); From fa40bb3a5f0c380630923dbc9df6e5c8cf6591b4 Mon Sep 17 00:00:00 2001 From: Nicolas Dichtel Date: Wed, 13 Jul 2022 13:48:52 +0200 Subject: [PATCH 1328/1842] ip: fix dflt addr selection for connected nexthop commit 747c14307214b55dbd8250e1ab44cad8305756f1 upstream. When a nexthop is added, without a gw address, the default scope was set to 'host'. Thus, when a source address is selected, 127.0.0.1 may be chosen but rejected when the route is used. When using a route without a nexthop id, the scope can be configured in the route, thus the problem doesn't exist. To explain more deeply: when a user creates a nexthop, it cannot specify the scope. To create it, the function nh_create_ipv4() calls fib_check_nh() with scope set to 0. fib_check_nh() calls fib_check_nh_nongw() wich was setting scope to 'host'. Then, nh_create_ipv4() calls fib_info_update_nhc_saddr() with scope set to 'host'. The src addr is chosen before the route is inserted. When a 'standard' route (ie without a reference to a nexthop) is added, fib_create_info() calls fib_info_update_nhc_saddr() with the scope set by the user. iproute2 set the scope to 'link' by default. Here is a way to reproduce the problem: ip netns add foo ip -n foo link set lo up ip netns add bar ip -n bar link set lo up sleep 1 ip -n foo link add name eth0 type dummy ip -n foo link set eth0 up ip -n foo address add 192.168.0.1/24 dev eth0 ip -n foo link add name veth0 type veth peer name veth1 netns bar ip -n foo link set veth0 up ip -n bar link set veth1 up ip -n bar address add 192.168.1.1/32 dev veth1 ip -n bar route add default dev veth1 ip -n foo nexthop add id 1 dev veth0 ip -n foo route add 192.168.1.1 nhid 1 Try to get/use the route: > $ ip -n foo route get 192.168.1.1 > RTNETLINK answers: Invalid argument > $ ip netns exec foo ping -c1 192.168.1.1 > ping: connect: Invalid argument Try without nexthop group (iproute2 sets scope to 'link' by dflt): ip -n foo route del 192.168.1.1 ip -n foo route add 192.168.1.1 dev veth0 Try to get/use the route: > $ ip -n foo route get 192.168.1.1 > 192.168.1.1 dev veth0 src 192.168.0.1 uid 0 > cache > $ ip netns exec foo ping -c1 192.168.1.1 > PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. 
> 64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.039 ms > > --- 192.168.1.1 ping statistics --- > 1 packets transmitted, 1 received, 0% packet loss, time 0ms > rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms CC: stable@vger.kernel.org Fixes: 597cfe4fc339 ("nexthop: Add support for IPv4 nexthops") Reported-by: Edwin Brossette Signed-off-by: Nicolas Dichtel Link: https://lore.kernel.org/r/20220713114853.29406-1-nicolas.dichtel@6wind.com Signed-off-by: Paolo Abeni Signed-off-by: Greg Kroah-Hartman --- net/ipv4/fib_semantics.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c index c8c7b76c3b2e..845adb92ef70 100644 --- a/net/ipv4/fib_semantics.c +++ b/net/ipv4/fib_semantics.c @@ -1229,7 +1229,7 @@ static int fib_check_nh_nongw(struct net *net, struct fib_nh *nh, nh->fib_nh_dev = in_dev->dev; dev_hold(nh->fib_nh_dev); - nh->fib_nh_scope = RT_SCOPE_HOST; + nh->fib_nh_scope = RT_SCOPE_LINK; if (!netif_carrier_ok(nh->fib_nh_dev)) nh->fib_nh_flags |= RTNH_F_LINKDOWN; err = 0; From f851e4f40253f4f8b57cbee503806f2a7c16f7ce Mon Sep 17 00:00:00 2001 From: Dmitry Osipenko Date: Tue, 28 Jun 2022 08:55:45 +0100 Subject: [PATCH 1329/1842] ARM: 9213/1: Print message about disabled Spectre workarounds only once commit e4ced82deb5fb17222fb82e092c3f8311955b585 upstream. Print the message about disabled Spectre workarounds only once. The message is printed each time CPU goes out from idling state on NVIDIA Tegra boards, causing storm in KMSG that makes system unusable. Cc: stable@vger.kernel.org Signed-off-by: Dmitry Osipenko Signed-off-by: Russell King (Oracle) Signed-off-by: Greg Kroah-Hartman --- arch/arm/mm/proc-v7-bugs.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c index fb9f3eb6bf48..f9730eba0632 100644 --- a/arch/arm/mm/proc-v7-bugs.c +++ b/arch/arm/mm/proc-v7-bugs.c @@ -108,8 +108,7 @@ static unsigned int spectre_v2_install_workaround(unsigned int method) #else static unsigned int spectre_v2_install_workaround(unsigned int method) { - pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n", - smp_processor_id()); + pr_info_once("Spectre V2: workarounds disabled by configuration\n"); return SPECTRE_VULNERABLE; } From db6e8c30154f390348b5bd6e8028461b90c0876e Mon Sep 17 00:00:00 2001 From: Ard Biesheuvel Date: Thu, 30 Jun 2022 16:46:54 +0100 Subject: [PATCH 1330/1842] ARM: 9214/1: alignment: advance IT state after emulating Thumb instruction commit e5c46fde75e43c15a29b40e5fc5641727f97ae47 upstream. After emulating a misaligned load or store issued in Thumb mode, we have to advance the IT state by hand, or it will get out of sync with the actual instruction stream, which means we'll end up applying the wrong condition code to subsequent instructions. This might corrupt the program state rather catastrophically. So borrow the it_advance() helper from the probing code, and use it on CPSR if the emulated instruction is Thumb. 
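To make the ITSTATE bit layout concrete, the it_advance() logic from the hunk below can be compiled as a stand-alone program and run on a sample CPSR value; the sample value is arbitrary and chosen purely for illustration:

    #include <stdio.h>

    #define PSR_IT_MASK 0x0600fc00UL  /* ITSTATE bits: CPSR<26:25> and <15:10> */

    /* Same logic as it_advance() in the hunk below. */
    static unsigned long it_advance(unsigned long cpsr)
    {
        if ((cpsr & 0x06000400) == 0) {
            /* ITSTATE<2:0> == 0: end of IT block, clear IT state */
            cpsr &= ~PSR_IT_MASK;
        } else {
            /* shift ITSTATE<4:0> left by one */
            const unsigned long mask = 0x06001c00;  /* ITSTATE<4:0> */
            unsigned long it = cpsr & mask;

            it <<= 1;
            it |= it >> (27 - 10);  /* carry ITSTATE<2> into place */
            it &= mask;
            cpsr &= ~mask;
            cpsr |= it;
        }
        return cpsr;
    }

    int main(void)
    {
        unsigned long cpsr = 0x02000400;  /* arbitrary value with some ITSTATE bits set */

        printf("before: %#010lx\n", cpsr);
        printf("after:  %#010lx\n", it_advance(cpsr));
        return 0;
    }

Running it shows how the split ITSTATE field is shifted across the CPSR<26:25>/<15:10> halves, which is why the state must be advanced by hand after emulating a Thumb instruction.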
Cc: Reviewed-by: Linus Walleij Signed-off-by: Ard Biesheuvel Signed-off-by: Russell King (Oracle) Signed-off-by: Greg Kroah-Hartman --- arch/arm/include/asm/ptrace.h | 26 ++++++++++++++++++++++++++ arch/arm/mm/alignment.c | 3 +++ arch/arm/probes/decode.h | 26 +------------------------- 3 files changed, 30 insertions(+), 25 deletions(-) diff --git a/arch/arm/include/asm/ptrace.h b/arch/arm/include/asm/ptrace.h index 91d6b7856be4..73c83f4d33b3 100644 --- a/arch/arm/include/asm/ptrace.h +++ b/arch/arm/include/asm/ptrace.h @@ -164,5 +164,31 @@ static inline unsigned long user_stack_pointer(struct pt_regs *regs) ((current_stack_pointer | (THREAD_SIZE - 1)) - 7) - 1; \ }) + +/* + * Update ITSTATE after normal execution of an IT block instruction. + * + * The 8 IT state bits are split into two parts in CPSR: + * ITSTATE<1:0> are in CPSR<26:25> + * ITSTATE<7:2> are in CPSR<15:10> + */ +static inline unsigned long it_advance(unsigned long cpsr) +{ + if ((cpsr & 0x06000400) == 0) { + /* ITSTATE<2:0> == 0 means end of IT block, so clear IT state */ + cpsr &= ~PSR_IT_MASK; + } else { + /* We need to shift left ITSTATE<4:0> */ + const unsigned long mask = 0x06001c00; /* Mask ITSTATE<4:0> */ + unsigned long it = cpsr & mask; + it <<= 1; + it |= it >> (27 - 10); /* Carry ITSTATE<2> to correct place */ + it &= mask; + cpsr &= ~mask; + cpsr |= it; + } + return cpsr; +} + #endif /* __ASSEMBLY__ */ #endif diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c index ea81e89e7740..bcefe3f51744 100644 --- a/arch/arm/mm/alignment.c +++ b/arch/arm/mm/alignment.c @@ -935,6 +935,9 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs) if (type == TYPE_LDST) do_alignment_finish_ldst(addr, instr, regs, offset); + if (thumb_mode(regs)) + regs->ARM_cpsr = it_advance(regs->ARM_cpsr); + return 0; bad_or_fault: diff --git a/arch/arm/probes/decode.h b/arch/arm/probes/decode.h index 973173598992..facc889d05ee 100644 --- a/arch/arm/probes/decode.h +++ b/arch/arm/probes/decode.h @@ -14,6 +14,7 @@ #include #include #include +#include #include void __init arm_probes_decode_init(void); @@ -35,31 +36,6 @@ void __init find_str_pc_offset(void); #endif -/* - * Update ITSTATE after normal execution of an IT block instruction. - * - * The 8 IT state bits are split into two parts in CPSR: - * ITSTATE<1:0> are in CPSR<26:25> - * ITSTATE<7:2> are in CPSR<15:10> - */ -static inline unsigned long it_advance(unsigned long cpsr) - { - if ((cpsr & 0x06000400) == 0) { - /* ITSTATE<2:0> == 0 means end of IT block, so clear IT state */ - cpsr &= ~PSR_IT_MASK; - } else { - /* We need to shift left ITSTATE<4:0> */ - const unsigned long mask = 0x06001c00; /* Mask ITSTATE<4:0> */ - unsigned long it = cpsr & mask; - it <<= 1; - it |= it >> (27 - 10); /* Carry ITSTATE<2> to correct place */ - it &= mask; - cpsr &= ~mask; - cpsr |= it; - } - return cpsr; -} - static inline void __kprobes bx_write_pc(long pcv, struct pt_regs *regs) { long cpsr = regs->ARM_cpsr; From e013ea2a51a94b903b396a8dff531a07d470335d Mon Sep 17 00:00:00 2001 From: Felix Fietkau Date: Sat, 2 Jul 2022 16:52:27 +0200 Subject: [PATCH 1331/1842] wifi: mac80211: fix queue selection for mesh/OCB interfaces commit 50e2ab39291947b6c6c7025cf01707c270fcde59 upstream. When using iTXQ, the code assumes that there is only one vif queue for broadcast packets, using the BE queue. Allowing non-BE queue marking violates that assumption and txq->ac == skb_queue_mapping is no longer guaranteed. 
This can cause issues with queue handling in the driver and also causes issues with the recent ATF change, resulting in an AQL underflow warning. Cc: stable@vger.kernel.org Signed-off-by: Felix Fietkau Link: https://lore.kernel.org/r/20220702145227.39356-1-nbd@nbd.name Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman --- net/mac80211/wme.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/mac80211/wme.c b/net/mac80211/wme.c index 2fb99325135a..b9404b056087 100644 --- a/net/mac80211/wme.c +++ b/net/mac80211/wme.c @@ -145,8 +145,8 @@ u16 __ieee80211_select_queue(struct ieee80211_sub_if_data *sdata, bool qos; /* all mesh/ocb stations are required to support WME */ - if (sdata->vif.type == NL80211_IFTYPE_MESH_POINT || - sdata->vif.type == NL80211_IFTYPE_OCB) + if (sta && (sdata->vif.type == NL80211_IFTYPE_MESH_POINT || + sdata->vif.type == NL80211_IFTYPE_OCB)) qos = true; else if (sta) qos = sta->sta.wme; From 7657e3958535d101a24ab4400f9b8062b9107cc4 Mon Sep 17 00:00:00 2001 From: Tejun Heo Date: Mon, 13 Jun 2022 12:19:50 -1000 Subject: [PATCH 1332/1842] cgroup: Use separate src/dst nodes when preloading css_sets for migration commit 07fd5b6cdf3cc30bfde8fe0f644771688be04447 upstream. Each cset (css_set) is pinned by its tasks. When we're moving tasks around across csets for a migration, we need to hold the source and destination csets to ensure that they don't go away while we're moving tasks about. This is done by linking cset->mg_preload_node on either the mgctx->preloaded_src_csets or mgctx->preloaded_dst_csets list. Using the same cset->mg_preload_node for both the src and dst lists was deemed okay as a cset can't be both the source and destination at the same time. Unfortunately, this overloading becomes problematic when multiple tasks are involved in a migration and some of them are identity noop migrations while others are actually moving across cgroups. For example, this can happen with the following sequence on cgroup1: #1> mkdir -p /sys/fs/cgroup/misc/a/b #2> echo $$ > /sys/fs/cgroup/misc/a/cgroup.procs #3> RUN_A_COMMAND_WHICH_CREATES_MULTIPLE_THREADS & #4> PID=$! #5> echo $PID > /sys/fs/cgroup/misc/a/b/tasks #6> echo $PID > /sys/fs/cgroup/misc/a/cgroup.procs the process including the group leader back into a. In this final migration, non-leader threads would be doing identity migration while the group leader is doing an actual one. After #3, let's say the whole process was in cset A, and that after #4, the leader moves to cset B. Then, during #6, the following happens: 1. cgroup_migrate_add_src() is called on B for the leader. 2. cgroup_migrate_add_src() is called on A for the other threads. 3. cgroup_migrate_prepare_dst() is called. It scans the src list. 4. It notices that B wants to migrate to A, so it tries to A to the dst list but realizes that its ->mg_preload_node is already busy. 5. and then it notices A wants to migrate to A as it's an identity migration, it culls it by list_del_init()'ing its ->mg_preload_node and putting references accordingly. 6. The rest of migration takes place with B on the src list but nothing on the dst list. This means that A isn't held while migration is in progress. If all tasks leave A before the migration finishes and the incoming task pins it, the cset will be destroyed leading to use-after-free. This is caused by overloading cset->mg_preload_node for both src and dst preload lists. We wanted to exclude the cset from the src list but ended up inadvertently excluding it from the dst list too. 
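The underlying constraint is that a single embedded list node can only record membership on one list at a time, so an "is it already queued?" test cannot distinguish source preloading from destination preloading. The minimal userspace sketch below reproduces that with a stripped-down intrusive list; the names and structures are illustrative only, not the kernel's implementation:

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h; h->prev = h; }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->next = head;
	new->prev = head->prev;
	head->prev->next = new;
	head->prev = new;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* one shared node (old scheme) vs. two dedicated nodes (the fix) */
struct cset {
	struct list_head preload_node;		/* shared: src OR dst, never both */
	struct list_head src_node, dst_node;	/* split: src AND dst if needed   */
};

int main(void)
{
	struct list_head src_list, dst_list;
	struct cset a;

	INIT_LIST_HEAD(&src_list);
	INIT_LIST_HEAD(&dst_list);
	INIT_LIST_HEAD(&a.preload_node);
	INIT_LIST_HEAD(&a.src_node);
	INIT_LIST_HEAD(&a.dst_node);

	/* old scheme: A is preloaded as a migration source... */
	list_add_tail(&a.preload_node, &src_list);
	/* ...and later should also be preloaded as a destination, but the
	 * "already queued?" test only sees that the shared node is busy */
	if (!list_empty(&a.preload_node))
		printf("shared node: dst preload skipped, dst list empty = %d\n",
		       list_empty(&dst_list));

	/* new scheme: the two memberships are independent */
	list_add_tail(&a.src_node, &src_list);
	list_add_tail(&a.dst_node, &dst_list);
	printf("split nodes: on src = %d, on dst = %d\n",
	       !list_empty(&a.src_node), !list_empty(&a.dst_node));
	return 0;
}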
This patch fixes the issue by separating out cset->mg_preload_node into ->mg_src_preload_node and ->mg_dst_preload_node, so that the src and dst preloadings don't interfere with each other. Signed-off-by: Tejun Heo Reported-by: Mukesh Ojha Reported-by: shisiyuan Link: http://lkml.kernel.org/r/1654187688-27411-1-git-send-email-shisiyuan@xiaomi.com Link: https://www.spinics.net/lists/cgroups/msg33313.html Fixes: f817de98513d ("cgroup: prepare migration path for unified hierarchy") Cc: stable@vger.kernel.org # v3.16+ Signed-off-by: Greg Kroah-Hartman --- include/linux/cgroup-defs.h | 3 ++- kernel/cgroup/cgroup.c | 39 +++++++++++++++++++++++-------------- 2 files changed, 26 insertions(+), 16 deletions(-) diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h index fee0b5547cd0..c9fafca1c30c 100644 --- a/include/linux/cgroup-defs.h +++ b/include/linux/cgroup-defs.h @@ -260,7 +260,8 @@ struct css_set { * List of csets participating in the on-going migration either as * source or destination. Protected by cgroup_mutex. */ - struct list_head mg_preload_node; + struct list_head mg_src_preload_node; + struct list_head mg_dst_preload_node; struct list_head mg_node; /* diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 0853289d321a..5046c99deba8 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -736,7 +736,8 @@ struct css_set init_css_set = { .task_iters = LIST_HEAD_INIT(init_css_set.task_iters), .threaded_csets = LIST_HEAD_INIT(init_css_set.threaded_csets), .cgrp_links = LIST_HEAD_INIT(init_css_set.cgrp_links), - .mg_preload_node = LIST_HEAD_INIT(init_css_set.mg_preload_node), + .mg_src_preload_node = LIST_HEAD_INIT(init_css_set.mg_src_preload_node), + .mg_dst_preload_node = LIST_HEAD_INIT(init_css_set.mg_dst_preload_node), .mg_node = LIST_HEAD_INIT(init_css_set.mg_node), /* @@ -1211,7 +1212,8 @@ static struct css_set *find_css_set(struct css_set *old_cset, INIT_LIST_HEAD(&cset->threaded_csets); INIT_HLIST_NODE(&cset->hlist); INIT_LIST_HEAD(&cset->cgrp_links); - INIT_LIST_HEAD(&cset->mg_preload_node); + INIT_LIST_HEAD(&cset->mg_src_preload_node); + INIT_LIST_HEAD(&cset->mg_dst_preload_node); INIT_LIST_HEAD(&cset->mg_node); /* Copy the set of subsystem state objects generated in @@ -2556,21 +2558,27 @@ int cgroup_migrate_vet_dst(struct cgroup *dst_cgrp) */ void cgroup_migrate_finish(struct cgroup_mgctx *mgctx) { - LIST_HEAD(preloaded); struct css_set *cset, *tmp_cset; lockdep_assert_held(&cgroup_mutex); spin_lock_irq(&css_set_lock); - list_splice_tail_init(&mgctx->preloaded_src_csets, &preloaded); - list_splice_tail_init(&mgctx->preloaded_dst_csets, &preloaded); - - list_for_each_entry_safe(cset, tmp_cset, &preloaded, mg_preload_node) { + list_for_each_entry_safe(cset, tmp_cset, &mgctx->preloaded_src_csets, + mg_src_preload_node) { cset->mg_src_cgrp = NULL; cset->mg_dst_cgrp = NULL; cset->mg_dst_cset = NULL; - list_del_init(&cset->mg_preload_node); + list_del_init(&cset->mg_src_preload_node); + put_css_set_locked(cset); + } + + list_for_each_entry_safe(cset, tmp_cset, &mgctx->preloaded_dst_csets, + mg_dst_preload_node) { + cset->mg_src_cgrp = NULL; + cset->mg_dst_cgrp = NULL; + cset->mg_dst_cset = NULL; + list_del_init(&cset->mg_dst_preload_node); put_css_set_locked(cset); } @@ -2612,7 +2620,7 @@ void cgroup_migrate_add_src(struct css_set *src_cset, src_cgrp = cset_cgroup_from_root(src_cset, dst_cgrp->root); - if (!list_empty(&src_cset->mg_preload_node)) + if (!list_empty(&src_cset->mg_src_preload_node)) return; WARN_ON(src_cset->mg_src_cgrp); @@ 
-2623,7 +2631,7 @@ void cgroup_migrate_add_src(struct css_set *src_cset, src_cset->mg_src_cgrp = src_cgrp; src_cset->mg_dst_cgrp = dst_cgrp; get_css_set(src_cset); - list_add_tail(&src_cset->mg_preload_node, &mgctx->preloaded_src_csets); + list_add_tail(&src_cset->mg_src_preload_node, &mgctx->preloaded_src_csets); } /** @@ -2648,7 +2656,7 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx) /* look up the dst cset for each src cset and link it to src */ list_for_each_entry_safe(src_cset, tmp_cset, &mgctx->preloaded_src_csets, - mg_preload_node) { + mg_src_preload_node) { struct css_set *dst_cset; struct cgroup_subsys *ss; int ssid; @@ -2667,7 +2675,7 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx) if (src_cset == dst_cset) { src_cset->mg_src_cgrp = NULL; src_cset->mg_dst_cgrp = NULL; - list_del_init(&src_cset->mg_preload_node); + list_del_init(&src_cset->mg_src_preload_node); put_css_set(src_cset); put_css_set(dst_cset); continue; @@ -2675,8 +2683,8 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx) src_cset->mg_dst_cset = dst_cset; - if (list_empty(&dst_cset->mg_preload_node)) - list_add_tail(&dst_cset->mg_preload_node, + if (list_empty(&dst_cset->mg_dst_preload_node)) + list_add_tail(&dst_cset->mg_dst_preload_node, &mgctx->preloaded_dst_csets); else put_css_set(dst_cset); @@ -2922,7 +2930,8 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp) goto out_finish; spin_lock_irq(&css_set_lock); - list_for_each_entry(src_cset, &mgctx.preloaded_src_csets, mg_preload_node) { + list_for_each_entry(src_cset, &mgctx.preloaded_src_csets, + mg_src_preload_node) { struct task_struct *task, *ntask; /* all tasks in src_csets need to be migrated */ From c1ea39a77cbdbcae0c34559b3506374915a2080d Mon Sep 17 00:00:00 2001 From: Filipe Manana Date: Mon, 4 Jul 2022 12:42:03 +0100 Subject: [PATCH 1333/1842] btrfs: return -EAGAIN for NOWAIT dio reads/writes on compressed and inline extents commit a4527e1853f8ff6e0b7c2dadad6268bd38427a31 upstream. When doing a direct IO read or write, we always return -ENOTBLK when we find a compressed extent (or an inline extent) so that we fallback to buffered IO. This however is not ideal in case we are in a NOWAIT context (io_uring for example), because buffered IO can block and we currently have no support for NOWAIT semantics for buffered IO, so if we need to fallback to buffered IO we should first signal the caller that we may need to block by returning -EAGAIN instead. This behaviour can also result in short reads being returned to user space, which although it's not incorrect and user space should be able to deal with partial reads, it's somewhat surprising and even some popular applications like QEMU (Link tag #1) and MariaDB (Link tag #2) don't deal with short reads properly (or at all). The short read case happens when we try to read from a range that has a non-compressed and non-inline extent followed by a compressed extent. After having read the first extent, when we find the compressed extent we return -ENOTBLK from btrfs_dio_iomap_begin(), which results in iomap to treat the request as a short read, returning 0 (success) and waiting for previously submitted bios to complete (this happens at fs/iomap/direct-io.c:__iomap_dio_rw()). After that, and while at btrfs_file_read_iter(), we call filemap_read() to use buffered IO to read the remaining data, and pass it the number of bytes we were able to read with direct IO. 
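That fallback contract is also visible to userspace callers: a read submitted with a no-wait flag that fails with -EAGAIN should be retried where blocking is allowed, rather than folded into a short read. A hedged sketch using preadv2(2) with RWF_NOWAIT as a stand-in for the io_uring submission path (not the io_uring API itself):

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct iovec iov;
	void *buf;
	ssize_t n;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	/* O_DIRECT wants an aligned buffer; 4096 covers common block sizes */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	iov.iov_base = buf;
	iov.iov_len = 4096;

	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* first attempt: do not block, as a NOWAIT submission expects */
	n = preadv2(fd, &iov, 1, 0, RWF_NOWAIT);
	if (n < 0 && errno == EAGAIN) {
		/* -EAGAIN means "would block" (e.g. a buffered fallback is
		 * needed): retry without NOWAIT instead of reporting a
		 * short read */
		n = preadv2(fd, &iov, 1, 0, 0);
	}
	if (n < 0)
		perror("preadv2");
	else
		printf("read %zd bytes\n", n);

	close(fd);
	free(buf);
	return 0;
}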
Than at filemap_read() if we get a page fault error when accessing the read buffer, we return a partial read instead of an -EFAULT error, because the number of bytes previously read is greater than zero. So fix this by returning -EAGAIN for NOWAIT direct IO when we find a compressed or an inline extent. Reported-by: Dominique MARTINET Link: https://lore.kernel.org/linux-btrfs/YrrFGO4A1jS0GI0G@atmark-techno.com/ Link: https://jira.mariadb.org/browse/MDEV-27900?focusedCommentId=216582&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-216582 Tested-by: Dominique MARTINET CC: stable@vger.kernel.org # 5.10+ Reviewed-by: Christoph Hellwig Signed-off-by: Filipe Manana Signed-off-by: David Sterba Signed-off-by: Greg Kroah-Hartman --- fs/btrfs/inode.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 4a5248097d7a..779b7745cdc4 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -7480,7 +7480,19 @@ static int btrfs_dio_iomap_begin(struct inode *inode, loff_t start, if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) || em->block_start == EXTENT_MAP_INLINE) { free_extent_map(em); - ret = -ENOTBLK; + /* + * If we are in a NOWAIT context, return -EAGAIN in order to + * fallback to buffered IO. This is not only because we can + * block with buffered IO (no support for NOWAIT semantics at + * the moment) but also to avoid returning short reads to user + * space - this happens if we were able to read some data from + * previous non-compressed extents and then when we fallback to + * buffered IO, at btrfs_file_read_iter() by calling + * filemap_read(), we fail to fault in pages for the read buffer, + * in which case filemap_read() returns a short read (the number + * of bytes previously read is > 0, so it does not return -EFAULT). + */ + ret = (flags & IOMAP_NOWAIT) ? -EAGAIN : -ENOTBLK; goto unlock_err; } From 2e760fe05d3ea5b5addb02691ec8a29e601dff56 Mon Sep 17 00:00:00 2001 From: Dmitry Osipenko Date: Thu, 30 Jun 2022 23:06:00 +0300 Subject: [PATCH 1334/1842] drm/panfrost: Put mapping instead of shmem obj on panfrost_mmu_map_fault_addr() error commit fb6e0637ab7ebd8e61fe24f4d663c4bae99cfa62 upstream. When panfrost_mmu_map_fault_addr() fails, the BO's mapping should be unreferenced and not the shmem object which backs the mapping. Cc: stable@vger.kernel.org Fixes: bdefca2d8dc0 ("drm/panfrost: Add the panfrost_gem_mapping concept") Reviewed-by: Steven Price Signed-off-by: Dmitry Osipenko Signed-off-by: Steven Price Link: https://patchwork.freedesktop.org/patch/msgid/20220630200601.1884120-2-dmitry.osipenko@collabora.com Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c index 7fc45b13a52c..13596961ae17 100644 --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c @@ -491,7 +491,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as, err_pages: drm_gem_shmem_put_pages(&bo->base); err_bo: - drm_gem_object_put(&bo->base.base); + panfrost_gem_mapping_put(bomapping); return ret; } From 0581613df7f9a4c5fac096ce1d5fb15b7b994240 Mon Sep 17 00:00:00 2001 From: Dmitry Osipenko Date: Thu, 30 Jun 2022 23:06:01 +0300 Subject: [PATCH 1335/1842] drm/panfrost: Fix shrinker list corruption by madvise IOCTL commit 9fc33eaaa979d112d10fea729edcd2a2e21aa912 upstream. 
Calling madvise IOCTL twice on BO causes memory shrinker list corruption and crashes kernel because BO is already on the list and it's added to the list again, while BO should be removed from the list before it's re-added. Fix it. Cc: stable@vger.kernel.org Fixes: 013b65101315 ("drm/panfrost: Add madvise and shrinker support") Acked-by: Alyssa Rosenzweig Reviewed-by: Steven Price Signed-off-by: Dmitry Osipenko Signed-off-by: Steven Price Link: https://patchwork.freedesktop.org/patch/msgid/20220630200601.1884120-3-dmitry.osipenko@collabora.com Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/panfrost/panfrost_drv.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c index a70261809cdd..1dfc457bbefc 100644 --- a/drivers/gpu/drm/panfrost/panfrost_drv.c +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c @@ -427,8 +427,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data, if (args->retained) { if (args->madv == PANFROST_MADV_DONTNEED) - list_add_tail(&bo->base.madv_list, - &pfdev->shrinker_list); + list_move_tail(&bo->base.madv_list, + &pfdev->shrinker_list); else if (args->madv == PANFROST_MADV_WILLNEED) list_del_init(&bo->base.madv_list); } From 14e63942d63e278d271aac211476b81b2c60aaf6 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Wed, 13 Jul 2022 17:49:15 +1000 Subject: [PATCH 1336/1842] fs/remap: constrain dedupe of EOF blocks MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 5750676b64a561f7ec920d7c6ba130fc9c7378f3 upstream. If dedupe of an EOF block is not constrainted to match against only other EOF blocks with the same EOF offset into the block, it can match against any other block that has the same matching initial bytes in it, even if the bytes beyond EOF in the source file do not match. Fix this by constraining the EOF block matching to only match against other EOF blocks that have identical EOF offsets and data. This allows "whole file dedupe" to continue to work without allowing eof blocks to randomly match against partial full blocks with the same data. Reported-by: Ansgar Lößer Fixes: 1383a7ed6749 ("vfs: check file ranges before cloning files") Link: https://lore.kernel.org/linux-fsdevel/a7c93559-4ba1-df2f-7a85-55a143696405@tu-darmstadt.de/ Signed-off-by: Dave Chinner Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- fs/remap_range.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/remap_range.c b/fs/remap_range.c index e6099beefa97..e8e00e217d6c 100644 --- a/fs/remap_range.c +++ b/fs/remap_range.c @@ -71,7 +71,8 @@ static int generic_remap_checks(struct file *file_in, loff_t pos_in, * Otherwise, make sure the count is also block-aligned, having * already confirmed the starting offsets' block alignment. */ - if (pos_in + count == size_in) { + if (pos_in + count == size_in && + (!(remap_flags & REMAP_FILE_DEDUP) || pos_out + count == size_out)) { bcount = ALIGN(size_in, bs) - pos_in; } else { if (!IS_ALIGNED(count, bs)) From ea4dbcfb9532a6ac79755e5043d988859961d29c Mon Sep 17 00:00:00 2001 From: Ryusuke Konishi Date: Thu, 23 Jun 2022 17:54:01 +0900 Subject: [PATCH 1337/1842] nilfs2: fix incorrect masking of permission flags for symlinks commit 5924e6ec1585445f251ea92713eb15beb732622a upstream. 
The permission flags of newly created symlinks are wrongly dropped on nilfs2 with the current umask value even though symlinks should have 777 (rwxrwxrwx) permissions: $ umask 0022 $ touch file && ln -s file symlink; ls -l file symlink -rw-r--r--. 1 root root 0 Jun 23 16:29 file lrwxr-xr-x. 1 root root 4 Jun 23 16:29 symlink -> file This fixes the bug by inserting a missing check that excludes symlinks. Link: https://lkml.kernel.org/r/1655974441-5612-1-git-send-email-konishi.ryusuke@gmail.com Signed-off-by: Ryusuke Konishi Reported-by: Tommy Pettersson Reported-by: Ciprian Craciun Tested-by: Ryusuke Konishi Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- fs/nilfs2/nilfs.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h index 9ca165bc97d2..ace27a89fbb0 100644 --- a/fs/nilfs2/nilfs.h +++ b/fs/nilfs2/nilfs.h @@ -198,6 +198,9 @@ static inline int nilfs_acl_chmod(struct inode *inode) static inline int nilfs_init_acl(struct inode *inode, struct inode *dir) { + if (S_ISLNK(inode->i_mode)) + return 0; + inode->i_mode &= ~current_umask(); return 0; } From 41007669fc3b4c968c54b81088c2bbc18d8affe4 Mon Sep 17 00:00:00 2001 From: Geert Uytterhoeven Date: Mon, 20 Jun 2022 09:01:43 +0200 Subject: [PATCH 1338/1842] sh: convert nommu io{re,un}map() to static inline functions commit d684e0a52d36f8939eda30a0f31ee235ee4ee741 upstream. Recently, nommu iounmap() was converted from a static inline function to a macro again, basically reverting commit 4580ba4ad2e6b8dd ("sh: Convert iounmap() macros to inline functions"). With -Werror, this leads to build failures like: drivers/iio/adc/xilinx-ams.c: In function `ams_iounmap_ps': drivers/iio/adc/xilinx-ams.c:1195:14: error: unused variable `ams' [-Werror=unused-variable] 1195 | struct ams *ams = data; | ^~~ Fix this by replacing the macros for ioremap() and iounmap() by static inline functions, based on . Link: https://lkml.kernel.org/r/8d1b1766260961799b04035e7bc39a7f59729f72.1655708312.git.geert+renesas@glider.be Fixes: 13f1fc870dd74713 ("sh: move the ioremap implementation out of line") Signed-off-by: Geert Uytterhoeven Reported-by: kernel test robot Reported-by: Jonathan Cameron Acked-by: Jonathan Cameron Reviewed-by: Christoph Hellwig Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- arch/sh/include/asm/io.h | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h index 6d5c6463bc07..de99a19e72d7 100644 --- a/arch/sh/include/asm/io.h +++ b/arch/sh/include/asm/io.h @@ -271,8 +271,12 @@ static inline void __iomem *ioremap_prot(phys_addr_t offset, unsigned long size, #endif /* CONFIG_HAVE_IOREMAP_PROT */ #else /* CONFIG_MMU */ -#define iounmap(addr) do { } while (0) -#define ioremap(offset, size) ((void __iomem *)(unsigned long)(offset)) +static inline void __iomem *ioremap(phys_addr_t offset, size_t size) +{ + return (void __iomem *)(unsigned long)offset; +} + +static inline void iounmap(volatile void __iomem *addr) { } #endif /* CONFIG_MMU */ #define ioremap_uc ioremap From 9d883b3f000d405d19b3484e3d8d97796e6854c6 Mon Sep 17 00:00:00 2001 From: Xiu Jianfeng Date: Fri, 27 May 2022 19:17:26 +0800 Subject: [PATCH 1339/1842] Revert "evm: Fix memleak in init_desc" commit 51dd64bb99e4478fc5280171acd8e1b529eadaf7 upstream. This reverts commit ccf11dbaa07b328fa469415c362d33459c140a37. Commit ccf11dbaa07b ("evm: Fix memleak in init_desc") said there is memleak in init_desc. 
That may be incorrect, as we can see, tmp_tfm is saved in one of the two global variables hmac_tfm or evm_tfm[hash_algo], then if init_desc is called next time, there is no need to alloc tfm again, so in the error path of kmalloc desc or crypto_shash_init(desc), It is not a problem without freeing tmp_tfm. And also that commit did not reset the global variable to NULL after freeing tmp_tfm and this makes *tfm a dangling pointer which may cause a UAF issue. Reported-by: Guozihua (Scott) Signed-off-by: Xiu Jianfeng Signed-off-by: Mimi Zohar Signed-off-by: Greg Kroah-Hartman --- security/integrity/evm/evm_crypto.c | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c index a6dd47eb086d..168c3b78ac47 100644 --- a/security/integrity/evm/evm_crypto.c +++ b/security/integrity/evm/evm_crypto.c @@ -73,7 +73,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo) { long rc; const char *algo; - struct crypto_shash **tfm, *tmp_tfm = NULL; + struct crypto_shash **tfm, *tmp_tfm; struct shash_desc *desc; if (type == EVM_XATTR_HMAC) { @@ -118,16 +118,13 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo) alloc: desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(*tfm), GFP_KERNEL); - if (!desc) { - crypto_free_shash(tmp_tfm); + if (!desc) return ERR_PTR(-ENOMEM); - } desc->tfm = *tfm; rc = crypto_shash_init(desc); if (rc) { - crypto_free_shash(tmp_tfm); kfree(desc); return ERR_PTR(rc); } From 91f90b571f1a23f5b8a9c2b68a9aa5d6981a3c3d Mon Sep 17 00:00:00 2001 From: Baokun Li Date: Thu, 28 Apr 2022 21:40:31 +0800 Subject: [PATCH 1340/1842] ext4: fix race condition between ext4_write and ext4_convert_inline_data commit f87c7a4b084afc13190cbb263538e444cb2b392a upstream. Hulk Robot reported a BUG_ON: ================================================================== EXT4-fs error (device loop3): ext4_mb_generate_buddy:805: group 0, block bitmap and bg descriptor inconsistent: 25 vs 31513 free clusters kernel BUG at fs/ext4/ext4_jbd2.c:53! invalid opcode: 0000 [#1] SMP KASAN PTI CPU: 0 PID: 25371 Comm: syz-executor.3 Not tainted 5.10.0+ #1 RIP: 0010:ext4_put_nojournal fs/ext4/ext4_jbd2.c:53 [inline] RIP: 0010:__ext4_journal_stop+0x10e/0x110 fs/ext4/ext4_jbd2.c:116 [...] Call Trace: ext4_write_inline_data_end+0x59a/0x730 fs/ext4/inline.c:795 generic_perform_write+0x279/0x3c0 mm/filemap.c:3344 ext4_buffered_write_iter+0x2e3/0x3d0 fs/ext4/file.c:270 ext4_file_write_iter+0x30a/0x11c0 fs/ext4/file.c:520 do_iter_readv_writev+0x339/0x3c0 fs/read_write.c:732 do_iter_write+0x107/0x430 fs/read_write.c:861 vfs_writev fs/read_write.c:934 [inline] do_pwritev+0x1e5/0x380 fs/read_write.c:1031 [...] 
================================================================== Above issue may happen as follows: cpu1 cpu2 __________________________|__________________________ do_pwritev vfs_writev do_iter_write ext4_file_write_iter ext4_buffered_write_iter generic_perform_write ext4_da_write_begin vfs_fallocate ext4_fallocate ext4_convert_inline_data ext4_convert_inline_data_nolock ext4_destroy_inline_data_nolock clear EXT4_STATE_MAY_INLINE_DATA ext4_map_blocks ext4_ext_map_blocks ext4_mb_new_blocks ext4_mb_regular_allocator ext4_mb_good_group_nolock ext4_mb_init_group ext4_mb_init_cache ext4_mb_generate_buddy --> error ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) ext4_restore_inline_data set EXT4_STATE_MAY_INLINE_DATA ext4_block_write_begin ext4_da_write_end ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA) ext4_write_inline_data_end handle=NULL ext4_journal_stop(handle) __ext4_journal_stop ext4_put_nojournal(handle) ref_cnt = (unsigned long)handle BUG_ON(ref_cnt == 0) ---> BUG_ON The lock held by ext4_convert_inline_data is xattr_sem, but the lock held by generic_perform_write is i_rwsem. Therefore, the two locks can be concurrent. To solve above issue, we add inode_lock() for ext4_convert_inline_data(). At the same time, move ext4_convert_inline_data() in front of ext4_punch_hole(), remove similar handling from ext4_punch_hole(). Fixes: 0c8d414f163f ("ext4: let fallocate handle inline data correctly") Cc: stable@vger.kernel.org Reported-by: Hulk Robot Signed-off-by: Baokun Li Reviewed-by: Jan Kara Link: https://lore.kernel.org/r/20220428134031.4153381-1-libaokun1@huawei.com Signed-off-by: Theodore Ts'o Signed-off-by: Tadeusz Struk Signed-off-by: Greg Kroah-Hartman --- fs/ext4/extents.c | 9 +++++---- fs/ext4/inode.c | 9 --------- 2 files changed, 5 insertions(+), 13 deletions(-) diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c index 6641b74ad462..0f49bf547b84 100644 --- a/fs/ext4/extents.c +++ b/fs/ext4/extents.c @@ -4691,16 +4691,17 @@ long ext4_fallocate(struct file *file, int mode, loff_t offset, loff_t len) return -EOPNOTSUPP; ext4_fc_start_update(inode); + inode_lock(inode); + ret = ext4_convert_inline_data(inode); + inode_unlock(inode); + if (ret) + goto exit; if (mode & FALLOC_FL_PUNCH_HOLE) { ret = ext4_punch_hole(file, offset, len); goto exit; } - ret = ext4_convert_inline_data(inode); - if (ret) - goto exit; - if (mode & FALLOC_FL_COLLAPSE_RANGE) { ret = ext4_collapse_range(file, offset, len); goto exit; diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 72e3f55f1e07..bd0d0a10ca42 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -4042,15 +4042,6 @@ int ext4_punch_hole(struct file *file, loff_t offset, loff_t length) trace_ext4_punch_hole(inode, offset, length, 0); - ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA); - if (ext4_has_inline_data(inode)) { - down_write(&EXT4_I(inode)->i_mmap_sem); - ret = ext4_convert_inline_data(inode); - up_write(&EXT4_I(inode)->i_mmap_sem); - if (ret) - return ret; - } - /* * Write out all dirty pages to avoid race conditions * Then release them. From d8d42c92fe564ed9dad36a5b0dcfc3b12bfb7965 Mon Sep 17 00:00:00 2001 From: Kris Bahnsen Date: Thu, 30 Jun 2022 14:03:27 -0700 Subject: [PATCH 1341/1842] ARM: dts: imx6qdl-ts7970: Fix ngpio typo and count [ Upstream commit e95ea0f687e679fcb0a3a67d0755b81ee7d60db0 ] Device-tree incorrectly used "ngpio" which caused the driver to fallback to 32 ngpios. This platform has 62 GPIO registers. 
Fixes: 9ff8e9fccef9 ("ARM: dts: TS-7970: add basic device tree") Signed-off-by: Kris Bahnsen Reviewed-by: Fabio Estevam Signed-off-by: Shawn Guo Signed-off-by: Sasha Levin --- arch/arm/boot/dts/imx6qdl-ts7970.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/imx6qdl-ts7970.dtsi b/arch/arm/boot/dts/imx6qdl-ts7970.dtsi index e6aa0c33754d..966038ecc5bf 100644 --- a/arch/arm/boot/dts/imx6qdl-ts7970.dtsi +++ b/arch/arm/boot/dts/imx6qdl-ts7970.dtsi @@ -226,7 +226,7 @@ gpio8: gpio@28 { reg = <0x28>; #gpio-cells = <2>; gpio-controller; - ngpio = <32>; + ngpios = <62>; }; sgtl5000: codec@a { From 0c300e294d1c8200d26d0eb561e93204cd868318 Mon Sep 17 00:00:00 2001 From: Cristian Ciocaltea Date: Wed, 6 Jul 2022 13:06:22 +0300 Subject: [PATCH 1342/1842] spi: amd: Limit max transfer and message size [ Upstream commit 6ece49c56965544262523dae4a071ace3db63507 ] Enabling the SPI CS35L41 audio codec driver for Steam Deck [1] revealed a problem with the current AMD SPI controller driver implementation, consisting of an unrecoverable system hang. The issue can be prevented if we ensure the max transfer size and the max message size do not exceed the FIFO buffer size. According to the implementation of the downstream driver, the AMD SPI controller is not able to handle more than 70 bytes per transfer, which corresponds to the size of the FIFO buffer. Hence, let's fix this by setting the SPI limits mentioned above. [1] https://lore.kernel.org/r/20220621213819.262537-1-cristian.ciocaltea@collabora.com Reported-by: Anastasios Vacharakis Fixes: bbb336f39efc ("spi: spi-amd: Add AMD SPI controller driver support") Signed-off-by: Cristian Ciocaltea Link: https://lore.kernel.org/r/20220706100626.1234731-2-cristian.ciocaltea@collabora.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- drivers/spi/spi-amd.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/spi/spi-amd.c b/drivers/spi/spi-amd.c index 7f629544060d..a027cfd49df8 100644 --- a/drivers/spi/spi-amd.c +++ b/drivers/spi/spi-amd.c @@ -28,6 +28,7 @@ #define AMD_SPI_RX_COUNT_REG 0x4B #define AMD_SPI_STATUS_REG 0x4C +#define AMD_SPI_FIFO_SIZE 70 #define AMD_SPI_MEM_SIZE 200 /* M_CMD OP codes for SPI */ @@ -245,6 +246,11 @@ static int amd_spi_master_transfer(struct spi_master *master, return 0; } +static size_t amd_spi_max_transfer_size(struct spi_device *spi) +{ + return AMD_SPI_FIFO_SIZE; +} + static int amd_spi_probe(struct platform_device *pdev) { struct device *dev = &pdev->dev; @@ -278,6 +284,8 @@ static int amd_spi_probe(struct platform_device *pdev) master->flags = SPI_MASTER_HALF_DUPLEX; master->setup = amd_spi_master_setup; master->transfer_one_message = amd_spi_master_transfer; + master->max_transfer_size = amd_spi_max_transfer_size; + master->max_message_size = amd_spi_max_transfer_size; /* Register the controller with SPI framework */ err = devm_spi_register_master(dev, master); From 3d82fba7d3632475e8005298d52ebd0202f8e24d Mon Sep 17 00:00:00 2001 From: Ard Biesheuvel Date: Tue, 31 May 2022 09:53:42 +0100 Subject: [PATCH 1343/1842] ARM: 9209/1: Spectre-BHB: avoid pr_info() every time a CPU comes out of idle [ Upstream commit 0609e200246bfd3b7516091c491bec4308349055 ] Jon reports that the Spectre-BHB init code is filling up the kernel log with spurious notifications about which mitigation has been enabled, every time any CPU comes out of a low power state. 
Given that Spectre-BHB mitigations are system wide, only a single mitigation can be enabled, and we already print an error if two types of CPUs coexist in a single system that require different Spectre-BHB mitigations. This means that the pr_info() that describes the selected mitigation does not need to be emitted for each CPU anyway, and so we can simply emit it only once. In order to clarify the above in the log message, update it to describe that the selected mitigation will be enabled on all CPUs, including ones that are unaffected. If another CPU comes up later that is affected and requires a different mitigation, we report an error as before. Fixes: b9baf5c8c5c3 ("ARM: Spectre-BHB workaround") Tested-by: Jon Hunter Signed-off-by: Ard Biesheuvel Signed-off-by: Russell King (Oracle) Signed-off-by: Sasha Levin --- arch/arm/mm/proc-v7-bugs.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c index f9730eba0632..8bc7a2d6d6c7 100644 --- a/arch/arm/mm/proc-v7-bugs.c +++ b/arch/arm/mm/proc-v7-bugs.c @@ -208,10 +208,10 @@ static int spectre_bhb_install_workaround(int method) return SPECTRE_VULNERABLE; spectre_bhb_method = method; - } - pr_info("CPU%u: Spectre BHB: using %s workaround\n", - smp_processor_id(), spectre_bhb_method_name(method)); + pr_info("CPU%u: Spectre BHB: enabling %s workaround for all CPUs\n", + smp_processor_id(), spectre_bhb_method_name(method)); + } return SPECTRE_MITIGATED; } From d6cab2e06c33533d45586cb6161cdc9a754b29a1 Mon Sep 17 00:00:00 2001 From: Zhen Lei Date: Mon, 13 Jun 2022 15:05:41 +0100 Subject: [PATCH 1344/1842] ARM: 9210/1: Mark the FDT_FIXED sections as shareable [ Upstream commit 598f0a99fa8a35be44b27106b43ddc66417af3b1 ] commit 7a1be318f579 ("ARM: 9012/1: move device tree mapping out of linear region") use FDT_FIXED_BASE to map the whole FDT_FIXED_SIZE memory area which contains fdt. But it only reserves the exact physical memory that fdt occupied. Unfortunately, this mapping is non-shareable. An illegal or speculative read access can bring the RAM content from non-fdt zone into cache, PIPT makes it to be hit by subsequently read access through shareable mapping(such as linear mapping), and the cache consistency between cores is lost due to non-shareable property. |<---------FDT_FIXED_SIZE------>| | | ------------------------------- | | | | ------------------------------- 1. CoreA read through MT_ROM mapping, the old data is loaded into the cache. 2. CoreB write to update data through linear mapping. CoreA received the notification to invalid the corresponding cachelines, but the property non-shareable makes it to be ignored. 3. CoreA read through linear mapping, cache hit, the old data is read. To eliminate this risk, add a new memory type MT_MEMORY_RO. Compared to MT_ROM, it is shareable and non-executable. Here's an example: list_del corruption. prev->next should be c0ecbf74, but was c08410dc kernel BUG at lib/list_debug.c:53! ... ... PC is at __list_del_entry_valid+0x58/0x98 LR is at __list_del_entry_valid+0x58/0x98 psr: 60000093 sp : c0ecbf30 ip : 00000000 fp : 00000001 r10: c08410d0 r9 : 00000001 r8 : c0825e0c r7 : 20000013 r6 : c08410d0 r5 : c0ecbf74 r4 : c0ecbf74 r3 : c0825d08 r2 : 00000000 r1 : df7ce6f4 r0 : 00000044 ... ... Stack: (0xc0ecbf30 to 0xc0ecc000) bf20: c0ecbf74 c0164fd0 c0ecbf70 c0165170 bf40: c0eca000 c0840c00 c0840c00 c0824500 c0825e0c c0189bbc c088f404 60000013 bf60: 60000013 c0e85100 000004ec 00000000 c0ebcdc0 c0ecbf74 c0ecbf74 c0825d08 ... ... 
< next prev > (__list_del_entry_valid) from (__list_del_entry+0xc/0x20) (__list_del_entry) from (finish_swait+0x60/0x7c) (finish_swait) from (rcu_gp_kthread+0x560/0xa20) (rcu_gp_kthread) from (kthread+0x14c/0x15c) (kthread) from (ret_from_fork+0x14/0x24) The faulty list node to be deleted is a local variable, its address is c0ecbf74. The dumped stack shows that 'prev' = c0ecbf74, but its value before lib/list_debug.c:53 is c08410dc. A large amount of printing results in swapping out the cacheline containing the old data(MT_ROM mapping is read only, so the cacheline cannot be dirty), and the subsequent dump operation obtains new data from the DDR. Fixes: 7a1be318f579 ("ARM: 9012/1: move device tree mapping out of linear region") Suggested-by: Ard Biesheuvel Signed-off-by: Zhen Lei Reviewed-by: Ard Biesheuvel Reviewed-by: Kefeng Wang Signed-off-by: Russell King (Oracle) Signed-off-by: Sasha Levin --- arch/arm/include/asm/mach/map.h | 1 + arch/arm/mm/mmu.c | 15 ++++++++++++++- 2 files changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/arm/include/asm/mach/map.h b/arch/arm/include/asm/mach/map.h index 92282558caf7..2b8970d8e5a2 100644 --- a/arch/arm/include/asm/mach/map.h +++ b/arch/arm/include/asm/mach/map.h @@ -27,6 +27,7 @@ enum { MT_HIGH_VECTORS, MT_MEMORY_RWX, MT_MEMORY_RW, + MT_MEMORY_RO, MT_ROM, MT_MEMORY_RWX_NONCACHED, MT_MEMORY_RW_DTCM, diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 3e3001998460..86f213f1b44b 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -296,6 +296,13 @@ static struct mem_type mem_types[] __ro_after_init = { .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE, .domain = DOMAIN_KERNEL, }, + [MT_MEMORY_RO] = { + .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | + L_PTE_XN | L_PTE_RDONLY, + .prot_l1 = PMD_TYPE_TABLE, + .prot_sect = PMD_TYPE_SECT, + .domain = DOMAIN_KERNEL, + }, [MT_ROM] = { .prot_sect = PMD_TYPE_SECT, .domain = DOMAIN_KERNEL, @@ -490,6 +497,7 @@ static void __init build_mem_type_table(void) /* Also setup NX memory mapping */ mem_types[MT_MEMORY_RW].prot_sect |= PMD_SECT_XN; + mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_XN; } if (cpu_arch >= CPU_ARCH_ARMv7 && (cr & CR_TRE)) { /* @@ -569,6 +577,7 @@ static void __init build_mem_type_table(void) mem_types[MT_ROM].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE; mem_types[MT_MINICLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE; mem_types[MT_CACHECLEAN].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE; + mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_APX|PMD_SECT_AP_WRITE; #endif /* @@ -588,6 +597,8 @@ static void __init build_mem_type_table(void) mem_types[MT_MEMORY_RWX].prot_pte |= L_PTE_SHARED; mem_types[MT_MEMORY_RW].prot_sect |= PMD_SECT_S; mem_types[MT_MEMORY_RW].prot_pte |= L_PTE_SHARED; + mem_types[MT_MEMORY_RO].prot_sect |= PMD_SECT_S; + mem_types[MT_MEMORY_RO].prot_pte |= L_PTE_SHARED; mem_types[MT_MEMORY_DMA_READY].prot_pte |= L_PTE_SHARED; mem_types[MT_MEMORY_RWX_NONCACHED].prot_sect |= PMD_SECT_S; mem_types[MT_MEMORY_RWX_NONCACHED].prot_pte |= L_PTE_SHARED; @@ -648,6 +659,8 @@ static void __init build_mem_type_table(void) mem_types[MT_MEMORY_RWX].prot_pte |= kern_pgprot; mem_types[MT_MEMORY_RW].prot_sect |= ecc_mask | cp->pmd; mem_types[MT_MEMORY_RW].prot_pte |= kern_pgprot; + mem_types[MT_MEMORY_RO].prot_sect |= ecc_mask | cp->pmd; + mem_types[MT_MEMORY_RO].prot_pte |= kern_pgprot; mem_types[MT_MEMORY_DMA_READY].prot_pte |= kern_pgprot; mem_types[MT_MEMORY_RWX_NONCACHED].prot_sect |= ecc_mask; mem_types[MT_ROM].prot_sect |= cp->pmd; @@ -1342,7 +1355,7 @@ static void 
__init devicemaps_init(const struct machine_desc *mdesc) map.pfn = __phys_to_pfn(__atags_pointer & SECTION_MASK); map.virtual = FDT_FIXED_BASE; map.length = FDT_FIXED_SIZE; - map.type = MT_ROM; + map.type = MT_MEMORY_RO; create_mapping(&map); } From c87d5211be84e682b9d0ae781368446acf4f2639 Mon Sep 17 00:00:00 2001 From: Tariq Toukan Date: Mon, 6 Jun 2022 21:20:29 +0300 Subject: [PATCH 1345/1842] net/mlx5e: kTLS, Fix build time constant test in TX [ Upstream commit 6cc2714e85754a621219693ea8aa3077d6fca0cb ] Use the correct constant (TLS_DRIVER_STATE_SIZE_TX) in the comparison against the size of the private TX TLS driver context. Fixes: df8d866770f9 ("net/mlx5e: kTLS, Use kernel API to extract private offload context") Signed-off-by: Tariq Toukan Reviewed-by: Maxim Mikityanskiy Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c index b140e13fdcc8..679747db3110 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c @@ -63,8 +63,7 @@ mlx5e_set_ktls_tx_priv_ctx(struct tls_context *tls_ctx, struct mlx5e_ktls_offload_context_tx **ctx = __tls_driver_ctx(tls_ctx, TLS_OFFLOAD_CTX_DIR_TX); - BUILD_BUG_ON(sizeof(struct mlx5e_ktls_offload_context_tx *) > - TLS_OFFLOAD_CONTEXT_SIZE_TX); + BUILD_BUG_ON(sizeof(priv_tx) > TLS_DRIVER_STATE_SIZE_TX); *ctx = priv_tx; } From 6eb1d0c370afbfb717824ba148e67243c85cfa72 Mon Sep 17 00:00:00 2001 From: Tariq Toukan Date: Mon, 6 Jun 2022 21:21:10 +0300 Subject: [PATCH 1346/1842] net/mlx5e: kTLS, Fix build time constant test in RX [ Upstream commit 2ec6cf9b742a5c18982861322fa5de6510f8f57e ] Use the correct constant (TLS_DRIVER_STATE_SIZE_RX) in the comparison against the size of the private RX TLS driver context. Fixes: 1182f3659357 ("net/mlx5e: kTLS, Add kTLS RX HW offload support") Signed-off-by: Tariq Toukan Reviewed-by: Maxim Mikityanskiy Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c index d06532d0baa4..634777fd7db9 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c @@ -231,8 +231,7 @@ mlx5e_set_ktls_rx_priv_ctx(struct tls_context *tls_ctx, struct mlx5e_ktls_offload_context_rx **ctx = __tls_driver_ctx(tls_ctx, TLS_OFFLOAD_CTX_DIR_RX); - BUILD_BUG_ON(sizeof(struct mlx5e_ktls_offload_context_rx *) > - TLS_OFFLOAD_CONTEXT_SIZE_RX); + BUILD_BUG_ON(sizeof(priv_rx) > TLS_DRIVER_STATE_SIZE_RX); *ctx = priv_rx; } From 4cb5c1950b7ad37f57ced2bea8c5b4bcb755b61a Mon Sep 17 00:00:00 2001 From: Gal Pressman Date: Mon, 27 Jun 2022 15:05:53 +0300 Subject: [PATCH 1347/1842] net/mlx5e: Fix capability check for updating vnic env counters [ Upstream commit 452133dd580811f184e76b1402983182ee425298 ] The existing capability check for vnic env counters only checks for receive steering discards, although we need the counters update for the exposed internal queue oob counter as well. This could result in the latter counter not being updated correctly when the receive steering discards counter is not supported. 
Fix that by checking whether any counter is supported instead of only the steering counter capability. Fixes: 0cfafd4b4ddf ("net/mlx5e: Add device out of buffer counter") Signed-off-by: Gal Pressman Reviewed-by: Tariq Toukan Signed-off-by: Saeed Mahameed Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlx5/core/en_stats.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c index 78f6a6f0a7e0..ff4f10d0f090 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c @@ -536,7 +536,7 @@ static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(vnic_env) u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {}; struct mlx5_core_dev *mdev = priv->mdev; - if (!MLX5_CAP_GEN(priv->mdev, nic_receive_steering_discard)) + if (!mlx5e_stats_grp_vnic_env_num_stats(priv)) return; MLX5_SET(query_vnic_env_in, in, opcode, MLX5_CMD_OP_QUERY_VNIC_ENV); From 592f3bad00b7e2a95a6fb7a4f9e742c061c9c3c1 Mon Sep 17 00:00:00 2001 From: Hangyu Hua Date: Fri, 24 Jun 2022 06:04:06 -0700 Subject: [PATCH 1348/1842] drm/i915: fix a possible refcount leak in intel_dp_add_mst_connector() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 85144df9ff4652816448369de76897c57cbb1b93 ] If drm_connector_init fails, intel_connector_free will be called to take care of proper free. So it is necessary to drop the refcount of port before intel_connector_free. Fixes: 091a4f91942a ("drm/i915: Handle drm-layer errors in intel_dp_add_mst_connector") Signed-off-by: Hangyu Hua Reviewed-by: José Roberto de Souza Link: https://patchwork.freedesktop.org/patch/msgid/20220624130406.17996-1-jose.souza@intel.com Signed-off-by: José Roberto de Souza (cherry picked from commit cea9ed611e85d36a05db52b6457bf584b7d969e2) Signed-off-by: Rodrigo Vivi Signed-off-by: Sasha Levin --- drivers/gpu/drm/i915/display/intel_dp_mst.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c index ecaa538b2d35..ef7878193491 100644 --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c @@ -790,6 +790,7 @@ static struct drm_connector *intel_dp_add_mst_connector(struct drm_dp_mst_topolo ret = drm_connector_init(dev, connector, &intel_dp_mst_connector_funcs, DRM_MODE_CONNECTOR_DisplayPort); if (ret) { + drm_dp_mst_put_port_malloc(port); intel_connector_free(intel_connector); return NULL; } From 831e190175f10652be93b08436cc7bf2e62e4bb6 Mon Sep 17 00:00:00 2001 From: Huaxin Lu Date: Tue, 5 Jul 2022 13:14:17 +0800 Subject: [PATCH 1349/1842] ima: Fix a potential integer overflow in ima_appraise_measurement [ Upstream commit d2ee2cfc4aa85ff6a2a3b198a3a524ec54e3d999 ] When the ima-modsig is enabled, the rc passed to evm_verifyxattr() may be negative, which may cause the integer overflow problem. 
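The overflow here is the usual signed-to-unsigned conversion: the xattr length parameter of evm_verifyxattr() is an unsigned size, so a negative errno-style rc wraps around to a huge length. A minimal hedged illustration (the errno value below is arbitrary):

#include <stdio.h>

int main(void)
{
	int rc = -95;		/* arbitrary negative errno-style result */
	size_t len = rc;	/* implicit conversion, as when rc is passed
				 * as an unsigned size: wraps to a huge value */

	printf("rc      = %d\n", rc);
	printf("len     = %zu\n", len);
	printf("clamped = %zu\n", (size_t)(rc < 0 ? 0 : rc));	/* the fix's pattern */
	return 0;
}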
Fixes: 39b07096364a ("ima: Implement support for module-style appended signatures") Signed-off-by: Huaxin Lu Signed-off-by: Mimi Zohar Signed-off-by: Sasha Levin --- security/integrity/ima/ima_appraise.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c index 3dd8c2e4314e..7122a359a268 100644 --- a/security/integrity/ima/ima_appraise.c +++ b/security/integrity/ima/ima_appraise.c @@ -396,7 +396,8 @@ int ima_appraise_measurement(enum ima_hooks func, goto out; } - status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, rc, iint); + status = evm_verifyxattr(dentry, XATTR_NAME_IMA, xattr_value, + rc < 0 ? 0 : rc, iint); switch (status) { case INTEGRITY_PASS: case INTEGRITY_PASS_IMMUTABLE: From f160a1f97091dc52e66b319829e60c7eeebfc9d4 Mon Sep 17 00:00:00 2001 From: Francesco Dolcini Date: Fri, 24 Jun 2022 12:13:01 +0200 Subject: [PATCH 1350/1842] ASoC: sgtl5000: Fix noise on shutdown/remove [ Upstream commit 040e3360af3736348112d29425bf5d0be5b93115 ] Put the SGTL5000 in a silent/safe state on shutdown/remove, this is required since the SGTL5000 produces a constant noise on its output after it is configured and its clock is removed. Without this change this is happening every time the module is unbound/removed or from reboot till the clock is enabled again. The issue was experienced on both a Toradex Colibri/Apalis iMX6, but can be easily reproduced everywhere just playing something on the codec and after that removing/unbinding the driver. Fixes: 9b34e6cc3bc2 ("ASoC: Add Freescale SGTL5000 codec support") Signed-off-by: Francesco Dolcini Reviewed-by: Fabio Estevam Link: https://lore.kernel.org/r/20220624101301.441314-1-francesco.dolcini@toradex.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/sgtl5000.c | 9 +++++++++ sound/soc/codecs/sgtl5000.h | 1 + 2 files changed, 10 insertions(+) diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c index 4c0e87e22b97..f066e016a874 100644 --- a/sound/soc/codecs/sgtl5000.c +++ b/sound/soc/codecs/sgtl5000.c @@ -1797,6 +1797,9 @@ static int sgtl5000_i2c_remove(struct i2c_client *client) { struct sgtl5000_priv *sgtl5000 = i2c_get_clientdata(client); + regmap_write(sgtl5000->regmap, SGTL5000_CHIP_DIG_POWER, SGTL5000_DIG_POWER_DEFAULT); + regmap_write(sgtl5000->regmap, SGTL5000_CHIP_ANA_POWER, SGTL5000_ANA_POWER_DEFAULT); + clk_disable_unprepare(sgtl5000->mclk); regulator_bulk_disable(sgtl5000->num_supplies, sgtl5000->supplies); regulator_bulk_free(sgtl5000->num_supplies, sgtl5000->supplies); @@ -1804,6 +1807,11 @@ static int sgtl5000_i2c_remove(struct i2c_client *client) return 0; } +static void sgtl5000_i2c_shutdown(struct i2c_client *client) +{ + sgtl5000_i2c_remove(client); +} + static const struct i2c_device_id sgtl5000_id[] = { {"sgtl5000", 0}, {}, @@ -1824,6 +1832,7 @@ static struct i2c_driver sgtl5000_i2c_driver = { }, .probe = sgtl5000_i2c_probe, .remove = sgtl5000_i2c_remove, + .shutdown = sgtl5000_i2c_shutdown, .id_table = sgtl5000_id, }; diff --git a/sound/soc/codecs/sgtl5000.h b/sound/soc/codecs/sgtl5000.h index 56ec5863f250..3a808c762299 100644 --- a/sound/soc/codecs/sgtl5000.h +++ b/sound/soc/codecs/sgtl5000.h @@ -80,6 +80,7 @@ /* * SGTL5000_CHIP_DIG_POWER */ +#define SGTL5000_DIG_POWER_DEFAULT 0x0000 #define SGTL5000_ADC_EN 0x0040 #define SGTL5000_DAC_EN 0x0020 #define SGTL5000_DAP_POWERUP 0x0010 From 249fe2d20d55d7bd01779ec802cc9967a758a092 Mon Sep 17 00:00:00 2001 From: 
=?UTF-8?q?Martin=20Povi=C5=A1er?= Date: Thu, 30 Jun 2022 09:51:32 +0200 Subject: [PATCH 1351/1842] ASoC: tas2764: Add post reset delays MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit cd10bb89b0d57bca98eb75e0444854a1c129a14e ] Make sure there is at least 1 ms delay from reset to first command as is specified in the datasheet. This is a fix similar to commit 307f31452078 ("ASoC: tas2770: Insert post reset delay"). Fixes: 827ed8a0fa50 ("ASoC: tas2764: Add the driver for the TAS2764") Signed-off-by: Martin Povišer Link: https://lore.kernel.org/r/20220630075135.2221-1-povik+lin@cutebit.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/tas2764.c | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c index 14a193e48dc7..d8e79cc2cd1d 100644 --- a/sound/soc/codecs/tas2764.c +++ b/sound/soc/codecs/tas2764.c @@ -42,10 +42,12 @@ static void tas2764_reset(struct tas2764_priv *tas2764) gpiod_set_value_cansleep(tas2764->reset_gpio, 0); msleep(20); gpiod_set_value_cansleep(tas2764->reset_gpio, 1); + usleep_range(1000, 2000); } snd_soc_component_write(tas2764->component, TAS2764_SW_RST, TAS2764_RST); + usleep_range(1000, 2000); } static int tas2764_set_bias_level(struct snd_soc_component *component, @@ -107,8 +109,10 @@ static int tas2764_codec_resume(struct snd_soc_component *component) struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component); int ret; - if (tas2764->sdz_gpio) + if (tas2764->sdz_gpio) { gpiod_set_value_cansleep(tas2764->sdz_gpio, 1); + usleep_range(1000, 2000); + } ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL, TAS2764_PWR_CTRL_MASK, @@ -501,8 +505,10 @@ static int tas2764_codec_probe(struct snd_soc_component *component) tas2764->component = component; - if (tas2764->sdz_gpio) + if (tas2764->sdz_gpio) { gpiod_set_value_cansleep(tas2764->sdz_gpio, 1); + usleep_range(1000, 2000); + } tas2764_reset(tas2764); From 52d1b4250ca9d1090d4cdbc4cb35d6be64fe1cce Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Martin=20Povi=C5=A1er?= Date: Thu, 30 Jun 2022 09:51:33 +0200 Subject: [PATCH 1352/1842] ASoC: tas2764: Fix and extend FSYNC polarity handling MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit d1a10f1b48202e2d183cce144c218a211e98d906 ] Fix setting of FSYNC polarity in case of LEFT_J and DSP_A/B formats. Do NOT set the SCFG field as was previously done, because that is not correct and is also in conflict with the "ASI1 Source" control which sets the same SCFG field! Also add support for explicit polarity inversion. 
Fixes: 827ed8a0fa50 ("ASoC: tas2764: Add the driver for the TAS2764") Signed-off-by: Martin Povišer Link: https://lore.kernel.org/r/20220630075135.2221-2-povik+lin@cutebit.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/tas2764.c | 30 +++++++++++++++++------------- sound/soc/codecs/tas2764.h | 6 ++---- 2 files changed, 19 insertions(+), 17 deletions(-) diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c index d8e79cc2cd1d..b93e593788f2 100644 --- a/sound/soc/codecs/tas2764.c +++ b/sound/soc/codecs/tas2764.c @@ -135,7 +135,8 @@ static const char * const tas2764_ASI1_src[] = { }; static SOC_ENUM_SINGLE_DECL( - tas2764_ASI1_src_enum, TAS2764_TDM_CFG2, 4, tas2764_ASI1_src); + tas2764_ASI1_src_enum, TAS2764_TDM_CFG2, TAS2764_TDM_CFG2_SCFG_SHIFT, + tas2764_ASI1_src); static const struct snd_kcontrol_new tas2764_asi1_mux = SOC_DAPM_ENUM("ASI1 Source", tas2764_ASI1_src_enum); @@ -333,20 +334,22 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt) { struct snd_soc_component *component = dai->component; struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component); - u8 tdm_rx_start_slot = 0, asi_cfg_1 = 0; - int iface; + u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0; int ret; switch (fmt & SND_SOC_DAIFMT_INV_MASK) { + case SND_SOC_DAIFMT_NB_IF: + asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START; + fallthrough; case SND_SOC_DAIFMT_NB_NF: asi_cfg_1 = TAS2764_TDM_CFG1_RX_RISING; break; + case SND_SOC_DAIFMT_IB_IF: + asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START; + fallthrough; case SND_SOC_DAIFMT_IB_NF: asi_cfg_1 = TAS2764_TDM_CFG1_RX_FALLING; break; - default: - dev_err(tas2764->dev, "ASI format Inverse is not found\n"); - return -EINVAL; } ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG1, @@ -357,13 +360,13 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt) switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { case SND_SOC_DAIFMT_I2S: + asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START; + fallthrough; case SND_SOC_DAIFMT_DSP_A: - iface = TAS2764_TDM_CFG2_SCFG_I2S; tdm_rx_start_slot = 1; break; case SND_SOC_DAIFMT_DSP_B: case SND_SOC_DAIFMT_LEFT_J: - iface = TAS2764_TDM_CFG2_SCFG_LEFT_J; tdm_rx_start_slot = 0; break; default: @@ -372,14 +375,15 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt) return -EINVAL; } - ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG1, - TAS2764_TDM_CFG1_MASK, - (tdm_rx_start_slot << TAS2764_TDM_CFG1_51_SHIFT)); + ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG0, + TAS2764_TDM_CFG0_FRAME_START, + asi_cfg_0); if (ret < 0) return ret; - ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG2, - TAS2764_TDM_CFG2_SCFG_MASK, iface); + ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG1, + TAS2764_TDM_CFG1_MASK, + (tdm_rx_start_slot << TAS2764_TDM_CFG1_51_SHIFT)); if (ret < 0) return ret; diff --git a/sound/soc/codecs/tas2764.h b/sound/soc/codecs/tas2764.h index 67d6fd903c42..f015f22a083b 100644 --- a/sound/soc/codecs/tas2764.h +++ b/sound/soc/codecs/tas2764.h @@ -47,6 +47,7 @@ #define TAS2764_TDM_CFG0_MASK GENMASK(3, 1) #define TAS2764_TDM_CFG0_44_1_48KHZ BIT(3) #define TAS2764_TDM_CFG0_88_2_96KHZ (BIT(3) | BIT(1)) +#define TAS2764_TDM_CFG0_FRAME_START BIT(0) /* TDM Configuration Reg1 */ #define TAS2764_TDM_CFG1 TAS2764_REG(0X0, 0x09) @@ -66,10 +67,7 @@ #define TAS2764_TDM_CFG2_RXS_16BITS 0x0 #define TAS2764_TDM_CFG2_RXS_24BITS BIT(0) #define TAS2764_TDM_CFG2_RXS_32BITS BIT(1) -#define TAS2764_TDM_CFG2_SCFG_MASK 
GENMASK(5, 4) -#define TAS2764_TDM_CFG2_SCFG_I2S 0x0 -#define TAS2764_TDM_CFG2_SCFG_LEFT_J BIT(4) -#define TAS2764_TDM_CFG2_SCFG_RIGHT_J BIT(5) +#define TAS2764_TDM_CFG2_SCFG_SHIFT 4 /* TDM Configuration Reg3 */ #define TAS2764_TDM_CFG3 TAS2764_REG(0X0, 0x0c) From f1cd988de463cd464c6844aa6f24ee79e085f1fd Mon Sep 17 00:00:00 2001 From: Hector Martin Date: Thu, 30 Jun 2022 09:51:34 +0200 Subject: [PATCH 1353/1842] ASoC: tas2764: Correct playback volume range MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 3e99e5697e1f7120b5abc755e8a560b22612d6ed ] DVC value 0xc8 is -100dB and 0xc9 is mute; this needs to map to -100.5dB as far as the dB scale is concerned. Fix that and enable the mute flag, so alsamixer correctly shows the control as <0 dB .. -100 dB, mute>. Signed-off-by: Hector Martin Fixes: 827ed8a0fa50 ("ASoC: tas2764: Add the driver for the TAS2764") Signed-off-by: Martin Povišer Link: https://lore.kernel.org/r/20220630075135.2221-3-povik+lin@cutebit.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/tas2764.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c index b93e593788f2..33d7ce78aced 100644 --- a/sound/soc/codecs/tas2764.c +++ b/sound/soc/codecs/tas2764.c @@ -536,7 +536,7 @@ static int tas2764_codec_probe(struct snd_soc_component *component) } static DECLARE_TLV_DB_SCALE(tas2764_digital_tlv, 1100, 50, 0); -static DECLARE_TLV_DB_SCALE(tas2764_playback_volume, -10000, 50, 0); +static DECLARE_TLV_DB_SCALE(tas2764_playback_volume, -10050, 50, 1); static const struct snd_kcontrol_new tas2764_snd_controls[] = { SOC_SINGLE_TLV("Speaker Volume", TAS2764_DVC, 0, From bb8bf8038771ba612000965915758cc364c4ca52 Mon Sep 17 00:00:00 2001 From: Hector Martin Date: Thu, 30 Jun 2022 09:51:35 +0200 Subject: [PATCH 1354/1842] ASoC: tas2764: Fix amp gain register offset & default MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 1c4f29ec878bbf1cc0a1eb54ae7da5ff98e19641 ] The register default is 0x28 per the datasheet, and the amp gain field is supposed to be shifted left by one. With the wrong default, the ALSA controls lie about the power-up state. With the wrong shift, we get only half the gain we expect. 
Signed-off-by: Hector Martin Fixes: 827ed8a0fa50 ("ASoC: tas2764: Add the driver for the TAS2764") Signed-off-by: Martin Povišer Link: https://lore.kernel.org/r/20220630075135.2221-4-povik+lin@cutebit.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/tas2764.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c index 33d7ce78aced..37588804a6b5 100644 --- a/sound/soc/codecs/tas2764.c +++ b/sound/soc/codecs/tas2764.c @@ -541,7 +541,7 @@ static DECLARE_TLV_DB_SCALE(tas2764_playback_volume, -10050, 50, 1); static const struct snd_kcontrol_new tas2764_snd_controls[] = { SOC_SINGLE_TLV("Speaker Volume", TAS2764_DVC, 0, TAS2764_DVC_MAX, 1, tas2764_playback_volume), - SOC_SINGLE_TLV("Amp Gain Volume", TAS2764_CHNL_0, 0, 0x14, 0, + SOC_SINGLE_TLV("Amp Gain Volume", TAS2764_CHNL_0, 1, 0x14, 0, tas2764_digital_tlv), }; @@ -566,7 +566,7 @@ static const struct reg_default tas2764_reg_defaults[] = { { TAS2764_SW_RST, 0x00 }, { TAS2764_PWR_CTRL, 0x1a }, { TAS2764_DVC, 0x00 }, - { TAS2764_CHNL_0, 0x00 }, + { TAS2764_CHNL_0, 0x28 }, { TAS2764_TDM_CFG0, 0x09 }, { TAS2764_TDM_CFG1, 0x02 }, { TAS2764_TDM_CFG2, 0x0a }, From fbb87a0ed216c48109fb3c8b502cc772b23a4290 Mon Sep 17 00:00:00 2001 From: Peter Ujfalusi Date: Thu, 30 Jun 2022 09:56:37 +0300 Subject: [PATCH 1355/1842] ASoC: Intel: Skylake: Correct the ssp rate discovery in skl_get_ssp_clks() [ Upstream commit 219af251bd1694bce1f627d238347d2eaf13de61 ] The present flag is only set once when one rate has been found to be saved. This will effectively going to ignore any rate discovered at later time and based on the code, this is not the intention. Fixes: bc2bd45b1f7f3 ("ASoC: Intel: Skylake: Parse nhlt and register clock device") Signed-off-by: Peter Ujfalusi Reviewed-by: Cezary Rojewski Link: https://lore.kernel.org/r/20220630065638.11183-2-peter.ujfalusi@linux.intel.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/intel/skylake/skl-nhlt.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sound/soc/intel/skylake/skl-nhlt.c b/sound/soc/intel/skylake/skl-nhlt.c index 87c891c46291..c668e10baade 100644 --- a/sound/soc/intel/skylake/skl-nhlt.c +++ b/sound/soc/intel/skylake/skl-nhlt.c @@ -201,7 +201,6 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks, struct nhlt_fmt_cfg *fmt_cfg; struct wav_fmt_ext *wav_fmt; unsigned long rate; - bool present = false; int rate_index = 0; u16 channels, bps; u8 clk_src; @@ -215,6 +214,8 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks, return; for (i = 0; i < fmt->fmt_count; i++) { + bool present = false; + fmt_cfg = &fmt->fmt_config[i]; wav_fmt = &fmt_cfg->fmt_ext; From cd201332cc397ab999a0af8ad3b4888f521b8a15 Mon Sep 17 00:00:00 2001 From: Peter Ujfalusi Date: Thu, 30 Jun 2022 09:56:38 +0300 Subject: [PATCH 1356/1842] ASoC: Intel: Skylake: Correct the handling of fmt_config flexible array [ Upstream commit fc976f5629afb4160ee77798b14a693eac903ffd ] The struct nhlt_format's fmt_config is a flexible array, it must not be used as normal array. When moving to the next nhlt_fmt_cfg we need to take into account the data behind the ->config.caps (indicated by ->config.size). The logic of the code also changed: it is no longer saves the _last_ fmt_cfg for all found rates. 
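The pattern the fix moves to is the usual walk over size-prefixed, variable-length records: the next entry starts right after the current entry's blob (caps + size), not at a fixed array stride. A minimal standalone sketch, with illustrative structure names and sizes rather than the real NHLT layout:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* illustrative stand-in: a size field followed by a variable-length blob */
struct cfg {
	uint32_t size;		/* bytes in caps[] */
	uint8_t caps[];		/* flexible array member */
};

/* the next record begins right after this record's blob */
static struct cfg *next_cfg(struct cfg *c)
{
	return (struct cfg *)(c->caps + c->size);
}

int main(void)
{
	/* two records with different blob sizes, packed back to back */
	static uint32_t storage[32];
	uint8_t *buf = (uint8_t *)storage;
	struct cfg *c0 = (struct cfg *)buf;
	struct cfg *c1, *it;
	int i;

	c0->size = 4;
	memset(c0->caps, 0xaa, c0->size);
	c1 = next_cfg(c0);
	c1->size = 8;
	memset(c1->caps, 0xbb, c1->size);

	/* indexing as &array[i] would assume a fixed stride and land inside
	 * c0's blob; walking via next_cfg() does not */
	for (i = 0, it = c0; i < 2; i++, it = next_cfg(it))
		printf("cfg %d: %u-byte blob at offset %td\n",
		       i, (unsigned int)it->size, (uint8_t *)it - buf);
	return 0;
}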
Fixes: bc2bd45b1f7f3 ("ASoC: Intel: Skylake: Parse nhlt and register clock device") Signed-off-by: Peter Ujfalusi Reviewed-by: Cezary Rojewski Link: https://lore.kernel.org/r/20220630065638.11183-3-peter.ujfalusi@linux.intel.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/intel/skylake/skl-nhlt.c | 37 ++++++++++++++++++++---------- 1 file changed, 25 insertions(+), 12 deletions(-) diff --git a/sound/soc/intel/skylake/skl-nhlt.c b/sound/soc/intel/skylake/skl-nhlt.c index c668e10baade..3b3868df9f67 100644 --- a/sound/soc/intel/skylake/skl-nhlt.c +++ b/sound/soc/intel/skylake/skl-nhlt.c @@ -213,11 +213,12 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks, if (fmt->fmt_count == 0) return; + fmt_cfg = (struct nhlt_fmt_cfg *)fmt->fmt_config; for (i = 0; i < fmt->fmt_count; i++) { + struct nhlt_fmt_cfg *saved_fmt_cfg = fmt_cfg; bool present = false; - fmt_cfg = &fmt->fmt_config[i]; - wav_fmt = &fmt_cfg->fmt_ext; + wav_fmt = &saved_fmt_cfg->fmt_ext; channels = wav_fmt->fmt.channels; bps = wav_fmt->fmt.bits_per_sample; @@ -235,12 +236,18 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks, * derive the rate. */ for (j = i; j < fmt->fmt_count; j++) { - fmt_cfg = &fmt->fmt_config[j]; - wav_fmt = &fmt_cfg->fmt_ext; + struct nhlt_fmt_cfg *tmp_fmt_cfg = fmt_cfg; + + wav_fmt = &tmp_fmt_cfg->fmt_ext; if ((fs == wav_fmt->fmt.samples_per_sec) && - (bps == wav_fmt->fmt.bits_per_sample)) + (bps == wav_fmt->fmt.bits_per_sample)) { channels = max_t(u16, channels, wav_fmt->fmt.channels); + saved_fmt_cfg = tmp_fmt_cfg; + } + /* Move to the next nhlt_fmt_cfg */ + tmp_fmt_cfg = (struct nhlt_fmt_cfg *)(tmp_fmt_cfg->config.caps + + tmp_fmt_cfg->config.size); } rate = channels * bps * fs; @@ -256,8 +263,11 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks, /* Fill rate and parent for sclk/sclkfs */ if (!present) { + struct nhlt_fmt_cfg *first_fmt_cfg; + + first_fmt_cfg = (struct nhlt_fmt_cfg *)fmt->fmt_config; i2s_config_ext = (struct skl_i2s_config_blob_ext *) - fmt->fmt_config[0].config.caps; + first_fmt_cfg->config.caps; /* MCLK Divider Source Select */ if (is_legacy_blob(i2s_config_ext->hdr.sig)) { @@ -271,6 +281,9 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks, parent = skl_get_parent_clk(clk_src); + /* Move to the next nhlt_fmt_cfg */ + fmt_cfg = (struct nhlt_fmt_cfg *)(fmt_cfg->config.caps + + fmt_cfg->config.size); /* * Do not copy the config data if there is no parent * clock available for this clock source select @@ -279,9 +292,9 @@ static void skl_get_ssp_clks(struct skl_dev *skl, struct skl_ssp_clk *ssp_clks, continue; sclk[id].rate_cfg[rate_index].rate = rate; - sclk[id].rate_cfg[rate_index].config = fmt_cfg; + sclk[id].rate_cfg[rate_index].config = saved_fmt_cfg; sclkfs[id].rate_cfg[rate_index].rate = rate; - sclkfs[id].rate_cfg[rate_index].config = fmt_cfg; + sclkfs[id].rate_cfg[rate_index].config = saved_fmt_cfg; sclk[id].parent_name = parent->name; sclkfs[id].parent_name = parent->name; @@ -295,13 +308,13 @@ static void skl_get_mclk(struct skl_dev *skl, struct skl_ssp_clk *mclk, { struct skl_i2s_config_blob_ext *i2s_config_ext; struct skl_i2s_config_blob_legacy *i2s_config; - struct nhlt_specific_cfg *fmt_cfg; + struct nhlt_fmt_cfg *fmt_cfg; struct skl_clk_parent_src *parent; u32 clkdiv, div_ratio; u8 clk_src; - fmt_cfg = &fmt->fmt_config[0].config; - i2s_config_ext = (struct skl_i2s_config_blob_ext *)fmt_cfg->caps; + fmt_cfg = (struct nhlt_fmt_cfg 
*)fmt->fmt_config; + i2s_config_ext = (struct skl_i2s_config_blob_ext *)fmt_cfg->config.caps; /* MCLK Divider Source Select and divider */ if (is_legacy_blob(i2s_config_ext->hdr.sig)) { @@ -330,7 +343,7 @@ static void skl_get_mclk(struct skl_dev *skl, struct skl_ssp_clk *mclk, return; mclk[id].rate_cfg[0].rate = parent->rate/div_ratio; - mclk[id].rate_cfg[0].config = &fmt->fmt_config[0]; + mclk[id].rate_cfg[0].config = fmt_cfg; mclk[id].parent_name = parent->name; } From 9cc8edc571b871d974b3289868553f9ce544aba6 Mon Sep 17 00:00:00 2001 From: Jon Hunter Date: Wed, 6 Jul 2022 09:39:13 +0100 Subject: [PATCH 1357/1842] net: stmmac: dwc-qos: Disable split header for Tegra194 [ Upstream commit 029c1c2059e9c4b38f97a06204cdecd10cfbeb8a ] There is a long-standing issue with the Synopsys DWC Ethernet driver for Tegra194 where random system crashes have been observed [0]. The problem occurs when the split header feature is enabled in the stmmac driver. In the bad case, a larger than expected buffer length is received and causes the calculation of the total buffer length to overflow. This results in a very large buffer length that causes the kernel to crash. Why this larger buffer length is received is not clear, however, the feedback from the NVIDIA design team is that the split header feature is not supported for Tegra194. Therefore, disable split header support for Tegra194 to prevent these random crashes from occurring. [0] https://lore.kernel.org/linux-tegra/b0b17697-f23e-8fa5-3757-604a86f3a095@nvidia.com/ Fixes: 67afd6d1cfdf ("net: stmmac: Add Split Header support and enable it in XGMAC cores") Signed-off-by: Jon Hunter Link: https://lore.kernel.org/r/20220706083913.13750-1-jonathanh@nvidia.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c index 2342d497348e..fd1b0cc6b5fa 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c @@ -363,6 +363,7 @@ static void *tegra_eqos_probe(struct platform_device *pdev, data->fix_mac_speed = tegra_eqos_fix_speed; data->init = tegra_eqos_init; data->bsp_priv = eqos; + data->sph_disable = 1; err = tegra_eqos_init(pdev, eqos); if (err < 0) From 80cc28a4b4847894e450a591ea2883dead524214 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:39:52 -0700 Subject: [PATCH 1358/1842] sysctl: Fix data races in proc_dointvec(). [ Upstream commit 1f1be04b4d48a2475ea1aab46a99221bfc5c0968 ] A sysctl variable is accessed concurrently, and there is always a chance of data-race. So, all readers and writers need some basic protection to avoid load/store-tearing. This patch changes proc_dointvec() to use READ_ONCE() and WRITE_ONCE() internally to fix data-races on the sysctl side. For now, proc_dointvec() itself is tolerant to a data-race, but we still need to add annotations on the other subsystem's side. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- kernel/sysctl.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 8832440a4938..81657b13bd53 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -557,14 +557,14 @@ static int do_proc_dointvec_conv(bool *negp, unsigned long *lvalp, if (*negp) { if (*lvalp > (unsigned long) INT_MAX + 1) return -EINVAL; - *valp = -*lvalp; + WRITE_ONCE(*valp, -*lvalp); } else { if (*lvalp > (unsigned long) INT_MAX) return -EINVAL; - *valp = *lvalp; + WRITE_ONCE(*valp, *lvalp); } } else { - int val = *valp; + int val = READ_ONCE(*valp); if (val < 0) { *negp = true; *lvalp = -(unsigned long)val; From d5d54714e329f646bd7af4994fc427d88ee68936 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:39:53 -0700 Subject: [PATCH 1359/1842] sysctl: Fix data races in proc_douintvec(). [ Upstream commit 4762b532ec9539755aab61445d5da6e1926ccb99 ] A sysctl variable is accessed concurrently, and there is always a chance of data-race. So, all readers and writers need some basic protection to avoid load/store-tearing. This patch changes proc_douintvec() to use READ_ONCE() and WRITE_ONCE() internally to fix data-races on the sysctl side. For now, proc_douintvec() itself is tolerant to a data-race, but we still need to add annotations on the other subsystem's side. Fixes: e7d316a02f68 ("sysctl: handle error writing UINT_MAX to u32 fields") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- kernel/sysctl.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 81657b13bd53..30681afbdb70 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -583,9 +583,9 @@ static int do_proc_douintvec_conv(unsigned long *lvalp, if (write) { if (*lvalp > UINT_MAX) return -EINVAL; - *valp = *lvalp; + WRITE_ONCE(*valp, *lvalp); } else { - unsigned int val = *valp; + unsigned int val = READ_ONCE(*valp); *lvalp = (unsigned long)val; } return 0; From 71ddde27c2eba5223c350513dd7d524c3cf1d6a4 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:39:54 -0700 Subject: [PATCH 1360/1842] sysctl: Fix data races in proc_dointvec_minmax(). [ Upstream commit f613d86d014b6375a4085901de39406598121e35 ] A sysctl variable is accessed concurrently, and there is always a chance of data-race. So, all readers and writers need some basic protection to avoid load/store-tearing. This patch changes proc_dointvec_minmax() to use READ_ONCE() and WRITE_ONCE() internally to fix data-races on the sysctl side. For now, proc_dointvec_minmax() itself is tolerant to a data-race, but we still need to add annotations on the other subsystem's side. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- kernel/sysctl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 30681afbdb70..1800907da60c 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -959,7 +959,7 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp, if ((param->min && *param->min > tmp) || (param->max && *param->max < tmp)) return -EINVAL; - *valp = tmp; + WRITE_ONCE(*valp, tmp); } return 0; From e3a2144b3b6bf9ecafd91087c8b8b48171ec19df Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:39:55 -0700 Subject: [PATCH 1361/1842] sysctl: Fix data races in proc_douintvec_minmax(). 
[ Upstream commit 2d3b559df3ed39258737789aae2ae7973d205bc1 ] A sysctl variable is accessed concurrently, and there is always a chance of data-race. So, all readers and writers need some basic protection to avoid load/store-tearing. This patch changes proc_douintvec_minmax() to use READ_ONCE() and WRITE_ONCE() internally to fix data-races on the sysctl side. For now, proc_douintvec_minmax() itself is tolerant to a data-race, but we still need to add annotations on the other subsystem's side. Fixes: 61d9b56a8920 ("sysctl: add unsigned int range support") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- kernel/sysctl.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 1800907da60c..df6090ba1d0b 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1025,7 +1025,7 @@ static int do_proc_douintvec_minmax_conv(unsigned long *lvalp, (param->max && *param->max < tmp)) return -ERANGE; - *valp = tmp; + WRITE_ONCE(*valp, tmp); } return 0; From a5ee448d388cdafb154c10bde41865fb9ccf76a4 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:39:56 -0700 Subject: [PATCH 1362/1842] sysctl: Fix data races in proc_doulongvec_minmax(). [ Upstream commit c31bcc8fb89fc2812663900589c6325ba35d9a65 ] A sysctl variable is accessed concurrently, and there is always a chance of data-race. So, all readers and writers need some basic protection to avoid load/store-tearing. This patch changes proc_doulongvec_minmax() to use READ_ONCE() and WRITE_ONCE() internally to fix data-races on the sysctl side. For now, proc_doulongvec_minmax() itself is tolerant to a data-race, but we still need to add annotations on the other subsystem's side. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- kernel/sysctl.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index df6090ba1d0b..e7409788db64 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1193,9 +1193,9 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, err = -EINVAL; break; } - *i = val; + WRITE_ONCE(*i, val); } else { - val = convdiv * (*i) / convmul; + val = convdiv * READ_ONCE(*i) / convmul; if (!first) proc_put_char(&buffer, &left, '\t'); proc_put_long(&buffer, &left, val, false); From 609ce7ff75a75aff2c2301094bb1572544b3ffa7 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:39:57 -0700 Subject: [PATCH 1363/1842] sysctl: Fix data races in proc_dointvec_jiffies(). [ Upstream commit e877820877663fbae8cb9582ea597a7230b94df3 ] A sysctl variable is accessed concurrently, and there is always a chance of data-race. So, all readers and writers need some basic protection to avoid load/store-tearing. This patch changes proc_dointvec_jiffies() to use READ_ONCE() and WRITE_ONCE() internally to fix data-races on the sysctl side. For now, proc_dointvec_jiffies() itself is tolerant to a data-race, but we still need to add annotations on the other subsystem's side. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- kernel/sysctl.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index e7409788db64..83241a56539b 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1276,9 +1276,12 @@ static int do_proc_dointvec_jiffies_conv(bool *negp, unsigned long *lvalp, if (write) { if (*lvalp > INT_MAX / HZ) return 1; - *valp = *negp ? -(*lvalp*HZ) : (*lvalp*HZ); + if (*negp) + WRITE_ONCE(*valp, -*lvalp * HZ); + else + WRITE_ONCE(*valp, *lvalp * HZ); } else { - int val = *valp; + int val = READ_ONCE(*valp); unsigned long lval; if (val < 0) { *negp = true; From 6481a8a72a746dc96e3aa70a9f9f6d3daf0f88c2 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:39:58 -0700 Subject: [PATCH 1364/1842] tcp: Fix a data-race around sysctl_tcp_max_orphans. [ Upstream commit 47e6ab24e8c6e3ca10ceb5835413f401f90de4bf ] While reading sysctl_tcp_max_orphans, it can be changed concurrently. So, we need to add READ_ONCE() to avoid a data-race. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index a3ec2a08027b..19c13ad5c121 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -2490,7 +2490,8 @@ static void tcp_orphan_update(struct timer_list *unused) static bool tcp_too_many_orphans(int shift) { - return READ_ONCE(tcp_orphan_cache) << shift > sysctl_tcp_max_orphans; + return READ_ONCE(tcp_orphan_cache) << shift > + READ_ONCE(sysctl_tcp_max_orphans); } bool tcp_check_oom(struct sock *sk, int shift) From d54b6ef53cbcc40c61e83e7410451e19a038e6b0 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:39:59 -0700 Subject: [PATCH 1365/1842] inetpeer: Fix data-races around sysctl. [ Upstream commit 3d32edf1f3c38d3301f6434e56316f293466d7fb ] While reading inetpeer sysctl variables, they can be changed concurrently. So, we need to add READ_ONCE() to avoid data-races. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/inetpeer.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c index ff327a62c9ce..a18668552d33 100644 --- a/net/ipv4/inetpeer.c +++ b/net/ipv4/inetpeer.c @@ -148,16 +148,20 @@ static void inet_peer_gc(struct inet_peer_base *base, struct inet_peer *gc_stack[], unsigned int gc_cnt) { + int peer_threshold, peer_maxttl, peer_minttl; struct inet_peer *p; __u32 delta, ttl; int i; - if (base->total >= inet_peer_threshold) + peer_threshold = READ_ONCE(inet_peer_threshold); + peer_maxttl = READ_ONCE(inet_peer_maxttl); + peer_minttl = READ_ONCE(inet_peer_minttl); + + if (base->total >= peer_threshold) ttl = 0; /* be aggressive */ else - ttl = inet_peer_maxttl - - (inet_peer_maxttl - inet_peer_minttl) / HZ * - base->total / inet_peer_threshold * HZ; + ttl = peer_maxttl - (peer_maxttl - peer_minttl) / HZ * + base->total / peer_threshold * HZ; for (i = 0; i < gc_cnt; i++) { p = gc_stack[i]; From f5811b8df2b97abb2a0c8936cdbdc4dc4457a9c2 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:40:00 -0700 Subject: [PATCH 1366/1842] net: Fix data-races around sysctl_mem. [ Upstream commit 310731e2f1611d1d13aae237abcf8e66d33345d5 ] While reading .sysctl_mem, it can be changed concurrently. 
So, we need to add READ_ONCE() to avoid data-races. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/sock.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/sock.h b/include/net/sock.h index 2c11eb4abdd2..83854cec4a47 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -1445,7 +1445,7 @@ void __sk_mem_reclaim(struct sock *sk, int amount); /* sysctl_mem values are in pages, we convert them in SK_MEM_QUANTUM units */ static inline long sk_prot_mem_limits(const struct sock *sk, int index) { - long val = sk->sk_prot->sysctl_mem[index]; + long val = READ_ONCE(sk->sk_prot->sysctl_mem[index]); #if PAGE_SIZE > SK_MEM_QUANTUM val <<= PAGE_SHIFT - SK_MEM_QUANTUM_SHIFT; From fe2a35fa2c4f9c8ce5ef970eb927031387f9446a Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:40:01 -0700 Subject: [PATCH 1367/1842] cipso: Fix data-races around sysctl. [ Upstream commit dd44f04b9214adb68ef5684ae87a81ba03632250 ] While reading cipso sysctl variables, they can be changed concurrently. So, we need to add READ_ONCE() to avoid data-races. Fixes: 446fda4f2682 ("[NetLabel]: CIPSOv4 engine") Signed-off-by: Kuniyuki Iwashima Acked-by: Paul Moore Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- Documentation/networking/ip-sysctl.rst | 2 +- net/ipv4/cipso_ipv4.c | 12 +++++++----- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst index 4822a058a81d..73de75906b24 100644 --- a/Documentation/networking/ip-sysctl.rst +++ b/Documentation/networking/ip-sysctl.rst @@ -988,7 +988,7 @@ cipso_cache_enable - BOOLEAN cipso_cache_bucket_size - INTEGER The CIPSO label cache consists of a fixed size hash table with each hash bucket containing a number of cache entries. This variable limits - the number of entries in each hash bucket; the larger the value the + the number of entries in each hash bucket; the larger the value is, the more CIPSO label mappings that can be cached. When the number of entries in a given hash bucket reaches this limit adding new entries causes the oldest entry in the bucket to be removed to make room. 
diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c index ca217a6f488f..d4a4160159a9 100644 --- a/net/ipv4/cipso_ipv4.c +++ b/net/ipv4/cipso_ipv4.c @@ -240,7 +240,7 @@ static int cipso_v4_cache_check(const unsigned char *key, struct cipso_v4_map_cache_entry *prev_entry = NULL; u32 hash; - if (!cipso_v4_cache_enabled) + if (!READ_ONCE(cipso_v4_cache_enabled)) return -ENOENT; hash = cipso_v4_map_cache_hash(key, key_len); @@ -297,13 +297,14 @@ static int cipso_v4_cache_check(const unsigned char *key, int cipso_v4_cache_add(const unsigned char *cipso_ptr, const struct netlbl_lsm_secattr *secattr) { + int bkt_size = READ_ONCE(cipso_v4_cache_bucketsize); int ret_val = -EPERM; u32 bkt; struct cipso_v4_map_cache_entry *entry = NULL; struct cipso_v4_map_cache_entry *old_entry = NULL; u32 cipso_ptr_len; - if (!cipso_v4_cache_enabled || cipso_v4_cache_bucketsize <= 0) + if (!READ_ONCE(cipso_v4_cache_enabled) || bkt_size <= 0) return 0; cipso_ptr_len = cipso_ptr[1]; @@ -323,7 +324,7 @@ int cipso_v4_cache_add(const unsigned char *cipso_ptr, bkt = entry->hash & (CIPSO_V4_CACHE_BUCKETS - 1); spin_lock_bh(&cipso_v4_cache[bkt].lock); - if (cipso_v4_cache[bkt].size < cipso_v4_cache_bucketsize) { + if (cipso_v4_cache[bkt].size < bkt_size) { list_add(&entry->list, &cipso_v4_cache[bkt].list); cipso_v4_cache[bkt].size += 1; } else { @@ -1200,7 +1201,8 @@ static int cipso_v4_gentag_rbm(const struct cipso_v4_doi *doi_def, /* This will send packets using the "optimized" format when * possible as specified in section 3.4.2.6 of the * CIPSO draft. */ - if (cipso_v4_rbm_optfmt && ret_val > 0 && ret_val <= 10) + if (READ_ONCE(cipso_v4_rbm_optfmt) && ret_val > 0 && + ret_val <= 10) tag_len = 14; else tag_len = 4 + ret_val; @@ -1604,7 +1606,7 @@ int cipso_v4_validate(const struct sk_buff *skb, unsigned char **option) * all the CIPSO validations here but it doesn't * really specify _exactly_ what we need to validate * ... so, just make it a sysctl tunable. */ - if (cipso_v4_rbm_strictvalid) { + if (READ_ONCE(cipso_v4_rbm_strictvalid)) { if (cipso_v4_map_lvl_valid(doi_def, tag[3]) < 0) { err_offset = opt_iter + 3; From e088ceb73c24ab4774da391d54a6426f4bfaefce Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:40:02 -0700 Subject: [PATCH 1368/1842] icmp: Fix data-races around sysctl. [ Upstream commit 48d7ee321ea5182c6a70782aa186422a70e67e22 ] While reading icmp sysctl variables, they can be changed concurrently. So, we need to add READ_ONCE() to avoid data-races. Fixes: 4cdf507d5452 ("icmp: add a global rate limitation") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/icmp.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index cd65d3146c30..f22c0d55f479 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -261,11 +261,12 @@ bool icmp_global_allow(void) spin_lock(&icmp_global.lock); delta = min_t(u32, now - icmp_global.stamp, HZ); if (delta >= HZ / 50) { - incr = sysctl_icmp_msgs_per_sec * delta / HZ ; + incr = READ_ONCE(sysctl_icmp_msgs_per_sec) * delta / HZ; if (incr) WRITE_ONCE(icmp_global.stamp, now); } - credit = min_t(u32, icmp_global.credit + incr, sysctl_icmp_msgs_burst); + credit = min_t(u32, icmp_global.credit + incr, + READ_ONCE(sysctl_icmp_msgs_burst)); if (credit) { /* We want to use a credit of one in average, but need to randomize * it for security reasons. 
From 418b191d5f223a8cb6cab09eae1f72c04ba6adf2 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 6 Jul 2022 16:40:03 -0700 Subject: [PATCH 1369/1842] ipv4: Fix a data-race around sysctl_fib_sync_mem. [ Upstream commit 73318c4b7dbd0e781aaababff17376b2894745c0 ] While reading sysctl_fib_sync_mem, it can be changed concurrently. So, we need to add READ_ONCE() to avoid a data-race. Fixes: 9ab948a91b2c ("ipv4: Allow amount of dirty memory from fib resizing to be controllable") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/fib_trie.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c index ffc5332f1390..a28f525e2c47 100644 --- a/net/ipv4/fib_trie.c +++ b/net/ipv4/fib_trie.c @@ -497,7 +497,7 @@ static void tnode_free(struct key_vector *tn) tn = container_of(head, struct tnode, rcu)->kv; } - if (tnode_free_size >= sysctl_fib_sync_mem) { + if (tnode_free_size >= READ_ONCE(sysctl_fib_sync_mem)) { tnode_free_size = 0; synchronize_rcu(); } From e1aa73454ab4134c86b63b9cb805f5a0bc40f310 Mon Sep 17 00:00:00 2001 From: Ryan Wanner Date: Thu, 7 Jul 2022 14:58:12 -0700 Subject: [PATCH 1370/1842] ARM: dts: at91: sama5d2: Fix typo in i2s1 node [ Upstream commit 2fdf15b50a46e366740df4cccbe2343269b4ff55 ] Fix typo in i2s1 causing errors in dt binding validation. Change assigned-parrents to assigned-clock-parents to match i2s0 node formatting. Fixes: 1ca81883c557 ("ARM: dts: at91: sama5d2: add nodes for I2S controllers") Signed-off-by: Ryan Wanner [claudiu.beznea: use imperative addressing in commit description, remove blank line after fixes tag, fix typo in commit message] Signed-off-by: Claudiu Beznea Link: https://lore.kernel.org/r/20220707215812.193008-1-Ryan.Wanner@microchip.com Signed-off-by: Sasha Levin --- arch/arm/boot/dts/sama5d2.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi index 12f57278ba4a..33f76d14341e 100644 --- a/arch/arm/boot/dts/sama5d2.dtsi +++ b/arch/arm/boot/dts/sama5d2.dtsi @@ -1125,7 +1125,7 @@ AT91_XDMAC_DT_PERID(33))>, clocks = <&pmc PMC_TYPE_PERIPHERAL 55>, <&pmc PMC_TYPE_GCK 55>; clock-names = "pclk", "gclk"; assigned-clocks = <&pmc PMC_TYPE_CORE PMC_I2S1_MUX>; - assigned-parrents = <&pmc PMC_TYPE_GCK 55>; + assigned-clock-parents = <&pmc PMC_TYPE_GCK 55>; status = "disabled"; }; From 359f2bca798968f6e680b0dcb79776abb2a205c6 Mon Sep 17 00:00:00 2001 From: Michal Suchanek Date: Fri, 8 Jul 2022 19:45:29 +0200 Subject: [PATCH 1371/1842] ARM: dts: sunxi: Fix SPI NOR campatible on Orange Pi Zero [ Upstream commit 884b66976a7279ee889ba885fe364244d50b79e7 ] The device tree should include generic "jedec,spi-nor" compatible, and a manufacturer-specific one. The macronix part is what is shipped on the boards that come with a flash chip. 
Fixes: 45857ae95478 ("ARM: dts: orange-pi-zero: add node for SPI NOR") Signed-off-by: Michal Suchanek Acked-by: Jernej Skrabec Signed-off-by: Jernej Skrabec Link: https://lore.kernel.org/r/20220708174529.3360-1-msuchanek@suse.de Signed-off-by: Sasha Levin --- arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts b/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts index f19ed981da9d..3706216ffb40 100644 --- a/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts +++ b/arch/arm/boot/dts/sun8i-h2-plus-orangepi-zero.dts @@ -169,7 +169,7 @@ &spi0 { flash@0 { #address-cells = <1>; #size-cells = <1>; - compatible = "mxicy,mx25l1606e", "winbond,w25q128"; + compatible = "mxicy,mx25l1606e", "jedec,spi-nor"; reg = <0>; spi-max-frequency = <40000000>; }; From 636e5dbaf097aaf66b6234f0b10598d44ccc8a87 Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Fri, 8 Jul 2022 12:41:04 +0300 Subject: [PATCH 1372/1842] drm/i915/selftests: fix a couple IS_ERR() vs NULL tests [ Upstream commit 896dcabd1f8f613c533d948df17408c41f8929f5 ] The shmem_pin_map() function doesn't return error pointers, it returns NULL. Fixes: be1cb55a07bf ("drm/i915/gt: Keep a no-frills swappable copy of the default context state") Signed-off-by: Dan Carpenter Reviewed-by: Matthew Auld Signed-off-by: Matthew Auld Link: https://patchwork.freedesktop.org/patch/msgid/20220708094104.GL2316@kadam (cherry picked from commit d50f5a109cf4ed50c5b575c1bb5fc3bd17b23308) Signed-off-by: Rodrigo Vivi Signed-off-by: Sasha Levin --- drivers/gpu/drm/i915/gt/selftest_lrc.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/i915/gt/selftest_lrc.c b/drivers/gpu/drm/i915/gt/selftest_lrc.c index 95d41c01d0e0..35d55f98a06f 100644 --- a/drivers/gpu/drm/i915/gt/selftest_lrc.c +++ b/drivers/gpu/drm/i915/gt/selftest_lrc.c @@ -4788,8 +4788,8 @@ static int live_lrc_layout(void *arg) continue; hw = shmem_pin_map(engine->default_state); - if (IS_ERR(hw)) { - err = PTR_ERR(hw); + if (!hw) { + err = -ENOMEM; break; } hw += LRC_STATE_OFFSET / sizeof(*hw); @@ -4965,8 +4965,8 @@ static int live_lrc_fixed(void *arg) continue; hw = shmem_pin_map(engine->default_state); - if (IS_ERR(hw)) { - err = PTR_ERR(hw); + if (!hw) { + err = -ENOMEM; break; } hw += LRC_STATE_OFFSET / sizeof(*hw); From 2744e302e752fe3538dfefa406f382cd90681963 Mon Sep 17 00:00:00 2001 From: Chris Wilson Date: Tue, 12 Jul 2022 16:21:33 +0100 Subject: [PATCH 1373/1842] drm/i915/gt: Serialize TLB invalidates with GT resets MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit a1c5a7bf79c1faa5633b918b5c0666545e84c4d1 ] Avoid trying to invalidate the TLB in the middle of performing an engine reset, as this may result in the reset timing out. Currently, the TLB invalidate is only serialised by its own mutex, forgoing the uncore lock, but we can take the uncore->lock as well to serialise the mmio access, thereby serialising with the GDRST. Tested on a NUC5i7RYB, BIOS RYBDWi35.86A.0380.2019.0517.1530 with i915 selftest/hangcheck. 
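A rough userspace analogue of the locking shape used here, with a pthread mutex standing in for the uncore lock: all trigger writes happen inside the lock shared with the reset path so they cannot interleave with a reset, while the potentially slow completion poll runs only after the lock is dropped:

        #include <pthread.h>
        #include <stdbool.h>

        #define NUM_ENGINES 4

        static pthread_mutex_t uncore_lock = PTHREAD_MUTEX_INITIALIZER; /* shared with reset */
        static bool invalidate_pending[NUM_ENGINES];

        static void write_invalidate_reg(int engine)
        {
                invalidate_pending[engine] = true;   /* stands in for the mmio trigger write */
        }

        static bool invalidate_done(int engine)
        {
                return !invalidate_pending[engine];  /* stands in for reading the status bit */
        }

        void invalidate_all_tlbs(void)
        {
                /* Keep the critical section short: only the trigger writes are
                 * serialised against the reset path, never the wait.
                 */
                pthread_mutex_lock(&uncore_lock);
                for (int e = 0; e < NUM_ENGINES; e++)
                        write_invalidate_reg(e);
                pthread_mutex_unlock(&uncore_lock);

                /* Poll outside the lock; the real driver uses a timed register wait. */
                for (int e = 0; e < NUM_ENGINES; e++)
                        for (int spin = 0; spin < 1000 && !invalidate_done(e); spin++)
                                ;
        }
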
Cc: stable@vger.kernel.org # v4.4 and upper Fixes: 7938d61591d3 ("drm/i915: Flush TLBs before releasing backing store") Reported-by: Mauro Carvalho Chehab Tested-by: Mauro Carvalho Chehab Reviewed-by: Mauro Carvalho Chehab Signed-off-by: Chris Wilson Cc: Tvrtko Ursulin Reviewed-by: Andi Shyti Acked-by: Thomas Hellström Signed-off-by: Mauro Carvalho Chehab Signed-off-by: Rodrigo Vivi Link: https://patchwork.freedesktop.org/patch/msgid/1e59a7c45dd919a530256b9ac721ac6ea86c0677.1657639152.git.mchehab@kernel.org (cherry picked from commit 33da97894758737895e90c909f16786052680ef4) Signed-off-by: Rodrigo Vivi Signed-off-by: Sasha Levin --- drivers/gpu/drm/i915/gt/intel_gt.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c index 6615eb5147e2..a33887f2464f 100644 --- a/drivers/gpu/drm/i915/gt/intel_gt.c +++ b/drivers/gpu/drm/i915/gt/intel_gt.c @@ -736,6 +736,20 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt) mutex_lock(>->tlb_invalidate_lock); intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL); + spin_lock_irq(&uncore->lock); /* serialise invalidate with GT reset */ + + for_each_engine(engine, gt, id) { + struct reg_and_bit rb; + + rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num); + if (!i915_mmio_reg_offset(rb.reg)) + continue; + + intel_uncore_write_fw(uncore, rb.reg, rb.bit); + } + + spin_unlock_irq(&uncore->lock); + for_each_engine(engine, gt, id) { /* * HW architecture suggest typical invalidation time at 40us, @@ -750,7 +764,6 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt) if (!i915_mmio_reg_offset(rb.reg)) continue; - intel_uncore_write_fw(uncore, rb.reg, rb.bit); if (__intel_wait_for_register_fw(uncore, rb.reg, rb.bit, 0, timeout_us, timeout_ms, From b8871d9186029458786f62e8f8ef72543ea274ac Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 11 Jul 2022 17:15:20 -0700 Subject: [PATCH 1374/1842] sysctl: Fix data-races in proc_dointvec_ms_jiffies(). [ Upstream commit 7d1025e559782b58824b36cb8ad547a69f2e4b31 ] A sysctl variable is accessed concurrently, and there is always a chance of data-race. So, all readers and writers need some basic protection to avoid load/store-tearing. This patch changes proc_dointvec_ms_jiffies() to use READ_ONCE() and WRITE_ONCE() internally to fix data-races on the sysctl side. For now, proc_dointvec_ms_jiffies() itself is tolerant to a data-race, but we still need to add annotations on the other subsystem's side. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- kernel/sysctl.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 83241a56539b..642dc51b6503 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -1327,9 +1327,9 @@ static int do_proc_dointvec_ms_jiffies_conv(bool *negp, unsigned long *lvalp, if (jif > INT_MAX) return 1; - *valp = (int)jif; + WRITE_ONCE(*valp, (int)jif); } else { - int val = *valp; + int val = READ_ONCE(*valp); unsigned long lval; if (val < 0) { *negp = true; @@ -1397,8 +1397,8 @@ int proc_dointvec_userhz_jiffies(struct ctl_table *table, int write, * @ppos: the current position in the file * * Reads/writes up to table->maxlen/sizeof(unsigned int) integer - * values from/to the user buffer, treated as an ASCII string. - * The values read are assumed to be in 1/1000 seconds, and + * values from/to the user buffer, treated as an ASCII string. 
+ * The values read are assumed to be in 1/1000 seconds, and * are converted into jiffies. * * Returns 0 on success. From 4ebf26153215cbc3826d5ec57f051c3ec6c95b22 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 11 Jul 2022 17:15:27 -0700 Subject: [PATCH 1375/1842] icmp: Fix a data-race around sysctl_icmp_ratelimit. [ Upstream commit 2a4eb714841f288cf51c7d942d98af6a8c6e4b01 ] While reading sysctl_icmp_ratelimit, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/icmp.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index f22c0d55f479..9483c2a16b78 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -328,7 +328,8 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt, vif = l3mdev_master_ifindex(dst->dev); peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr, vif, 1); - rc = inet_peer_xrlim_allow(peer, net->ipv4.sysctl_icmp_ratelimit); + rc = inet_peer_xrlim_allow(peer, + READ_ONCE(net->ipv4.sysctl_icmp_ratelimit)); if (peer) inet_putpeer(peer); out: From 38d78c7b4be7759555d45181213a7845aad279e2 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 11 Jul 2022 17:15:28 -0700 Subject: [PATCH 1376/1842] icmp: Fix a data-race around sysctl_icmp_ratemask. [ Upstream commit 1ebcb25ad6fc3d50fca87350acf451b9a66dd31e ] While reading sysctl_icmp_ratemask, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/icmp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 9483c2a16b78..0fa0da1d71f5 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -290,7 +290,7 @@ static bool icmpv4_mask_allow(struct net *net, int type, int code) return true; /* Limit if icmp type is enabled in ratemask. */ - if (!((1 << type) & net->ipv4.sysctl_icmp_ratemask)) + if (!((1 << type) & READ_ONCE(net->ipv4.sysctl_icmp_ratemask))) return true; return false; From 038a87b3e460d2ee579c8b1bd3890d816d6687b1 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 11 Jul 2022 17:15:29 -0700 Subject: [PATCH 1377/1842] raw: Fix a data-race around sysctl_raw_l3mdev_accept. [ Upstream commit 1dace014928e6e385363032d359a04dee9158af0 ] While reading sysctl_raw_l3mdev_accept, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 6897445fb194 ("net: provide a sysctl raw_l3mdev_accept for raw socket lookup with VRFs") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- include/net/raw.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/raw.h b/include/net/raw.h index 8ad8df594853..c51a635671a7 100644 --- a/include/net/raw.h +++ b/include/net/raw.h @@ -75,7 +75,7 @@ static inline bool raw_sk_bound_dev_eq(struct net *net, int bound_dev_if, int dif, int sdif) { #if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV) - return inet_bound_dev_eq(!!net->ipv4.sysctl_raw_l3mdev_accept, + return inet_bound_dev_eq(READ_ONCE(net->ipv4.sysctl_raw_l3mdev_accept), bound_dev_if, dif, sdif); #else return inet_bound_dev_eq(true, bound_dev_if, dif, sdif); From 2c56958de89b9a2efdb19310e3f11a6b97254d39 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 11 Jul 2022 17:15:32 -0700 Subject: [PATCH 1378/1842] ipv4: Fix data-races around sysctl_ip_dynaddr. [ Upstream commit e49e4aff7ec19b2d0d0957ee30e93dade57dab9e ] While reading sysctl_ip_dynaddr, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- Documentation/networking/ip-sysctl.rst | 2 +- net/ipv4/af_inet.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst index 73de75906b24..0b1f3235aa77 100644 --- a/Documentation/networking/ip-sysctl.rst +++ b/Documentation/networking/ip-sysctl.rst @@ -1080,7 +1080,7 @@ ip_autobind_reuse - BOOLEAN option should only be set by experts. Default: 0 -ip_dynaddr - BOOLEAN +ip_dynaddr - INTEGER If set non-zero, enables support for dynamic addresses. If set to a non-zero value larger than 1, a kernel log message will be printed when dynamic address rewriting diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index 742218594741..e77283069c7b 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -1245,7 +1245,7 @@ static int inet_sk_reselect_saddr(struct sock *sk) if (new_saddr == old_saddr) return 0; - if (sock_net(sk)->ipv4.sysctl_ip_dynaddr > 1) { + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_ip_dynaddr) > 1) { pr_info("%s(): shifting inet->saddr from %pI4 to %pI4\n", __func__, &old_saddr, &new_saddr); } @@ -1300,7 +1300,7 @@ int inet_sk_rebuild_header(struct sock *sk) * Other protocols have to map its equivalent state to TCP_SYN_SENT. * DCCP maps its DCCP_REQUESTING state to TCP_SYN_SENT. -acme */ - if (!sock_net(sk)->ipv4.sysctl_ip_dynaddr || + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_ip_dynaddr) || sk->sk_state != TCP_SYN_SENT || (sk->sk_userlocks & SOCK_BINDADDR_LOCK) || (err = inet_sk_reselect_saddr(sk)) != 0) From a51040d4b120f3520df64fb0b9c63b31d69bea9b Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 11 Jul 2022 17:15:33 -0700 Subject: [PATCH 1379/1842] nexthop: Fix data-races around nexthop_compat_mode. [ Upstream commit bdf00bf24bef9be1ca641a6390fd5487873e0d2e ] While reading nexthop_compat_mode, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 4f80116d3df3 ("net: ipv4: add sysctl for nexthop api compatibility mode") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/fib_semantics.c | 2 +- net/ipv4/nexthop.c | 5 +++-- net/ipv6/route.c | 2 +- 3 files changed, 5 insertions(+), 4 deletions(-) diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c index 845adb92ef70..70c866308abe 100644 --- a/net/ipv4/fib_semantics.c +++ b/net/ipv4/fib_semantics.c @@ -1831,7 +1831,7 @@ int fib_dump_info(struct sk_buff *skb, u32 portid, u32 seq, int event, goto nla_put_failure; if (nexthop_is_blackhole(fi->nh)) rtm->rtm_type = RTN_BLACKHOLE; - if (!fi->fib_net->ipv4.sysctl_nexthop_compat_mode) + if (!READ_ONCE(fi->fib_net->ipv4.sysctl_nexthop_compat_mode)) goto offload; } diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c index 8bd3f5e3c0e7..2a17dc9413ae 100644 --- a/net/ipv4/nexthop.c +++ b/net/ipv4/nexthop.c @@ -882,7 +882,7 @@ static void __remove_nexthop_fib(struct net *net, struct nexthop *nh) /* __ip6_del_rt does a release, so do a hold here */ fib6_info_hold(f6i); ipv6_stub->ip6_del_rt(net, f6i, - !net->ipv4.sysctl_nexthop_compat_mode); + !READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode)); } } @@ -1194,7 +1194,8 @@ static int insert_nexthop(struct net *net, struct nexthop *new_nh, if (!rc) { nh_base_seq_inc(net); nexthop_notify(RTM_NEWNEXTHOP, new_nh, &cfg->nlinfo); - if (replace_notify && net->ipv4.sysctl_nexthop_compat_mode) + if (replace_notify && + READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode)) nexthop_replace_notify(net, new_nh, &cfg->nlinfo); } diff --git a/net/ipv6/route.c b/net/ipv6/route.c index e67505c6d856..cdf215442d37 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -5641,7 +5641,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb, if (nexthop_is_blackhole(rt->nh)) rtm->rtm_type = RTN_BLACKHOLE; - if (net->ipv4.sysctl_nexthop_compat_mode && + if (READ_ONCE(net->ipv4.sysctl_nexthop_compat_mode) && rt6_fill_node_nexthop(skb, rt->nh, &nh_flags) < 0) goto nla_put_failure; From 29c6a632f8193b0a4b40156afb823371d1d4c05a Mon Sep 17 00:00:00 2001 From: Liang He Date: Tue, 12 Jul 2022 14:14:17 +0800 Subject: [PATCH 1380/1842] net: ftgmac100: Hold reference returned by of_get_child_by_name() [ Upstream commit 49b9f431ff0d845a36be0b3ede35ec324f2e5fee ] In ftgmac100_probe(), we should hold the refernece returned by of_get_child_by_name() and use it to call of_node_put() for reference balance. Fixes: 39bfab8844a0 ("net: ftgmac100: Add support for DT phy-handle property") Signed-off-by: Liang He Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/faraday/ftgmac100.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c index eea4bd3116e8..969af4dd6405 100644 --- a/drivers/net/ethernet/faraday/ftgmac100.c +++ b/drivers/net/ethernet/faraday/ftgmac100.c @@ -1747,6 +1747,19 @@ static int ftgmac100_setup_clk(struct ftgmac100 *priv) return rc; } +static bool ftgmac100_has_child_node(struct device_node *np, const char *name) +{ + struct device_node *child_np = of_get_child_by_name(np, name); + bool ret = false; + + if (child_np) { + ret = true; + of_node_put(child_np); + } + + return ret; +} + static int ftgmac100_probe(struct platform_device *pdev) { struct resource *res; @@ -1860,7 +1873,7 @@ static int ftgmac100_probe(struct platform_device *pdev) /* Display what we found */ phy_attached_info(phy); - } else if (np && !of_get_child_by_name(np, "mdio")) { + } else if (np && !ftgmac100_has_child_node(np, "mdio")) { /* Support legacy ASPEED devicetree descriptions that decribe a * MAC with an embedded MDIO controller but have no "mdio" * child node. Automatically scan the MDIO bus for available From eb360267e1e972475023d06546e18365a222698c Mon Sep 17 00:00:00 2001 From: Coiby Xu Date: Wed, 13 Jul 2022 15:21:11 +0800 Subject: [PATCH 1381/1842] ima: force signature verification when CONFIG_KEXEC_SIG is configured [ Upstream commit af16df54b89dee72df253abc5e7b5e8a6d16c11c ] Currently, an unsigned kernel could be kexec'ed when IMA arch specific policy is configured unless lockdown is enabled. Enforce kernel signature verification check in the kexec_file_load syscall when IMA arch specific policy is configured. Fixes: 99d5cadfde2b ("kexec_file: split KEXEC_VERIFY_SIG into KEXEC_SIG and KEXEC_SIG_FORCE") Reported-and-suggested-by: Mimi Zohar Signed-off-by: Coiby Xu Signed-off-by: Mimi Zohar Signed-off-by: Sasha Levin --- arch/x86/kernel/ima_arch.c | 2 ++ include/linux/kexec.h | 6 ++++++ kernel/kexec_file.c | 11 ++++++++++- 3 files changed, 18 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/ima_arch.c b/arch/x86/kernel/ima_arch.c index 7dfb1e808928..bd218470d145 100644 --- a/arch/x86/kernel/ima_arch.c +++ b/arch/x86/kernel/ima_arch.c @@ -88,6 +88,8 @@ const char * const *arch_get_ima_policy(void) if (IS_ENABLED(CONFIG_IMA_ARCH_POLICY) && arch_ima_get_secureboot()) { if (IS_ENABLED(CONFIG_MODULE_SIG)) set_module_sig_enforced(); + if (IS_ENABLED(CONFIG_KEXEC_SIG)) + set_kexec_sig_enforced(); return sb_arch_rules; } return NULL; diff --git a/include/linux/kexec.h b/include/linux/kexec.h index 037192c3a46f..a1f12e959bba 100644 --- a/include/linux/kexec.h +++ b/include/linux/kexec.h @@ -442,6 +442,12 @@ static inline int kexec_crash_loaded(void) { return 0; } #define kexec_in_progress false #endif /* CONFIG_KEXEC_CORE */ +#ifdef CONFIG_KEXEC_SIG +void set_kexec_sig_enforced(void); +#else +static inline void set_kexec_sig_enforced(void) {} +#endif + #endif /* !defined(__ASSEBMLY__) */ #endif /* LINUX_KEXEC_H */ diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c index 2e0f0b3fb9ab..fff11916aba3 100644 --- a/kernel/kexec_file.c +++ b/kernel/kexec_file.c @@ -29,6 +29,15 @@ #include #include "kexec_internal.h" +#ifdef CONFIG_KEXEC_SIG +static bool sig_enforce = IS_ENABLED(CONFIG_KEXEC_SIG_FORCE); + +void set_kexec_sig_enforced(void) +{ + sig_enforce = true; +} +#endif + static int kexec_calculate_store_digests(struct kimage *image); /* @@ -159,7 +168,7 @@ 
kimage_validate_signature(struct kimage *image) image->kernel_buf_len); if (ret) { - if (IS_ENABLED(CONFIG_KEXEC_SIG_FORCE)) { + if (sig_enforce) { pr_notice("Enforced kernel signature verification failed (%d).\n", ret); return ret; } From c1d9702ceb4a091da6bee380627596d1fba09274 Mon Sep 17 00:00:00 2001 From: Jianglei Nie Date: Tue, 12 Jul 2022 09:10:37 +0800 Subject: [PATCH 1382/1842] ima: Fix potential memory leak in ima_init_crypto() [ Upstream commit 067d2521874135267e681c19d42761c601d503d6 ] On failure to allocate the SHA1 tfm, IMA fails to initialize and exits without freeing the ima_algo_array. Add the missing kfree() for ima_algo_array to avoid the potential memory leak. Signed-off-by: Jianglei Nie Fixes: 6d94809af6b0 ("ima: Allocate and initialize tfm for each PCR bank") Signed-off-by: Mimi Zohar Signed-off-by: Sasha Levin --- security/integrity/ima/ima_crypto.c | 1 + 1 file changed, 1 insertion(+) diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index f6a7e9643b54..b1e5e7749e41 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -205,6 +205,7 @@ int __init ima_init_crypto(void) crypto_free_shash(ima_algo_array[i].tfm); } + kfree(ima_algo_array); out: crypto_free_shash(ima_shash_tfm); return rc; From c2240500817b3b4b996cdf2a461a3a5679f49b94 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C3=8D=C3=B1igo=20Huguet?= Date: Tue, 12 Jul 2022 08:26:42 +0200 Subject: [PATCH 1383/1842] sfc: fix use after free when disabling sriov MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit ebe41da5d47ac0fff877e57bd14c54dccf168827 ] Use after free is detected by kfence when disabling sriov. What was read after being freed was vf->pci_dev: it was freed from pci_disable_sriov and later read in efx_ef10_sriov_free_vf_vports, called from efx_ef10_sriov_free_vf_vswitching. Set the pointer to NULL at release time to not trying to read it later. 
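A minimal sketch of the pattern the fix applies, with simplified stand-ins for the sfc VF bookkeeping rather than the real structures: when the object a cached pointer refers to is torn down, clear the cached pointer so later cleanup sees NULL instead of freed memory:

        #include <stdlib.h>

        struct pci_dev_stub {
                int devfn;
        };

        struct vf_info {
                struct pci_dev_stub *pci_dev;   /* borrowed reference, owned elsewhere */
        };

        static void disable_vfs(struct vf_info *vf, int vf_count)
        {
                for (int i = 0; i < vf_count; i++) {
                        /* The VF device goes away here (pci_disable_sriov() in
                         * the driver); drop the cached pointer so nothing can
                         * dereference it afterwards.
                         */
                        free(vf[i].pci_dev);
                        vf[i].pci_dev = NULL;
                }
        }
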
Reproducer and dmesg log (note that kfence doesn't detect it every time): $ echo 1 > /sys/class/net/enp65s0f0np0/device/sriov_numvfs $ echo 0 > /sys/class/net/enp65s0f0np0/device/sriov_numvfs BUG: KFENCE: use-after-free read in efx_ef10_sriov_free_vf_vswitching+0x82/0x170 [sfc] Use-after-free read at 0x00000000ff3c1ba5 (in kfence-#224): efx_ef10_sriov_free_vf_vswitching+0x82/0x170 [sfc] efx_ef10_pci_sriov_disable+0x38/0x70 [sfc] efx_pci_sriov_configure+0x24/0x40 [sfc] sriov_numvfs_store+0xfe/0x140 kernfs_fop_write_iter+0x11c/0x1b0 new_sync_write+0x11f/0x1b0 vfs_write+0x1eb/0x280 ksys_write+0x5f/0xe0 do_syscall_64+0x5c/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae kfence-#224: 0x00000000edb8ef95-0x00000000671f5ce1, size=2792, cache=kmalloc-4k allocated by task 6771 on cpu 10 at 3137.860196s: pci_alloc_dev+0x21/0x60 pci_iov_add_virtfn+0x2a2/0x320 sriov_enable+0x212/0x3e0 efx_ef10_sriov_configure+0x67/0x80 [sfc] efx_pci_sriov_configure+0x24/0x40 [sfc] sriov_numvfs_store+0xba/0x140 kernfs_fop_write_iter+0x11c/0x1b0 new_sync_write+0x11f/0x1b0 vfs_write+0x1eb/0x280 ksys_write+0x5f/0xe0 do_syscall_64+0x5c/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae freed by task 6771 on cpu 12 at 3170.991309s: device_release+0x34/0x90 kobject_cleanup+0x3a/0x130 pci_iov_remove_virtfn+0xd9/0x120 sriov_disable+0x30/0xe0 efx_ef10_pci_sriov_disable+0x57/0x70 [sfc] efx_pci_sriov_configure+0x24/0x40 [sfc] sriov_numvfs_store+0xfe/0x140 kernfs_fop_write_iter+0x11c/0x1b0 new_sync_write+0x11f/0x1b0 vfs_write+0x1eb/0x280 ksys_write+0x5f/0xe0 do_syscall_64+0x5c/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae Fixes: 3c5eb87605e85 ("sfc: create vports for VFs and assign random MAC addresses") Reported-by: Yanghang Liu Signed-off-by: Íñigo Huguet Acked-by: Martin Habets Link: https://lore.kernel.org/r/20220712062642.6915-1-ihuguet@redhat.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/sfc/ef10_sriov.c | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/sfc/ef10_sriov.c b/drivers/net/ethernet/sfc/ef10_sriov.c index 84041cd587d7..b44acb6e3953 100644 --- a/drivers/net/ethernet/sfc/ef10_sriov.c +++ b/drivers/net/ethernet/sfc/ef10_sriov.c @@ -411,8 +411,9 @@ static int efx_ef10_pci_sriov_enable(struct efx_nic *efx, int num_vfs) static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force) { struct pci_dev *dev = efx->pci_dev; + struct efx_ef10_nic_data *nic_data = efx->nic_data; unsigned int vfs_assigned = pci_vfs_assigned(dev); - int rc = 0; + int i, rc = 0; if (vfs_assigned && !force) { netif_info(efx, drv, efx->net_dev, "VFs are assigned to guests; " @@ -420,10 +421,13 @@ static int efx_ef10_pci_sriov_disable(struct efx_nic *efx, bool force) return -EBUSY; } - if (!vfs_assigned) + if (!vfs_assigned) { + for (i = 0; i < efx->vf_count; i++) + nic_data->vf[i].pci_dev = NULL; pci_disable_sriov(dev); - else + } else { rc = -EBUSY; + } efx_ef10_sriov_free_vf_vswitching(efx); efx->vf_count = 0; From 834fa0a22fe827ec443543ddab8939fb5d58d1f1 Mon Sep 17 00:00:00 2001 From: Andrea Mayer Date: Tue, 12 Jul 2022 19:58:35 +0200 Subject: [PATCH 1384/1842] seg6: fix skb checksum evaluation in SRH encapsulation/insertion [ Upstream commit df8386d13ea280d55beee1b95f61a59234a3798b ] Support for SRH encapsulation and insertion was introduced with commit 6c8702c60b88 ("ipv6: sr: add support for SRH encapsulation and injection with lwtunnels"), through the seg6_do_srh_encap() and seg6_do_srh_inline() functions, respectively. 
The former encapsulates the packet in an outer IPv6 header along with the SRH, while the latter inserts the SRH between the IPv6 header and the payload. Then, the headers are initialized/updated according to the operating mode (i.e., encap/inline). Finally, the skb checksum is calculated to reflect the changes applied to the headers. The IPv6 payload length ('payload_len') is not initialized within seg6_do_srh_{inline,encap}() but is deferred in seg6_do_srh(), i.e. the caller of seg6_do_srh_{inline,encap}(). However, this operation invalidates the skb checksum, since the 'payload_len' is updated only after the checksum is evaluated. To solve this issue, the initialization of the IPv6 payload length is moved from seg6_do_srh() directly into the seg6_do_srh_{inline,encap}() functions and before the skb checksum update takes place. Fixes: 6c8702c60b88 ("ipv6: sr: add support for SRH encapsulation and injection with lwtunnels") Reported-by: Paolo Abeni Link: https://lore.kernel.org/all/20220705190727.69d532417be7438b15404ee1@uniroma2.it Signed-off-by: Andrea Mayer Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- net/ipv6/seg6_iptunnel.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c index 4d4399c5c5ea..40ac23242c37 100644 --- a/net/ipv6/seg6_iptunnel.c +++ b/net/ipv6/seg6_iptunnel.c @@ -188,6 +188,8 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto) } #endif + hdr->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); + skb_postpush_rcsum(skb, hdr, tot_len); return 0; @@ -240,6 +242,8 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh) } #endif + hdr->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); + skb_postpush_rcsum(skb, hdr, sizeof(struct ipv6hdr) + hdrlen); return 0; @@ -301,7 +305,6 @@ static int seg6_do_srh(struct sk_buff *skb) break; } - ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); skb_set_transport_header(skb, sizeof(struct ipv6hdr)); return 0; From 7b38df59a8f4119e1c9d83922f28c67fac9291d8 Mon Sep 17 00:00:00 2001 From: Andrea Mayer Date: Tue, 12 Jul 2022 19:58:36 +0200 Subject: [PATCH 1385/1842] seg6: fix skb checksum in SRv6 End.B6 and End.B6.Encaps behaviors [ Upstream commit f048880fc77058d864aff5c674af7918b30f312a ] The SRv6 End.B6 and End.B6.Encaps behaviors rely on functions seg6_do_srh_{encap,inline}() to, respectively: i) encapsulate the packet within an outer IPv6 header with the specified Segment Routing Header (SRH); ii) insert the specified SRH directly after the IPv6 header of the packet. This patch removes the initialization of the IPv6 header payload length from the input_action_end_b6{_encap}() functions, as it is now handled properly by seg6_do_srh_{encap,inline}() to avoid corruption of the skb checksum. 
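A toy, self-contained illustration of the ordering problem both seg6 fixes address: with a 16-bit ones'-complement sum, any header field written after the sum is taken (here a payload-length word) leaves the stored checksum stale:

        #include <stdio.h>
        #include <stdint.h>

        static uint16_t csum16(const uint16_t *p, int words)
        {
                uint32_t sum = 0;

                while (words--)
                        sum += *p++;
                while (sum >> 16)
                        sum = (sum & 0xffff) + (sum >> 16);
                return (uint16_t)~sum;
        }

        int main(void)
        {
                uint16_t hdr[4] = { 0x6000, 0x0000, 0x0000, 0x002b }; /* toy header, payload_len in hdr[2] */

                uint16_t stale = csum16(hdr, 4);   /* checksum taken too early...      */
                hdr[2] = 1440;                     /* ...then payload_len is filled in */
                uint16_t fresh = csum16(hdr, 4);   /* what the checksum should be      */

                printf("stale 0x%04x vs correct 0x%04x\n", stale, fresh);
                return 0;
        }
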
Fixes: 140f04c33bbc ("ipv6: sr: implement several seg6local actions") Signed-off-by: Andrea Mayer Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- net/ipv6/seg6_local.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/net/ipv6/seg6_local.c b/net/ipv6/seg6_local.c index eba23279912d..11f7da4139f6 100644 --- a/net/ipv6/seg6_local.c +++ b/net/ipv6/seg6_local.c @@ -435,7 +435,6 @@ static int input_action_end_b6(struct sk_buff *skb, struct seg6_local_lwt *slwt) if (err) goto drop; - ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); skb_set_transport_header(skb, sizeof(struct ipv6hdr)); seg6_lookup_nexthop(skb, NULL, 0); @@ -467,7 +466,6 @@ static int input_action_end_b6_encap(struct sk_buff *skb, if (err) goto drop; - ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); skb_set_transport_header(skb, sizeof(struct ipv6hdr)); seg6_lookup_nexthop(skb, NULL, 0); From 2d4efc9a0e853400dd9e453c63bf5e16aee2c399 Mon Sep 17 00:00:00 2001 From: Andrea Mayer Date: Tue, 12 Jul 2022 19:58:37 +0200 Subject: [PATCH 1386/1842] seg6: bpf: fix skb checksum in bpf_push_seg6_encap() [ Upstream commit 4889fbd98deaf243c3baadc54e296d71c6af1eb0 ] Both helper functions bpf_lwt_seg6_action() and bpf_lwt_push_encap() use the bpf_push_seg6_encap() to encapsulate the packet in an IPv6 with Segment Routing Header (SRH) or insert an SRH between the IPv6 header and the payload. To achieve this result, such helper functions rely on bpf_push_seg6_encap() which, in turn, leverages seg6_do_srh_{encap,inline}() to perform the required operation (i.e. encap/inline). This patch removes the initialization of the IPv6 header payload length from bpf_push_seg6_encap(), as it is now handled properly by seg6_do_srh_{encap,inline}() to prevent corruption of the skb checksum. Fixes: fe94cc290f53 ("bpf: Add IPv6 Segment Routing helpers") Signed-off-by: Andrea Mayer Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- net/core/filter.c | 1 - 1 file changed, 1 deletion(-) diff --git a/net/core/filter.c b/net/core/filter.c index 246947fbc958..34ae30503ac4 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -5624,7 +5624,6 @@ static int bpf_push_seg6_encap(struct sk_buff *skb, u32 type, void *hdr, u32 len if (err) return err; - ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr)); skb_set_transport_header(skb, sizeof(struct ipv6hdr)); return seg6_lookup_nexthop(skb, NULL, 0); From b82e4ad58a7fb72456503958a93060f87896e629 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=C3=8D=C3=B1igo=20Huguet?= Date: Wed, 13 Jul 2022 11:21:16 +0200 Subject: [PATCH 1387/1842] sfc: fix kernel panic when creating VF MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit ada74c5539eba06cf8b47d068f92e0b3963a9a6e ] When creating VFs a kernel panic can happen when calling to efx_ef10_try_update_nic_stats_vf. When releasing a DMA coherent buffer, sometimes, I don't know in what specific circumstances, it has to unmap memory with vunmap. It is disallowed to do that in IRQ context or with BH disabled. Otherwise, we hit this line in vunmap, causing the crash: BUG_ON(in_interrupt()); This patch reenables BH to release the buffer. Log messages when the bug is hit: kernel BUG at mm/vmalloc.c:2727! invalid opcode: 0000 [#1] PREEMPT SMP NOPTI CPU: 6 PID: 1462 Comm: NetworkManager Kdump: loaded Tainted: G I --------- --- 5.14.0-119.el9.x86_64 #1 Hardware name: Dell Inc. PowerEdge R740/06WXJT, BIOS 2.8.2 08/27/2020 RIP: 0010:vunmap+0x2e/0x30 ...skip... 
Call Trace: __iommu_dma_free+0x96/0x100 efx_nic_free_buffer+0x2b/0x40 [sfc] efx_ef10_try_update_nic_stats_vf+0x14a/0x1c0 [sfc] efx_ef10_update_stats_vf+0x18/0x40 [sfc] efx_start_all+0x15e/0x1d0 [sfc] efx_net_open+0x5a/0xe0 [sfc] __dev_open+0xe7/0x1a0 __dev_change_flags+0x1d7/0x240 dev_change_flags+0x21/0x60 ...skip... Fixes: d778819609a2 ("sfc: DMA the VF stats only when requested") Reported-by: Ma Yuying Signed-off-by: Íñigo Huguet Acked-by: Edward Cree Link: https://lore.kernel.org/r/20220713092116.21238-1-ihuguet@redhat.com Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/ethernet/sfc/ef10.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c index fa1a872c4bc8..5b7413305be6 100644 --- a/drivers/net/ethernet/sfc/ef10.c +++ b/drivers/net/ethernet/sfc/ef10.c @@ -1916,7 +1916,10 @@ static int efx_ef10_try_update_nic_stats_vf(struct efx_nic *efx) efx_update_sw_stats(efx, stats); out: + /* releasing a DMA coherent buffer with BH disabled can panic */ + spin_unlock_bh(&efx->stats_lock); efx_nic_free_buffer(efx, &stats_buf); + spin_lock_bh(&efx->stats_lock); return rc; } From 38e081ee06cb5892d1bb9a4565518485d435bbd1 Mon Sep 17 00:00:00 2001 From: "Chia-Lin Kao (AceLan)" Date: Wed, 13 Jul 2022 19:12:23 +0800 Subject: [PATCH 1388/1842] net: atlantic: remove deep parameter on suspend/resume functions [ Upstream commit 0f33250760384e05c36466b0a2f92f3c6007ba92 ] Below commit claims that atlantic NIC requires to reset the device on pm op, and had set the deep to true for all suspend/resume functions. commit 1809c30b6e5a ("net: atlantic: always deep reset on pm op, fixing up my null deref regression") So, we could remove deep parameter on suspend/resume functions without any functional change. 
Fixes: 1809c30b6e5a ("net: atlantic: always deep reset on pm op, fixing up my null deref regression") Signed-off-by: Chia-Lin Kao (AceLan) Link: https://lore.kernel.org/r/20220713111224.1535938-1-acelan.kao@canonical.com Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- .../ethernet/aquantia/atlantic/aq_pci_func.c | 24 ++++++++----------- 1 file changed, 10 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c index fc5ea434a27c..8c05b2b79339 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c @@ -385,7 +385,7 @@ static void aq_pci_shutdown(struct pci_dev *pdev) } } -static int aq_suspend_common(struct device *dev, bool deep) +static int aq_suspend_common(struct device *dev) { struct aq_nic_s *nic = pci_get_drvdata(to_pci_dev(dev)); @@ -398,17 +398,15 @@ static int aq_suspend_common(struct device *dev, bool deep) if (netif_running(nic->ndev)) aq_nic_stop(nic); - if (deep) { - aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol); - aq_nic_set_power(nic); - } + aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol); + aq_nic_set_power(nic); rtnl_unlock(); return 0; } -static int atl_resume_common(struct device *dev, bool deep) +static int atl_resume_common(struct device *dev) { struct pci_dev *pdev = to_pci_dev(dev); struct aq_nic_s *nic; @@ -421,10 +419,8 @@ static int atl_resume_common(struct device *dev, bool deep) pci_set_power_state(pdev, PCI_D0); pci_restore_state(pdev); - if (deep) { - /* Reinitialize Nic/Vecs objects */ - aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol); - } + /* Reinitialize Nic/Vecs objects */ + aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol); if (netif_running(nic->ndev)) { ret = aq_nic_init(nic); @@ -450,22 +446,22 @@ static int atl_resume_common(struct device *dev, bool deep) static int aq_pm_freeze(struct device *dev) { - return aq_suspend_common(dev, true); + return aq_suspend_common(dev); } static int aq_pm_suspend_poweroff(struct device *dev) { - return aq_suspend_common(dev, true); + return aq_suspend_common(dev); } static int aq_pm_thaw(struct device *dev) { - return atl_resume_common(dev, true); + return atl_resume_common(dev); } static int aq_pm_resume_restore(struct device *dev) { - return atl_resume_common(dev, true); + return atl_resume_common(dev); } static const struct dev_pm_ops aq_pm_ops = { From c2978d0124f26e453ed30fda6a37f62d8c7f19cb Mon Sep 17 00:00:00 2001 From: "Chia-Lin Kao (AceLan)" Date: Wed, 13 Jul 2022 19:12:24 +0800 Subject: [PATCH 1389/1842] net: atlantic: remove aq_nic_deinit() when resume [ Upstream commit 2e15c51fefaffaf9f72255eaef4fada05055e4c5 ] aq_nic_deinit() has been called while suspending, so we don't have to call it again on resume. Actually, call it again leads to another hang issue when resuming from S3. Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992345] Call Trace: Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992346] Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992348] aq_nic_deinit+0xb4/0xd0 [atlantic] Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992356] aq_pm_thaw+0x7f/0x100 [atlantic] Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992362] pci_pm_resume+0x5c/0x90 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992366] ? 
pci_pm_thaw+0x80/0x80 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992368] dpm_run_callback+0x4e/0x120 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992371] device_resume+0xad/0x200 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992373] async_resume+0x1e/0x40 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992374] async_run_entry_fn+0x33/0x120 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992377] process_one_work+0x220/0x3c0 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992380] worker_thread+0x4d/0x3f0 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992382] ? process_one_work+0x3c0/0x3c0 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992384] kthread+0x12a/0x150 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992386] ? set_kthread_struct+0x40/0x40 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992387] ret_from_fork+0x22/0x30 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992391] Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992392] ---[ end trace 1ec8c79604ed5e0d ]--- Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992394] PM: dpm_run_callback(): pci_pm_resume+0x0/0x90 returns -110 Jul 8 03:09:44 u-Precision-7865-Tower kernel: [ 5910.992397] atlantic 0000:02:00.0: PM: failed to resume async: error -110 Fixes: 1809c30b6e5a ("net: atlantic: always deep reset on pm op, fixing up my null deref regression") Signed-off-by: Chia-Lin Kao (AceLan) Link: https://lore.kernel.org/r/20220713111224.1535938-2-acelan.kao@canonical.com Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c index 8c05b2b79339..a0ce213c473b 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c @@ -419,9 +419,6 @@ static int atl_resume_common(struct device *dev) pci_set_power_state(pdev, PCI_D0); pci_restore_state(pdev); - /* Reinitialize Nic/Vecs objects */ - aq_nic_deinit(nic, !nic->aq_hw->aq_nic_cfg->wol); - if (netif_running(nic->ndev)) { ret = aq_nic_init(nic); if (ret) From eb58fd350a851b5cda9f4c9a2cefb15c7ccf33f3 Mon Sep 17 00:00:00 2001 From: Vitaly Kuznetsov Date: Fri, 8 Jul 2022 14:51:47 +0200 Subject: [PATCH 1390/1842] KVM: x86: Fully initialize 'struct kvm_lapic_irq' in kvm_pv_kick_cpu_op() [ Upstream commit 8a414f943f8b5f94bbaafdec863d6f3dbef33f8a ] 'vector' and 'trig_mode' fields of 'struct kvm_lapic_irq' are left uninitialized in kvm_pv_kick_cpu_op(). While these fields are normally not needed for APIC_DM_REMRD, they're still referenced by __apic_accept_irq() for trace_kvm_apic_accept_irq(). Fully initialize the structure to avoid consuming random stack memory. 
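A note on why the patch below switches to a designated initializer: in C, any member that is not named in a designated initializer is zero-initialized by the compiler, whereas the members of a plain uninitialized local struct hold indeterminate stack contents. A minimal stand-alone sketch of that difference (illustrative only; the struct and field names below are made up, not kernel code):

	#include <stdio.h>

	/* Hypothetical stand-in for a struct like kvm_lapic_irq. */
	struct irq_desc_example {
		int vector;		/* deliberately not named below */
		int trig_mode;		/* deliberately not named below */
		int delivery_mode;
		int dest_id;
	};

	int main(void)
	{
		struct irq_desc_example irq = {
			.delivery_mode	= 3,
			.dest_id	= 7,
		};

		/* Prints "0 0": members not listed above are guaranteed zero. */
		printf("%d %d\n", irq.vector, irq.trig_mode);
		return 0;
	}
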
Fixes: a183b638b61c ("KVM: x86: make apic_accept_irq tracepoint more generic") Reported-by: syzbot+d6caa905917d353f0d07@syzkaller.appspotmail.com Signed-off-by: Vitaly Kuznetsov Reviewed-by: Sean Christopherson Message-Id: <20220708125147.593975-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini Signed-off-by: Sasha Levin --- arch/x86/kvm/x86.c | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index da547752580a..c71f702c037d 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8142,15 +8142,17 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr, */ static void kvm_pv_kick_cpu_op(struct kvm *kvm, unsigned long flags, int apicid) { - struct kvm_lapic_irq lapic_irq; + /* + * All other fields are unused for APIC_DM_REMRD, but may be consumed by + * common code, e.g. for tracing. Defer initialization to the compiler. + */ + struct kvm_lapic_irq lapic_irq = { + .delivery_mode = APIC_DM_REMRD, + .dest_mode = APIC_DEST_PHYSICAL, + .shorthand = APIC_DEST_NOSHORT, + .dest_id = apicid, + }; - lapic_irq.shorthand = APIC_DEST_NOSHORT; - lapic_irq.dest_mode = APIC_DEST_PHYSICAL; - lapic_irq.level = 0; - lapic_irq.dest_id = apicid; - lapic_irq.msi_redir_hint = false; - - lapic_irq.delivery_mode = APIC_DM_REMRD; kvm_irq_delivery_to_apic(kvm, NULL, &lapic_irq, NULL); } From c713de1d80a5d7035dc7f667b485bded83b4e74a Mon Sep 17 00:00:00 2001 From: Tariq Toukan Date: Thu, 14 Jul 2022 10:07:54 +0300 Subject: [PATCH 1391/1842] net/tls: Check for errors in tls_device_init [ Upstream commit 3d8c51b25a235e283e37750943bbf356ef187230 ] Add missing error checks in tls_device_init. Fixes: e8f69799810c ("net/tls: Add generic NIC offload infrastructure") Reported-by: Jakub Kicinski Reviewed-by: Maxim Mikityanskiy Signed-off-by: Tariq Toukan Link: https://lore.kernel.org/r/20220714070754.1428-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- include/net/tls.h | 4 ++-- net/tls/tls_device.c | 4 ++-- net/tls/tls_main.c | 7 ++++++- 3 files changed, 10 insertions(+), 5 deletions(-) diff --git a/include/net/tls.h b/include/net/tls.h index 745b3bc6ce91..d9cb597cab46 100644 --- a/include/net/tls.h +++ b/include/net/tls.h @@ -707,7 +707,7 @@ int tls_sw_fallback_init(struct sock *sk, struct tls_crypto_info *crypto_info); #ifdef CONFIG_TLS_DEVICE -void tls_device_init(void); +int tls_device_init(void); void tls_device_cleanup(void); void tls_device_sk_destruct(struct sock *sk); int tls_set_device_offload(struct sock *sk, struct tls_context *ctx); @@ -727,7 +727,7 @@ static inline bool tls_is_sk_rx_device_offloaded(struct sock *sk) return tls_get_ctx(sk)->rx_conf == TLS_HW; } #else -static inline void tls_device_init(void) {} +static inline int tls_device_init(void) { return 0; } static inline void tls_device_cleanup(void) {} static inline int diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index 3c82286e5bcc..6ae2ce411b4b 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -1390,9 +1390,9 @@ static struct notifier_block tls_dev_notifier = { .notifier_call = tls_dev_event, }; -void __init tls_device_init(void) +int __init tls_device_init(void) { - register_netdevice_notifier(&tls_dev_notifier); + return register_netdevice_notifier(&tls_dev_notifier); } void __exit tls_device_cleanup(void) diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c index 58d22d6b86ae..e537085b184f 100644 --- a/net/tls/tls_main.c +++ b/net/tls/tls_main.c @@ -905,7 +905,12 @@ static int __init 
tls_register(void) if (err) return err; - tls_device_init(); + err = tls_device_init(); + if (err) { + unregister_pernet_subsys(&tls_proc_ops); + return err; + } + tcp_register_ulp(&tcp_tls_ulp_ops); return 0; From 31e16a5e113fd843785194b28cbb16c1541e65ba Mon Sep 17 00:00:00 2001 From: Muchun Song Date: Thu, 9 Jun 2022 18:40:32 +0800 Subject: [PATCH 1392/1842] mm: sysctl: fix missing numa_stat when !CONFIG_HUGETLB_PAGE [ Upstream commit 43b5240ca6b33108998810593248186b1e3ae34a ] "numa_stat" should not be included in the scope of CONFIG_HUGETLB_PAGE. If CONFIG_HUGETLB_PAGE is not configured but CONFIG_NUMA is, "numa_stat" is missing from /proc. Move it out of CONFIG_HUGETLB_PAGE to fix it. Fixes: 4518085e127d ("mm, sysctl: make NUMA stats configurable") Signed-off-by: Muchun Song Cc: Acked-by: Michal Hocko Acked-by: Mel Gorman Signed-off-by: Luis Chamberlain Signed-off-by: Sasha Levin --- kernel/sysctl.c | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 642dc51b6503..f0dd1a3b66eb 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -2814,6 +2814,17 @@ static struct ctl_table vm_table[] = { .extra1 = SYSCTL_ZERO, .extra2 = &two_hundred, }, +#ifdef CONFIG_NUMA + { + .procname = "numa_stat", + .data = &sysctl_vm_numa_stat, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = sysctl_vm_numa_stat_handler, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, + }, +#endif #ifdef CONFIG_HUGETLB_PAGE { .procname = "nr_hugepages", @@ -2830,15 +2841,6 @@ static struct ctl_table vm_table[] = { .mode = 0644, .proc_handler = &hugetlb_mempolicy_sysctl_handler, }, - { - .procname = "numa_stat", - .data = &sysctl_vm_numa_stat, - .maxlen = sizeof(int), - .mode = 0644, - .proc_handler = sysctl_vm_numa_stat_handler, - .extra1 = SYSCTL_ZERO, - .extra2 = SYSCTL_ONE, - }, #endif { .procname = "hugetlb_shm_group", From d6115800325c984f51532b67c6e8420704583101 Mon Sep 17 00:00:00 2001 From: Stephan Gerhold Date: Tue, 21 Jun 2022 13:06:20 +0200 Subject: [PATCH 1393/1842] virtio_mmio: Add missing PM calls to freeze/restore [ Upstream commit ed7ac37fde33ccd84e4bd2b9363c191f925364c7 ] Most virtio drivers provide freeze/restore callbacks to finish up device usage before suspend and to reinitialize the virtio device after resume. However, these callbacks are currently only called when using virtio_pci. virtio_mmio does not have any PM ops defined. This causes problems for example after suspend to disk (hibernation), since the virtio devices might lose their state after the VMM is restarted. Calling virtio_device_freeze()/restore() ensures that the virtio devices are re-initialized correctly. Fix this by implementing the dev_pm_ops for virtio_mmio, similar to virtio_pci_common. Signed-off-by: Stephan Gerhold Message-Id: <20220621110621.3638025-2-stephan.gerhold@kernkonzept.com> Signed-off-by: Michael S. 
Tsirkin Signed-off-by: Sasha Levin --- drivers/virtio/virtio_mmio.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index 5c970e6f664c..7dec1418bf7c 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -62,6 +62,7 @@ #include #include #include +#include #include #include #include @@ -543,6 +544,25 @@ static const struct virtio_config_ops virtio_mmio_config_ops = { .get_shm_region = vm_get_shm_region, }; +#ifdef CONFIG_PM_SLEEP +static int virtio_mmio_freeze(struct device *dev) +{ + struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev); + + return virtio_device_freeze(&vm_dev->vdev); +} + +static int virtio_mmio_restore(struct device *dev) +{ + struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev); + + return virtio_device_restore(&vm_dev->vdev); +} + +static const struct dev_pm_ops virtio_mmio_pm_ops = { + SET_SYSTEM_SLEEP_PM_OPS(virtio_mmio_freeze, virtio_mmio_restore) +}; +#endif static void virtio_mmio_release_dev(struct device *_d) { @@ -787,6 +807,9 @@ static struct platform_driver virtio_mmio_driver = { .name = "virtio-mmio", .of_match_table = virtio_mmio_match, .acpi_match_table = ACPI_PTR(virtio_mmio_acpi_match), +#ifdef CONFIG_PM_SLEEP + .pm = &virtio_mmio_pm_ops, +#endif }, }; From 93135dca8c4c4a107ea8125ace32ea3c08fe3d84 Mon Sep 17 00:00:00 2001 From: Stephan Gerhold Date: Tue, 21 Jun 2022 13:06:21 +0200 Subject: [PATCH 1394/1842] virtio_mmio: Restore guest page size on resume [ Upstream commit e0c2ce8217955537dd5434baeba061f209797119 ] Virtio devices might lose their state when the VMM is restarted after a suspend to disk (hibernation) cycle. This means that the guest page size register must be restored for the virtio_mmio legacy interface, since otherwise the virtio queues are not functional. This is particularly problematic for QEMU that currently still defaults to using the legacy interface for virtio_mmio. Write the guest page size register again in virtio_mmio_restore() to make legacy virtio_mmio devices work correctly after hibernation. Signed-off-by: Stephan Gerhold Message-Id: <20220621110621.3638025-3-stephan.gerhold@kernkonzept.com> Signed-off-by: Michael S. Tsirkin Signed-off-by: Sasha Levin --- drivers/virtio/virtio_mmio.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index 7dec1418bf7c..e8ef0c66e558 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -556,6 +556,9 @@ static int virtio_mmio_restore(struct device *dev) { struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev); + if (vm_dev->version == 1) + writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE); + return virtio_device_restore(&vm_dev->vdev); } From c458ebd6591e4dea76849ddfe62bf3c623e09b19 Mon Sep 17 00:00:00 2001 From: Florian Westphal Date: Tue, 21 Jun 2022 18:26:03 +0200 Subject: [PATCH 1395/1842] netfilter: br_netfilter: do not skip all hooks with 0 priority [ Upstream commit c2577862eeb0be94f151f2f1fff662b028061b00 ] When br_netfilter module is loaded, skbs may be diverted to the ipv4/ipv6 hooks, just like as if we were routing. Unfortunately, bridge filter hooks with priority 0 may be skipped in this case. Example: 1. an nftables bridge ruleset is loaded, with a prerouting hook that has priority 0. 2. interface is added to the bridge. 3. no tcp packet is ever seen by the bridge prerouting hook. 4. flush the ruleset 5. load the bridge ruleset again. 6. 
tcp packets are processed as expected. After 1) the only registered hook is the bridge prerouting hook, but it's not called yet because the bridge hasn't been brought up yet. After 2), hook order is: 0 br_nf_pre_routing // br_netfilter internal hook 0 chain bridge f prerouting // nftables bridge ruleset The packet is diverted to br_nf_pre_routing. If call-iptables is off, the nftables bridge ruleset is called as expected. But if it's enabled, br_nf_hook_thresh() will skip it because it assumes that all 0-priority hooks had been called previously in bridge context. To avoid this, check for the br_nf_pre_routing hook itself: we need to resume directly after it, even if this hook has a priority of 0. Unfortunately, this still results in different packet flow. With this fix, the eval order after 3) is: 1. br_nf_pre_routing 2. ip(6)tables (if enabled) 3. nftables bridge but after 5) it's the much saner: 1. nftables bridge 2. br_nf_pre_routing 3. ip(6)tables (if enabled) Unfortunately I don't see a solution here: It would be possible to move br_nf_pre_routing to a higher priority so that it will be called later in the pipeline, but this also impacts ebtables evaluation order, and would still result in this very ordering problem for all nftables-bridge hooks with the same priority as the br_nf_pre_routing one. Searching back through the git history I don't think this has ever behaved in any other way, hence, no fixes-tag. Reported-by: Radim Hrazdil Signed-off-by: Florian Westphal Signed-off-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- net/bridge/br_netfilter_hooks.c | 21 ++++++++++++++++++--- 1 file changed, 18 insertions(+), 3 deletions(-) diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c index 68c0d0f92890..10a2c7bca719 100644 --- a/net/bridge/br_netfilter_hooks.c +++ b/net/bridge/br_netfilter_hooks.c @@ -1012,9 +1012,24 @@ int br_nf_hook_thresh(unsigned int hook, struct net *net, return okfn(net, sk, skb); ops = nf_hook_entries_get_hook_ops(e); - for (i = 0; i < e->num_hook_entries && - ops[i]->priority <= NF_BR_PRI_BRNF; i++) - ; + for (i = 0; i < e->num_hook_entries; i++) { + /* These hooks have already been called */ + if (ops[i]->priority < NF_BR_PRI_BRNF) + continue; + + /* These hooks have not been called yet, run them. */ + if (ops[i]->priority > NF_BR_PRI_BRNF) + break; + + /* take a closer look at NF_BR_PRI_BRNF. */ + if (ops[i]->hook == br_nf_pre_routing) { + /* This hook diverted the skb to this function, + * hooks after this have not been run yet. + */ + i++; + break; + } + } nf_hook_state_init(&state, hook, NFPROTO_BRIDGE, indev, outdev, sk, net, okfn); From 24cd0b9bfdff126c066032b0d40ab0962d35e777 Mon Sep 17 00:00:00 2001 From: John Garry Date: Thu, 23 Jun 2022 20:41:59 +0800 Subject: [PATCH 1396/1842] scsi: hisi_sas: Limit max hw sectors for v3 HW [ Upstream commit fce54ed027577517df1e74b7d54dc2b1bd536887 ] If the controller is behind an IOMMU then the IOMMU IOVA caching range can affect performance, as discussed in [0]. Limit the max HW sectors to not exceed this limit. We need to hardcode the value until a proper DMA mapping API is available. [0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@huawei.com/ Link: https://lore.kernel.org/r/1655988119-223714-1-git-send-email-john.garry@huawei.com Signed-off-by: John Garry Signed-off-by: Martin K. 
Petersen Signed-off-by: Sasha Levin --- drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c index cd41dc061d87..dfe7e6370d84 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c @@ -2738,6 +2738,7 @@ static int slave_configure_v3_hw(struct scsi_device *sdev) struct hisi_hba *hisi_hba = shost_priv(shost); struct device *dev = hisi_hba->dev; int ret = sas_slave_configure(sdev); + unsigned int max_sectors; if (ret) return ret; @@ -2755,6 +2756,12 @@ static int slave_configure_v3_hw(struct scsi_device *sdev) } } + /* Set according to IOMMU IOVA caching limit */ + max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue), + (PAGE_SIZE * 32) >> SECTOR_SHIFT); + + blk_queue_max_hw_sectors(sdev->request_queue, max_sectors); + return 0; } From 3ea9dbf7c2f436952bca331c6f5d72f75aca224e Mon Sep 17 00:00:00 2001 From: Liang He Date: Sat, 18 Jun 2022 10:25:45 +0800 Subject: [PATCH 1397/1842] cpufreq: pmac32-cpufreq: Fix refcount leak bug [ Upstream commit ccd7567d4b6cf187fdfa55f003a9e461ee629e36 ] In pmac_cpufreq_init_MacRISC3(), we need to add corresponding of_node_put() for the three node pointers whose refcount have been incremented by of_find_node_by_name(). Signed-off-by: Liang He Signed-off-by: Viresh Kumar Signed-off-by: Sasha Levin --- drivers/cpufreq/pmac32-cpufreq.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/cpufreq/pmac32-cpufreq.c b/drivers/cpufreq/pmac32-cpufreq.c index 73621bc11976..3704476bb83a 100644 --- a/drivers/cpufreq/pmac32-cpufreq.c +++ b/drivers/cpufreq/pmac32-cpufreq.c @@ -471,6 +471,10 @@ static int pmac_cpufreq_init_MacRISC3(struct device_node *cpunode) if (slew_done_gpio_np) slew_done_gpio = read_gpio(slew_done_gpio_np); + of_node_put(volt_gpio_np); + of_node_put(freq_gpio_np); + of_node_put(slew_done_gpio_np); + /* If we use the frequency GPIOs, calculate the min/max speeds based * on the bus frequencies */ From bacfef0bf2fa26e7386a08d02235957a0e2a54b5 Mon Sep 17 00:00:00 2001 From: Kai-Heng Feng Date: Tue, 28 Jun 2022 20:37:26 +0800 Subject: [PATCH 1398/1842] platform/x86: hp-wmi: Ignore Sanitization Mode event [ Upstream commit 9ab762a84b8094540c18a170e5ddd6488632c456 ] After system resume the hp-wmi driver may complain: [ 702.620180] hp_wmi: Unknown event_id - 23 - 0x0 According to HP it means 'Sanitization Mode' and it's harmless to just ignore the event. 
Cc: Jorge Lopez Signed-off-by: Kai-Heng Feng Link: https://lore.kernel.org/r/20220628123726.250062-1-kai.heng.feng@canonical.com Reviewed-by: Hans de Goede Signed-off-by: Hans de Goede Signed-off-by: Sasha Levin --- drivers/platform/x86/hp-wmi.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c index e94e59283ecb..012639f6d335 100644 --- a/drivers/platform/x86/hp-wmi.c +++ b/drivers/platform/x86/hp-wmi.c @@ -62,6 +62,7 @@ enum hp_wmi_event_ids { HPWMI_BACKLIT_KB_BRIGHTNESS = 0x0D, HPWMI_PEAKSHIFT_PERIOD = 0x0F, HPWMI_BATTERY_CHARGE_PERIOD = 0x10, + HPWMI_SANITIZATION_MODE = 0x17, }; struct bios_args { @@ -629,6 +630,8 @@ static void hp_wmi_notify(u32 value, void *context) break; case HPWMI_BATTERY_CHARGE_PERIOD: break; + case HPWMI_SANITIZATION_MODE: + break; default: pr_info("Unknown event_id - %d - 0x%x\n", event_id, event_data); break; From efa78f2ae363428525fb4981bb63c555ee79f3c7 Mon Sep 17 00:00:00 2001 From: Hangyu Hua Date: Wed, 29 Jun 2022 14:34:18 +0800 Subject: [PATCH 1399/1842] net: tipc: fix possible refcount leak in tipc_sk_create() [ Upstream commit 00aff3590fc0a73bddd3b743863c14e76fd35c0c ] Free sk in case tipc_sk_insert() fails. Signed-off-by: Hangyu Hua Reviewed-by: Tung Nguyen Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/tipc/socket.c | 1 + 1 file changed, 1 insertion(+) diff --git a/net/tipc/socket.c b/net/tipc/socket.c index 42283dc6c5b7..38256aabf4f1 100644 --- a/net/tipc/socket.c +++ b/net/tipc/socket.c @@ -489,6 +489,7 @@ static int tipc_sk_create(struct net *net, struct socket *sock, sock_init_data(sock, sk); tipc_set_sk_state(sk, TIPC_OPEN); if (tipc_sk_insert(tsk)) { + sk_free(sk); pr_warn("Socket create failed; port number exhausted\n"); return -EINVAL; } From de876f36f9a34eaa2fbcfab0123975970f9db1fa Mon Sep 17 00:00:00 2001 From: Michael Walle Date: Mon, 27 Jun 2022 19:06:43 +0200 Subject: [PATCH 1400/1842] NFC: nxp-nci: don't print header length mismatch on i2c error [ Upstream commit 9577fc5fdc8b07b891709af6453545db405e24ad ] Don't print a misleading header length mismatch error if the i2c call returns an error. Instead just return the error code without any error message. Signed-off-by: Michael Walle Reviewed-by: Krzysztof Kozlowski Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- drivers/nfc/nxp-nci/i2c.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/nfc/nxp-nci/i2c.c b/drivers/nfc/nxp-nci/i2c.c index 3943a30053b3..f426dcdfcdd6 100644 --- a/drivers/nfc/nxp-nci/i2c.c +++ b/drivers/nfc/nxp-nci/i2c.c @@ -122,7 +122,9 @@ static int nxp_nci_i2c_fw_read(struct nxp_nci_i2c_phy *phy, skb_put_data(*skb, &header, NXP_NCI_FW_HDR_LEN); r = i2c_master_recv(client, skb_put(*skb, frame_len), frame_len); - if (r != frame_len) { + if (r < 0) { + goto fw_read_exit_free_skb; + } else if (r != frame_len) { nfc_err(&client->dev, "Invalid frame length: %u (expected %zu)\n", r, frame_len); @@ -166,7 +168,9 @@ static int nxp_nci_i2c_nci_read(struct nxp_nci_i2c_phy *phy, return 0; r = i2c_master_recv(client, skb_put(*skb, header.plen), header.plen); - if (r != header.plen) { + if (r < 0) { + goto nci_read_exit_free_skb; + } else if (r != header.plen) { nfc_err(&client->dev, "Invalid frame payload length: %u (expected %u)\n", r, header.plen); From 5504e63832e7eb72607149ab5a83b155efd35d14 Mon Sep 17 00:00:00 2001 From: Sagi Grimberg Date: Sun, 26 Jun 2022 12:24:51 +0300 Subject: [PATCH 1401/1842] nvme-tcp: always fail a request when sending it failed [ Upstream commit 41d07df7de841bfbc32725ce21d933ad358f2844 ] Queue stoppage and inflight request cancellation are fully fenced from io_work, and thus so is failing a request from this context. Hence we don't need to try to guess from the socket retcode if this failure is because the queue is about to be torn down or not. We are perfectly safe to just fail it; the request will not be cancelled later on. This solves possibly very long shutdown delays when the user issues 'nvme disconnect-all' Reported-by: Daniel Wagner Signed-off-by: Sagi Grimberg Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/tcp.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 7e3932033707..d5e162f2c23a 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -1149,8 +1149,7 @@ static int nvme_tcp_try_send(struct nvme_tcp_queue *queue) } else if (ret < 0) { dev_err(queue->ctrl->ctrl.device, "failed to send request %d\n", ret); - if (ret != -EPIPE && ret != -ECONNRESET) - nvme_tcp_fail_request(queue->request); + nvme_tcp_fail_request(queue->request); nvme_tcp_done_send_req(queue); } return ret; } From 104594de27789971e086aa92fb3f1b1da4887ede Mon Sep 17 00:00:00 2001 From: Ruozhu Li Date: Thu, 23 Jun 2022 14:45:39 +0800 Subject: [PATCH 1402/1842] nvme: fix regression when disconnect a recovering ctrl [ Upstream commit f7f70f4aa09dc43d7455c060143e86a017c30548 ] We encountered a problem that the disconnect command hangs. After analyzing the log and stack, we found that the triggering process is as follows: CPU0 CPU1 nvme_rdma_error_recovery_work nvme_rdma_teardown_io_queues nvme_do_delete_ctrl nvme_stop_queues nvme_remove_namespaces --clear ctrl->namespaces nvme_start_queues --no ns in ctrl->namespaces nvme_ns_remove return(because ctrl is deleting) blk_freeze_queue blk_mq_freeze_queue_wait --wait for ns to unquiesce to clean inflight IO, hang forever This problem was not found in older kernels because we will flush err work in nvme_stop_ctrl before nvme_remove_namespaces. It does not seem to have been modified for functional reasons, so the patch can be reverted to solve the problem. 
Revert commit 794a4cb3d2f7 ("nvme: remove the .stop_ctrl callout") Signed-off-by: Ruozhu Li Reviewed-by: Sagi Grimberg Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/core.c | 2 ++ drivers/nvme/host/nvme.h | 1 + drivers/nvme/host/rdma.c | 12 +++++++++--- drivers/nvme/host/tcp.c | 10 +++++++--- 4 files changed, 19 insertions(+), 6 deletions(-) diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index af2902d70b19..ab060b4911ff 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -4460,6 +4460,8 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl) nvme_stop_keep_alive(ctrl); flush_work(&ctrl->async_event_work); cancel_work_sync(&ctrl->fw_act_work); + if (ctrl->ops->stop_ctrl) + ctrl->ops->stop_ctrl(ctrl); } EXPORT_SYMBOL_GPL(nvme_stop_ctrl); diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h index 8e40a6306e53..58cf9e39d613 100644 --- a/drivers/nvme/host/nvme.h +++ b/drivers/nvme/host/nvme.h @@ -478,6 +478,7 @@ struct nvme_ctrl_ops { void (*free_ctrl)(struct nvme_ctrl *ctrl); void (*submit_async_event)(struct nvme_ctrl *ctrl); void (*delete_ctrl)(struct nvme_ctrl *ctrl); + void (*stop_ctrl)(struct nvme_ctrl *ctrl); int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size); }; diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c index 8eacc9bd58f5..b61924394032 100644 --- a/drivers/nvme/host/rdma.c +++ b/drivers/nvme/host/rdma.c @@ -1057,6 +1057,14 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl, } } +static void nvme_rdma_stop_ctrl(struct nvme_ctrl *nctrl) +{ + struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl); + + cancel_work_sync(&ctrl->err_work); + cancel_delayed_work_sync(&ctrl->reconnect_work); +} + static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl) { struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl); @@ -2236,9 +2244,6 @@ static const struct blk_mq_ops nvme_rdma_admin_mq_ops = { static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown) { - cancel_work_sync(&ctrl->err_work); - cancel_delayed_work_sync(&ctrl->reconnect_work); - nvme_rdma_teardown_io_queues(ctrl, shutdown); blk_mq_quiesce_queue(ctrl->ctrl.admin_q); if (shutdown) @@ -2288,6 +2293,7 @@ static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = { .submit_async_event = nvme_rdma_submit_async_event, .delete_ctrl = nvme_rdma_delete_ctrl, .get_address = nvmf_get_address, + .stop_ctrl = nvme_rdma_stop_ctrl, }; /* diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index d5e162f2c23a..fe8c27bbc3f2 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -2135,9 +2135,6 @@ static void nvme_tcp_error_recovery_work(struct work_struct *work) static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown) { - cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work); - cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work); - nvme_tcp_teardown_io_queues(ctrl, shutdown); blk_mq_quiesce_queue(ctrl->admin_q); if (shutdown) @@ -2177,6 +2174,12 @@ static void nvme_reset_ctrl_work(struct work_struct *work) nvme_tcp_reconnect_or_remove(ctrl); } +static void nvme_tcp_stop_ctrl(struct nvme_ctrl *ctrl) +{ + cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work); + cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work); +} + static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl) { struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); @@ -2499,6 +2502,7 @@ static const struct nvme_ctrl_ops nvme_tcp_ctrl_ops = { .submit_async_event = nvme_tcp_submit_async_event, .delete_ctrl = 
nvme_tcp_delete_ctrl, .get_address = nvmf_get_address, + .stop_ctrl = nvme_tcp_stop_ctrl, }; static bool From 67dc32542a1fb7790d0853cf4a5cf859ac6a2002 Mon Sep 17 00:00:00 2001 From: Jianglei Nie Date: Wed, 29 Jun 2022 15:55:50 +0800 Subject: [PATCH 1403/1842] net: sfp: fix memory leak in sfp_probe() [ Upstream commit 0a18d802d65cf662644fd1d369c86d84a5630652 ] sfp_probe() allocates a memory chunk from sfp with sfp_alloc(). When devm_add_action() fails, sfp is not freed, which leads to a memory leak. We should use devm_add_action_or_reset() instead of devm_add_action(). Signed-off-by: Jianglei Nie Reviewed-by: Russell King (Oracle) Link: https://lore.kernel.org/r/20220629075550.2152003-1-niejianglei2021@163.com Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/phy/sfp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c index 96068e0d841a..dcbe278086dc 100644 --- a/drivers/net/phy/sfp.c +++ b/drivers/net/phy/sfp.c @@ -2427,7 +2427,7 @@ static int sfp_probe(struct platform_device *pdev) platform_set_drvdata(pdev, sfp); - err = devm_add_action(sfp->dev, sfp_cleanup, sfp); + err = devm_add_action_or_reset(sfp->dev, sfp_cleanup, sfp); if (err < 0) return err; From 13fb9105cfc9693d1218ab8a6dbf7a51abd1333d Mon Sep 17 00:00:00 2001 From: Mark Brown Date: Sat, 4 Jun 2022 11:52:46 +0100 Subject: [PATCH 1404/1842] ASoC: ops: Fix off by one in range control validation [ Upstream commit 5871321fb4558c55bf9567052b618ff0be6b975e ] We currently report that range controls accept a range of 0..(max-min) but accept writes in the range 0..(max-min+1). Remove that extra +1. Signed-off-by: Mark Brown Link: https://lore.kernel.org/r/20220604105246.4055214-1-broonie@kernel.org Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/soc-ops.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c index 15bfcdbdfaa4..0f26d6c31ce5 100644 --- a/sound/soc/soc-ops.c +++ b/sound/soc/soc-ops.c @@ -517,7 +517,7 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol, return -EINVAL; if (mc->platform_max && tmp > mc->platform_max) return -EINVAL; - if (tmp > mc->max - mc->min + 1) + if (tmp > mc->max - mc->min) return -EINVAL; if (invert) @@ -538,7 +538,7 @@ int snd_soc_put_volsw_range(struct snd_kcontrol *kcontrol, return -EINVAL; if (mc->platform_max && tmp > mc->platform_max) return -EINVAL; - if (tmp > mc->max - mc->min + 1) + if (tmp > mc->max - mc->min) return -EINVAL; if (invert) From ef1e38532f4b2f0f3b460e938a2e7076c3bed5ee Mon Sep 17 00:00:00 2001 From: Haowen Bai Date: Thu, 21 Apr 2022 10:26:59 +0800 Subject: [PATCH 1405/1842] pinctrl: aspeed: Fix potential NULL dereference in aspeed_pinmux_set_mux() [ Upstream commit 84a85d3fef2e75b1fe9fc2af6f5267122555a1ed ] pdesc could be null but still dereference pdesc->name and it will lead to a null pointer access. So we move a null check before dereference. 
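The general rule behind this fix: a NULL check only protects anything if it happens before the first dereference; a check placed after the dereference is too late, and a compiler may even drop it as provably dead. A minimal sketch of the corrected ordering (plain C with hypothetical names, not the driver code itself):

	#include <stdio.h>

	struct pin_desc_example {
		const char *name;
	};

	static int mux_pin(const struct pin_desc_example *pdesc)
	{
		if (!pdesc)		/* validate first ... */
			return -1;

		/* ... and only then dereference. */
		printf("muxing pin %s\n", pdesc->name);
		return 0;
	}

	int main(void)
	{
		struct pin_desc_example d = { .name = "GPIOA0" };

		mux_pin(&d);	/* normal case */
		mux_pin(NULL);	/* rejected instead of crashing */
		return 0;
	}
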
Signed-off-by: Haowen Bai Link: https://lore.kernel.org/r/1650508019-22554-1-git-send-email-baihaowen@meizu.com Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin --- drivers/pinctrl/aspeed/pinctrl-aspeed.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed.c b/drivers/pinctrl/aspeed/pinctrl-aspeed.c index 9c65d560d48f..e792318c3894 100644 --- a/drivers/pinctrl/aspeed/pinctrl-aspeed.c +++ b/drivers/pinctrl/aspeed/pinctrl-aspeed.c @@ -235,11 +235,11 @@ int aspeed_pinmux_set_mux(struct pinctrl_dev *pctldev, unsigned int function, const struct aspeed_sig_expr **funcs; const struct aspeed_sig_expr ***prios; - pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name); - if (!pdesc) return -EINVAL; + pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name); + prios = pdesc->prios; if (!prios) From 6cbbe59fdc7e01a614481831fbd61ff143140947 Mon Sep 17 00:00:00 2001 From: Peter Ujfalusi Date: Thu, 9 Jun 2022 11:59:49 +0300 Subject: [PATCH 1406/1842] ASoC: SOF: Intel: hda-loader: Clarify the cl_dsp_init() flow [ Upstream commit bbfef046c6613404c01aeb9e9928bebb78dd327a ] Update the comment for the cl_dsp_init() to clarify what is done by the function and use the chip->init_core_mask instead of BIT(0) when unstalling/running the init core. Complements: 2a68ff846164 ("ASoC: SOF: Intel: hda: Revisit IMR boot sequence") Signed-off-by: Peter Ujfalusi Reviewed-by: Pierre-Louis Bossart Reviewed-by: Bard Liao Reviewed-by: Ranjani Sridharan Link: https://lore.kernel.org/r/20220609085949.29062-4-peter.ujfalusi@linux.intel.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/sof/intel/hda-loader.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/sound/soc/sof/intel/hda-loader.c b/sound/soc/sof/intel/hda-loader.c index 347636a80b48..4012097a9d60 100644 --- a/sound/soc/sof/intel/hda-loader.c +++ b/sound/soc/sof/intel/hda-loader.c @@ -79,9 +79,9 @@ static struct hdac_ext_stream *cl_stream_prepare(struct snd_sof_dev *sdev, unsig } /* - * first boot sequence has some extra steps. core 0 waits for power - * status on core 1, so power up core 1 also momentarily, keep it in - * reset/stall and then turn it off + * first boot sequence has some extra steps. + * power on all host managed cores and only unstall/run the boot core to boot the + * DSP then turn off all non boot cores (if any) is powered on. */ static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag) { @@ -115,7 +115,7 @@ static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag) ((stream_tag - 1) << 9))); /* step 3: unset core 0 reset state & unstall/run core 0 */ - ret = hda_dsp_core_run(sdev, BIT(0)); + ret = hda_dsp_core_run(sdev, chip->init_core_mask); if (ret < 0) { if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS) dev_err(sdev->dev, From 11a14e4f31b73adc90b9f55277343080bfe311a5 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Tue, 21 Jun 2022 11:20:39 +0100 Subject: [PATCH 1407/1842] ASoC: wm5110: Fix DRE control [ Upstream commit 0bc0ae9a5938d512fd5d44f11c9c04892dcf4961 ] The DRE controls on wm5110 should return a value of 1 if the DRE state is actually changed, update to fix this. 
Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220621102041.1713504-2-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/wm5110.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c index 4238929b2375..d0cef982215d 100644 --- a/sound/soc/codecs/wm5110.c +++ b/sound/soc/codecs/wm5110.c @@ -413,6 +413,7 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol, unsigned int rnew = (!!ucontrol->value.integer.value[1]) << mc->rshift; unsigned int lold, rold; unsigned int lena, rena; + bool change = false; int ret; snd_soc_dapm_mutex_lock(dapm); @@ -440,8 +441,8 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol, goto err; } - ret = regmap_update_bits(arizona->regmap, ARIZONA_DRE_ENABLE, - mask, lnew | rnew); + ret = regmap_update_bits_check(arizona->regmap, ARIZONA_DRE_ENABLE, + mask, lnew | rnew, &change); if (ret) { dev_err(arizona->dev, "Failed to set DRE: %d\n", ret); goto err; @@ -454,6 +455,9 @@ static int wm5110_put_dre(struct snd_kcontrol *kcontrol, if (!rnew && rold) wm5110_clear_pga_volume(arizona, mc->rshift); + if (change) + ret = 1; + err: snd_soc_dapm_mutex_unlock(dapm); From 41f97b0ecfb345c98db4e7d7db21793bd1ae0920 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 23 Jun 2022 11:51:15 +0100 Subject: [PATCH 1408/1842] ASoC: dapm: Initialise kcontrol data for mux/demux controls [ Upstream commit 11d7a12f7f50baa5af9090b131c9b03af59503e7 ] DAPM keeps a copy of the current value of mux/demux controls, however this value is only initialised in the case of autodisable controls. This leads to false notification events when first modifying a DAPM kcontrol that has a non-zero default. Autodisable controls are left as they are, since they already initialise the value, and there would be more work required to support autodisable muxes where the first option isn't disabled and/or that isn't the default. Technically this issue could affect mixer/switch elements as well, although not on any of the devices I am currently running. There is also a little more work to do to address the issue there due to that side supporting stereo controls, so that has not been tackled in this patch. 
Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220623105120.1981154-1-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/soc-dapm.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c index f2f7f2dde93c..754c1f16ee83 100644 --- a/sound/soc/soc-dapm.c +++ b/sound/soc/soc-dapm.c @@ -62,6 +62,8 @@ struct snd_soc_dapm_widget * snd_soc_dapm_new_control_unlocked(struct snd_soc_dapm_context *dapm, const struct snd_soc_dapm_widget *widget); +static unsigned int soc_dapm_read(struct snd_soc_dapm_context *dapm, int reg); + /* dapm power sequences - make this per codec in the future */ static int dapm_up_seq[] = { [snd_soc_dapm_pre] = 1, @@ -442,6 +444,9 @@ static int dapm_kcontrol_data_alloc(struct snd_soc_dapm_widget *widget, snd_soc_dapm_add_path(widget->dapm, data->widget, widget, NULL, NULL); + } else if (e->reg != SND_SOC_NOPM) { + data->value = soc_dapm_read(widget->dapm, e->reg) & + (e->mask << e->shift_l); } break; default: From a7634527cb231ddc6049bc90608c6a2b309bdc25 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 23 Jun 2022 11:51:17 +0100 Subject: [PATCH 1409/1842] ASoC: cs47l15: Fix event generation for low power mux control [ Upstream commit 7f103af4a10f375b9b346b4d0b730f6a66b8c451 ] cs47l15_in1_adc_put always returns zero regardless of if the control value was updated. This results in missing notifications to user-space of the control change. Update the handling to return 1 when the value is changed. Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220623105120.1981154-3-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/cs47l15.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/sound/soc/codecs/cs47l15.c b/sound/soc/codecs/cs47l15.c index 254f9d96e766..7c20642f160a 100644 --- a/sound/soc/codecs/cs47l15.c +++ b/sound/soc/codecs/cs47l15.c @@ -122,6 +122,9 @@ static int cs47l15_in1_adc_put(struct snd_kcontrol *kcontrol, snd_soc_kcontrol_component(kcontrol); struct cs47l15 *cs47l15 = snd_soc_component_get_drvdata(component); + if (!!ucontrol->value.integer.value[0] == cs47l15->in1_lp_mode) + return 0; + switch (ucontrol->value.integer.value[0]) { case 0: /* Set IN1 to normal mode */ @@ -150,7 +153,7 @@ static int cs47l15_in1_adc_put(struct snd_kcontrol *kcontrol, break; } - return 0; + return 1; } static const struct snd_kcontrol_new cs47l15_snd_controls[] = { From cae4b78f3c7d0874c05f01474ad8a84ec3533a45 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 23 Jun 2022 11:51:18 +0100 Subject: [PATCH 1410/1842] ASoC: madera: Fix event generation for OUT1 demux [ Upstream commit e3cabbef3db8269207a6b8808f510137669f8deb ] madera_out1_demux_put returns the value of snd_soc_dapm_mux_update_power, which returns a 1 if a path was found for the kcontrol. This is obviously different to the expected return a 1 if the control was updated value. This results in spurious notifications to user-space. Update the handling to only return a 1 when the value is changed. 
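The convention these codec fixes (wm5110, cs47l15, and the madera patches that follow) restore is the ALSA control put() contract: return 1 when the write actually changed the value, 0 when it matched the current value, and a negative errno on failure, so change notifications reach user space exactly when something changed. A simplified sketch of that contract, modelled outside the ALSA framework with a hypothetical cached-value control (not the cs47l15 code itself):

	struct ctl_example {
		int cached;			/* value currently applied */
	};

	/* Returns 1 if the value changed, 0 if unchanged, <0 on error. */
	static int ctl_example_put(struct ctl_example *ctl, int new_val)
	{
		if (new_val < 0 || new_val > 1)
			return -22;		/* i.e. -EINVAL */

		if (ctl->cached == new_val)
			return 0;		/* nothing changed, no event */

		ctl->cached = new_val;		/* apply the change ... */
		return 1;			/* ... and report it */
	}
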
Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220623105120.1981154-4-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/madera.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/sound/soc/codecs/madera.c b/sound/soc/codecs/madera.c index 680f31a6493a..a74c9b28368b 100644 --- a/sound/soc/codecs/madera.c +++ b/sound/soc/codecs/madera.c @@ -618,7 +618,13 @@ int madera_out1_demux_put(struct snd_kcontrol *kcontrol, end: snd_soc_dapm_mutex_unlock(dapm); - return snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL); + ret = snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL); + if (ret < 0) { + dev_err(madera->dev, "Failed to update demux power state: %d\n", ret); + return ret; + } + + return change; } EXPORT_SYMBOL_GPL(madera_out1_demux_put); From dae43b37925c7acead2ae0e4a1b64ec813c22321 Mon Sep 17 00:00:00 2001 From: Charles Keepax Date: Thu, 23 Jun 2022 11:51:19 +0100 Subject: [PATCH 1411/1842] ASoC: madera: Fix event generation for rate controls [ Upstream commit 980555e95f7cabdc9c80a07107622b097ba23703 ] madera_adsp_rate_put always returns zero regardless of if the control value was updated. This results in missing notifications to user-space of the control change. Update the handling to return 1 when the value is changed. Signed-off-by: Charles Keepax Link: https://lore.kernel.org/r/20220623105120.1981154-5-ckeepax@opensource.cirrus.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin --- sound/soc/codecs/madera.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/sound/soc/codecs/madera.c b/sound/soc/codecs/madera.c index a74c9b28368b..bbab4bc1f6b5 100644 --- a/sound/soc/codecs/madera.c +++ b/sound/soc/codecs/madera.c @@ -899,7 +899,7 @@ static int madera_adsp_rate_put(struct snd_kcontrol *kcontrol, struct soc_enum *e = (struct soc_enum *)kcontrol->private_value; const int adsp_num = e->shift_l; const unsigned int item = ucontrol->value.enumerated.item[0]; - int ret; + int ret = 0; if (item >= e->items) return -EINVAL; @@ -916,10 +916,10 @@ static int madera_adsp_rate_put(struct snd_kcontrol *kcontrol, "Cannot change '%s' while in use by active audio paths\n", kcontrol->id.name); ret = -EBUSY; - } else { + } else if (priv->adsp_rate_cache[adsp_num] != e->values[item]) { /* Volatile register so defer until the codec is powered up */ priv->adsp_rate_cache[adsp_num] = e->values[item]; - ret = 0; + ret = 1; } mutex_unlock(&priv->rate_lock); From fd830d8dd59a8040a9c3009c8d6e175c3f23637c Mon Sep 17 00:00:00 2001 From: Stafford Horne Date: Wed, 15 Jun 2022 08:54:26 +0900 Subject: [PATCH 1412/1842] irqchip: or1k-pic: Undefine mask_ack for level triggered hardware [ Upstream commit 8520501346ed8d1c4a6dfa751cb57328a9c843f1 ] The mask_ack operation clears the interrupt by writing to the PICSR register. This we don't want for level triggered interrupt because it does not actually clear the interrupt on the source hardware. This was causing issues in qemu with multi core setups where interrupts would continue to fire even though they had been cleared in PICSR. Just remove the mask_ack operation. 
Acked-by: Marc Zyngier Signed-off-by: Stafford Horne Signed-off-by: Sasha Levin --- drivers/irqchip/irq-or1k-pic.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/irqchip/irq-or1k-pic.c b/drivers/irqchip/irq-or1k-pic.c index 03d2366118dd..d5f1fabc45d7 100644 --- a/drivers/irqchip/irq-or1k-pic.c +++ b/drivers/irqchip/irq-or1k-pic.c @@ -66,7 +66,6 @@ static struct or1k_pic_dev or1k_pic_level = { .name = "or1k-PIC-level", .irq_unmask = or1k_pic_unmask, .irq_mask = or1k_pic_mask, - .irq_mask_ack = or1k_pic_mask_ack, }, .handle = handle_level_irq, .flags = IRQ_LEVEL | IRQ_NOPROBE, From 136d7987fcfdeca73ee3c6a29e48f99fdd0f4d87 Mon Sep 17 00:00:00 2001 From: Juergen Gross Date: Thu, 30 Jun 2022 09:14:40 +0200 Subject: [PATCH 1413/1842] x86: Clear .brk area at early boot [ Upstream commit 38fa5479b41376dc9d7f57e71c83514285a25ca0 ] The .brk section has the same properties as .bss: it is an alloc-only section and should be cleared before being used. Not doing so is especially a problem for Xen PV guests, as the hypervisor will validate page tables (check for writable page tables and hypervisor private bits) before accepting them to be used. Make sure .brk is initially zero by letting clear_bss() clear the brk area, too. Signed-off-by: Juergen Gross Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20220630071441.28576-3-jgross@suse.com Signed-off-by: Sasha Levin --- arch/x86/kernel/head64.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c index 05e117137b45..efe13ab366f4 100644 --- a/arch/x86/kernel/head64.c +++ b/arch/x86/kernel/head64.c @@ -419,6 +419,8 @@ static void __init clear_bss(void) { memset(__bss_start, 0, (unsigned long) __bss_stop - (unsigned long) __bss_start); + memset(__brk_base, 0, + (unsigned long) __brk_limit - (unsigned long) __brk_base); } static unsigned long get_cmd_line_ptr(void) From 227ee155eaf528e3b446b976e08b8698d2ad683a Mon Sep 17 00:00:00 2001 From: Linus Walleij Date: Sun, 26 Jun 2022 09:43:15 +0200 Subject: [PATCH 1414/1842] soc: ixp4xx/npe: Fix unused match warning [ Upstream commit 620f83b8326ce9706b1118334f0257ae028ce045 ] The kernel test robot found this inconsistency: drivers/soc/ixp4xx/ixp4xx-npe.c:737:34: warning: 'ixp4xx_npe_of_match' defined but not used [-Wunused-const-variable=] 737 | static const struct of_device_id ixp4xx_npe_of_match[] = { This is because the match is enclosed in the of_match_ptr() which compiles into NULL when OF is disabled and this is unnecessary. Fix it by dropping of_match_ptr() around the match. 
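The idiom behind clear_bss() is zeroing a region delimited by linker-provided start and end symbols, which is what the hunk below extends to the brk area. A generic sketch of that idiom (the symbols here are placeholders that a hypothetical linker script would define; they are not the kernel's __brk_base/__brk_limit):

	#include <string.h>

	/* Placeholder linker symbols; a linker script must provide them. */
	extern char region_start[];
	extern char region_end[];

	static void clear_region(void)
	{
		/* Zero everything between the two linker symbols. */
		memset(region_start, 0, (size_t)(region_end - region_start));
	}
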
Signed-off-by: Linus Walleij Link: https://lore.kernel.org/r/20220626074315.61209-1-linus.walleij@linaro.org' Signed-off-by: Arnd Bergmann Signed-off-by: Sasha Levin --- drivers/soc/ixp4xx/ixp4xx-npe.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/soc/ixp4xx/ixp4xx-npe.c b/drivers/soc/ixp4xx/ixp4xx-npe.c index 6065aaab6740..8482a4892b83 100644 --- a/drivers/soc/ixp4xx/ixp4xx-npe.c +++ b/drivers/soc/ixp4xx/ixp4xx-npe.c @@ -735,7 +735,7 @@ static const struct of_device_id ixp4xx_npe_of_match[] = { static struct platform_driver ixp4xx_npe_driver = { .driver = { .name = "ixp4xx-npe", - .of_match_table = of_match_ptr(ixp4xx_npe_of_match), + .of_match_table = ixp4xx_npe_of_match, }, .probe = ixp4xx_npe_probe, .remove = ixp4xx_npe_remove, From 35ce2c64e57e51be166f575330968b6b31298d6d Mon Sep 17 00:00:00 2001 From: Gabriel Fernandez Date: Fri, 24 Jun 2022 11:27:13 +0200 Subject: [PATCH 1415/1842] ARM: dts: stm32: use the correct clock source for CEC on stm32mp151 [ Upstream commit 78ece8cce1ba0c3f3e5a7c6c1b914b3794f04c44 ] The peripheral clock of CEC is not LSE but CEC. Signed-off-by: Gabriel Fernandez Signed-off-by: Alexandre Torgue Signed-off-by: Sasha Levin --- arch/arm/boot/dts/stm32mp151.dtsi | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/boot/dts/stm32mp151.dtsi b/arch/arm/boot/dts/stm32mp151.dtsi index 7a0ef01de969..9919fc86bdc3 100644 --- a/arch/arm/boot/dts/stm32mp151.dtsi +++ b/arch/arm/boot/dts/stm32mp151.dtsi @@ -543,7 +543,7 @@ cec: cec@40016000 { compatible = "st,stm32-cec"; reg = <0x40016000 0x400>; interrupts = ; - clocks = <&rcc CEC_K>, <&clk_lse>; + clocks = <&rcc CEC_K>, <&rcc CEC>; clock-names = "cec", "hdmi-cec"; status = "disabled"; }; From 54bf0b8c75af7663ce802178c93163e6695dbe39 Mon Sep 17 00:00:00 2001 From: Srinivas Neeli Date: Thu, 9 Jun 2022 13:54:32 +0530 Subject: [PATCH 1416/1842] Revert "can: xilinx_can: Limit CANFD brp to 2" [ Upstream commit c6da4590fe819dfe28a4f8037a8dc1e056542fb4 ] This reverts commit 05ca14fdb6fe65614e0652d03e44b02748d25af7. On early silicon engineering samples observed bit shrinking issue when we use brp as 1. Hence updated brp_min as 2. As in production silicon this issue is fixed, so reverting the patch. Link: https://lore.kernel.org/all/20220609082433.1191060-2-srinivas.neeli@xilinx.com Signed-off-by: Srinivas Neeli Signed-off-by: Marc Kleine-Budde Signed-off-by: Sasha Levin --- drivers/net/can/xilinx_can.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c index 1c42417810fc..1a3fba352cad 100644 --- a/drivers/net/can/xilinx_can.c +++ b/drivers/net/can/xilinx_can.c @@ -259,7 +259,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = { .tseg2_min = 1, .tseg2_max = 128, .sjw_max = 128, - .brp_min = 2, + .brp_min = 1, .brp_max = 256, .brp_inc = 1, }; @@ -272,7 +272,7 @@ static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = { .tseg2_min = 1, .tseg2_max = 16, .sjw_max = 16, - .brp_min = 2, + .brp_min = 1, .brp_max = 256, .brp_inc = 1, }; From e75f692b79b477826296841d1535dae6e3eaa8bb Mon Sep 17 00:00:00 2001 From: Keith Busch Date: Tue, 5 Jul 2022 10:21:02 -0700 Subject: [PATCH 1417/1842] nvme-pci: phison e16 has bogus namespace ids [ Upstream commit 73029c9b23cf1213e5f54c2b59efce08665199e7 ] Add the quirk. 
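For context, of_match_ptr() has roughly the following shape (paraphrased from include/linux/of.h): with CONFIG_OF it passes the table through, and without it the macro expands to NULL, so the static table is never referenced and the compiler flags it as unused unless the wrapper is dropped or the table is marked __maybe_unused:

	/* Approximate shape of the helper, shown for illustration. */
	#ifdef CONFIG_OF
	#define of_match_ptr(_ptr)	(_ptr)
	#else
	#define of_match_ptr(_ptr)	NULL	/* table left unreferenced */
	#endif
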
Link: https://bugzilla.kernel.org/show_bug.cgi?id=216049 Reported-by: Chris Egolf Signed-off-by: Keith Busch Reviewed-by: Chaitanya Kulkarni Signed-off-by: Christoph Hellwig Signed-off-by: Sasha Levin --- drivers/nvme/host/pci.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 3622c5c9515f..ce129655ef0a 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -3234,7 +3234,8 @@ static const struct pci_device_id nvme_id_table[] = { NVME_QUIRK_DISABLE_WRITE_ZEROES| NVME_QUIRK_IGNORE_DEV_SUBNQN, }, { PCI_DEVICE(0x1987, 0x5016), /* Phison E16 */ - .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, }, + .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN | + NVME_QUIRK_BOGUS_NID, }, { PCI_DEVICE(0x1b4b, 0x1092), /* Lexar 256 GB SSD */ .driver_data = NVME_QUIRK_NO_NS_DESC_LIST | NVME_QUIRK_IGNORE_DEV_SUBNQN, }, From 58b94325ee80c709f57b15f762bf3b25d8a534f1 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Wed, 6 Jul 2022 12:20:59 -0700 Subject: [PATCH 1418/1842] signal handling: don't use BUG_ON() for debugging [ Upstream commit a382f8fee42ca10c9bfce0d2352d4153f931f5dc ] These are indeed "should not happen" situations, but it turns out recent changes made the 'task_is_stopped_or_trace()' case trigger (fix for that exists, is pending more testing), and the BUG_ON() makes it unnecessarily hard to actually debug for no good reason. It's been that way for a long time, but let's make it clear: BUG_ON() is not good for debugging, and should never be used in situations where you could just say "this shouldn't happen, but we can continue". Use WARN_ON_ONCE() instead to make sure it gets logged, and then just continue running. Instead of making the system basically unusuable because you crashed the machine while potentially holding some very core locks (eg this function is commonly called while holding 'tasklist_lock' for writing). Signed-off-by: Linus Torvalds Signed-off-by: Sasha Levin --- kernel/signal.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/kernel/signal.c b/kernel/signal.c index 6bb2df4f6109..d05f783d5a5e 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -1912,12 +1912,12 @@ bool do_notify_parent(struct task_struct *tsk, int sig) bool autoreap = false; u64 utime, stime; - BUG_ON(sig == -1); + WARN_ON_ONCE(sig == -1); - /* do_notify_parent_cldstop should have been called instead. */ - BUG_ON(task_is_stopped_or_traced(tsk)); + /* do_notify_parent_cldstop should have been called instead. */ + WARN_ON_ONCE(task_is_stopped_or_traced(tsk)); - BUG_ON(!tsk->ptrace && + WARN_ON_ONCE(!tsk->ptrace && (tsk->group_leader != tsk || !thread_group_empty(tsk))); /* Wake up all pidfd waiters */ From 61ab5d644e16f0e27d31c8e2d33cbf6f570c3714 Mon Sep 17 00:00:00 2001 From: Lucien Buchmann Date: Sat, 25 Jun 2022 02:17:44 +0200 Subject: [PATCH 1419/1842] USB: serial: ftdi_sio: add Belimo device ids commit 7c239a071d1f04b7137789810807b4108d475c72 upstream. Those two product ids are known. 
Signed-off-by: Lucien Buchmann Cc: stable@vger.kernel.org Signed-off-by: Johan Hovold Signed-off-by: Greg Kroah-Hartman --- drivers/usb/serial/ftdi_sio.c | 3 +++ drivers/usb/serial/ftdi_sio_ids.h | 6 ++++++ 2 files changed, 9 insertions(+) diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c index b74621dc2a65..8f980fc6efc1 100644 --- a/drivers/usb/serial/ftdi_sio.c +++ b/drivers/usb/serial/ftdi_sio.c @@ -1023,6 +1023,9 @@ static const struct usb_device_id id_table_combined[] = { { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_DISPLAY_PID) }, { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_LITE_PID) }, { USB_DEVICE(FTDI_VID, CHETCO_SEASMART_ANALOG_PID) }, + /* Belimo Automation devices */ + { USB_DEVICE(FTDI_VID, BELIMO_ZTH_PID) }, + { USB_DEVICE(FTDI_VID, BELIMO_ZIP_PID) }, /* ICP DAS I-756xU devices */ { USB_DEVICE(ICPDAS_VID, ICPDAS_I7560U_PID) }, { USB_DEVICE(ICPDAS_VID, ICPDAS_I7561U_PID) }, diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h index d1a9564697a4..4e92c165c86b 100644 --- a/drivers/usb/serial/ftdi_sio_ids.h +++ b/drivers/usb/serial/ftdi_sio_ids.h @@ -1568,6 +1568,12 @@ #define CHETCO_SEASMART_LITE_PID 0xA5AE /* SeaSmart Lite USB Adapter */ #define CHETCO_SEASMART_ANALOG_PID 0xA5AF /* SeaSmart Analog Adapter */ +/* + * Belimo Automation + */ +#define BELIMO_ZTH_PID 0x8050 +#define BELIMO_ZIP_PID 0xC811 + /* * Unjo AB */ From f1e01a42dcbde7d210a1b0688e6a07b1bb6b45f7 Mon Sep 17 00:00:00 2001 From: Linyu Yuan Date: Fri, 1 Jul 2022 16:08:54 +0800 Subject: [PATCH 1420/1842] usb: typec: add missing uevent when partner support PD commit 6fb9e1d94789e8ee5a258a23bc588693f743fd6c upstream. System like Android allow user control power role from UI, it is possible to implement application base on typec uevent to refresh UI, but found there is chance that UI show different state from typec attribute file. In typec_set_pwr_opmode(), when partner support PD, there is no uevent send to user space which cause the problem. Fix it by sending uevent notification when change power mode to PD. Fixes: bdecb33af34f ("usb: typec: API for controlling USB Type-C Multiplexers") Cc: stable@vger.kernel.org Signed-off-by: Linyu Yuan Link: https://lore.kernel.org/r/1656662934-10226-1-git-send-email-quic_linyyuan@quicinc.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/typec/class.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c index c7d44daa05c4..9d3a35b2046d 100644 --- a/drivers/usb/typec/class.c +++ b/drivers/usb/typec/class.c @@ -1444,6 +1444,7 @@ void typec_set_pwr_opmode(struct typec_port *port, partner->usb_pd = 1; sysfs_notify(&partner_dev->kobj, NULL, "supports_usb_power_delivery"); + kobject_uevent(&partner_dev->kobj, KOBJ_CHANGE); } put_device(partner_dev); } From 0e5668ed7b7aaab4a53126daeef0258c5c573d5d Mon Sep 17 00:00:00 2001 From: Thinh Nguyen Date: Mon, 27 Jun 2022 18:41:19 -0700 Subject: [PATCH 1421/1842] usb: dwc3: gadget: Fix event pending check commit 7441b273388b9a59d8387a03ffbbca9d5af6348c upstream. The DWC3_EVENT_PENDING flag is used to protect against invalid call to top-half interrupt handler, which can occur when there's a delay in software detection of the interrupt line deassertion. However, the clearing of this flag was done prior to unmasking the interrupt line, creating opportunity where the top-half handler can come. 
This breaks the serialization and creates a race between the top-half and bottom-half handler, resulting in losing synchronization between the controller and the driver when processing events. To fix this, make sure the clearing of the DWC3_EVENT_PENDING is done at the end of the bottom-half handler. Fixes: d325a1de49d6 ("usb: dwc3: gadget: Prevent losing events in event cache") Cc: stable@vger.kernel.org Signed-off-by: Thinh Nguyen Link: https://lore.kernel.org/r/8670aaf1cf52e7d1e6df2a827af2d77263b93b75.1656380429.git.Thinh.Nguyen@synopsys.com Signed-off-by: Greg Kroah-Hartman --- drivers/usb/dwc3/gadget.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 05fe6ded66a5..94e9d336855b 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -3781,7 +3781,6 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt) } evt->count = 0; - evt->flags &= ~DWC3_EVENT_PENDING; ret = IRQ_HANDLED; /* Unmask interrupt */ @@ -3794,6 +3793,9 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt) dwc3_writel(dwc->regs, DWC3_DEV_IMOD(0), dwc->imod_interval); } + /* Keep the clearing of DWC3_EVENT_PENDING at the end */ + evt->flags &= ~DWC3_EVENT_PENDING; + return ret; } From 5450430199e319446d6b7a0ec963e3c951b121ab Mon Sep 17 00:00:00 2001 From: Chanho Park Date: Mon, 27 Jun 2022 15:51:13 +0900 Subject: [PATCH 1422/1842] tty: serial: samsung_tty: set dma burst_size to 1 commit f7e35e4bf1e8dc2c8cbd5e0955dc1bd58558dae0 upstream. The src_maxburst and dst_maxburst have been changed to 1 but the settings of the UCON register aren't changed yet. They should be changed as well according to the dmaengine slave config. Fixes: aa2f80e752c7 ("serial: samsung: fix maxburst parameter for DMA transactions") Cc: stable Cc: Marek Szyprowski Reviewed-by: Krzysztof Kozlowski Signed-off-by: Chanho Park Link: https://lore.kernel.org/r/20220627065113.139520-1-chanho61.park@samsung.com Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/samsung_tty.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c index 81faead3c4f8..263c33260d8a 100644 --- a/drivers/tty/serial/samsung_tty.c +++ b/drivers/tty/serial/samsung_tty.c @@ -361,8 +361,7 @@ static void enable_tx_dma(struct s3c24xx_uart_port *ourport) /* Enable tx dma mode */ ucon = rd_regl(port, S3C2410_UCON); ucon &= ~(S3C64XX_UCON_TXBURST_MASK | S3C64XX_UCON_TXMODE_MASK); - ucon |= (dma_get_cache_alignment() >= 16) ? - S3C64XX_UCON_TXBURST_16 : S3C64XX_UCON_TXBURST_1; + ucon |= S3C64XX_UCON_TXBURST_1; ucon |= S3C64XX_UCON_TXMODE_DMA; wr_regl(port, S3C2410_UCON, ucon); @@ -634,7 +633,7 @@ static void enable_rx_dma(struct s3c24xx_uart_port *ourport) S3C64XX_UCON_DMASUS_EN | S3C64XX_UCON_TIMEOUT_EN | S3C64XX_UCON_RXMODE_MASK); - ucon |= S3C64XX_UCON_RXBURST_16 | + ucon |= S3C64XX_UCON_RXBURST_1 | 0xf << S3C64XX_UCON_TIMEOUT_SHIFT | S3C64XX_UCON_EMPTYINT_EN | S3C64XX_UCON_TIMEOUT_EN | From bfee93c9a6c395f9aa62268f1cedf64999844926 Mon Sep 17 00:00:00 2001 From: Yangxi Xiang Date: Tue, 28 Jun 2022 17:33:22 +0800 Subject: [PATCH 1423/1842] vt: fix memory overlapping when deleting chars in the buffer commit 39cdb68c64d84e71a4a717000b6e5de208ee60cc upstream. A memory overlapping copy occurs when deleting a long line. 
This memory overlapping copy can cause data corruption when scr_memcpyw is optimized to memcpy, because memcpy's behavior is undefined if the destination buffer overlaps with the source buffer. The line buffer is not always corrupted, because the memcpy implementation may use hardware acceleration, whose result is not deterministic. Fix this problem by replacing scr_memcpyw with scr_memmovew. Fixes: 81732c3b2fed ("tty vt: Fix line garbage in virtual console on command line edition") Cc: stable Signed-off-by: Yangxi Xiang Link: https://lore.kernel.org/r/20220628093322.5688-1-xyangxi5@gmail.com Signed-off-by: Greg Kroah-Hartman --- drivers/tty/vt/vt.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c index 0a6336d54a65..f043fd7e0f92 100644 --- a/drivers/tty/vt/vt.c +++ b/drivers/tty/vt/vt.c @@ -855,7 +855,7 @@ static void delete_char(struct vc_data *vc, unsigned int nr) unsigned short *p = (unsigned short *) vc->vc_pos; vc_uniscr_delete(vc, nr); - scr_memcpyw(p, p + nr, (vc->vc_cols - vc->state.x - nr) * 2); + scr_memmovew(p, p + nr, (vc->vc_cols - vc->state.x - nr) * 2); scr_memsetw(p + vc->vc_cols - vc->state.x - nr, vc->vc_video_erase_char, nr * 2); vc->vc_need_wrap = 0;
From 039ffe436ae55dabfaa4f46be39a7214c910cacc Mon Sep 17 00:00:00 2001 From: Yi Yang Date: Tue, 28 Jun 2022 16:35:15 +0800 Subject: [PATCH 1424/1842] serial: 8250: fix return error code in serial8250_request_std_resource() commit 6e690d54cfa802f939cefbd2fa2c91bd0b8bd1b6 upstream. If port->mapbase is NULL in serial8250_request_std_resource(), it needs to return an error code instead of 0. If uart_set_info() fails to request new regions via serial8250_request_std_resource() but the return value of serial8250_request_std_resource() is 0, the system incorrectly assumes that the resource request succeeded and does not attempt to restore the old setting. A NULL pointer dereference is triggered when the port resource is later accessed. Signed-off-by: Yi Yang Cc: stable Link: https://lore.kernel.org/r/20220628083515.64138-1-yiyang13@huawei.com Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/8250/8250_port.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c index 9cf5177815a8..43884e8b5161 100644 --- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c @@ -2953,8 +2953,10 @@ static int serial8250_request_std_resource(struct uart_8250_port *up) case UPIO_MEM32BE: case UPIO_MEM16: case UPIO_MEM: - if (!port->mapbase) + if (!port->mapbase) { + ret = -EINVAL; break; + } if (!request_mem_region(port->mapbase, size, "serial")) { ret = -EBUSY,
From b8c4661126568b8cd3e18908b920c3f83eba28e1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Mon, 27 Jun 2022 18:07:52 +0300 Subject: [PATCH 1425/1842] serial: stm32: Clear prev values before setting RTS delays MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 5c5f44e36217de5ead789ff25da71c31c2331c96 upstream. The code lacks clearing of previous DEAT/DEDT values. Thus, changing values on the fly results in garbage delays tending towards the maximum value as more and more bits are ORed together. (Leaving RS485 mode would have cleared the old values though).
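To see why ORing without clearing drifts toward the maximum, here is a minimal standalone sketch of the read-modify-write pitfall; the field position, width and mask are made up for the illustration and are not the actual STM32 USART register definitions:

#include <stdint.h>
#include <stdio.h>

#define DEAT_SHIFT 21
#define DEAT_MASK  (0x1Fu << DEAT_SHIFT)        /* hypothetical 5-bit delay field */

static uint32_t cr1;    /* stands in for the USART CR1 register */

/* Buggy update: new bits are ORed on top of whatever was written before. */
static void set_delay_buggy(uint32_t delay)
{
        cr1 |= (delay << DEAT_SHIFT) & DEAT_MASK;
}

/* Fixed update: clear the whole field first, then OR in the new value. */
static void set_delay_fixed(uint32_t delay)
{
        cr1 &= ~DEAT_MASK;
        cr1 |= (delay << DEAT_SHIFT) & DEAT_MASK;
}

int main(void)
{
        set_delay_buggy(0x0a);
        set_delay_buggy(0x11);  /* 0x0a | 0x11 = 0x1b: a garbage delay */
        printf("buggy: %#x\n", (unsigned)((cr1 & DEAT_MASK) >> DEAT_SHIFT));

        cr1 = 0;
        set_delay_fixed(0x0a);
        set_delay_fixed(0x11);  /* field now holds exactly 0x11 */
        printf("fixed: %#x\n", (unsigned)((cr1 & DEAT_MASK) >> DEAT_SHIFT));
        return 0;
}

The second buggy call leaves 0x1b in the field, while the fixed variant always ends up with exactly the last requested delay.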
Fixes: 1bcda09d2910 ("serial: stm32: add support for RS485 hardware control mode") Cc: stable Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220627150753.34510-1-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/stm32-usart.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c index 8cd9e5b077b6..9377da1e97c0 100644 --- a/drivers/tty/serial/stm32-usart.c +++ b/drivers/tty/serial/stm32-usart.c @@ -70,6 +70,8 @@ static void stm32_usart_config_reg_rs485(u32 *cr1, u32 *cr3, u32 delay_ADE, *cr3 |= USART_CR3_DEM; over8 = *cr1 & USART_CR1_OVER8; + *cr1 &= ~(USART_CR1_DEDT_MASK | USART_CR1_DEAT_MASK); + if (over8) rs485_deat_dedt = delay_ADE * baud * 8; else From e1bd94dd9e5c63ac7280d67e730061cf684125ba Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Tue, 14 Jun 2022 10:56:37 +0300 Subject: [PATCH 1426/1842] serial: pl011: UPSTAT_AUTORTS requires .throttle/unthrottle MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 211565b100993c90b53bf40851eacaefc830cfe0 upstream. The driver must provide throttle and unthrottle in uart_ops when it sets UPSTAT_AUTORTS. Add them using existing stop_rx & enable_interrupts functions. Fixes: 2a76fa283098 (serial: pl011: Adopt generic flag to store auto RTS status) Cc: stable Cc: Lukas Wunner Reported-by: Nuno Gonçalves Tested-by: Nuno Gonçalves Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/20220614075637.8558-1-ilpo.jarvinen@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/amba-pl011.c | 23 +++++++++++++++++++++-- 1 file changed, 21 insertions(+), 2 deletions(-) diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c index 07b19e97f850..9900ee3f9068 100644 --- a/drivers/tty/serial/amba-pl011.c +++ b/drivers/tty/serial/amba-pl011.c @@ -1326,6 +1326,15 @@ static void pl011_stop_rx(struct uart_port *port) pl011_dma_rx_stop(uap); } +static void pl011_throttle_rx(struct uart_port *port) +{ + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + pl011_stop_rx(port); + spin_unlock_irqrestore(&port->lock, flags); +} + static void pl011_enable_ms(struct uart_port *port) { struct uart_amba_port *uap = @@ -1717,9 +1726,10 @@ static int pl011_allocate_irq(struct uart_amba_port *uap) */ static void pl011_enable_interrupts(struct uart_amba_port *uap) { + unsigned long flags; unsigned int i; - spin_lock_irq(&uap->port.lock); + spin_lock_irqsave(&uap->port.lock, flags); /* Clear out any spuriously appearing RX interrupts */ pl011_write(UART011_RTIS | UART011_RXIS, uap, REG_ICR); @@ -1741,7 +1751,14 @@ static void pl011_enable_interrupts(struct uart_amba_port *uap) if (!pl011_dma_rx_running(uap)) uap->im |= UART011_RXIM; pl011_write(uap->im, uap, REG_IMSC); - spin_unlock_irq(&uap->port.lock); + spin_unlock_irqrestore(&uap->port.lock, flags); +} + +static void pl011_unthrottle_rx(struct uart_port *port) +{ + struct uart_amba_port *uap = container_of(port, struct uart_amba_port, port); + + pl011_enable_interrupts(uap); } static int pl011_startup(struct uart_port *port) @@ -2116,6 +2133,8 @@ static const struct uart_ops amba_pl011_pops = { .stop_tx = pl011_stop_tx, .start_tx = pl011_start_tx, .stop_rx = pl011_stop_rx, + .throttle = pl011_throttle_rx, + .unthrottle = pl011_unthrottle_rx, .enable_ms = pl011_enable_ms, .break_ctl = pl011_break_ctl, .startup = pl011_startup, From d9cb6fabc90102f9e61fe35bd0160db88f4f53b4 Mon Sep 17 
00:00:00 2001 From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= Date: Wed, 29 Jun 2022 12:48:41 +0300 Subject: [PATCH 1427/1842] serial: 8250: Fix PM usage_count for console handover MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit f9b11229b79c0fb2100b5bb4628a101b1d37fbf6 upstream. When console is enabled, univ8250_console_setup() calls serial8250_console_setup() before .dev is set to uart_port. Therefore, it will not call pm_runtime_get_sync(). Later, when the actual driver is going to take over univ8250_console_exit() is called. As .dev is already set, serial8250_console_exit() makes pm_runtime_put_sync() call with usage count being zero triggering PM usage count warning (extra debug for univ8250_console_setup(), univ8250_console_exit(), and serial8250_register_ports()): [ 0.068987] univ8250_console_setup ttyS0 nodev [ 0.499670] printk: console [ttyS0] enabled [ 0.717955] printk: console [ttyS0] printing thread started [ 1.960163] serial8250_register_ports assigned dev for ttyS0 [ 1.976830] printk: console [ttyS0] disabled [ 1.976888] printk: console [ttyS0] printing thread stopped [ 1.977073] univ8250_console_exit ttyS0 usage:0 [ 1.977075] serial8250 serial8250: Runtime PM usage count underflow! [ 1.977429] dw-apb-uart.6: ttyS0 at MMIO 0x4010006000 (irq = 33, base_baud = 115200) is a 16550A [ 1.977812] univ8250_console_setup ttyS0 usage:2 [ 1.978167] printk: console [ttyS0] printing thread started [ 1.978203] printk: console [ttyS0] enabled To fix the issue, call pm_runtime_get_sync() in serial8250_register_ports() as soon as .dev is set for an uart_port if it has console enabled. This problem became apparent only recently because 82586a721595 ("PM: runtime: Avoid device usage count underflows") added the warning printout. I confirmed this problem also occurs with v5.18 (w/o the warning printout, obviously). 
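The imbalance is easier to see as plain counting. Below is a toy user-space model of the sequence described above; it is not the real runtime-PM API, only the get/put arithmetic, and the function names are invented for the illustration:

#include <stdio.h>

static int usage_count;
static int dev_assigned;        /* models uart_port.dev being set */

static void get_sync(void) { usage_count++; }

static void put_sync(void)
{
        if (!usage_count) {
                fprintf(stderr, "Runtime PM usage count underflow!\n");
                return;
        }
        usage_count--;
}

static void console_setup(void) /* runs early, .dev not known yet */
{
        if (dev_assigned)
                get_sync();
}

static void console_exit(void)  /* runs at handover, .dev known by now */
{
        if (dev_assigned)
                put_sync();
}

int main(void)
{
        /* Before the fix: setup skips the get, exit still does the put. */
        console_setup();
        dev_assigned = 1;
        console_exit();         /* underflow */

        /* With the fix: take the reference as soon as .dev is assigned. */
        usage_count = 0;
        dev_assigned = 0;
        console_setup();        /* still runs before .dev is known: no get */
        dev_assigned = 1;
        get_sync();             /* models the pm_runtime_get_sync() added below */
        console_exit();         /* balanced: the put now pairs with a get */
        printf("final usage count: %d\n", usage_count);
        return 0;
}

The first half reproduces the underflow; the second half models the fix of taking the reference in serial8250_register_ports() as soon as .dev is assigned for a console port.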
Fixes: bedb404e91bb ("serial: 8250_port: Don't use power management for kernel console") Cc: stable Tested-by: Tony Lindgren Reviewed-by: Andy Shevchenko Reviewed-by: Tony Lindgren Signed-off-by: Ilpo Järvinen Link: https://lore.kernel.org/r/b4f428e9-491f-daf2-2232-819928dc276e@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/8250/8250_core.c | 4 ++++ drivers/tty/serial/serial_core.c | 5 ----- include/linux/serial_core.h | 5 +++++ 3 files changed, 9 insertions(+), 5 deletions(-) diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c index cae61d1ebec5..98ce484f1089 100644 --- a/drivers/tty/serial/8250/8250_core.c +++ b/drivers/tty/serial/8250/8250_core.c @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include @@ -571,6 +572,9 @@ serial8250_register_ports(struct uart_driver *drv, struct device *dev) up->port.dev = dev; + if (uart_console_enabled(&up->port)) + pm_runtime_get_sync(up->port.dev); + serial8250_apply_quirks(up); uart_add_one_port(drv, &up->port); } diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c index 32d09d024f6c..b578f7090b63 100644 --- a/drivers/tty/serial/serial_core.c +++ b/drivers/tty/serial/serial_core.c @@ -1941,11 +1941,6 @@ static int uart_proc_show(struct seq_file *m, void *v) } #endif -static inline bool uart_console_enabled(struct uart_port *port) -{ - return uart_console(port) && (port->cons->flags & CON_ENABLED); -} - static void uart_port_spin_lock_init(struct uart_port *port) { spin_lock_init(&port->lock); diff --git a/include/linux/serial_core.h b/include/linux/serial_core.h index 35b26743dbb2..9c1292ea47fd 100644 --- a/include/linux/serial_core.h +++ b/include/linux/serial_core.h @@ -394,6 +394,11 @@ static const bool earlycon_acpi_spcr_enable EARLYCON_USED_OR_UNUSED; static inline int setup_earlycon(char *buf) { return 0; } #endif +static inline bool uart_console_enabled(struct uart_port *port) +{ + return uart_console(port) && (port->cons->flags & CON_ENABLED); +} + struct uart_port *uart_get_console(struct uart_port *ports, int nr, struct console *c); int uart_parse_earlycon(char *p, unsigned char *iotype, resource_size_t *addr, From 06a5dc3911a3b29acefd53470bdeccb88deb155e Mon Sep 17 00:00:00 2001 From: Juergen Gross Date: Fri, 8 Jul 2022 15:14:56 +0200 Subject: [PATCH 1428/1842] x86/pat: Fix x86_has_pat_wp() commit 230ec83d4299b30c51a1c133b4f2a669972cc08a upstream. x86_has_pat_wp() is using a wrong test, as it relies on the normal PAT configuration used by the kernel. In case the PAT MSR has been setup by another entity (e.g. Xen hypervisor) it might return false even if the PAT configuration is allowing WP mappings. This due to the fact that when running as Xen PV guest the PAT MSR is setup by the hypervisor and cannot be changed by the guest. This results in the WP related entry to be at a different position when running as Xen PV guest compared to the bare metal or fully virtualized case. The correct way to test for WP support is: 1. Get the PTE protection bits needed to select WP mode by reading __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP] (depending on the PAT MSR setting this might return protection bits for a stronger mode, e.g. UC-) 2. Translate those bits back into the real cache mode selected by those PTE bits by reading __pte2cachemode_tbl[__pte2cm_idx(prot)] 3. 
Test for the cache mode to be _PAGE_CACHE_MODE_WP Fixes: f88a68facd9a ("x86/mm: Extend early_memremap() support with additional attrs") Signed-off-by: Juergen Gross Signed-off-by: Borislav Petkov Cc: # 4.14 Link: https://lore.kernel.org/r/20220503132207.17234-1-jgross@suse.com Signed-off-by: Greg Kroah-Hartman --- arch/x86/mm/init.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index c7a47603537f..63d8c6c7d125 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -78,10 +78,20 @@ static uint8_t __pte2cachemode_tbl[8] = { [__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC, }; -/* Check that the write-protect PAT entry is set for write-protect */ +/* + * Check that the write-protect PAT entry is set for write-protect. + * To do this without making assumptions how PAT has been set up (Xen has + * another layout than the kernel), translate the _PAGE_CACHE_MODE_WP cache + * mode via the __cachemode2pte_tbl[] into protection bits (those protection + * bits will select a cache mode of WP or better), and then translate the + * protection bits back into the cache mode using __pte2cm_idx() and the + * __pte2cachemode_tbl[] array. This will return the really used cache mode. + */ bool x86_has_pat_wp(void) { - return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP; + uint16_t prot = __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP]; + + return __pte2cachemode_tbl[__pte2cm_idx(prot)] == _PAGE_CACHE_MODE_WP; } enum page_cache_mode pgprot2cachemode(pgprot_t pgprot) From 7748091a31277b35d55bffa6fecda439d8526366 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 21 Jul 2022 21:20:20 +0200 Subject: [PATCH 1429/1842] Linux 5.10.132 Link: https://lore.kernel.org/r/20220719114626.156073229@linuxfoundation.org Tested-by: Florian Fainelli Tested-by: Hulk Robot Tested-by: Guenter Roeck Tested-by: Jon Hunter Tested-by: Linux Kernel Functional Testing Tested-by: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 53f1a45ae69b..5bee8f281b06 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 131 +SUBLEVEL = 132 EXTRAVERSION = NAME = Dare mighty things From 0ca2ba6e4d139da809061a25626174f812303b7a Mon Sep 17 00:00:00 2001 From: Uros Bizjak Date: Thu, 29 Oct 2020 15:04:57 +0100 Subject: [PATCH 1430/1842] KVM/VMX: Use TEST %REG,%REG instead of CMP $0,%REG in vmenter.S commit 6c44221b05236cc65d76cb5dc2463f738edff39d upstream. Saves one byte in __vmx_vcpu_run for the same functionality. Cc: Paolo Bonzini Cc: Sean Christopherson Signed-off-by: Uros Bizjak Message-Id: <20201029140457.126965-1-ubizjak@gmail.com> Signed-off-by: Paolo Bonzini Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/vmx/vmenter.S | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 90ad7a6246e3..e85aa5faa22d 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -132,7 +132,7 @@ SYM_FUNC_START(__vmx_vcpu_run) mov (%_ASM_SP), %_ASM_AX /* Check if vmlaunch or vmresume is needed */ - cmpb $0, %bl + testb %bl, %bl /* Load guest registers. Don't clobber flags. 
*/ mov VCPU_RCX(%_ASM_AX), %_ASM_CX From dd87aa5f610be44f195cf5a99b7bc153faf30a3d Mon Sep 17 00:00:00 2001 From: Uros Bizjak Date: Wed, 30 Dec 2020 16:26:57 -0800 Subject: [PATCH 1431/1842] KVM/nVMX: Use __vmx_vcpu_run in nested_vmx_check_vmentry_hw commit 150f17bfab37e981ba03b37440638138ff2aa9ec upstream. Replace inline assembly in nested_vmx_check_vmentry_hw with a call to __vmx_vcpu_run. The function is not performance critical, so (double) GPR save/restore in __vmx_vcpu_run can be tolerated, as far as performance effects are concerned. Cc: Paolo Bonzini Cc: Sean Christopherson Reviewed-and-tested-by: Sean Christopherson Signed-off-by: Uros Bizjak [sean: dropped versioning info from changelog] Signed-off-by: Sean Christopherson Message-Id: <20201231002702.2223707-5-seanjc@google.com> Signed-off-by: Paolo Bonzini Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/vmx/nested.c | 32 +++----------------------------- arch/x86/kvm/vmx/vmenter.S | 2 +- arch/x86/kvm/vmx/vmx.c | 2 -- arch/x86/kvm/vmx/vmx.h | 1 + 4 files changed, 5 insertions(+), 32 deletions(-) diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 90881d7b42ea..67fe1cf79caf 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -12,6 +12,7 @@ #include "nested.h" #include "pmu.h" #include "trace.h" +#include "vmx.h" #include "x86.h" static bool __read_mostly enable_shadow_vmcs = 1; @@ -3075,35 +3076,8 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) vmx->loaded_vmcs->host_state.cr4 = cr4; } - asm( - "sub $%c[wordsize], %%" _ASM_SP "\n\t" /* temporarily adjust RSP for CALL */ - "cmp %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t" - "je 1f \n\t" - __ex("vmwrite %%" _ASM_SP ", %[HOST_RSP]") "\n\t" - "mov %%" _ASM_SP ", %c[host_state_rsp](%[loaded_vmcs]) \n\t" - "1: \n\t" - "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ - - /* Check if vmlaunch or vmresume is needed */ - "cmpb $0, %c[launched](%[loaded_vmcs])\n\t" - - /* - * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set - * RFLAGS.CF on VM-Fail Invalid and set RFLAGS.ZF on VM-Fail - * Valid. vmx_vmenter() directly "returns" RFLAGS, and so the - * results of VM-Enter is captured via CC_{SET,OUT} to vm_fail. - */ - "call vmx_vmenter\n\t" - - CC_SET(be) - : ASM_CALL_CONSTRAINT, CC_OUT(be) (vm_fail) - : [HOST_RSP]"r"((unsigned long)HOST_RSP), - [loaded_vmcs]"r"(vmx->loaded_vmcs), - [launched]"i"(offsetof(struct loaded_vmcs, launched)), - [host_state_rsp]"i"(offsetof(struct loaded_vmcs, host_state.rsp)), - [wordsize]"i"(sizeof(ulong)) - : "memory" - ); + vm_fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs, + vmx->loaded_vmcs->launched); if (vmx->msr_autoload.host.nr) vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index e85aa5faa22d..3a6461694fc2 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -44,7 +44,7 @@ * they VM-Fail, whereas a successful VM-Enter + VM-Exit will jump * to vmx_vmexit. 
*/ -SYM_FUNC_START(vmx_vmenter) +SYM_FUNC_START_LOCAL(vmx_vmenter) /* EFLAGS.ZF is set if VMCS.LAUNCHED == 0 */ je 2f diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index cc647dcc228b..e71e81b2ea52 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6687,8 +6687,6 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu) } } -bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched); - static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) { diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 31317c8915e4..08835468314f 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -365,6 +365,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu); struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr); void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu); void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp); +bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched); int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr); void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu); From 1d516bd72a68e4e610d8e3b5ad99e25807a85947 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Thu, 17 Dec 2020 15:02:42 -0600 Subject: [PATCH 1432/1842] objtool: Refactor ORC section generation commit ab4e0744e99b87e1a223e89fc3c9ae44f727c9a6 upstream. Decouple ORC entries from instructions. This simplifies the control/data flow, and is going to make it easier to support alternative instructions which change the stack layout. Signed-off-by: Josh Poimboeuf Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/Makefile | 4 - tools/objtool/arch.h | 4 - tools/objtool/builtin-orc.c | 6 +- tools/objtool/check.h | 3 - tools/objtool/objtool.h | 3 +- tools/objtool/orc_gen.c | 284 ++++++++++++++++++------------------ tools/objtool/weak.c | 7 +- 7 files changed, 146 insertions(+), 165 deletions(-) diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile index 5cdb19036d7f..a43096f713c7 100644 --- a/tools/objtool/Makefile +++ b/tools/objtool/Makefile @@ -46,10 +46,6 @@ ifeq ($(SRCARCH),x86) SUBCMD_ORC := y endif -ifeq ($(SUBCMD_ORC),y) - CFLAGS += -DINSN_USE_ORC -endif - export SUBCMD_CHECK SUBCMD_ORC export srctree OUTPUT CFLAGS SRCARCH AWK include $(srctree)/tools/build/Makefile.include diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h index 4a84c3081b8e..5e3f3ea8bb89 100644 --- a/tools/objtool/arch.h +++ b/tools/objtool/arch.h @@ -11,10 +11,6 @@ #include "objtool.h" #include "cfi.h" -#ifdef INSN_USE_ORC -#include -#endif - enum insn_type { INSN_JUMP_CONDITIONAL, INSN_JUMP_UNCONDITIONAL, diff --git a/tools/objtool/builtin-orc.c b/tools/objtool/builtin-orc.c index 7b31121fa60b..508bdf6ae8dc 100644 --- a/tools/objtool/builtin-orc.c +++ b/tools/objtool/builtin-orc.c @@ -51,11 +51,7 @@ int cmd_orc(int argc, const char **argv) if (list_empty(&file->insn_list)) return 0; - ret = create_orc(file); - if (ret) - return ret; - - ret = create_orc_sections(file); + ret = orc_create(file); if (ret) return ret; diff --git a/tools/objtool/check.h b/tools/objtool/check.h index 2804848e628e..c280880f5eb4 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -43,9 +43,6 @@ struct instruction { struct symbol *func; struct list_head stack_ops; struct cfi_state cfi; -#ifdef INSN_USE_ORC - struct orc_entry orc; -#endif }; static inline bool is_static_jump(struct instruction *insn) diff --git a/tools/objtool/objtool.h 
b/tools/objtool/objtool.h index 4125d4578b23..5e58d3537e2f 100644 --- a/tools/objtool/objtool.h +++ b/tools/objtool/objtool.h @@ -26,7 +26,6 @@ struct objtool_file *objtool_open_read(const char *_objname); int check(struct objtool_file *file); int orc_dump(const char *objname); -int create_orc(struct objtool_file *file); -int create_orc_sections(struct objtool_file *file); +int orc_create(struct objtool_file *file); #endif /* _OBJTOOL_H */ diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c index 9ce68b385a1b..7191b6234120 100644 --- a/tools/objtool/orc_gen.c +++ b/tools/objtool/orc_gen.c @@ -12,89 +12,84 @@ #include "check.h" #include "warn.h" -int create_orc(struct objtool_file *file) +static int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi) { - struct instruction *insn; + struct instruction *insn = container_of(cfi, struct instruction, cfi); + struct cfi_reg *bp = &cfi->regs[CFI_BP]; - for_each_insn(file, insn) { - struct orc_entry *orc = &insn->orc; - struct cfi_reg *cfa = &insn->cfi.cfa; - struct cfi_reg *bp = &insn->cfi.regs[CFI_BP]; + memset(orc, 0, sizeof(*orc)); - if (!insn->sec->text) - continue; + orc->end = cfi->end; - orc->end = insn->cfi.end; - - if (cfa->base == CFI_UNDEFINED) { - orc->sp_reg = ORC_REG_UNDEFINED; - continue; - } - - switch (cfa->base) { - case CFI_SP: - orc->sp_reg = ORC_REG_SP; - break; - case CFI_SP_INDIRECT: - orc->sp_reg = ORC_REG_SP_INDIRECT; - break; - case CFI_BP: - orc->sp_reg = ORC_REG_BP; - break; - case CFI_BP_INDIRECT: - orc->sp_reg = ORC_REG_BP_INDIRECT; - break; - case CFI_R10: - orc->sp_reg = ORC_REG_R10; - break; - case CFI_R13: - orc->sp_reg = ORC_REG_R13; - break; - case CFI_DI: - orc->sp_reg = ORC_REG_DI; - break; - case CFI_DX: - orc->sp_reg = ORC_REG_DX; - break; - default: - WARN_FUNC("unknown CFA base reg %d", - insn->sec, insn->offset, cfa->base); - return -1; - } - - switch(bp->base) { - case CFI_UNDEFINED: - orc->bp_reg = ORC_REG_UNDEFINED; - break; - case CFI_CFA: - orc->bp_reg = ORC_REG_PREV_SP; - break; - case CFI_BP: - orc->bp_reg = ORC_REG_BP; - break; - default: - WARN_FUNC("unknown BP base reg %d", - insn->sec, insn->offset, bp->base); - return -1; - } - - orc->sp_offset = cfa->offset; - orc->bp_offset = bp->offset; - orc->type = insn->cfi.type; + if (cfi->cfa.base == CFI_UNDEFINED) { + orc->sp_reg = ORC_REG_UNDEFINED; + return 0; } + switch (cfi->cfa.base) { + case CFI_SP: + orc->sp_reg = ORC_REG_SP; + break; + case CFI_SP_INDIRECT: + orc->sp_reg = ORC_REG_SP_INDIRECT; + break; + case CFI_BP: + orc->sp_reg = ORC_REG_BP; + break; + case CFI_BP_INDIRECT: + orc->sp_reg = ORC_REG_BP_INDIRECT; + break; + case CFI_R10: + orc->sp_reg = ORC_REG_R10; + break; + case CFI_R13: + orc->sp_reg = ORC_REG_R13; + break; + case CFI_DI: + orc->sp_reg = ORC_REG_DI; + break; + case CFI_DX: + orc->sp_reg = ORC_REG_DX; + break; + default: + WARN_FUNC("unknown CFA base reg %d", + insn->sec, insn->offset, cfi->cfa.base); + return -1; + } + + switch (bp->base) { + case CFI_UNDEFINED: + orc->bp_reg = ORC_REG_UNDEFINED; + break; + case CFI_CFA: + orc->bp_reg = ORC_REG_PREV_SP; + break; + case CFI_BP: + orc->bp_reg = ORC_REG_BP; + break; + default: + WARN_FUNC("unknown BP base reg %d", + insn->sec, insn->offset, bp->base); + return -1; + } + + orc->sp_offset = cfi->cfa.offset; + orc->bp_offset = bp->offset; + orc->type = cfi->type; + return 0; } -static int create_orc_entry(struct elf *elf, struct section *u_sec, struct section *ip_relocsec, - unsigned int idx, struct section *insn_sec, - unsigned long insn_off, struct 
orc_entry *o) +static int write_orc_entry(struct elf *elf, struct section *orc_sec, + struct section *ip_rsec, unsigned int idx, + struct section *insn_sec, unsigned long insn_off, + struct orc_entry *o) { struct orc_entry *orc; struct reloc *reloc; /* populate ORC data */ - orc = (struct orc_entry *)u_sec->data->d_buf + idx; + orc = (struct orc_entry *)orc_sec->data->d_buf + idx; memcpy(orc, o, sizeof(*orc)); /* populate reloc for ip */ @@ -114,102 +109,109 @@ static int create_orc_entry(struct elf *elf, struct section *u_sec, struct secti reloc->type = R_X86_64_PC32; reloc->offset = idx * sizeof(int); - reloc->sec = ip_relocsec; + reloc->sec = ip_rsec; elf_add_reloc(elf, reloc); return 0; } -int create_orc_sections(struct objtool_file *file) -{ - struct instruction *insn, *prev_insn; - struct section *sec, *u_sec, *ip_relocsec; - unsigned int idx; +struct orc_list_entry { + struct list_head list; + struct orc_entry orc; + struct section *insn_sec; + unsigned long insn_off; +}; - struct orc_entry empty = { - .sp_reg = ORC_REG_UNDEFINED, +static int orc_list_add(struct list_head *orc_list, struct orc_entry *orc, + struct section *sec, unsigned long offset) +{ + struct orc_list_entry *entry = malloc(sizeof(*entry)); + + if (!entry) { + WARN("malloc failed"); + return -1; + } + + entry->orc = *orc; + entry->insn_sec = sec; + entry->insn_off = offset; + + list_add_tail(&entry->list, orc_list); + return 0; +} + +int orc_create(struct objtool_file *file) +{ + struct section *sec, *ip_rsec, *orc_sec; + unsigned int nr = 0, idx = 0; + struct orc_list_entry *entry; + struct list_head orc_list; + + struct orc_entry null = { + .sp_reg = ORC_REG_UNDEFINED, .bp_reg = ORC_REG_UNDEFINED, .type = UNWIND_HINT_TYPE_CALL, }; + /* Build a deduplicated list of ORC entries: */ + INIT_LIST_HEAD(&orc_list); + for_each_sec(file, sec) { + struct orc_entry orc, prev_orc = {0}; + struct instruction *insn; + bool empty = true; + + if (!sec->text) + continue; + + sec_for_each_insn(file, sec, insn) { + if (init_orc_entry(&orc, &insn->cfi)) + return -1; + if (!memcmp(&prev_orc, &orc, sizeof(orc))) + continue; + if (orc_list_add(&orc_list, &orc, sec, insn->offset)) + return -1; + nr++; + prev_orc = orc; + empty = false; + } + + /* Add a section terminator */ + if (!empty) { + orc_list_add(&orc_list, &null, sec, sec->len); + nr++; + } + } + if (!nr) + return 0; + + /* Create .orc_unwind, .orc_unwind_ip and .rela.orc_unwind_ip sections: */ sec = find_section_by_name(file->elf, ".orc_unwind"); if (sec) { WARN("file already has .orc_unwind section, skipping"); return -1; } - - /* count the number of needed orcs */ - idx = 0; - for_each_sec(file, sec) { - if (!sec->text) - continue; - - prev_insn = NULL; - sec_for_each_insn(file, sec, insn) { - if (!prev_insn || - memcmp(&insn->orc, &prev_insn->orc, - sizeof(struct orc_entry))) { - idx++; - } - prev_insn = insn; - } - - /* section terminator */ - if (prev_insn) - idx++; - } - if (!idx) + orc_sec = elf_create_section(file->elf, ".orc_unwind", 0, + sizeof(struct orc_entry), nr); + if (!orc_sec) return -1; - - /* create .orc_unwind_ip and .rela.orc_unwind_ip sections */ - sec = elf_create_section(file->elf, ".orc_unwind_ip", 0, sizeof(int), idx); + sec = elf_create_section(file->elf, ".orc_unwind_ip", 0, sizeof(int), nr); if (!sec) return -1; - - ip_relocsec = elf_create_reloc_section(file->elf, sec, SHT_RELA); - if (!ip_relocsec) + ip_rsec = elf_create_reloc_section(file->elf, sec, SHT_RELA); + if (!ip_rsec) return -1; - /* create .orc_unwind section */ - u_sec = 
elf_create_section(file->elf, ".orc_unwind", 0, - sizeof(struct orc_entry), idx); - - /* populate sections */ - idx = 0; - for_each_sec(file, sec) { - if (!sec->text) - continue; - - prev_insn = NULL; - sec_for_each_insn(file, sec, insn) { - if (!prev_insn || memcmp(&insn->orc, &prev_insn->orc, - sizeof(struct orc_entry))) { - - if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx, - insn->sec, insn->offset, - &insn->orc)) - return -1; - - idx++; - } - prev_insn = insn; - } - - /* section terminator */ - if (prev_insn) { - if (create_orc_entry(file->elf, u_sec, ip_relocsec, idx, - prev_insn->sec, - prev_insn->offset + prev_insn->len, - &empty)) - return -1; - - idx++; - } + /* Write ORC entries to sections: */ + list_for_each_entry(entry, &orc_list, list) { + if (write_orc_entry(file->elf, orc_sec, ip_rsec, idx++, + entry->insn_sec, entry->insn_off, + &entry->orc)) + return -1; } - if (elf_rebuild_reloc_section(file->elf, ip_relocsec)) + if (elf_rebuild_reloc_section(file->elf, ip_rsec)) return -1; return 0; diff --git a/tools/objtool/weak.c b/tools/objtool/weak.c index 7843e9a7a72f..553ec9ce51ba 100644 --- a/tools/objtool/weak.c +++ b/tools/objtool/weak.c @@ -25,12 +25,7 @@ int __weak orc_dump(const char *_objname) UNSUPPORTED("orc"); } -int __weak create_orc(struct objtool_file *file) -{ - UNSUPPORTED("orc"); -} - -int __weak create_orc_sections(struct objtool_file *file) +int __weak orc_create(struct objtool_file *file) { UNSUPPORTED("orc"); } From e9197d768f976199a2356842400df947b4007377 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Fri, 18 Dec 2020 14:19:32 -0600 Subject: [PATCH 1433/1842] objtool: Add 'alt_group' struct commit b23cc71c62747f2e4c3e56138872cf47e1294f8a upstream. Create a new struct associated with each group of alternatives instructions. This will help with the removal of fake jumps, and more importantly with adding support for stack layout changes in alternatives. 
Signed-off-by: Josh Poimboeuf Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 29 +++++++++++++++++++++++------ tools/objtool/check.h | 13 ++++++++++++- 2 files changed, 35 insertions(+), 7 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 8932f41c387f..6143a0790aa4 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1012,20 +1012,28 @@ static int handle_group_alt(struct objtool_file *file, struct instruction *orig_insn, struct instruction **new_insn) { - static unsigned int alt_group_next_index = 1; struct instruction *last_orig_insn, *last_new_insn, *insn, *fake_jump = NULL; - unsigned int alt_group = alt_group_next_index++; + struct alt_group *orig_alt_group, *new_alt_group; unsigned long dest_off; + + orig_alt_group = malloc(sizeof(*orig_alt_group)); + if (!orig_alt_group) { + WARN("malloc failed"); + return -1; + } last_orig_insn = NULL; insn = orig_insn; sec_for_each_insn_from(file, insn) { if (insn->offset >= special_alt->orig_off + special_alt->orig_len) break; - insn->alt_group = alt_group; + insn->alt_group = orig_alt_group; last_orig_insn = insn; } + orig_alt_group->orig_group = NULL; + orig_alt_group->first_insn = orig_insn; + orig_alt_group->last_insn = last_orig_insn; if (next_insn_same_sec(file, last_orig_insn)) { fake_jump = malloc(sizeof(*fake_jump)); @@ -1056,8 +1064,13 @@ static int handle_group_alt(struct objtool_file *file, return 0; } + new_alt_group = malloc(sizeof(*new_alt_group)); + if (!new_alt_group) { + WARN("malloc failed"); + return -1; + } + last_new_insn = NULL; - alt_group = alt_group_next_index++; insn = *new_insn; sec_for_each_insn_from(file, insn) { struct reloc *alt_reloc; @@ -1069,7 +1082,7 @@ static int handle_group_alt(struct objtool_file *file, insn->ignore = orig_insn->ignore_alts; insn->func = orig_insn->func; - insn->alt_group = alt_group; + insn->alt_group = new_alt_group; /* * Since alternative replacement code is copy/pasted by the @@ -1118,6 +1131,10 @@ static int handle_group_alt(struct objtool_file *file, return -1; } + new_alt_group->orig_group = orig_alt_group; + new_alt_group->first_insn = *new_insn; + new_alt_group->last_insn = last_new_insn; + if (fake_jump) list_add(&fake_jump->list, &last_new_insn->list); @@ -2440,7 +2457,7 @@ static int validate_return(struct symbol *func, struct instruction *insn, struct static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn) { struct instruction *first_insn = insn; - int alt_group = insn->alt_group; + struct alt_group *alt_group = insn->alt_group; sec_for_each_insn_continue(file, insn) { if (insn->alt_group != alt_group) diff --git a/tools/objtool/check.h b/tools/objtool/check.h index c280880f5eb4..dbf9f34622e8 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -19,6 +19,17 @@ struct insn_state { s8 instr; }; +struct alt_group { + /* + * Pointer from a replacement group to the original group. NULL if it + * *is* the original group. 
+ */ + struct alt_group *orig_group; + + /* First and last instructions in the group */ + struct instruction *first_insn, *last_insn; +}; + struct instruction { struct list_head list; struct hlist_node hash; @@ -34,7 +45,7 @@ struct instruction { s8 instr; u8 visited; u8 ret_offset; - int alt_group; + struct alt_group *alt_group; struct symbol *call_dest; struct instruction *jump_dest; struct instruction *first_jump_src; From 917a4f6348d94d9a3c20d78c800dd4715825362d Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Fri, 18 Dec 2020 14:26:21 -0600 Subject: [PATCH 1434/1842] objtool: Support stack layout changes in alternatives commit c9c324dc22aab1687da37001b321b6dfa93a0699 upstream. The ORC unwinder showed a warning [1] which revealed the stack layout didn't match what was expected. The problem was that paravirt patching had replaced "CALL *pv_ops.irq.save_fl" with "PUSHF;POP". That changed the stack layout between the PUSHF and the POP, so unwinding from an interrupt which occurred between those two instructions would fail. Part of the agreed upon solution was to rework the custom paravirt patching code to use alternatives instead, since objtool already knows how to read alternatives (and converging runtime patching infrastructure is always a good thing anyway). But the main problem still remains, which is that runtime patching can change the stack layout. Making stack layout changes in alternatives was disallowed with commit 7117f16bf460 ("objtool: Fix ORC vs alternatives"), but now that paravirt is going to be doing it, it needs to be supported. One way to do so would be to modify the ORC table when the code gets patched. But ORC is simple -- a good thing! -- and it's best to leave it alone. Instead, support stack layout changes by "flattening" all possible stack states (CFI) from parallel alternative code streams into a single set of linear states. The only necessary limitation is that CFI conflicts are disallowed at all possible instruction boundaries. For example, this scenario is allowed: Alt1 Alt2 Alt3 0x00 CALL *pv_ops.save_fl CALL xen_save_fl PUSHF 0x01 POP %RAX 0x02 NOP ... 0x05 NOP ... 0x07 The unwind information for offset-0x00 is identical for all 3 alternatives. Similarly offset-0x05 and higher also are identical (and the same as 0x00). However offset-0x01 has deviating CFI, but that is only relevant for Alt3, neither of the other alternative instruction streams will ever hit that offset. This scenario is NOT allowed: Alt1 Alt2 0x00 CALL *pv_ops.save_fl PUSHF 0x01 NOP6 ... 0x07 NOP POP %RAX The problem here is that offset-0x7, which is an instruction boundary in both possible instruction patch streams, has two conflicting stack layouts. [ The above examples were stolen from Peter Zijlstra. ] The new flattened CFI array is used both for the detection of conflicts (like the second example above) and the generation of linear ORC entries. BTW, another benefit of these changes is that, thanks to some related cleanups (new fake nops and alt_group struct) objtool can finally be rid of fake jumps, which were a constant source of headaches. 
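A toy model of that flattened array may make this easier to picture; the types and offsets below are simplified stand-ins, and the real logic lives in propagate_alt_cfi() in the diff below:

#include <stdio.h>
#include <string.h>

/* Grossly simplified stand-in for objtool's struct cfi_state. */
struct cfi_state { int sp_offset; };

#define GROUP_LEN 16

/* One shared, byte-offset-indexed slot per potential instruction boundary. */
static struct cfi_state *group_cfi[GROUP_LEN];

static struct cfi_state pool[32];
static int nr_pool;

/* Record the stack state seen at a byte offset of the alternative group;
 * report a conflict if another stream already left a different state there. */
static int propagate(int offset, int sp_offset)
{
        struct cfi_state *cfi = &pool[nr_pool++];

        cfi->sp_offset = sp_offset;

        if (!group_cfi[offset]) {
                group_cfi[offset] = cfi;
                return 0;
        }
        if (memcmp(group_cfi[offset], cfi, sizeof(*cfi))) {
                printf("stack layout conflict at offset %#x\n", offset);
                return -1;
        }
        return 0;
}

int main(void)
{
        /* Stream A: PUSHF leaves sp_offset 8 at boundary 0x01, then POP
         * restores sp_offset 0 at boundary 0x02. */
        propagate(0x01, 8);
        propagate(0x02, 0);

        /* Stream B: one 2-byte instruction whose only boundary is 0x02
         * with sp_offset 0 -- consistent with stream A. */
        propagate(0x02, 0);

        /* Stream C: ends at 0x02 with a different stack state -- rejected. */
        propagate(0x02, 8);
        return 0;
}

Stream C models the disallowed scenario: two patchable instruction streams leaving different stack states at the same potential instruction boundary.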
[1] https://lkml.kernel.org/r/20201111170536.arx2zbn4ngvjoov7@treble Cc: Shinichiro Kawasaki Signed-off-by: Josh Poimboeuf Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- .../Documentation/stack-validation.txt | 14 +- tools/objtool/check.c | 198 +++++++++--------- tools/objtool/check.h | 6 + tools/objtool/orc_gen.c | 56 ++++- 4 files changed, 161 insertions(+), 113 deletions(-) diff --git a/tools/objtool/Documentation/stack-validation.txt b/tools/objtool/Documentation/stack-validation.txt index 0542e46c7552..30f38fdc0d56 100644 --- a/tools/objtool/Documentation/stack-validation.txt +++ b/tools/objtool/Documentation/stack-validation.txt @@ -315,13 +315,15 @@ they mean, and suggestions for how to fix them. function tracing inserts additional calls, which is not obvious from the sources). -10. file.o: warning: func()+0x5c: alternative modifies stack +10. file.o: warning: func()+0x5c: stack layout conflict in alternatives - This means that an alternative includes instructions that modify the - stack. The problem is that there is only one ORC unwind table, this means - that the ORC unwind entries must be valid for each of the alternatives. - The easiest way to enforce this is to ensure alternatives do not contain - any ORC entries, which in turn implies the above constraint. + This means that in the use of the alternative() or ALTERNATIVE() + macro, the code paths have conflicting modifications to the stack. + The problem is that there is only one ORC unwind table, which means + that the ORC unwind entries must be consistent for all possible + instruction boundaries regardless of which code has been patched. + This limitation can be overcome by massaging the alternatives with + NOPs to shift the stack changes around so they no longer conflict. 11. file.o: warning: unannotated intra-function call diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 6143a0790aa4..32fdbd339ca9 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -19,8 +19,6 @@ #include #include -#define FAKE_JUMP_OFFSET -1 - struct alternative { struct list_head list; struct instruction *insn; @@ -789,9 +787,6 @@ static int add_jump_destinations(struct objtool_file *file) if (!is_static_jump(insn)) continue; - if (insn->offset == FAKE_JUMP_OFFSET) - continue; - reloc = find_reloc_by_dest_range(file->elf, insn->sec, insn->offset, insn->len); if (!reloc) { @@ -991,28 +986,15 @@ static int add_call_destinations(struct objtool_file *file) } /* - * The .alternatives section requires some extra special care, over and above - * what other special sections require: - * - * 1. Because alternatives are patched in-place, we need to insert a fake jump - * instruction at the end so that validate_branch() skips all the original - * replaced instructions when validating the new instruction path. - * - * 2. An added wrinkle is that the new instruction length might be zero. In - * that case the old instructions are replaced with noops. We simulate that - * by creating a fake jump as the only new instruction. - * - * 3. In some cases, the alternative section includes an instruction which - * conditionally jumps to the _end_ of the entry. We have to modify these - * jumps' destinations to point back to .text rather than the end of the - * entry in .altinstr_replacement. + * The .alternatives section requires some extra special care over and above + * other special sections because alternatives are patched in place. 
*/ static int handle_group_alt(struct objtool_file *file, struct special_alt *special_alt, struct instruction *orig_insn, struct instruction **new_insn) { - struct instruction *last_orig_insn, *last_new_insn, *insn, *fake_jump = NULL; + struct instruction *last_orig_insn, *last_new_insn = NULL, *insn, *nop = NULL; struct alt_group *orig_alt_group, *new_alt_group; unsigned long dest_off; @@ -1022,6 +1004,13 @@ static int handle_group_alt(struct objtool_file *file, WARN("malloc failed"); return -1; } + orig_alt_group->cfi = calloc(special_alt->orig_len, + sizeof(struct cfi_state *)); + if (!orig_alt_group->cfi) { + WARN("calloc failed"); + return -1; + } + last_orig_insn = NULL; insn = orig_insn; sec_for_each_insn_from(file, insn) { @@ -1035,34 +1024,6 @@ static int handle_group_alt(struct objtool_file *file, orig_alt_group->first_insn = orig_insn; orig_alt_group->last_insn = last_orig_insn; - if (next_insn_same_sec(file, last_orig_insn)) { - fake_jump = malloc(sizeof(*fake_jump)); - if (!fake_jump) { - WARN("malloc failed"); - return -1; - } - memset(fake_jump, 0, sizeof(*fake_jump)); - INIT_LIST_HEAD(&fake_jump->alts); - INIT_LIST_HEAD(&fake_jump->stack_ops); - init_cfi_state(&fake_jump->cfi); - - fake_jump->sec = special_alt->new_sec; - fake_jump->offset = FAKE_JUMP_OFFSET; - fake_jump->type = INSN_JUMP_UNCONDITIONAL; - fake_jump->jump_dest = list_next_entry(last_orig_insn, list); - fake_jump->func = orig_insn->func; - } - - if (!special_alt->new_len) { - if (!fake_jump) { - WARN("%s: empty alternative at end of section", - special_alt->orig_sec->name); - return -1; - } - - *new_insn = fake_jump; - return 0; - } new_alt_group = malloc(sizeof(*new_alt_group)); if (!new_alt_group) { @@ -1070,7 +1031,38 @@ static int handle_group_alt(struct objtool_file *file, return -1; } - last_new_insn = NULL; + if (special_alt->new_len < special_alt->orig_len) { + /* + * Insert a fake nop at the end to make the replacement + * alt_group the same size as the original. This is needed to + * allow propagate_alt_cfi() to do its magic. When the last + * instruction affects the stack, the instruction after it (the + * nop) will propagate the new state to the shared CFI array. 
+ */ + nop = malloc(sizeof(*nop)); + if (!nop) { + WARN("malloc failed"); + return -1; + } + memset(nop, 0, sizeof(*nop)); + INIT_LIST_HEAD(&nop->alts); + INIT_LIST_HEAD(&nop->stack_ops); + init_cfi_state(&nop->cfi); + + nop->sec = special_alt->new_sec; + nop->offset = special_alt->new_off + special_alt->new_len; + nop->len = special_alt->orig_len - special_alt->new_len; + nop->type = INSN_NOP; + nop->func = orig_insn->func; + nop->alt_group = new_alt_group; + nop->ignore = orig_insn->ignore_alts; + } + + if (!special_alt->new_len) { + *new_insn = nop; + goto end; + } + insn = *new_insn; sec_for_each_insn_from(file, insn) { struct reloc *alt_reloc; @@ -1109,14 +1101,8 @@ static int handle_group_alt(struct objtool_file *file, continue; dest_off = arch_jump_destination(insn); - if (dest_off == special_alt->new_off + special_alt->new_len) { - if (!fake_jump) { - WARN("%s: alternative jump to end of section", - special_alt->orig_sec->name); - return -1; - } - insn->jump_dest = fake_jump; - } + if (dest_off == special_alt->new_off + special_alt->new_len) + insn->jump_dest = next_insn_same_sec(file, last_orig_insn); if (!insn->jump_dest) { WARN_FUNC("can't find alternative jump destination", @@ -1131,13 +1117,13 @@ static int handle_group_alt(struct objtool_file *file, return -1; } + if (nop) + list_add(&nop->list, &last_new_insn->list); +end: new_alt_group->orig_group = orig_alt_group; new_alt_group->first_insn = *new_insn; - new_alt_group->last_insn = last_new_insn; - - if (fake_jump) - list_add(&fake_jump->list, &last_new_insn->list); - + new_alt_group->last_insn = nop ? : last_new_insn; + new_alt_group->cfi = orig_alt_group->cfi; return 0; } @@ -2237,22 +2223,47 @@ static int update_cfi_state(struct instruction *insn, struct cfi_state *cfi, return 0; } +/* + * The stack layouts of alternatives instructions can sometimes diverge when + * they have stack modifications. That's fine as long as the potential stack + * layouts don't conflict at any given potential instruction boundary. + * + * Flatten the CFIs of the different alternative code streams (both original + * and replacement) into a single shared CFI array which can be used to detect + * conflicts and nicely feed a linear array of ORC entries to the unwinder. 
+ */ +static int propagate_alt_cfi(struct objtool_file *file, struct instruction *insn) +{ + struct cfi_state **alt_cfi; + int group_off; + + if (!insn->alt_group) + return 0; + + alt_cfi = insn->alt_group->cfi; + group_off = insn->offset - insn->alt_group->first_insn->offset; + + if (!alt_cfi[group_off]) { + alt_cfi[group_off] = &insn->cfi; + } else { + if (memcmp(alt_cfi[group_off], &insn->cfi, sizeof(struct cfi_state))) { + WARN_FUNC("stack layout conflict in alternatives", + insn->sec, insn->offset); + return -1; + } + } + + return 0; +} + static int handle_insn_ops(struct instruction *insn, struct insn_state *state) { struct stack_op *op; list_for_each_entry(op, &insn->stack_ops, list) { - struct cfi_state old_cfi = state->cfi; - int res; - res = update_cfi_state(insn, &state->cfi, op); - if (res) - return res; - - if (insn->alt_group && memcmp(&state->cfi, &old_cfi, sizeof(struct cfi_state))) { - WARN_FUNC("alternative modifies stack", insn->sec, insn->offset); - return -1; - } + if (update_cfi_state(insn, &state->cfi, op)) + return 1; if (op->dest.type == OP_DEST_PUSHF) { if (!state->uaccess_stack) { @@ -2442,28 +2453,20 @@ static int validate_return(struct symbol *func, struct instruction *insn, struct return 0; } -/* - * Alternatives should not contain any ORC entries, this in turn means they - * should not contain any CFI ops, which implies all instructions should have - * the same same CFI state. - * - * It is possible to constuct alternatives that have unreachable holes that go - * unreported (because they're NOPs), such holes would result in CFI_UNDEFINED - * states which then results in ORC entries, which we just said we didn't want. - * - * Avoid them by copying the CFI entry of the first instruction into the whole - * alternative. - */ -static void fill_alternative_cfi(struct objtool_file *file, struct instruction *insn) +static struct instruction *next_insn_to_validate(struct objtool_file *file, + struct instruction *insn) { - struct instruction *first_insn = insn; struct alt_group *alt_group = insn->alt_group; - sec_for_each_insn_continue(file, insn) { - if (insn->alt_group != alt_group) - break; - insn->cfi = first_insn->cfi; - } + /* + * Simulate the fact that alternatives are patched in-place. When the + * end of a replacement alt_group is reached, redirect objtool flow to + * the end of the original alt_group. 
+ */ + if (alt_group && insn == alt_group->last_insn && alt_group->orig_group) + return next_insn_same_sec(file, alt_group->orig_group->last_insn); + + return next_insn_same_sec(file, insn); } /* @@ -2484,7 +2487,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, sec = insn->sec; while (1) { - next_insn = next_insn_same_sec(file, insn); + next_insn = next_insn_to_validate(file, insn); if (file->c_file && func && insn->func && func != insn->func->pfunc) { WARN("%s() falls through to next function %s()", @@ -2517,6 +2520,9 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, insn->visited |= visited; + if (propagate_alt_cfi(file, insn)) + return 1; + if (!insn->ignore_alts && !list_empty(&insn->alts)) { bool skip_orig = false; @@ -2532,9 +2538,6 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, } } - if (insn->alt_group) - fill_alternative_cfi(file, insn); - if (skip_orig) return 0; } @@ -2767,9 +2770,6 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio !strcmp(insn->sec->name, ".altinstr_aux")) return true; - if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->offset == FAKE_JUMP_OFFSET) - return true; - if (!insn->func) return false; diff --git a/tools/objtool/check.h b/tools/objtool/check.h index dbf9f34622e8..995a0436d543 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -28,6 +28,12 @@ struct alt_group { /* First and last instructions in the group */ struct instruction *first_insn, *last_insn; + + /* + * Byte-offset-addressed len-sized array of pointers to CFI structs. + * This is shared with the other alt_groups in the same alternative. + */ + struct cfi_state **cfi; }; struct instruction { diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c index 7191b6234120..0d62f958dbea 100644 --- a/tools/objtool/orc_gen.c +++ b/tools/objtool/orc_gen.c @@ -141,6 +141,13 @@ static int orc_list_add(struct list_head *orc_list, struct orc_entry *orc, return 0; } +static unsigned long alt_group_len(struct alt_group *alt_group) +{ + return alt_group->last_insn->offset + + alt_group->last_insn->len - + alt_group->first_insn->offset; +} + int orc_create(struct objtool_file *file) { struct section *sec, *ip_rsec, *orc_sec; @@ -165,15 +172,48 @@ int orc_create(struct objtool_file *file) continue; sec_for_each_insn(file, sec, insn) { - if (init_orc_entry(&orc, &insn->cfi)) - return -1; - if (!memcmp(&prev_orc, &orc, sizeof(orc))) + struct alt_group *alt_group = insn->alt_group; + int i; + + if (!alt_group) { + if (init_orc_entry(&orc, &insn->cfi)) + return -1; + if (!memcmp(&prev_orc, &orc, sizeof(orc))) + continue; + if (orc_list_add(&orc_list, &orc, sec, + insn->offset)) + return -1; + nr++; + prev_orc = orc; + empty = false; continue; - if (orc_list_add(&orc_list, &orc, sec, insn->offset)) - return -1; - nr++; - prev_orc = orc; - empty = false; + } + + /* + * Alternatives can have different stack layout + * possibilities (but they shouldn't conflict). + * Instead of traversing the instructions, use the + * alt_group's flattened byte-offset-addressed CFI + * array. 
+ */ + for (i = 0; i < alt_group_len(alt_group); i++) { + struct cfi_state *cfi = alt_group->cfi[i]; + if (!cfi) + continue; + if (init_orc_entry(&orc, cfi)) + return -1; + if (!memcmp(&prev_orc, &orc, sizeof(orc))) + continue; + if (orc_list_add(&orc_list, &orc, insn->sec, + insn->offset + i)) + return -1; + nr++; + prev_orc = orc; + empty = false; + } + + /* Skip to the end of the alt_group */ + insn = alt_group->last_insn; } /* Add a section terminator */ From 3e674f26528931c6a0f1bc7aa29445b45fdfd62d Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Thu, 21 Jan 2021 15:29:20 -0600 Subject: [PATCH 1435/1842] objtool: Support retpoline jump detection for vmlinux.o commit 31a7424bc58063a8e0466c3c10f31a52ec2be4f6 upstream. Objtool converts direct retpoline jumps to type INSN_JUMP_DYNAMIC, since that's what they are semantically. That conversion doesn't work in vmlinux.o validation because the indirect thunk function is present in the object, so the intra-object jump check succeeds before the retpoline jump check gets a chance. Rearrange the checks: check for a retpoline jump before checking for an intra-object jump. Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/4302893513770dde68ddc22a9d6a2a04aca491dd.1611263461.git.jpoimboe@redhat.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 32fdbd339ca9..4f8a760bffca 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -795,10 +795,6 @@ static int add_jump_destinations(struct objtool_file *file) } else if (reloc->sym->type == STT_SECTION) { dest_sec = reloc->sym->sec; dest_off = arch_dest_reloc_offset(reloc->addend); - } else if (reloc->sym->sec->idx) { - dest_sec = reloc->sym->sec; - dest_off = reloc->sym->sym.st_value + - arch_dest_reloc_offset(reloc->addend); } else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21) || !strncmp(reloc->sym->name, "__x86_retpoline_", 16)) { /* @@ -812,6 +808,10 @@ static int add_jump_destinations(struct objtool_file *file) insn->retpoline_safe = true; continue; + } else if (reloc->sym->sec->idx) { + dest_sec = reloc->sym->sec; + dest_off = reloc->sym->sym.st_value + + arch_dest_reloc_offset(reloc->addend); } else { /* external sibling call */ insn->call_dest = reloc->sym; From 53e89bc78e4351924a1a1474683d47a00c2633f2 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Thu, 21 Jan 2021 15:29:22 -0600 Subject: [PATCH 1436/1842] objtool: Assume only ELF functions do sibling calls commit ecf11ba4d066fe527586c6edd6ca68457ca55cf4 upstream. There's an inconsistency in how sibling calls are detected in non-function asm code, depending on the scope of the object. If the target code is external to the object, objtool considers it a sibling call. If the target code is internal but not a function, objtool *doesn't* consider it a sibling call. This can cause some inconsistencies between per-object and vmlinux.o validation. Instead, assume only ELF functions can do sibling calls. This generally matches existing reality, and makes sibling call validation consistent between vmlinux.o and per-object. 
Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/0e9ab6f3628cc7bf3bde7aa6762d54d7df19ad78.1611263461.git.jpoimboe@redhat.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 36 ++++++++++++++++++++++-------------- 1 file changed, 22 insertions(+), 14 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 4f8a760bffca..29e4fca80258 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -109,15 +109,20 @@ static struct instruction *prev_insn_same_sym(struct objtool_file *file, static bool is_sibling_call(struct instruction *insn) { + /* + * Assume only ELF functions can make sibling calls. This ensures + * sibling call detection consistency between vmlinux.o and individual + * objects. + */ + if (!insn->func) + return false; + /* An indirect jump is either a sibling call or a jump to a table. */ if (insn->type == INSN_JUMP_DYNAMIC) return list_empty(&insn->alts); - if (!is_static_jump(insn)) - return false; - /* add_jump_destinations() sets insn->call_dest for sibling calls. */ - return !!insn->call_dest; + return (is_static_jump(insn) && insn->call_dest); } /* @@ -788,7 +793,7 @@ static int add_jump_destinations(struct objtool_file *file) continue; reloc = find_reloc_by_dest_range(file->elf, insn->sec, - insn->offset, insn->len); + insn->offset, insn->len); if (!reloc) { dest_sec = insn->sec; dest_off = arch_jump_destination(insn); @@ -808,18 +813,21 @@ static int add_jump_destinations(struct objtool_file *file) insn->retpoline_safe = true; continue; - } else if (reloc->sym->sec->idx) { - dest_sec = reloc->sym->sec; - dest_off = reloc->sym->sym.st_value + - arch_dest_reloc_offset(reloc->addend); - } else { - /* external sibling call */ + } else if (insn->func) { + /* internal or external sibling call (with reloc) */ insn->call_dest = reloc->sym; if (insn->call_dest->static_call_tramp) { list_add_tail(&insn->static_call_node, &file->static_call_list); } continue; + } else if (reloc->sym->sec->idx) { + dest_sec = reloc->sym->sec; + dest_off = reloc->sym->sym.st_value + + arch_dest_reloc_offset(reloc->addend); + } else { + /* non-func asm code jumping to another file */ + continue; } insn->jump_dest = find_insn(file, dest_sec, dest_off); @@ -868,7 +876,7 @@ static int add_jump_destinations(struct objtool_file *file) } else if (insn->jump_dest->func->pfunc != insn->func->pfunc && insn->jump_dest->offset == insn->jump_dest->func->offset) { - /* internal sibling call */ + /* internal sibling call (without reloc) */ insn->call_dest = insn->jump_dest->func; if (insn->call_dest->static_call_tramp) { list_add_tail(&insn->static_call_node, @@ -2570,7 +2578,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, case INSN_JUMP_CONDITIONAL: case INSN_JUMP_UNCONDITIONAL: - if (func && is_sibling_call(insn)) { + if (is_sibling_call(insn)) { ret = validate_sibling_call(insn, &state); if (ret) return ret; @@ -2592,7 +2600,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, case INSN_JUMP_DYNAMIC: case INSN_JUMP_DYNAMIC_CONDITIONAL: - if (func && is_sibling_call(insn)) { + if (is_sibling_call(insn)) { ret = validate_sibling_call(insn, &state); if (ret) return ret; From 3116dee2704bfb3713efa3637a9e65369d019cc4 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Thu, 21 Jan 2021 15:29:24 -0600 Subject: [PATCH 1437/1842] objtool: Combine UNWIND_HINT_RET_OFFSET and UNWIND_HINT_FUNC commit b735bd3e68824316655252a931a3353a6ebc036f upstream. 
The ORC metadata generated for UNWIND_HINT_FUNC isn't actually very func-like. With certain usages it can cause stack state mismatches because it doesn't set the return address (CFI_RA). Also, users of UNWIND_HINT_RET_OFFSET no longer need to set a custom return stack offset. Instead they just need to specify a func-like situation, so the current ret_offset code is hacky for no good reason. Solve both problems by simplifying the RET_OFFSET handling and converting it into a more useful UNWIND_HINT_FUNC. If we end up needing the old 'ret_offset' functionality again in the future, we should be able to support it pretty easily with the addition of a custom 'sp_offset' in UNWIND_HINT_FUNC. Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/db9d1f5d79dddfbb3725ef6d8ec3477ad199948d.1611263462.git.jpoimboe@redhat.com [bwh: Backported to 5.10: - Don't use bswap_if_needed() since we don't have any of the other fixes for mixed-endian cross-compilation - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/unwind_hints.h | 13 ++-------- arch/x86/kernel/ftrace_64.S | 2 +- arch/x86/lib/retpoline.S | 2 +- include/linux/objtool.h | 5 +++- tools/include/linux/objtool.h | 5 +++- tools/objtool/arch/x86/decode.c | 4 ++-- tools/objtool/check.c | 37 ++++++++++++----------------- tools/objtool/check.h | 1 - 8 files changed, 29 insertions(+), 40 deletions(-) diff --git a/arch/x86/include/asm/unwind_hints.h b/arch/x86/include/asm/unwind_hints.h index 664d4610d700..8e574c0afef8 100644 --- a/arch/x86/include/asm/unwind_hints.h +++ b/arch/x86/include/asm/unwind_hints.h @@ -48,17 +48,8 @@ UNWIND_HINT_REGS base=\base offset=\offset partial=1 .endm -.macro UNWIND_HINT_FUNC sp_offset=8 - UNWIND_HINT sp_reg=ORC_REG_SP sp_offset=\sp_offset type=UNWIND_HINT_TYPE_CALL -.endm - -/* - * RET_OFFSET: Used on instructions that terminate a function; mostly RETURN - * and sibling calls. On these, sp_offset denotes the expected offset from - * initial_func_cfi. - */ -.macro UNWIND_HINT_RET_OFFSET sp_offset=8 - UNWIND_HINT sp_reg=ORC_REG_SP type=UNWIND_HINT_TYPE_RET_OFFSET sp_offset=\sp_offset +.macro UNWIND_HINT_FUNC + UNWIND_HINT sp_reg=ORC_REG_SP sp_offset=8 type=UNWIND_HINT_TYPE_FUNC .endm #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S index ac3d5f22fe64..ad1da5e85aeb 100644 --- a/arch/x86/kernel/ftrace_64.S +++ b/arch/x86/kernel/ftrace_64.S @@ -265,7 +265,7 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL) restore_mcount_regs 8 /* Restore flags */ popfq - UNWIND_HINT_RET_OFFSET + UNWIND_HINT_FUNC jmp ftrace_epilogue SYM_FUNC_END(ftrace_regs_caller) diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index b4c43a9b1483..f6fb1d218dcc 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -28,7 +28,7 @@ SYM_FUNC_START_NOALIGN(__x86_retpoline_\reg) jmp .Lspec_trap_\@ .Ldo_rop_\@: mov %\reg, (%_ASM_SP) - UNWIND_HINT_RET_OFFSET + UNWIND_HINT_FUNC ret SYM_FUNC_END(__x86_retpoline_\reg) diff --git a/include/linux/objtool.h b/include/linux/objtool.h index 577f51436cf9..8ed079c52c47 100644 --- a/include/linux/objtool.h +++ b/include/linux/objtool.h @@ -29,11 +29,14 @@ struct unwind_hint { * * UNWIND_HINT_TYPE_REGS_PARTIAL: Used in entry code to indicate that * sp_reg+sp_offset points to the iret return frame. + * + * UNWIND_HINT_FUNC: Generate the unwind metadata of a callable function. + * Useful for code which doesn't have an ELF function annotation. 
*/ #define UNWIND_HINT_TYPE_CALL 0 #define UNWIND_HINT_TYPE_REGS 1 #define UNWIND_HINT_TYPE_REGS_PARTIAL 2 -#define UNWIND_HINT_TYPE_RET_OFFSET 3 +#define UNWIND_HINT_TYPE_FUNC 3 #ifdef CONFIG_STACK_VALIDATION diff --git a/tools/include/linux/objtool.h b/tools/include/linux/objtool.h index 577f51436cf9..8ed079c52c47 100644 --- a/tools/include/linux/objtool.h +++ b/tools/include/linux/objtool.h @@ -29,11 +29,14 @@ struct unwind_hint { * * UNWIND_HINT_TYPE_REGS_PARTIAL: Used in entry code to indicate that * sp_reg+sp_offset points to the iret return frame. + * + * UNWIND_HINT_FUNC: Generate the unwind metadata of a callable function. + * Useful for code which doesn't have an ELF function annotation. */ #define UNWIND_HINT_TYPE_CALL 0 #define UNWIND_HINT_TYPE_REGS 1 #define UNWIND_HINT_TYPE_REGS_PARTIAL 2 -#define UNWIND_HINT_TYPE_RET_OFFSET 3 +#define UNWIND_HINT_TYPE_FUNC 3 #ifdef CONFIG_STACK_VALIDATION diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index cde9c36e40ae..6f5a45754ad8 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -563,8 +563,8 @@ void arch_initial_func_cfi_state(struct cfi_init_state *state) state->cfa.offset = 8; /* initial RA (return address) */ - state->regs[16].base = CFI_CFA; - state->regs[16].offset = -8; + state->regs[CFI_RA].base = CFI_CFA; + state->regs[CFI_RA].offset = -8; } const char *arch_nop_insn(int len) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 29e4fca80258..ed5e4b04f508 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1423,13 +1423,20 @@ static int add_jump_table_alts(struct objtool_file *file) return 0; } +static void set_func_state(struct cfi_state *state) +{ + state->cfa = initial_func_cfi.cfa; + memcpy(&state->regs, &initial_func_cfi.regs, + CFI_NUM_REGS * sizeof(struct cfi_reg)); + state->stack_size = initial_func_cfi.cfa.offset; +} + static int read_unwind_hints(struct objtool_file *file) { struct section *sec, *relocsec; struct reloc *reloc; struct unwind_hint *hint; struct instruction *insn; - struct cfi_reg *cfa; int i; sec = find_section_by_name(file->elf, ".discard.unwind_hints"); @@ -1464,22 +1471,20 @@ static int read_unwind_hints(struct objtool_file *file) return -1; } - cfa = &insn->cfi.cfa; + insn->hint = true; - if (hint->type == UNWIND_HINT_TYPE_RET_OFFSET) { - insn->ret_offset = hint->sp_offset; + if (hint->type == UNWIND_HINT_TYPE_FUNC) { + set_func_state(&insn->cfi); continue; } - insn->hint = true; - if (arch_decode_hint_reg(insn, hint->sp_reg)) { WARN_FUNC("unsupported unwind_hint sp base reg %d", insn->sec, insn->offset, hint->sp_reg); return -1; } - cfa->offset = hint->sp_offset; + insn->cfi.cfa.offset = hint->sp_offset; insn->cfi.type = hint->type; insn->cfi.end = hint->end; } @@ -1742,27 +1747,18 @@ static bool is_fentry_call(struct instruction *insn) static bool has_modified_stack_frame(struct instruction *insn, struct insn_state *state) { - u8 ret_offset = insn->ret_offset; struct cfi_state *cfi = &state->cfi; int i; if (cfi->cfa.base != initial_func_cfi.cfa.base || cfi->drap) return true; - if (cfi->cfa.offset != initial_func_cfi.cfa.offset + ret_offset) + if (cfi->cfa.offset != initial_func_cfi.cfa.offset) return true; - if (cfi->stack_size != initial_func_cfi.cfa.offset + ret_offset) + if (cfi->stack_size != initial_func_cfi.cfa.offset) return true; - /* - * If there is a ret offset hint then don't check registers - * because a callee-saved register might have been pushed on - * the stack. 
- */ - if (ret_offset) - return false; - for (i = 0; i < CFI_NUM_REGS; i++) { if (cfi->regs[i].base != initial_func_cfi.regs[i].base || cfi->regs[i].offset != initial_func_cfi.regs[i].offset) @@ -2863,10 +2859,7 @@ static int validate_section(struct objtool_file *file, struct section *sec) continue; init_insn_state(&state, sec); - state.cfi.cfa = initial_func_cfi.cfa; - memcpy(&state.cfi.regs, &initial_func_cfi.regs, - CFI_NUM_REGS * sizeof(struct cfi_reg)); - state.cfi.stack_size = initial_func_cfi.cfa.offset; + set_func_state(&state.cfi); warnings += validate_symbol(file, sec, func, &state); } diff --git a/tools/objtool/check.h b/tools/objtool/check.h index 995a0436d543..d255c43b134a 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -50,7 +50,6 @@ struct instruction { bool retpoline_safe; s8 instr; u8 visited; - u8 ret_offset; struct alt_group *alt_group; struct symbol *call_dest; struct instruction *jump_dest; From b626e17c11f58d49b01bd8bcdf0e0ec11570b6df Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Thu, 21 Jan 2021 15:29:28 -0600 Subject: [PATCH 1438/1842] x86/xen: Support objtool validation in xen-asm.S commit cde07a4e4434ddfb9b1616ac971edf6d66329804 upstream. The OBJECT_FILES_NON_STANDARD annotation is used to tell objtool to ignore a file. File-level ignores won't work when validating vmlinux.o. Tweak the ELF metadata and unwind hints to allow objtool to follow the code. Cc: Juergen Gross Reviewed-by: Boris Ostrovsky Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/8b042a09c69e8645f3b133ef6653ba28f896807d.1611263462.git.jpoimboe@redhat.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/xen/Makefile | 1 - arch/x86/xen/xen-asm.S | 29 +++++++++++++++++++---------- 2 files changed, 19 insertions(+), 11 deletions(-) diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile index fc5c5ba4aacb..40b5779fce21 100644 --- a/arch/x86/xen/Makefile +++ b/arch/x86/xen/Makefile @@ -1,5 +1,4 @@ # SPDX-License-Identifier: GPL-2.0 -OBJECT_FILES_NON_STANDARD_xen-asm.o := y ifdef CONFIG_FUNCTION_TRACER # Do not profile debug and lowlevel utilities diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S index 011ec649f388..e75f60a06056 100644 --- a/arch/x86/xen/xen-asm.S +++ b/arch/x86/xen/xen-asm.S @@ -14,6 +14,7 @@ #include #include #include +#include #include @@ -147,6 +148,7 @@ SYM_FUNC_END(xen_read_cr2_direct); .macro xen_pv_trap name SYM_CODE_START(xen_\name) + UNWIND_HINT_EMPTY pop %rcx pop %r11 jmp \name @@ -186,6 +188,7 @@ xen_pv_trap asm_exc_xen_hypervisor_callback SYM_CODE_START(xen_early_idt_handler_array) i = 0 .rept NUM_EXCEPTION_VECTORS + UNWIND_HINT_EMPTY pop %rcx pop %r11 jmp early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE @@ -212,11 +215,13 @@ hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32 * rsp->rax } */ SYM_CODE_START(xen_iret) + UNWIND_HINT_EMPTY pushq $0 jmp hypercall_iret SYM_CODE_END(xen_iret) SYM_CODE_START(xen_sysret64) + UNWIND_HINT_EMPTY /* * We're already on the usermode stack at this point, but * still with the kernel gs, so we can easily switch back. 
@@ -271,7 +276,8 @@ SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode) */ /* Normal 64-bit system call target */ -SYM_FUNC_START(xen_syscall_target) +SYM_CODE_START(xen_syscall_target) + UNWIND_HINT_EMPTY popq %rcx popq %r11 @@ -284,12 +290,13 @@ SYM_FUNC_START(xen_syscall_target) movq $__USER_CS, 1*8(%rsp) jmp entry_SYSCALL_64_after_hwframe -SYM_FUNC_END(xen_syscall_target) +SYM_CODE_END(xen_syscall_target) #ifdef CONFIG_IA32_EMULATION /* 32-bit compat syscall target */ -SYM_FUNC_START(xen_syscall32_target) +SYM_CODE_START(xen_syscall32_target) + UNWIND_HINT_EMPTY popq %rcx popq %r11 @@ -302,10 +309,11 @@ SYM_FUNC_START(xen_syscall32_target) movq $__USER32_CS, 1*8(%rsp) jmp entry_SYSCALL_compat_after_hwframe -SYM_FUNC_END(xen_syscall32_target) +SYM_CODE_END(xen_syscall32_target) /* 32-bit compat sysenter target */ -SYM_FUNC_START(xen_sysenter_target) +SYM_CODE_START(xen_sysenter_target) + UNWIND_HINT_EMPTY /* * NB: Xen is polite and clears TF from EFLAGS for us. This means * that we don't need to guard against single step exceptions here. @@ -322,17 +330,18 @@ SYM_FUNC_START(xen_sysenter_target) movq $__USER32_CS, 1*8(%rsp) jmp entry_SYSENTER_compat_after_hwframe -SYM_FUNC_END(xen_sysenter_target) +SYM_CODE_END(xen_sysenter_target) #else /* !CONFIG_IA32_EMULATION */ -SYM_FUNC_START_ALIAS(xen_syscall32_target) -SYM_FUNC_START(xen_sysenter_target) +SYM_CODE_START(xen_syscall32_target) +SYM_CODE_START(xen_sysenter_target) + UNWIND_HINT_EMPTY lea 16(%rsp), %rsp /* strip %rcx, %r11 */ mov $-ENOSYS, %rax pushq $0 jmp hypercall_iret -SYM_FUNC_END(xen_sysenter_target) -SYM_FUNC_END_ALIAS(xen_syscall32_target) +SYM_CODE_END(xen_sysenter_target) +SYM_CODE_END(xen_syscall32_target) #endif /* CONFIG_IA32_EMULATION */ From 5f93d900b9d33b0d5f7e1a7e455f26aab86875c5 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Thu, 21 Jan 2021 15:29:29 -0600 Subject: [PATCH 1439/1842] x86/xen: Support objtool vmlinux.o validation in xen-head.S commit f4b4bc10b0b85ec66f1a9bf5dddf475e6695b6d2 upstream. The Xen hypercall page is filled with zeros, causing objtool to fall through all the empty hypercall functions until it reaches a real function, resulting in a stack state mismatch. The build-time contents of the hypercall page don't matter because the page gets rewritten by the hypervisor. Make it more palatable to objtool by making each hypervisor function a true empty function, with nops and a return. Cc: Juergen Gross Reviewed-by: Boris Ostrovsky Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/0883bde1d7a1fb3b6a4c952bc0200e873752f609.1611263462.git.jpoimboe@redhat.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/xen/xen-head.S | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S index 2d7c8f34f56c..cb6538ae2fe0 100644 --- a/arch/x86/xen/xen-head.S +++ b/arch/x86/xen/xen-head.S @@ -68,8 +68,9 @@ SYM_CODE_END(asm_cpu_bringup_and_idle) .balign PAGE_SIZE SYM_CODE_START(hypercall_page) .rept (PAGE_SIZE / 32) - UNWIND_HINT_EMPTY - .skip 32 + UNWIND_HINT_FUNC + .skip 31, 0x90 + ret .endr #define HYPERCALL(n) \ From c9cf908b89ca3b5aa6563181bf78764ac1ab793e Mon Sep 17 00:00:00 2001 From: Juergen Gross Date: Thu, 11 Mar 2021 15:23:06 +0100 Subject: [PATCH 1440/1842] x86/alternative: Merge include files commit 5e21a3ecad1500e35b46701e7f3f232e15d78e69 upstream. Merge arch/x86/include/asm/alternative-asm.h into arch/x86/include/asm/alternative.h in order to make it easier to use common definitions later. 
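As a rough sketch of the result (not the literal header, see the diff below),
the merged file keeps the C and assembler halves side by side, selected by
__ASSEMBLY__, so both kinds of users now include the same <asm/alternative.h>:

  /* Simplified sketch of the merged arch/x86/include/asm/alternative.h */
  #ifndef _ASM_X86_ALTERNATIVE_H
  #define _ASM_X86_ALTERNATIVE_H

  #ifndef __ASSEMBLY__
  /* C-side interface: ALTERNATIVE(), alternative(), alternative_2(), ... */
  #else /* __ASSEMBLY__ */
  /*
   * Assembler macros formerly in alternative-asm.h: LOCK_PREFIX,
   * ALTERNATIVE, ALTERNATIVE_2, altinstruction_entry, ...
   */
  #endif /* __ASSEMBLY__ */

  #endif /* _ASM_X86_ALTERNATIVE_H */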
Signed-off-by: Juergen Gross Signed-off-by: Borislav Petkov Link: https://lkml.kernel.org/r/20210311142319.4723-2-jgross@suse.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/entry_32.S | 2 +- arch/x86/entry/vdso/vdso32/system_call.S | 2 +- arch/x86/include/asm/alternative-asm.h | 114 ----------------------- arch/x86/include/asm/alternative.h | 112 +++++++++++++++++++++- arch/x86/include/asm/nospec-branch.h | 1 - arch/x86/include/asm/smap.h | 5 +- arch/x86/lib/atomic64_386_32.S | 2 +- arch/x86/lib/atomic64_cx8_32.S | 2 +- arch/x86/lib/copy_page_64.S | 2 +- arch/x86/lib/copy_user_64.S | 2 +- arch/x86/lib/memcpy_64.S | 2 +- arch/x86/lib/memmove_64.S | 2 +- arch/x86/lib/memset_64.S | 2 +- arch/x86/lib/retpoline.S | 2 +- 14 files changed, 120 insertions(+), 132 deletions(-) delete mode 100644 arch/x86/include/asm/alternative-asm.h diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index df8c017e6161..4e079f250962 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -40,7 +40,7 @@ #include #include #include -#include +#include #include #include #include diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S index de1fff7188aa..d6a6080bade0 100644 --- a/arch/x86/entry/vdso/vdso32/system_call.S +++ b/arch/x86/entry/vdso/vdso32/system_call.S @@ -6,7 +6,7 @@ #include #include #include -#include +#include .text .globl __kernel_vsyscall diff --git a/arch/x86/include/asm/alternative-asm.h b/arch/x86/include/asm/alternative-asm.h deleted file mode 100644 index 464034db299f..000000000000 --- a/arch/x86/include/asm/alternative-asm.h +++ /dev/null @@ -1,114 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_X86_ALTERNATIVE_ASM_H -#define _ASM_X86_ALTERNATIVE_ASM_H - -#ifdef __ASSEMBLY__ - -#include - -#ifdef CONFIG_SMP - .macro LOCK_PREFIX -672: lock - .pushsection .smp_locks,"a" - .balign 4 - .long 672b - . - .popsection - .endm -#else - .macro LOCK_PREFIX - .endm -#endif - -/* - * objtool annotation to ignore the alternatives and only consider the original - * instruction(s). - */ -.macro ANNOTATE_IGNORE_ALTERNATIVE - .Lannotate_\@: - .pushsection .discard.ignore_alts - .long .Lannotate_\@ - . - .popsection -.endm - -/* - * Issue one struct alt_instr descriptor entry (need to put it into - * the section .altinstructions, see below). This entry contains - * enough information for the alternatives patching code to patch an - * instruction. See apply_alternatives(). - */ -.macro altinstruction_entry orig alt feature orig_len alt_len pad_len - .long \orig - . - .long \alt - . - .word \feature - .byte \orig_len - .byte \alt_len - .byte \pad_len -.endm - -/* - * Define an alternative between two instructions. If @feature is - * present, early code in apply_alternatives() replaces @oldinstr with - * @newinstr. ".skip" directive takes care of proper instruction padding - * in case @newinstr is longer than @oldinstr. 
- */ -.macro ALTERNATIVE oldinstr, newinstr, feature -140: - \oldinstr -141: - .skip -(((144f-143f)-(141b-140b)) > 0) * ((144f-143f)-(141b-140b)),0x90 -142: - - .pushsection .altinstructions,"a" - altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f,142b-141b - .popsection - - .pushsection .altinstr_replacement,"ax" -143: - \newinstr -144: - .popsection -.endm - -#define old_len 141b-140b -#define new_len1 144f-143f -#define new_len2 145f-144f - -/* - * gas compatible max based on the idea from: - * http://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax - * - * The additional "-" is needed because gas uses a "true" value of -1. - */ -#define alt_max_short(a, b) ((a) ^ (((a) ^ (b)) & -(-((a) < (b))))) - - -/* - * Same as ALTERNATIVE macro above but for two alternatives. If CPU - * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has - * @feature2, it replaces @oldinstr with @feature2. - */ -.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2 -140: - \oldinstr -141: - .skip -((alt_max_short(new_len1, new_len2) - (old_len)) > 0) * \ - (alt_max_short(new_len1, new_len2) - (old_len)),0x90 -142: - - .pushsection .altinstructions,"a" - altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f,142b-141b - altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f,142b-141b - .popsection - - .pushsection .altinstr_replacement,"ax" -143: - \newinstr1 -144: - \newinstr2 -145: - .popsection -.endm - -#endif /* __ASSEMBLY__ */ - -#endif /* _ASM_X86_ALTERNATIVE_ASM_H */ diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index 13adca37c99a..aa5f246fdc07 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -2,13 +2,14 @@ #ifndef _ASM_X86_ALTERNATIVE_H #define _ASM_X86_ALTERNATIVE_H -#ifndef __ASSEMBLY__ - #include -#include #include #include +#ifndef __ASSEMBLY__ + +#include + /* * Alternative inline assembly for SMP. * @@ -271,6 +272,111 @@ static inline int alternatives_text_reserved(void *start, void *end) */ #define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr +#else /* __ASSEMBLY__ */ + +#ifdef CONFIG_SMP + .macro LOCK_PREFIX +672: lock + .pushsection .smp_locks,"a" + .balign 4 + .long 672b - . + .popsection + .endm +#else + .macro LOCK_PREFIX + .endm +#endif + +/* + * objtool annotation to ignore the alternatives and only consider the original + * instruction(s). + */ +.macro ANNOTATE_IGNORE_ALTERNATIVE + .Lannotate_\@: + .pushsection .discard.ignore_alts + .long .Lannotate_\@ - . + .popsection +.endm + +/* + * Issue one struct alt_instr descriptor entry (need to put it into + * the section .altinstructions, see below). This entry contains + * enough information for the alternatives patching code to patch an + * instruction. See apply_alternatives(). + */ +.macro altinstruction_entry orig alt feature orig_len alt_len pad_len + .long \orig - . + .long \alt - . + .word \feature + .byte \orig_len + .byte \alt_len + .byte \pad_len +.endm + +/* + * Define an alternative between two instructions. If @feature is + * present, early code in apply_alternatives() replaces @oldinstr with + * @newinstr. ".skip" directive takes care of proper instruction padding + * in case @newinstr is longer than @oldinstr. 
+ */ +.macro ALTERNATIVE oldinstr, newinstr, feature +140: + \oldinstr +141: + .skip -(((144f-143f)-(141b-140b)) > 0) * ((144f-143f)-(141b-140b)),0x90 +142: + + .pushsection .altinstructions,"a" + altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f,142b-141b + .popsection + + .pushsection .altinstr_replacement,"ax" +143: + \newinstr +144: + .popsection +.endm + +#define old_len 141b-140b +#define new_len1 144f-143f +#define new_len2 145f-144f + +/* + * gas compatible max based on the idea from: + * http://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax + * + * The additional "-" is needed because gas uses a "true" value of -1. + */ +#define alt_max_short(a, b) ((a) ^ (((a) ^ (b)) & -(-((a) < (b))))) + + +/* + * Same as ALTERNATIVE macro above but for two alternatives. If CPU + * has @feature1, it replaces @oldinstr with @newinstr1. If CPU has + * @feature2, it replaces @oldinstr with @feature2. + */ +.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2 +140: + \oldinstr +141: + .skip -((alt_max_short(new_len1, new_len2) - (old_len)) > 0) * \ + (alt_max_short(new_len1, new_len2) - (old_len)),0x90 +142: + + .pushsection .altinstructions,"a" + altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f,142b-141b + altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f,142b-141b + .popsection + + .pushsection .altinstr_replacement,"ax" +143: + \newinstr1 +144: + \newinstr2 +145: + .popsection +.endm + #endif /* __ASSEMBLY__ */ #endif /* _ASM_X86_ALTERNATIVE_H */ diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index e247151c3dcf..c5820916cd4f 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -7,7 +7,6 @@ #include #include -#include #include #include #include diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h index 8b58d6975d5d..ea1d8eb644cb 100644 --- a/arch/x86/include/asm/smap.h +++ b/arch/x86/include/asm/smap.h @@ -11,6 +11,7 @@ #include #include +#include /* "Raw" instruction opcodes */ #define __ASM_CLAC ".byte 0x0f,0x01,0xca" @@ -18,8 +19,6 @@ #ifdef __ASSEMBLY__ -#include - #ifdef CONFIG_X86_SMAP #define ASM_CLAC \ @@ -37,8 +36,6 @@ #else /* __ASSEMBLY__ */ -#include - #ifdef CONFIG_X86_SMAP static __always_inline void clac(void) diff --git a/arch/x86/lib/atomic64_386_32.S b/arch/x86/lib/atomic64_386_32.S index 3b6544111ac9..16bc9130e7a5 100644 --- a/arch/x86/lib/atomic64_386_32.S +++ b/arch/x86/lib/atomic64_386_32.S @@ -6,7 +6,7 @@ */ #include -#include +#include /* if you want SMP support, implement these with real spinlocks */ .macro LOCK reg diff --git a/arch/x86/lib/atomic64_cx8_32.S b/arch/x86/lib/atomic64_cx8_32.S index 1c5c81c16b06..ce6935690766 100644 --- a/arch/x86/lib/atomic64_cx8_32.S +++ b/arch/x86/lib/atomic64_cx8_32.S @@ -6,7 +6,7 @@ */ #include -#include +#include .macro read64 reg movl %ebx, %eax diff --git a/arch/x86/lib/copy_page_64.S b/arch/x86/lib/copy_page_64.S index 2402d4c489d2..db4b4f9197c7 100644 --- a/arch/x86/lib/copy_page_64.S +++ b/arch/x86/lib/copy_page_64.S @@ -3,7 +3,7 @@ #include #include -#include +#include #include /* diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S index 77b9b2a3b5c8..57b79c577496 100644 --- a/arch/x86/lib/copy_user_64.S +++ b/arch/x86/lib/copy_user_64.S @@ -11,7 +11,7 @@ #include #include #include -#include +#include #include #include #include diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S index 1e299ac73c86..1cc9da6e29c7 100644 --- 
a/arch/x86/lib/memcpy_64.S +++ b/arch/x86/lib/memcpy_64.S @@ -4,7 +4,7 @@ #include #include #include -#include +#include #include .pushsection .noinstr.text, "ax" diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S index 41902fe8b859..64801010d312 100644 --- a/arch/x86/lib/memmove_64.S +++ b/arch/x86/lib/memmove_64.S @@ -8,7 +8,7 @@ */ #include #include -#include +#include #include #undef memmove diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S index 0bfd26e4ca9e..9827ae267f96 100644 --- a/arch/x86/lib/memset_64.S +++ b/arch/x86/lib/memset_64.S @@ -3,7 +3,7 @@ #include #include -#include +#include #include /* diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index f6fb1d218dcc..6bb74b5c238c 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -4,7 +4,7 @@ #include #include #include -#include +#include #include #include #include From 0c4c698569962d32b76ea8ae13334c81ea647be0 Mon Sep 17 00:00:00 2001 From: Juergen Gross Date: Thu, 11 Mar 2021 15:23:10 +0100 Subject: [PATCH 1441/1842] x86/alternative: Support not-feature commit dda7bb76484978316bb412a353789ebc5901de36 upstream. Add support for alternative patching for the case a feature is not present on the current CPU. For users of ALTERNATIVE() and friends, an inverted feature is specified by applying the ALT_NOT() macro to it, e.g.: ALTERNATIVE(old, new, ALT_NOT(feature)); Committer note: The decision to encode the NOT-bit in the feature bit itself is because a future change which would make objtool generate such alternative calls, would keep the code in objtool itself fairly simple. Also, this allows for the alternative macros to support the NOT feature without having to change them. Finally, the u16 cpuid member encoding the X86_FEATURE_ flags is not an ABI so if more bits are needed, cpuid itself can be enlarged or a flags field can be added to struct alt_instr after having considered the size growth in either cases. Signed-off-by: Juergen Gross Signed-off-by: Borislav Petkov Link: https://lkml.kernel.org/r/20210311142319.4723-6-jgross@suse.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/alternative.h | 3 +++ arch/x86/kernel/alternative.c | 20 +++++++++++++++----- 2 files changed, 18 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index aa5f246fdc07..c6690d1e1de6 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -6,6 +6,9 @@ #include #include +#define ALTINSTR_FLAG_INV (1 << 15) +#define ALT_NOT(feat) ((feat) | ALTINSTR_FLAG_INV) + #ifndef __ASSEMBLY__ #include diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 2400ad62f330..ff2987d5aeed 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -388,21 +388,31 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, */ for (a = start; a < end; a++) { int insn_buff_sz = 0; + /* Mask away "NOT" flag bit for feature to test. */ + u16 feature = a->cpuid & ~ALTINSTR_FLAG_INV; instr = (u8 *)&a->instr_offset + a->instr_offset; replacement = (u8 *)&a->repl_offset + a->repl_offset; BUG_ON(a->instrlen > sizeof(insn_buff)); - BUG_ON(a->cpuid >= (NCAPINTS + NBUGINTS) * 32); - if (!boot_cpu_has(a->cpuid)) { + BUG_ON(feature >= (NCAPINTS + NBUGINTS) * 32); + + /* + * Patch if either: + * - feature is present + * - feature not present but ALTINSTR_FLAG_INV is set to mean, + * patch if feature is *NOT* present. 
+ */ + if (!boot_cpu_has(feature) == !(a->cpuid & ALTINSTR_FLAG_INV)) { if (a->padlen > 1) optimize_nops(a, instr); continue; } - DPRINTK("feat: %d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d), pad: %d", - a->cpuid >> 5, - a->cpuid & 0x1f, + DPRINTK("feat: %s%d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d), pad: %d", + (a->cpuid & ALTINSTR_FLAG_INV) ? "!" : "", + feature >> 5, + feature & 0x1f, instr, instr, a->instrlen, replacement, a->replacementlen, a->padlen); From 341e6178c1cf6225e085de9eaba6216d624d641e Mon Sep 17 00:00:00 2001 From: Juergen Gross Date: Thu, 11 Mar 2021 15:23:11 +0100 Subject: [PATCH 1442/1842] x86/alternative: Support ALTERNATIVE_TERNARY commit e208b3c4a9748b2c17aa09ba663b5096ccf82dce upstream. Add ALTERNATIVE_TERNARY support for replacing an initial instruction with either of two instructions depending on a feature: ALTERNATIVE_TERNARY "default_instr", FEATURE_NR, "feature_on_instr", "feature_off_instr" which will start with "default_instr" and at patch time will, depending on FEATURE_NR being set or not, patch that with either "feature_on_instr" or "feature_off_instr". [ bp: Add comment ontop. ] Signed-off-by: Juergen Gross Signed-off-by: Borislav Petkov Acked-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20210311142319.4723-7-jgross@suse.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/alternative.h | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index c6690d1e1de6..be29b927ff1e 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -179,6 +179,11 @@ static inline int alternatives_text_reserved(void *start, void *end) ALTINSTR_REPLACEMENT(newinstr2, feature2, 2) \ ".popsection\n" +/* If @feature is set, patch in @newinstr_yes, otherwise @newinstr_no. */ +#define ALTERNATIVE_TERNARY(oldinstr, feature, newinstr_yes, newinstr_no) \ + ALTERNATIVE_2(oldinstr, newinstr_no, X86_FEATURE_ALWAYS, \ + newinstr_yes, feature) + #define ALTERNATIVE_3(oldinsn, newinsn1, feat1, newinsn2, feat2, newinsn3, feat3) \ OLDINSTR_3(oldinsn, 1, 2, 3) \ ".pushsection .altinstructions,\"a\"\n" \ @@ -210,6 +215,9 @@ static inline int alternatives_text_reserved(void *start, void *end) #define alternative_2(oldinstr, newinstr1, feature1, newinstr2, feature2) \ asm_inline volatile(ALTERNATIVE_2(oldinstr, newinstr1, feature1, newinstr2, feature2) ::: "memory") +#define alternative_ternary(oldinstr, feature, newinstr_yes, newinstr_no) \ + asm_inline volatile(ALTERNATIVE_TERNARY(oldinstr, feature, newinstr_yes, newinstr_no) ::: "memory") + /* * Alternative inline assembly with input. * @@ -380,6 +388,11 @@ static inline int alternatives_text_reserved(void *start, void *end) .popsection .endm +/* If @feature is set, patch in @newinstr_yes, otherwise @newinstr_no. */ +#define ALTERNATIVE_TERNARY(oldinstr, feature, newinstr_yes, newinstr_no) \ + ALTERNATIVE_2 oldinstr, newinstr_no, X86_FEATURE_ALWAYS, \ + newinstr_yes, feature + #endif /* __ASSEMBLY__ */ #endif /* _ASM_X86_ALTERNATIVE_H */ From fd80da64cffe952cfd71c1c60085ad2bad7ecb63 Mon Sep 17 00:00:00 2001 From: Juergen Gross Date: Thu, 11 Mar 2021 15:23:12 +0100 Subject: [PATCH 1443/1842] x86/alternative: Use ALTERNATIVE_TERNARY() in _static_cpu_has() commit 2fe2a2c7a97c9bc32acc79154b75e754280f7867 upstream. _static_cpu_has() contains a completely open coded version of ALTERNATIVE_TERNARY(). Replace that with the macro instead. 
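For reference, a hedged usage sketch of the helper being switched to (the
feature bit and instruction strings below are arbitrary examples, not taken
from this patch):

  /*
   * Sketch: the site starts out as NOP padding and, at patch time, ends
   * up as exactly one of the two replacements, depending on whether the
   * (example) feature bit is set.
   */
  static inline void example_barrier(void)
  {
          alternative_ternary("",                         /* oldinstr        */
                              X86_FEATURE_XMM2,           /* example feature */
                              "mfence",                   /* feature set     */
                              "lock; addl $0,-4(%%rsp)"); /* feature clear   */
  }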
Signed-off-by: Juergen Gross Signed-off-by: Borislav Petkov Link: https://lkml.kernel.org/r/20210311142319.4723-8-jgross@suse.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/cpufeature.h | 41 +++++++------------------------ 1 file changed, 9 insertions(+), 32 deletions(-) diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h index 619c1f80a2ab..f4cbc01c0bc4 100644 --- a/arch/x86/include/asm/cpufeature.h +++ b/arch/x86/include/asm/cpufeature.h @@ -8,6 +8,7 @@ #include #include +#include enum cpuid_leafs { @@ -172,39 +173,15 @@ extern void clear_cpu_cap(struct cpuinfo_x86 *c, unsigned int bit); */ static __always_inline bool _static_cpu_has(u16 bit) { - asm_volatile_goto("1: jmp 6f\n" - "2:\n" - ".skip -(((5f-4f) - (2b-1b)) > 0) * " - "((5f-4f) - (2b-1b)),0x90\n" - "3:\n" - ".section .altinstructions,\"a\"\n" - " .long 1b - .\n" /* src offset */ - " .long 4f - .\n" /* repl offset */ - " .word %P[always]\n" /* always replace */ - " .byte 3b - 1b\n" /* src len */ - " .byte 5f - 4f\n" /* repl len */ - " .byte 3b - 2b\n" /* pad len */ - ".previous\n" - ".section .altinstr_replacement,\"ax\"\n" - "4: jmp %l[t_no]\n" - "5:\n" - ".previous\n" - ".section .altinstructions,\"a\"\n" - " .long 1b - .\n" /* src offset */ - " .long 0\n" /* no replacement */ - " .word %P[feature]\n" /* feature bit */ - " .byte 3b - 1b\n" /* src len */ - " .byte 0\n" /* repl len */ - " .byte 0\n" /* pad len */ - ".previous\n" - ".section .altinstr_aux,\"ax\"\n" - "6:\n" - " testb %[bitnum],%[cap_byte]\n" - " jnz %l[t_yes]\n" - " jmp %l[t_no]\n" - ".previous\n" + asm_volatile_goto( + ALTERNATIVE_TERNARY("jmp 6f", %P[feature], "", "jmp %l[t_no]") + ".section .altinstr_aux,\"ax\"\n" + "6:\n" + " testb %[bitnum],%[cap_byte]\n" + " jnz %l[t_yes]\n" + " jmp %l[t_no]\n" + ".previous\n" : : [feature] "i" (bit), - [always] "i" (X86_FEATURE_ALWAYS), [bitnum] "i" (1 << (bit & 7)), [cap_byte] "m" (((const char *)boot_cpu_data.x86_capability)[bit >> 3]) : : t_yes, t_no); From a3d96c74395e162e880515d711ab96f5959856ec Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Mon, 2 Nov 2020 18:47:34 +0100 Subject: [PATCH 1444/1842] x86/insn: Rename insn_decode() to insn_decode_from_regs() commit 9e761296c52dcdb1aaa151b65bd39accb05740d9 upstream. Rename insn_decode() to insn_decode_from_regs() to denote that it receives regs as param and uses registers from there during decoding. Free the former name for a more generic version of the function. No functional changes. 
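A typical call site only switches to the new name; mirroring the
fixup_umip_exception() hunk in the diff below:

  if (!insn_decode_from_regs(&insn, regs, buf, nr_copied))
          return false;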
Signed-off-by: Borislav Petkov Link: https://lkml.kernel.org/r/20210304174237.31945-2-bp@alien8.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/insn-eval.h | 4 ++-- arch/x86/kernel/sev-es.c | 2 +- arch/x86/kernel/umip.c | 2 +- arch/x86/lib/insn-eval.c | 6 +++--- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/x86/include/asm/insn-eval.h b/arch/x86/include/asm/insn-eval.h index c1438f9239e4..594ad2f57d23 100644 --- a/arch/x86/include/asm/insn-eval.h +++ b/arch/x86/include/asm/insn-eval.h @@ -26,7 +26,7 @@ int insn_fetch_from_user(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE]); int insn_fetch_from_user_inatomic(struct pt_regs *regs, unsigned char buf[MAX_INSN_SIZE]); -bool insn_decode(struct insn *insn, struct pt_regs *regs, - unsigned char buf[MAX_INSN_SIZE], int buf_size); +bool insn_decode_from_regs(struct insn *insn, struct pt_regs *regs, + unsigned char buf[MAX_INSN_SIZE], int buf_size); #endif /* _ASM_X86_INSN_EVAL_H */ diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c index c222fab112cb..f441002c2327 100644 --- a/arch/x86/kernel/sev-es.c +++ b/arch/x86/kernel/sev-es.c @@ -236,7 +236,7 @@ static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt) return ES_EXCEPTION; } - if (!insn_decode(&ctxt->insn, ctxt->regs, buffer, res)) + if (!insn_decode_from_regs(&ctxt->insn, ctxt->regs, buffer, res)) return ES_DECODE_FAILED; } else { res = vc_fetch_insn_kernel(ctxt, buffer); diff --git a/arch/x86/kernel/umip.c b/arch/x86/kernel/umip.c index f6225bf22c02..8032f5f7eef9 100644 --- a/arch/x86/kernel/umip.c +++ b/arch/x86/kernel/umip.c @@ -356,7 +356,7 @@ bool fixup_umip_exception(struct pt_regs *regs) if (!nr_copied) return false; - if (!insn_decode(&insn, regs, buf, nr_copied)) + if (!insn_decode_from_regs(&insn, regs, buf, nr_copied)) return false; umip_inst = identify_insn(&insn); diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c index c6a19c88af54..3a7e307d42aa 100644 --- a/arch/x86/lib/insn-eval.c +++ b/arch/x86/lib/insn-eval.c @@ -1492,7 +1492,7 @@ int insn_fetch_from_user_inatomic(struct pt_regs *regs, unsigned char buf[MAX_IN } /** - * insn_decode() - Decode an instruction + * insn_decode_from_regs() - Decode an instruction * @insn: Structure to store decoded instruction * @regs: Structure with register values as seen when entering kernel mode * @buf: Buffer containing the instruction bytes @@ -1505,8 +1505,8 @@ int insn_fetch_from_user_inatomic(struct pt_regs *regs, unsigned char buf[MAX_IN * * True if instruction was decoded, False otherwise. */ -bool insn_decode(struct insn *insn, struct pt_regs *regs, - unsigned char buf[MAX_INSN_SIZE], int buf_size) +bool insn_decode_from_regs(struct insn *insn, struct pt_regs *regs, + unsigned char buf[MAX_INSN_SIZE], int buf_size) { int seg_defs; From 76c513c87f599bc013c582522323a1b117b8f501 Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Mon, 22 Feb 2021 13:34:40 +0100 Subject: [PATCH 1445/1842] x86/insn: Add a __ignore_sync_check__ marker commit d30c7b820be5c4777fe6c3b0c21f9d0064251e51 upstream. Add an explicit __ignore_sync_check__ marker which will be used to mark lines which are supposed to be ignored by file synchronization check scripts, its advantage being that it explicitly denotes such lines in the code. 
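Concretely, the marker is just a trailing comment on the line to be skipped;
the check scripts then hand a matching ignore pattern to the diff invocation
so such lines are not compared between the kernel and tools copies. An
illustrative marked include, reconstructed from the inat.h hunk below:

  #include <asm/inat_types.h> /* __ignore_sync_check__ */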
Signed-off-by: Borislav Petkov Reviewed-by: Masami Hiramatsu Link: https://lkml.kernel.org/r/20210304174237.31945-4-bp@alien8.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/inat.h | 2 +- arch/x86/include/asm/insn.h | 2 +- arch/x86/lib/inat.c | 2 +- arch/x86/lib/insn.c | 6 +++--- tools/arch/x86/include/asm/inat.h | 2 +- tools/arch/x86/include/asm/insn.h | 2 +- tools/arch/x86/lib/inat.c | 2 +- tools/arch/x86/lib/insn.c | 6 +++--- tools/objtool/sync-check.sh | 17 +++++++++++++---- tools/perf/check-headers.sh | 15 +++++++++++---- 10 files changed, 36 insertions(+), 20 deletions(-) diff --git a/arch/x86/include/asm/inat.h b/arch/x86/include/asm/inat.h index 4cf2ad521f65..b56c5741581a 100644 --- a/arch/x86/include/asm/inat.h +++ b/arch/x86/include/asm/inat.h @@ -6,7 +6,7 @@ * * Written by Masami Hiramatsu */ -#include +#include /* __ignore_sync_check__ */ /* * Internal bits. Don't use bitmasks directly, because these bits are diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h index a8c3d284fa46..17c130f1ba57 100644 --- a/arch/x86/include/asm/insn.h +++ b/arch/x86/include/asm/insn.h @@ -8,7 +8,7 @@ */ /* insn_attr_t is defined in inat.h */ -#include +#include /* __ignore_sync_check__ */ struct insn_field { union { diff --git a/arch/x86/lib/inat.c b/arch/x86/lib/inat.c index 12539fca75c4..b0f3b2a62ae2 100644 --- a/arch/x86/lib/inat.c +++ b/arch/x86/lib/inat.c @@ -4,7 +4,7 @@ * * Written by Masami Hiramatsu */ -#include +#include /* __ignore_sync_check__ */ /* Attribute tables are generated from opcode map */ #include "inat-tables.c" diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c index 404279563891..ce07af6ba5e9 100644 --- a/arch/x86/lib/insn.c +++ b/arch/x86/lib/insn.c @@ -10,10 +10,10 @@ #else #include #endif -#include -#include +#include /*__ignore_sync_check__ */ +#include /* __ignore_sync_check__ */ -#include +#include /* __ignore_sync_check__ */ /* Verify next sizeof(t) bytes can be on the same instruction */ #define validate_next(t, insn, n) \ diff --git a/tools/arch/x86/include/asm/inat.h b/tools/arch/x86/include/asm/inat.h index 877827b7c2c3..a61051400311 100644 --- a/tools/arch/x86/include/asm/inat.h +++ b/tools/arch/x86/include/asm/inat.h @@ -6,7 +6,7 @@ * * Written by Masami Hiramatsu */ -#include "inat_types.h" +#include "inat_types.h" /* __ignore_sync_check__ */ /* * Internal bits. 
Don't use bitmasks directly, because these bits are diff --git a/tools/arch/x86/include/asm/insn.h b/tools/arch/x86/include/asm/insn.h index 52c6262e6bfd..33d41982a5dd 100644 --- a/tools/arch/x86/include/asm/insn.h +++ b/tools/arch/x86/include/asm/insn.h @@ -8,7 +8,7 @@ */ /* insn_attr_t is defined in inat.h */ -#include "inat.h" +#include "inat.h" /* __ignore_sync_check__ */ struct insn_field { union { diff --git a/tools/arch/x86/lib/inat.c b/tools/arch/x86/lib/inat.c index 4f5ed49e1b4e..dfbcc6405941 100644 --- a/tools/arch/x86/lib/inat.c +++ b/tools/arch/x86/lib/inat.c @@ -4,7 +4,7 @@ * * Written by Masami Hiramatsu */ -#include "../include/asm/insn.h" +#include "../include/asm/insn.h" /* __ignore_sync_check__ */ /* Attribute tables are generated from opcode map */ #include "inat-tables.c" diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c index 0151dfc6da61..6c82e3526127 100644 --- a/tools/arch/x86/lib/insn.c +++ b/tools/arch/x86/lib/insn.c @@ -10,10 +10,10 @@ #else #include #endif -#include "../include/asm/inat.h" -#include "../include/asm/insn.h" +#include "../include/asm/inat.h" /* __ignore_sync_check__ */ +#include "../include/asm/insn.h" /* __ignore_sync_check__ */ -#include "../include/asm/emulate_prefix.h" +#include "../include/asm/emulate_prefix.h" /* __ignore_sync_check__ */ /* Verify next sizeof(t) bytes can be on the same instruction */ #define validate_next(t, insn, n) \ diff --git a/tools/objtool/sync-check.sh b/tools/objtool/sync-check.sh index 606a4b5e929f..4bbabaecab14 100755 --- a/tools/objtool/sync-check.sh +++ b/tools/objtool/sync-check.sh @@ -16,11 +16,14 @@ arch/x86/include/asm/emulate_prefix.h arch/x86/lib/x86-opcode-map.txt arch/x86/tools/gen-insn-attr-x86.awk include/linux/static_call_types.h -arch/x86/include/asm/inat.h -I '^#include [\"<]\(asm/\)*inat_types.h[\">]' -arch/x86/include/asm/insn.h -I '^#include [\"<]\(asm/\)*inat.h[\">]' -arch/x86/lib/inat.c -I '^#include [\"<]\(../include/\)*asm/insn.h[\">]' -arch/x86/lib/insn.c -I '^#include [\"<]\(../include/\)*asm/in\(at\|sn\).h[\">]' -I '^#include [\"<]\(../include/\)*asm/emulate_prefix.h[\">]' " + +SYNC_CHECK_FILES=' +arch/x86/include/asm/inat.h +arch/x86/include/asm/insn.h +arch/x86/lib/inat.c +arch/x86/lib/insn.c +' fi check_2 () { @@ -63,3 +66,9 @@ while read -r file_entry; do done <string syscall @@ -129,6 +136,10 @@ for i in $FILES; do check $i -B done +for i in $SYNC_CHECK_FILES; do + check $i '-I "^.*\/\*.*__ignore_sync_check__.*\*\/.*$"' +done + # diff with extra ignore lines check arch/x86/lib/memcpy_64.S '-I "^EXPORT_SYMBOL" -I "^#include " -I"^SYM_FUNC_START\(_LOCAL\)*(memcpy_\(erms\|orig\))"' check arch/x86/lib/memset_64.S '-I "^EXPORT_SYMBOL" -I "^#include " -I"^SYM_FUNC_START\(_LOCAL\)*(memset_\(erms\|orig\))"' @@ -137,10 +148,6 @@ check include/uapi/linux/mman.h '-I "^#include <\(uapi/\)*asm/mman.h>"' check include/linux/build_bug.h '-I "^#\(ifndef\|endif\)\( \/\/\)* static_assert$"' check include/linux/ctype.h '-I "isdigit("' check lib/ctype.c '-I "^EXPORT_SYMBOL" -I "^#include " -B' -check arch/x86/include/asm/inat.h '-I "^#include [\"<]\(asm/\)*inat_types.h[\">]"' -check arch/x86/include/asm/insn.h '-I "^#include [\"<]\(asm/\)*inat.h[\">]"' -check arch/x86/lib/inat.c '-I "^#include [\"<]\(../include/\)*asm/insn.h[\">]"' -check arch/x86/lib/insn.c '-I "^#include [\"<]\(../include/\)*asm/in\(at\|sn\).h[\">]" -I "^#include [\"<]\(../include/\)*asm/emulate_prefix.h[\">]"' # diff non-symmetric files check_2 tools/perf/arch/x86/entry/syscalls/syscall_64.tbl 
arch/x86/entry/syscalls/syscall_64.tbl From 6bc6875b82a0cb99212c4b78fe7606418888af30 Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Tue, 3 Nov 2020 17:28:30 +0100 Subject: [PATCH 1446/1842] x86/insn: Add an insn_decode() API commit 93281c4a96572a34504244969b938e035204778d upstream. Users of the instruction decoder should use this to decode instruction bytes. For that, have insn*() helpers return an int value to denote success/failure. When there's an error fetching the next insn byte and the insn falls short, return -ENODATA to denote that. While at it, make insn_get_opcode() more stricter as to whether what has seen so far is a valid insn and if not. Copy linux/kconfig.h for the tools-version of the decoder so that it can use IS_ENABLED(). Also, cast the INSN_MODE_KERN dummy define value to (enum insn_mode) for tools use of the decoder because perf tool builds with -Werror and errors out with -Werror=sign-compare otherwise. Signed-off-by: Borislav Petkov Acked-by: Masami Hiramatsu Link: https://lkml.kernel.org/r/20210304174237.31945-5-bp@alien8.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/insn.h | 24 +++- arch/x86/lib/insn.c | 216 +++++++++++++++++++++++------ tools/arch/x86/include/asm/insn.h | 24 +++- tools/arch/x86/lib/insn.c | 222 +++++++++++++++++++++++------- tools/include/linux/kconfig.h | 73 ++++++++++ 5 files changed, 452 insertions(+), 107 deletions(-) create mode 100644 tools/include/linux/kconfig.h diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h index 17c130f1ba57..546436b3c215 100644 --- a/arch/x86/include/asm/insn.h +++ b/arch/x86/include/asm/insn.h @@ -87,13 +87,23 @@ struct insn { #define X86_VEX_M_MAX 0x1f /* VEX3.M Maximum value */ extern void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64); -extern void insn_get_prefixes(struct insn *insn); -extern void insn_get_opcode(struct insn *insn); -extern void insn_get_modrm(struct insn *insn); -extern void insn_get_sib(struct insn *insn); -extern void insn_get_displacement(struct insn *insn); -extern void insn_get_immediate(struct insn *insn); -extern void insn_get_length(struct insn *insn); +extern int insn_get_prefixes(struct insn *insn); +extern int insn_get_opcode(struct insn *insn); +extern int insn_get_modrm(struct insn *insn); +extern int insn_get_sib(struct insn *insn); +extern int insn_get_displacement(struct insn *insn); +extern int insn_get_immediate(struct insn *insn); +extern int insn_get_length(struct insn *insn); + +enum insn_mode { + INSN_MODE_32, + INSN_MODE_64, + /* Mode is determined by the current kernel build. */ + INSN_MODE_KERN, + INSN_NUM_MODES, +}; + +extern int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m); /* Attribute will be determined after getting ModRM (for opcode groups) */ static inline void insn_get_attribute(struct insn *insn) diff --git a/arch/x86/lib/insn.c b/arch/x86/lib/insn.c index ce07af6ba5e9..24e89239dcca 100644 --- a/arch/x86/lib/insn.c +++ b/arch/x86/lib/insn.c @@ -13,6 +13,9 @@ #include /*__ignore_sync_check__ */ #include /* __ignore_sync_check__ */ +#include +#include + #include /* __ignore_sync_check__ */ /* Verify next sizeof(t) bytes can be on the same instruction */ @@ -97,8 +100,12 @@ static void insn_get_emulate_prefix(struct insn *insn) * Populates the @insn->prefixes bitmap, and updates @insn->next_byte * to point to the (first) opcode. No effect if @insn->prefixes.got * is already set. 
+ * + * * Returns: + * 0: on success + * < 0: on error */ -void insn_get_prefixes(struct insn *insn) +int insn_get_prefixes(struct insn *insn) { struct insn_field *prefixes = &insn->prefixes; insn_attr_t attr; @@ -106,7 +113,7 @@ void insn_get_prefixes(struct insn *insn) int i, nb; if (prefixes->got) - return; + return 0; insn_get_emulate_prefix(insn); @@ -217,8 +224,10 @@ void insn_get_prefixes(struct insn *insn) prefixes->got = 1; + return 0; + err_out: - return; + return -ENODATA; } /** @@ -230,16 +239,25 @@ void insn_get_prefixes(struct insn *insn) * If necessary, first collects any preceding (prefix) bytes. * Sets @insn->opcode.value = opcode1. No effect if @insn->opcode.got * is already 1. + * + * Returns: + * 0: on success + * < 0: on error */ -void insn_get_opcode(struct insn *insn) +int insn_get_opcode(struct insn *insn) { struct insn_field *opcode = &insn->opcode; + int pfx_id, ret; insn_byte_t op; - int pfx_id; + if (opcode->got) - return; - if (!insn->prefixes.got) - insn_get_prefixes(insn); + return 0; + + if (!insn->prefixes.got) { + ret = insn_get_prefixes(insn); + if (ret) + return ret; + } /* Get first opcode */ op = get_next(insn_byte_t, insn); @@ -254,9 +272,13 @@ void insn_get_opcode(struct insn *insn) insn->attr = inat_get_avx_attribute(op, m, p); if ((inat_must_evex(insn->attr) && !insn_is_evex(insn)) || (!inat_accept_vex(insn->attr) && - !inat_is_group(insn->attr))) - insn->attr = 0; /* This instruction is bad */ - goto end; /* VEX has only 1 byte for opcode */ + !inat_is_group(insn->attr))) { + /* This instruction is bad */ + insn->attr = 0; + return -EINVAL; + } + /* VEX has only 1 byte for opcode */ + goto end; } insn->attr = inat_get_opcode_attribute(op); @@ -267,13 +289,18 @@ void insn_get_opcode(struct insn *insn) pfx_id = insn_last_prefix_id(insn); insn->attr = inat_get_escape_attribute(op, pfx_id, insn->attr); } - if (inat_must_vex(insn->attr)) - insn->attr = 0; /* This instruction is bad */ + + if (inat_must_vex(insn->attr)) { + /* This instruction is bad */ + insn->attr = 0; + return -EINVAL; + } end: opcode->got = 1; + return 0; err_out: - return; + return -ENODATA; } /** @@ -283,15 +310,25 @@ void insn_get_opcode(struct insn *insn) * Populates @insn->modrm and updates @insn->next_byte to point past the * ModRM byte, if any. If necessary, first collects the preceding bytes * (prefixes and opcode(s)). No effect if @insn->modrm.got is already 1. 
+ * + * Returns: + * 0: on success + * < 0: on error */ -void insn_get_modrm(struct insn *insn) +int insn_get_modrm(struct insn *insn) { struct insn_field *modrm = &insn->modrm; insn_byte_t pfx_id, mod; + int ret; + if (modrm->got) - return; - if (!insn->opcode.got) - insn_get_opcode(insn); + return 0; + + if (!insn->opcode.got) { + ret = insn_get_opcode(insn); + if (ret) + return ret; + } if (inat_has_modrm(insn->attr)) { mod = get_next(insn_byte_t, insn); @@ -301,17 +338,22 @@ void insn_get_modrm(struct insn *insn) pfx_id = insn_last_prefix_id(insn); insn->attr = inat_get_group_attribute(mod, pfx_id, insn->attr); - if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) - insn->attr = 0; /* This is bad */ + if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) { + /* Bad insn */ + insn->attr = 0; + return -EINVAL; + } } } if (insn->x86_64 && inat_is_force64(insn->attr)) insn->opnd_bytes = 8; + modrm->got = 1; + return 0; err_out: - return; + return -ENODATA; } @@ -325,11 +367,16 @@ void insn_get_modrm(struct insn *insn) int insn_rip_relative(struct insn *insn) { struct insn_field *modrm = &insn->modrm; + int ret; if (!insn->x86_64) return 0; - if (!modrm->got) - insn_get_modrm(insn); + + if (!modrm->got) { + ret = insn_get_modrm(insn); + if (ret) + return 0; + } /* * For rip-relative instructions, the mod field (top 2 bits) * is zero and the r/m field (bottom 3 bits) is 0x5. @@ -343,15 +390,25 @@ int insn_rip_relative(struct insn *insn) * * If necessary, first collects the instruction up to and including the * ModRM byte. + * + * Returns: + * 0: if decoding succeeded + * < 0: otherwise. */ -void insn_get_sib(struct insn *insn) +int insn_get_sib(struct insn *insn) { insn_byte_t modrm; + int ret; if (insn->sib.got) - return; - if (!insn->modrm.got) - insn_get_modrm(insn); + return 0; + + if (!insn->modrm.got) { + ret = insn_get_modrm(insn); + if (ret) + return ret; + } + if (insn->modrm.nbytes) { modrm = (insn_byte_t)insn->modrm.value; if (insn->addr_bytes != 2 && @@ -362,8 +419,10 @@ void insn_get_sib(struct insn *insn) } insn->sib.got = 1; + return 0; + err_out: - return; + return -ENODATA; } @@ -374,15 +433,25 @@ void insn_get_sib(struct insn *insn) * If necessary, first collects the instruction up to and including the * SIB byte. * Displacement value is sign-expanded. + * + * * Returns: + * 0: if decoding succeeded + * < 0: otherwise. */ -void insn_get_displacement(struct insn *insn) +int insn_get_displacement(struct insn *insn) { insn_byte_t mod, rm, base; + int ret; if (insn->displacement.got) - return; - if (!insn->sib.got) - insn_get_sib(insn); + return 0; + + if (!insn->sib.got) { + ret = insn_get_sib(insn); + if (ret) + return ret; + } + if (insn->modrm.nbytes) { /* * Interpreting the modrm byte: @@ -425,9 +494,10 @@ void insn_get_displacement(struct insn *insn) } out: insn->displacement.got = 1; + return 0; err_out: - return; + return -ENODATA; } /* Decode moffset16/32/64. Return 0 if failed */ @@ -538,20 +608,30 @@ static int __get_immptr(struct insn *insn) } /** - * insn_get_immediate() - Get the immediates of instruction + * insn_get_immediate() - Get the immediate in an instruction * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * displacement bytes. * Basically, most of immediates are sign-expanded. 
Unsigned-value can be - * get by bit masking with ((1 << (nbytes * 8)) - 1) + * computed by bit masking with ((1 << (nbytes * 8)) - 1) + * + * Returns: + * 0: on success + * < 0: on error */ -void insn_get_immediate(struct insn *insn) +int insn_get_immediate(struct insn *insn) { + int ret; + if (insn->immediate.got) - return; - if (!insn->displacement.got) - insn_get_displacement(insn); + return 0; + + if (!insn->displacement.got) { + ret = insn_get_displacement(insn); + if (ret) + return ret; + } if (inat_has_moffset(insn->attr)) { if (!__get_moffset(insn)) @@ -604,9 +684,10 @@ void insn_get_immediate(struct insn *insn) } done: insn->immediate.got = 1; + return 0; err_out: - return; + return -ENODATA; } /** @@ -615,13 +696,58 @@ void insn_get_immediate(struct insn *insn) * * If necessary, first collects the instruction up to and including the * immediates bytes. - */ -void insn_get_length(struct insn *insn) + * + * Returns: + * - 0 on success + * - < 0 on error +*/ +int insn_get_length(struct insn *insn) { + int ret; + if (insn->length) - return; - if (!insn->immediate.got) - insn_get_immediate(insn); + return 0; + + if (!insn->immediate.got) { + ret = insn_get_immediate(insn); + if (ret) + return ret; + } + insn->length = (unsigned char)((unsigned long)insn->next_byte - (unsigned long)insn->kaddr); + + return 0; +} + +/** + * insn_decode() - Decode an x86 instruction + * @insn: &struct insn to be initialized + * @kaddr: address (in kernel memory) of instruction (or copy thereof) + * @buf_len: length of the insn buffer at @kaddr + * @m: insn mode, see enum insn_mode + * + * Returns: + * 0: if decoding succeeded + * < 0: otherwise. + */ +int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m) +{ + int ret; + +/* #define INSN_MODE_KERN -1 __ignore_sync_check__ mode is only valid in the kernel */ + + if (m == INSN_MODE_KERN) + insn_init(insn, kaddr, buf_len, IS_ENABLED(CONFIG_X86_64)); + else + insn_init(insn, kaddr, buf_len, m == INSN_MODE_64); + + ret = insn_get_length(insn); + if (ret) + return ret; + + if (insn_complete(insn)) + return 0; + + return -EINVAL; } diff --git a/tools/arch/x86/include/asm/insn.h b/tools/arch/x86/include/asm/insn.h index 33d41982a5dd..621ab64a6d27 100644 --- a/tools/arch/x86/include/asm/insn.h +++ b/tools/arch/x86/include/asm/insn.h @@ -87,13 +87,23 @@ struct insn { #define X86_VEX_M_MAX 0x1f /* VEX3.M Maximum value */ extern void insn_init(struct insn *insn, const void *kaddr, int buf_len, int x86_64); -extern void insn_get_prefixes(struct insn *insn); -extern void insn_get_opcode(struct insn *insn); -extern void insn_get_modrm(struct insn *insn); -extern void insn_get_sib(struct insn *insn); -extern void insn_get_displacement(struct insn *insn); -extern void insn_get_immediate(struct insn *insn); -extern void insn_get_length(struct insn *insn); +extern int insn_get_prefixes(struct insn *insn); +extern int insn_get_opcode(struct insn *insn); +extern int insn_get_modrm(struct insn *insn); +extern int insn_get_sib(struct insn *insn); +extern int insn_get_displacement(struct insn *insn); +extern int insn_get_immediate(struct insn *insn); +extern int insn_get_length(struct insn *insn); + +enum insn_mode { + INSN_MODE_32, + INSN_MODE_64, + /* Mode is determined by the current kernel build. 
*/ + INSN_MODE_KERN, + INSN_NUM_MODES, +}; + +extern int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m); /* Attribute will be determined after getting ModRM (for opcode groups) */ static inline void insn_get_attribute(struct insn *insn) diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c index 6c82e3526127..819c82529fe8 100644 --- a/tools/arch/x86/lib/insn.c +++ b/tools/arch/x86/lib/insn.c @@ -10,10 +10,13 @@ #else #include #endif -#include "../include/asm/inat.h" /* __ignore_sync_check__ */ -#include "../include/asm/insn.h" /* __ignore_sync_check__ */ +#include /* __ignore_sync_check__ */ +#include /* __ignore_sync_check__ */ -#include "../include/asm/emulate_prefix.h" /* __ignore_sync_check__ */ +#include +#include + +#include /* __ignore_sync_check__ */ /* Verify next sizeof(t) bytes can be on the same instruction */ #define validate_next(t, insn, n) \ @@ -97,8 +100,12 @@ static void insn_get_emulate_prefix(struct insn *insn) * Populates the @insn->prefixes bitmap, and updates @insn->next_byte * to point to the (first) opcode. No effect if @insn->prefixes.got * is already set. + * + * * Returns: + * 0: on success + * < 0: on error */ -void insn_get_prefixes(struct insn *insn) +int insn_get_prefixes(struct insn *insn) { struct insn_field *prefixes = &insn->prefixes; insn_attr_t attr; @@ -106,7 +113,7 @@ void insn_get_prefixes(struct insn *insn) int i, nb; if (prefixes->got) - return; + return 0; insn_get_emulate_prefix(insn); @@ -217,8 +224,10 @@ void insn_get_prefixes(struct insn *insn) prefixes->got = 1; + return 0; + err_out: - return; + return -ENODATA; } /** @@ -230,16 +239,25 @@ void insn_get_prefixes(struct insn *insn) * If necessary, first collects any preceding (prefix) bytes. * Sets @insn->opcode.value = opcode1. No effect if @insn->opcode.got * is already 1. + * + * Returns: + * 0: on success + * < 0: on error */ -void insn_get_opcode(struct insn *insn) +int insn_get_opcode(struct insn *insn) { struct insn_field *opcode = &insn->opcode; + int pfx_id, ret; insn_byte_t op; - int pfx_id; + if (opcode->got) - return; - if (!insn->prefixes.got) - insn_get_prefixes(insn); + return 0; + + if (!insn->prefixes.got) { + ret = insn_get_prefixes(insn); + if (ret) + return ret; + } /* Get first opcode */ op = get_next(insn_byte_t, insn); @@ -254,9 +272,13 @@ void insn_get_opcode(struct insn *insn) insn->attr = inat_get_avx_attribute(op, m, p); if ((inat_must_evex(insn->attr) && !insn_is_evex(insn)) || (!inat_accept_vex(insn->attr) && - !inat_is_group(insn->attr))) - insn->attr = 0; /* This instruction is bad */ - goto end; /* VEX has only 1 byte for opcode */ + !inat_is_group(insn->attr))) { + /* This instruction is bad */ + insn->attr = 0; + return -EINVAL; + } + /* VEX has only 1 byte for opcode */ + goto end; } insn->attr = inat_get_opcode_attribute(op); @@ -267,13 +289,18 @@ void insn_get_opcode(struct insn *insn) pfx_id = insn_last_prefix_id(insn); insn->attr = inat_get_escape_attribute(op, pfx_id, insn->attr); } - if (inat_must_vex(insn->attr)) - insn->attr = 0; /* This instruction is bad */ + + if (inat_must_vex(insn->attr)) { + /* This instruction is bad */ + insn->attr = 0; + return -EINVAL; + } end: opcode->got = 1; + return 0; err_out: - return; + return -ENODATA; } /** @@ -283,15 +310,25 @@ void insn_get_opcode(struct insn *insn) * Populates @insn->modrm and updates @insn->next_byte to point past the * ModRM byte, if any. If necessary, first collects the preceding bytes * (prefixes and opcode(s)). 
No effect if @insn->modrm.got is already 1. + * + * Returns: + * 0: on success + * < 0: on error */ -void insn_get_modrm(struct insn *insn) +int insn_get_modrm(struct insn *insn) { struct insn_field *modrm = &insn->modrm; insn_byte_t pfx_id, mod; + int ret; + if (modrm->got) - return; - if (!insn->opcode.got) - insn_get_opcode(insn); + return 0; + + if (!insn->opcode.got) { + ret = insn_get_opcode(insn); + if (ret) + return ret; + } if (inat_has_modrm(insn->attr)) { mod = get_next(insn_byte_t, insn); @@ -301,17 +338,22 @@ void insn_get_modrm(struct insn *insn) pfx_id = insn_last_prefix_id(insn); insn->attr = inat_get_group_attribute(mod, pfx_id, insn->attr); - if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) - insn->attr = 0; /* This is bad */ + if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) { + /* Bad insn */ + insn->attr = 0; + return -EINVAL; + } } } if (insn->x86_64 && inat_is_force64(insn->attr)) insn->opnd_bytes = 8; + modrm->got = 1; + return 0; err_out: - return; + return -ENODATA; } @@ -325,11 +367,16 @@ void insn_get_modrm(struct insn *insn) int insn_rip_relative(struct insn *insn) { struct insn_field *modrm = &insn->modrm; + int ret; if (!insn->x86_64) return 0; - if (!modrm->got) - insn_get_modrm(insn); + + if (!modrm->got) { + ret = insn_get_modrm(insn); + if (ret) + return 0; + } /* * For rip-relative instructions, the mod field (top 2 bits) * is zero and the r/m field (bottom 3 bits) is 0x5. @@ -343,15 +390,25 @@ int insn_rip_relative(struct insn *insn) * * If necessary, first collects the instruction up to and including the * ModRM byte. + * + * Returns: + * 0: if decoding succeeded + * < 0: otherwise. */ -void insn_get_sib(struct insn *insn) +int insn_get_sib(struct insn *insn) { insn_byte_t modrm; + int ret; if (insn->sib.got) - return; - if (!insn->modrm.got) - insn_get_modrm(insn); + return 0; + + if (!insn->modrm.got) { + ret = insn_get_modrm(insn); + if (ret) + return ret; + } + if (insn->modrm.nbytes) { modrm = (insn_byte_t)insn->modrm.value; if (insn->addr_bytes != 2 && @@ -362,8 +419,10 @@ void insn_get_sib(struct insn *insn) } insn->sib.got = 1; + return 0; + err_out: - return; + return -ENODATA; } @@ -374,15 +433,25 @@ void insn_get_sib(struct insn *insn) * If necessary, first collects the instruction up to and including the * SIB byte. * Displacement value is sign-expanded. + * + * * Returns: + * 0: if decoding succeeded + * < 0: otherwise. */ -void insn_get_displacement(struct insn *insn) +int insn_get_displacement(struct insn *insn) { insn_byte_t mod, rm, base; + int ret; if (insn->displacement.got) - return; - if (!insn->sib.got) - insn_get_sib(insn); + return 0; + + if (!insn->sib.got) { + ret = insn_get_sib(insn); + if (ret) + return ret; + } + if (insn->modrm.nbytes) { /* * Interpreting the modrm byte: @@ -425,9 +494,10 @@ void insn_get_displacement(struct insn *insn) } out: insn->displacement.got = 1; + return 0; err_out: - return; + return -ENODATA; } /* Decode moffset16/32/64. Return 0 if failed */ @@ -538,20 +608,30 @@ static int __get_immptr(struct insn *insn) } /** - * insn_get_immediate() - Get the immediates of instruction + * insn_get_immediate() - Get the immediate in an instruction * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * displacement bytes. * Basically, most of immediates are sign-expanded. 
Unsigned-value can be - * get by bit masking with ((1 << (nbytes * 8)) - 1) + * computed by bit masking with ((1 << (nbytes * 8)) - 1) + * + * Returns: + * 0: on success + * < 0: on error */ -void insn_get_immediate(struct insn *insn) +int insn_get_immediate(struct insn *insn) { + int ret; + if (insn->immediate.got) - return; - if (!insn->displacement.got) - insn_get_displacement(insn); + return 0; + + if (!insn->displacement.got) { + ret = insn_get_displacement(insn); + if (ret) + return ret; + } if (inat_has_moffset(insn->attr)) { if (!__get_moffset(insn)) @@ -604,9 +684,10 @@ void insn_get_immediate(struct insn *insn) } done: insn->immediate.got = 1; + return 0; err_out: - return; + return -ENODATA; } /** @@ -615,13 +696,58 @@ void insn_get_immediate(struct insn *insn) * * If necessary, first collects the instruction up to and including the * immediates bytes. - */ -void insn_get_length(struct insn *insn) + * + * Returns: + * - 0 on success + * - < 0 on error +*/ +int insn_get_length(struct insn *insn) { + int ret; + if (insn->length) - return; - if (!insn->immediate.got) - insn_get_immediate(insn); + return 0; + + if (!insn->immediate.got) { + ret = insn_get_immediate(insn); + if (ret) + return ret; + } + insn->length = (unsigned char)((unsigned long)insn->next_byte - (unsigned long)insn->kaddr); + + return 0; +} + +/** + * insn_decode() - Decode an x86 instruction + * @insn: &struct insn to be initialized + * @kaddr: address (in kernel memory) of instruction (or copy thereof) + * @buf_len: length of the insn buffer at @kaddr + * @m: insn mode, see enum insn_mode + * + * Returns: + * 0: if decoding succeeded + * < 0: otherwise. + */ +int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m) +{ + int ret; + +#define INSN_MODE_KERN (enum insn_mode)-1 /* __ignore_sync_check__ mode is only valid in the kernel */ + + if (m == INSN_MODE_KERN) + insn_init(insn, kaddr, buf_len, IS_ENABLED(CONFIG_X86_64)); + else + insn_init(insn, kaddr, buf_len, m == INSN_MODE_64); + + ret = insn_get_length(insn); + if (ret) + return ret; + + if (insn_complete(insn)) + return 0; + + return -EINVAL; } diff --git a/tools/include/linux/kconfig.h b/tools/include/linux/kconfig.h new file mode 100644 index 000000000000..1555a0c4f345 --- /dev/null +++ b/tools/include/linux/kconfig.h @@ -0,0 +1,73 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _TOOLS_LINUX_KCONFIG_H +#define _TOOLS_LINUX_KCONFIG_H + +/* CONFIG_CC_VERSION_TEXT (Do not delete this comment. See help in Kconfig) */ + +#ifdef CONFIG_CPU_BIG_ENDIAN +#define __BIG_ENDIAN 4321 +#else +#define __LITTLE_ENDIAN 1234 +#endif + +#define __ARG_PLACEHOLDER_1 0, +#define __take_second_arg(__ignored, val, ...) val + +/* + * The use of "&&" / "||" is limited in certain expressions. + * The following enable to calculate "and" / "or" with macro expansion only. + */ +#define __and(x, y) ___and(x, y) +#define ___and(x, y) ____and(__ARG_PLACEHOLDER_##x, y) +#define ____and(arg1_or_junk, y) __take_second_arg(arg1_or_junk y, 0) + +#define __or(x, y) ___or(x, y) +#define ___or(x, y) ____or(__ARG_PLACEHOLDER_##x, y) +#define ____or(arg1_or_junk, y) __take_second_arg(arg1_or_junk 1, y) + +/* + * Helper macros to use CONFIG_ options in C/CPP expressions. Note that + * these only work with boolean and tristate options. + */ + +/* + * Getting something that works in C and CPP for an arg that may or may + * not be defined is tricky. 
Here, if we have "#define CONFIG_BOOGER 1" + * we match on the placeholder define, insert the "0," for arg1 and generate + * the triplet (0, 1, 0). Then the last step cherry picks the 2nd arg (a one). + * When CONFIG_BOOGER is not defined, we generate a (... 1, 0) pair, and when + * the last step cherry picks the 2nd arg, we get a zero. + */ +#define __is_defined(x) ___is_defined(x) +#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val) +#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0) + +/* + * IS_BUILTIN(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'y', 0 + * otherwise. For boolean options, this is equivalent to + * IS_ENABLED(CONFIG_FOO). + */ +#define IS_BUILTIN(option) __is_defined(option) + +/* + * IS_MODULE(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'm', 0 + * otherwise. + */ +#define IS_MODULE(option) __is_defined(option##_MODULE) + +/* + * IS_REACHABLE(CONFIG_FOO) evaluates to 1 if the currently compiled + * code can call a function defined in code compiled based on CONFIG_FOO. + * This is similar to IS_ENABLED(), but returns false when invoked from + * built-in code when CONFIG_FOO is set to 'm'. + */ +#define IS_REACHABLE(option) __or(IS_BUILTIN(option), \ + __and(IS_MODULE(option), __is_defined(MODULE))) + +/* + * IS_ENABLED(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'y' or 'm', + * 0 otherwise. + */ +#define IS_ENABLED(option) __or(IS_BUILTIN(option), IS_MODULE(option)) + +#endif /* _TOOLS_LINUX_KCONFIG_H */ From e6f8dc86a1c15b862486a61abcb54b88e8c177e3 Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Thu, 19 Nov 2020 19:20:18 +0100 Subject: [PATCH 1447/1842] x86/insn-eval: Handle return values from the decoder commit 6e8c83d2a3afbfd5ee019ec720b75a42df515caa upstream. Now that the different instruction-inspecting functions return a value, test that and return early from callers if error has been encountered. While at it, do not call insn_get_modrm() when calling insn_get_displacement() because latter will make sure to call insn_get_modrm() if ModRM hasn't been parsed yet. Signed-off-by: Borislav Petkov Link: https://lkml.kernel.org/r/20210304174237.31945-6-bp@alien8.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/lib/insn-eval.c | 34 +++++++++++++++++++++------------- 1 file changed, 21 insertions(+), 13 deletions(-) diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c index 3a7e307d42aa..ffc8b7dcf1fe 100644 --- a/arch/x86/lib/insn-eval.c +++ b/arch/x86/lib/insn-eval.c @@ -928,10 +928,11 @@ static int get_seg_base_limit(struct insn *insn, struct pt_regs *regs, static int get_eff_addr_reg(struct insn *insn, struct pt_regs *regs, int *regoff, long *eff_addr) { - insn_get_modrm(insn); + int ret; - if (!insn->modrm.nbytes) - return -EINVAL; + ret = insn_get_modrm(insn); + if (ret) + return ret; if (X86_MODRM_MOD(insn->modrm.value) != 3) return -EINVAL; @@ -977,14 +978,14 @@ static int get_eff_addr_modrm(struct insn *insn, struct pt_regs *regs, int *regoff, long *eff_addr) { long tmp; + int ret; if (insn->addr_bytes != 8 && insn->addr_bytes != 4) return -EINVAL; - insn_get_modrm(insn); - - if (!insn->modrm.nbytes) - return -EINVAL; + ret = insn_get_modrm(insn); + if (ret) + return ret; if (X86_MODRM_MOD(insn->modrm.value) > 2) return -EINVAL; @@ -1106,18 +1107,21 @@ static int get_eff_addr_modrm_16(struct insn *insn, struct pt_regs *regs, * @base_offset will have a register, as an offset from the base of pt_regs, * that can be used to resolve the associated segment. 
* - * -EINVAL on error. + * Negative value on error. */ static int get_eff_addr_sib(struct insn *insn, struct pt_regs *regs, int *base_offset, long *eff_addr) { long base, indx; int indx_offset; + int ret; if (insn->addr_bytes != 8 && insn->addr_bytes != 4) return -EINVAL; - insn_get_modrm(insn); + ret = insn_get_modrm(insn); + if (ret) + return ret; if (!insn->modrm.nbytes) return -EINVAL; @@ -1125,7 +1129,9 @@ static int get_eff_addr_sib(struct insn *insn, struct pt_regs *regs, if (X86_MODRM_MOD(insn->modrm.value) > 2) return -EINVAL; - insn_get_sib(insn); + ret = insn_get_sib(insn); + if (ret) + return ret; if (!insn->sib.nbytes) return -EINVAL; @@ -1194,8 +1200,8 @@ static void __user *get_addr_ref_16(struct insn *insn, struct pt_regs *regs) short eff_addr; long tmp; - insn_get_modrm(insn); - insn_get_displacement(insn); + if (insn_get_displacement(insn)) + goto out; if (insn->addr_bytes != 2) goto out; @@ -1529,7 +1535,9 @@ bool insn_decode_from_regs(struct insn *insn, struct pt_regs *regs, insn->addr_bytes = INSN_CODE_SEG_ADDR_SZ(seg_defs); insn->opnd_bytes = INSN_CODE_SEG_OPND_SZ(seg_defs); - insn_get_length(insn); + if (insn_get_length(insn)) + return false; + if (buf_size < insn->length) return false; From d9cd21911498a9b423e2bdf728b283e4507e968e Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Fri, 6 Nov 2020 19:37:25 +0100 Subject: [PATCH 1448/1842] x86/alternative: Use insn_decode() commit 63c66cde7bbcc79aac14b25861c5b2495eede57b upstream. No functional changes, just simplification. Signed-off-by: Borislav Petkov Link: https://lkml.kernel.org/r/20210304174237.31945-10-bp@alien8.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/alternative.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index ff2987d5aeed..9da3267ec3b5 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -1284,15 +1284,15 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr, const void *opcode, size_t len, const void *emulate) { struct insn insn; + int ret; memcpy((void *)tp->text, opcode, len); if (!emulate) emulate = opcode; - kernel_insn_init(&insn, emulate, MAX_INSN_SIZE); - insn_get_length(&insn); + ret = insn_decode(&insn, emulate, MAX_INSN_SIZE, INSN_MODE_KERN); - BUG_ON(!insn_complete(&insn)); + BUG_ON(ret < 0); BUG_ON(len != insn.length); tp->rel_addr = addr - (void *)_stext; From 9a6471666b7387ba0af70d504fe1602cc3d3e5b2 Mon Sep 17 00:00:00 2001 From: Ben Hutchings Date: Mon, 11 Jul 2022 00:43:31 +0200 Subject: [PATCH 1449/1842] x86: Add insn_decode_kernel() This was done by commit 52fa82c21f64e900a72437269a5cc9e0034b424e upstream, but this backport avoids changing all callers of the old decoder API. 
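For reference, a minimal sketch of how a caller migrates to the wrapper (the function and buffer names here are illustrative, not taken from the tree; the error handling shown is an assumption):

  #include <asm/insn.h>

  static int example_decode(const void *kaddr)
  {
          struct insn insn;
          int ret;

          /*
           * Old pattern, as replaced in text_poke_loc_init() by the
           * previous patch:
           *   kernel_insn_init(&insn, kaddr, MAX_INSN_SIZE);
           *   insn_get_length(&insn);
           *   if (!insn_complete(&insn)) ...
           */

          /* New pattern: one call, failures come back as a return value. */
          ret = insn_decode_kernel(&insn, kaddr);
          if (ret < 0)
                  return ret;     /* e.g. -ENODATA or -EINVAL from the decoder */

          return insn.length;     /* decode succeeded, length is valid */
  }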
Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/insn.h | 2 ++ arch/x86/kernel/alternative.c | 2 +- tools/arch/x86/include/asm/insn.h | 2 ++ 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/insn.h b/arch/x86/include/asm/insn.h index 546436b3c215..0da37756917a 100644 --- a/arch/x86/include/asm/insn.h +++ b/arch/x86/include/asm/insn.h @@ -105,6 +105,8 @@ enum insn_mode { extern int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m); +#define insn_decode_kernel(_insn, _ptr) insn_decode((_insn), (_ptr), MAX_INSN_SIZE, INSN_MODE_KERN) + /* Attribute will be determined after getting ModRM (for opcode groups) */ static inline void insn_get_attribute(struct insn *insn) { diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 9da3267ec3b5..75b9ab3d7099 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -1290,7 +1290,7 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr, if (!emulate) emulate = opcode; - ret = insn_decode(&insn, emulate, MAX_INSN_SIZE, INSN_MODE_KERN); + ret = insn_decode_kernel(&insn, emulate); BUG_ON(ret < 0); BUG_ON(len != insn.length); diff --git a/tools/arch/x86/include/asm/insn.h b/tools/arch/x86/include/asm/insn.h index 621ab64a6d27..636ec02793a7 100644 --- a/tools/arch/x86/include/asm/insn.h +++ b/tools/arch/x86/include/asm/insn.h @@ -105,6 +105,8 @@ enum insn_mode { extern int insn_decode(struct insn *insn, const void *kaddr, int buf_len, enum insn_mode m); +#define insn_decode_kernel(_insn, _ptr) insn_decode((_insn), (_ptr), MAX_INSN_SIZE, INSN_MODE_KERN) + /* Attribute will be determined after getting ModRM (for opcode groups) */ static inline void insn_get_attribute(struct insn *insn) { From e68db6f780c6e0ec777045ece0880f5764617394 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:01 +0100 Subject: [PATCH 1450/1842] x86/alternatives: Optimize optimize_nops() commit 23c1ad538f4f371bdb67d8a112314842d5db7e5a upstream. Currently, optimize_nops() scans to see if the alternative starts with NOPs. However, the emit pattern is: 141: \oldinstr 142: .skip (len-(142b-141b)), 0x90 That is, when 'oldinstr' is short, the tail is padded with NOPs. This case never gets optimized. Rewrite optimize_nops() to replace any trailing string of NOPs inside the alternative to larger NOPs. Also run it irrespective of patching, replacing NOPs in both the original and replaced code. A direct consequence is that 'padlen' becomes superfluous, so remove it. [ bp: - Adjust commit message - remove a stale comment about needing to pad - add a comment in optimize_nops() - exit early if the NOP verif. 
loop catches a mismatch - function should not not add NOPs in that case - fix the "optimized NOPs" offsets output ] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Link: https://lkml.kernel.org/r/20210326151259.442992235@infradead.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/alternative.h | 17 ++----- arch/x86/kernel/alternative.c | 49 ++++++++++++------- tools/objtool/arch/x86/include/arch_special.h | 2 +- 3 files changed, 37 insertions(+), 31 deletions(-) diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index be29b927ff1e..9965638d184f 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -65,7 +65,6 @@ struct alt_instr { u16 cpuid; /* cpuid bit set for replacement */ u8 instrlen; /* length of original instruction */ u8 replacementlen; /* length of new instruction */ - u8 padlen; /* length of build-time padding */ } __packed; /* @@ -104,7 +103,6 @@ static inline int alternatives_text_reserved(void *start, void *end) #define alt_end_marker "663" #define alt_slen "662b-661b" -#define alt_pad_len alt_end_marker"b-662b" #define alt_total_slen alt_end_marker"b-661b" #define alt_rlen(num) e_replacement(num)"f-"b_replacement(num)"f" @@ -151,8 +149,7 @@ static inline int alternatives_text_reserved(void *start, void *end) " .long " b_replacement(num)"f - .\n" /* new instruction */ \ " .word " __stringify(feature) "\n" /* feature bit */ \ " .byte " alt_total_slen "\n" /* source len */ \ - " .byte " alt_rlen(num) "\n" /* replacement len */ \ - " .byte " alt_pad_len "\n" /* pad len */ + " .byte " alt_rlen(num) "\n" /* replacement len */ #define ALTINSTR_REPLACEMENT(newinstr, feature, num) /* replacement */ \ "# ALT: replacement " #num "\n" \ @@ -224,9 +221,6 @@ static inline int alternatives_text_reserved(void *start, void *end) * Peculiarities: * No memory clobber here. * Argument numbers start with 1. - * Best is to use constraints that are fixed size (like (%1) ... "r") - * If you use variable sized constraints like "m" or "g" in the - * replacement make sure to pad to the worst case length. * Leaving an unused argument 0 to keep API compatibility. */ #define alternative_input(oldinstr, newinstr, feature, input...) \ @@ -315,13 +309,12 @@ static inline int alternatives_text_reserved(void *start, void *end) * enough information for the alternatives patching code to patch an * instruction. See apply_alternatives(). */ -.macro altinstruction_entry orig alt feature orig_len alt_len pad_len +.macro altinstruction_entry orig alt feature orig_len alt_len .long \orig - . .long \alt - . 
.word \feature .byte \orig_len .byte \alt_len - .byte \pad_len .endm /* @@ -338,7 +331,7 @@ static inline int alternatives_text_reserved(void *start, void *end) 142: .pushsection .altinstructions,"a" - altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f,142b-141b + altinstruction_entry 140b,143f,\feature,142b-140b,144f-143f .popsection .pushsection .altinstr_replacement,"ax" @@ -375,8 +368,8 @@ static inline int alternatives_text_reserved(void *start, void *end) 142: .pushsection .altinstructions,"a" - altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f,142b-141b - altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f,142b-141b + altinstruction_entry 140b,143f,\feature1,142b-140b,144f-143f + altinstruction_entry 140b,144f,\feature2,142b-140b,145f-144f .popsection .pushsection .altinstr_replacement,"ax" diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 75b9ab3d7099..a655a9ebb2e2 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -344,19 +344,35 @@ recompute_jump(struct alt_instr *a, u8 *orig_insn, u8 *repl_insn, u8 *insn_buff) static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *instr) { unsigned long flags; - int i; + struct insn insn; + int nop, i = 0; - for (i = 0; i < a->padlen; i++) { - if (instr[i] != 0x90) + /* + * Jump over the non-NOP insns, the remaining bytes must be single-byte + * NOPs, optimize them. + */ + for (;;) { + if (insn_decode_kernel(&insn, &instr[i])) + return; + + if (insn.length == 1 && insn.opcode.bytes[0] == 0x90) + break; + + if ((i += insn.length) >= a->instrlen) + return; + } + + for (nop = i; i < a->instrlen; i++) { + if (WARN_ONCE(instr[i] != 0x90, "Not a NOP at 0x%px\n", &instr[i])) return; } local_irq_save(flags); - add_nops(instr + (a->instrlen - a->padlen), a->padlen); + add_nops(instr + nop, i - nop); local_irq_restore(flags); DUMP_BYTES(instr, a->instrlen, "%px: [%d:%d) optimized NOPs: ", - instr, a->instrlen - a->padlen, a->padlen); + instr, nop, a->instrlen); } /* @@ -402,19 +418,15 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * - feature not present but ALTINSTR_FLAG_INV is set to mean, * patch if feature is *NOT* present. */ - if (!boot_cpu_has(feature) == !(a->cpuid & ALTINSTR_FLAG_INV)) { - if (a->padlen > 1) - optimize_nops(a, instr); + if (!boot_cpu_has(feature) == !(a->cpuid & ALTINSTR_FLAG_INV)) + goto next; - continue; - } - - DPRINTK("feat: %s%d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d), pad: %d", + DPRINTK("feat: %s%d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d)", (a->cpuid & ALTINSTR_FLAG_INV) ? "!" 
: "", feature >> 5, feature & 0x1f, instr, instr, a->instrlen, - replacement, a->replacementlen, a->padlen); + replacement, a->replacementlen); DUMP_BYTES(instr, a->instrlen, "%px: old_insn: ", instr); DUMP_BYTES(replacement, a->replacementlen, "%px: rpl_insn: ", replacement); @@ -438,14 +450,15 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, if (a->replacementlen && is_jmp(replacement[0])) recompute_jump(a, instr, replacement, insn_buff); - if (a->instrlen > a->replacementlen) { - add_nops(insn_buff + a->replacementlen, - a->instrlen - a->replacementlen); - insn_buff_sz += a->instrlen - a->replacementlen; - } + for (; insn_buff_sz < a->instrlen; insn_buff_sz++) + insn_buff[insn_buff_sz] = 0x90; + DUMP_BYTES(insn_buff, insn_buff_sz, "%px: final_insn: ", instr); text_poke_early(instr, insn_buff, insn_buff_sz); + +next: + optimize_nops(a, instr); } } diff --git a/tools/objtool/arch/x86/include/arch_special.h b/tools/objtool/arch/x86/include/arch_special.h index d818b2bffa02..14271cca0c74 100644 --- a/tools/objtool/arch/x86/include/arch_special.h +++ b/tools/objtool/arch/x86/include/arch_special.h @@ -10,7 +10,7 @@ #define JUMP_ORIG_OFFSET 0 #define JUMP_NEW_OFFSET 4 -#define ALT_ENTRY_SIZE 13 +#define ALT_ENTRY_SIZE 12 #define ALT_ORIG_OFFSET 0 #define ALT_NEW_OFFSET 4 #define ALT_FEATURE_OFFSET 8 From 28ca351296742a9e7506a548acaf7ea3bc9feef0 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:02 +0100 Subject: [PATCH 1451/1842] x86/retpoline: Simplify retpolines commit 119251855f9adf9421cb5eb409933092141ab2c7 upstream. Due to: c9c324dc22aa ("objtool: Support stack layout changes in alternatives") it is now possible to simplify the retpolines. Currently our retpolines consist of 2 symbols: - __x86_indirect_thunk_\reg: the compiler target - __x86_retpoline_\reg: the actual retpoline. Both are consecutive in code and aligned such that for any one register they both live in the same cacheline: 0000000000000000 <__x86_indirect_thunk_rax>: 0: ff e0 jmpq *%rax 2: 90 nop 3: 90 nop 4: 90 nop 0000000000000005 <__x86_retpoline_rax>: 5: e8 07 00 00 00 callq 11 <__x86_retpoline_rax+0xc> a: f3 90 pause c: 0f ae e8 lfence f: eb f9 jmp a <__x86_retpoline_rax+0x5> 11: 48 89 04 24 mov %rax,(%rsp) 15: c3 retq 16: 66 2e 0f 1f 84 00 00 00 00 00 nopw %cs:0x0(%rax,%rax,1) The thunk is an alternative_2, where one option is a JMP to the retpoline. This was done so that objtool didn't need to deal with alternatives with stack ops. But that problem has been solved, so now it is possible to fold the entire retpoline into the alternative to simplify and consolidate unused bytes: 0000000000000000 <__x86_indirect_thunk_rax>: 0: ff e0 jmpq *%rax 2: 90 nop 3: 90 nop 4: 90 nop 5: 90 nop 6: 90 nop 7: 90 nop 8: 90 nop 9: 90 nop a: 90 nop b: 90 nop c: 90 nop d: 90 nop e: 90 nop f: 90 nop 10: 90 nop 11: 66 66 2e 0f 1f 84 00 00 00 00 00 data16 nopw %cs:0x0(%rax,%rax,1) 1c: 0f 1f 40 00 nopl 0x0(%rax) Notice that since the longest alternative sequence is now: 0: e8 07 00 00 00 callq c <.altinstr_replacement+0xc> 5: f3 90 pause 7: 0f ae e8 lfence a: eb f9 jmp 5 <.altinstr_replacement+0x5> c: 48 89 04 24 mov %rax,(%rsp) 10: c3 retq 17 bytes, we have 15 bytes NOP at the end of our 32 byte slot. (IOW, if we can shrink the retpoline by 1 byte we can pack it more densely). [ bp: Massage commit message. 
] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Link: https://lkml.kernel.org/r/20210326151259.506071949@infradead.org [bwh: Backported to 5.10: - Use X86_FEATRURE_RETPOLINE_LFENCE flag instead of X86_FEATURE_RETPOLINE_AMD, since the later renaming of this flag has already been applied - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/asm-prototypes.h | 7 ----- arch/x86/include/asm/nospec-branch.h | 6 ++--- arch/x86/lib/retpoline.S | 38 +++++++++++++-------------- tools/objtool/check.c | 3 +-- 4 files changed, 23 insertions(+), 31 deletions(-) diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h index 51e2bf27cc9b..0545b07895a5 100644 --- a/arch/x86/include/asm/asm-prototypes.h +++ b/arch/x86/include/asm/asm-prototypes.h @@ -22,15 +22,8 @@ extern void cmpxchg8b_emu(void); #define DECL_INDIRECT_THUNK(reg) \ extern asmlinkage void __x86_indirect_thunk_ ## reg (void); -#define DECL_RETPOLINE(reg) \ - extern asmlinkage void __x86_retpoline_ ## reg (void); - #undef GEN #define GEN(reg) DECL_INDIRECT_THUNK(reg) #include -#undef GEN -#define GEN(reg) DECL_RETPOLINE(reg) -#include - #endif /* CONFIG_RETPOLINE */ diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index c5820916cd4f..838c15e1b775 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -80,7 +80,7 @@ .macro JMP_NOSPEC reg:req #ifdef CONFIG_RETPOLINE ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \ - __stringify(jmp __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \ + __stringify(jmp __x86_indirect_thunk_\reg), X86_FEATURE_RETPOLINE, \ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE #else jmp *%\reg @@ -90,7 +90,7 @@ .macro CALL_NOSPEC reg:req #ifdef CONFIG_RETPOLINE ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *%\reg), \ - __stringify(call __x86_retpoline_\reg), X86_FEATURE_RETPOLINE, \ + __stringify(call __x86_indirect_thunk_\reg), X86_FEATURE_RETPOLINE, \ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_LFENCE #else call *%\reg @@ -128,7 +128,7 @@ ALTERNATIVE_2( \ ANNOTATE_RETPOLINE_SAFE \ "call *%[thunk_target]\n", \ - "call __x86_retpoline_%V[thunk_target]\n", \ + "call __x86_indirect_thunk_%V[thunk_target]\n", \ X86_FEATURE_RETPOLINE, \ "lfence;\n" \ ANNOTATE_RETPOLINE_SAFE \ diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 6bb74b5c238c..b22075d6e38d 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -10,27 +10,31 @@ #include #include +.macro RETPOLINE reg + ANNOTATE_INTRA_FUNCTION_CALL + call .Ldo_rop_\@ +.Lspec_trap_\@: + UNWIND_HINT_EMPTY + pause + lfence + jmp .Lspec_trap_\@ +.Ldo_rop_\@: + mov %\reg, (%_ASM_SP) + UNWIND_HINT_FUNC + ret +.endm + .macro THUNK reg .section .text.__x86.indirect_thunk .align 32 SYM_FUNC_START(__x86_indirect_thunk_\reg) - JMP_NOSPEC \reg -SYM_FUNC_END(__x86_indirect_thunk_\reg) -SYM_FUNC_START_NOALIGN(__x86_retpoline_\reg) - ANNOTATE_INTRA_FUNCTION_CALL - call .Ldo_rop_\@ -.Lspec_trap_\@: - UNWIND_HINT_EMPTY - pause - lfence - jmp .Lspec_trap_\@ -.Ldo_rop_\@: - mov %\reg, (%_ASM_SP) - UNWIND_HINT_FUNC - ret -SYM_FUNC_END(__x86_retpoline_\reg) + ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \ + __stringify(RETPOLINE \reg), X86_FEATURE_RETPOLINE, \ + __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), 
X86_FEATURE_RETPOLINE_LFENCE + +SYM_FUNC_END(__x86_indirect_thunk_\reg) .endm @@ -48,7 +52,6 @@ SYM_FUNC_END(__x86_retpoline_\reg) #define __EXPORT_THUNK(sym) _ASM_NOKPROBE(sym); EXPORT_SYMBOL(sym) #define EXPORT_THUNK(reg) __EXPORT_THUNK(__x86_indirect_thunk_ ## reg) -#define EXPORT_RETPOLINE(reg) __EXPORT_THUNK(__x86_retpoline_ ## reg) #undef GEN #define GEN(reg) THUNK reg @@ -58,6 +61,3 @@ SYM_FUNC_END(__x86_retpoline_\reg) #define GEN(reg) EXPORT_THUNK(reg) #include -#undef GEN -#define GEN(reg) EXPORT_RETPOLINE(reg) -#include diff --git a/tools/objtool/check.c b/tools/objtool/check.c index ed5e4b04f508..334781ee60f1 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -800,8 +800,7 @@ static int add_jump_destinations(struct objtool_file *file) } else if (reloc->sym->type == STT_SECTION) { dest_sec = reloc->sym->sec; dest_off = arch_dest_reloc_offset(reloc->addend); - } else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21) || - !strncmp(reloc->sym->name, "__x86_retpoline_", 16)) { + } else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21)) { /* * Retpoline jumps are really dynamic jumps in * disguise, so convert them accordingly. From 6e95f8caffb3f10e48b100c47e753ca83042fe6f Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:03 +0100 Subject: [PATCH 1452/1842] objtool: Correctly handle retpoline thunk calls commit bcb1b6ff39da7e8a6a986eb08126fba2b5e13c32 upstream. Just like JMP handling, convert a direct CALL to a retpoline thunk into a retpoline safe indirect CALL. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151259.567568238@infradead.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 334781ee60f1..da78811ffda5 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -953,6 +953,18 @@ static int add_call_destinations(struct objtool_file *file) dest_off); return -1; } + + } else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21)) { + /* + * Retpoline calls are really dynamic calls in + * disguise, so convert them accordingly. + */ + insn->type = INSN_CALL_DYNAMIC; + insn->retpoline_safe = true; + + remove_insn_ops(insn); + continue; + } else insn->call_dest = reloc->sym; From d42fa5bf19fc04833f3c27e9555051c428422248 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:04 +0100 Subject: [PATCH 1453/1842] objtool: Handle per arch retpoline naming commit 530b4ddd9dd92b263081f5c7786d39a8129c8b2d upstream. The __x86_indirect_ naming is obviously not generic. Shorten to allow matching some additional magic names later. 
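As an illustration of what the shorter prefix buys (a standalone sketch, not part of the patch), the x86 helper now matches any symbol whose name starts with "__x86_indirect_", which leaves room for more than just the thunk names:

  #include <stdbool.h>
  #include <string.h>

  /* Stand-in for objtool's struct symbol, reduced to what the check needs. */
  struct symbol { const char *name; };

  /* Same test as the x86 arch_is_retpoline() added here. */
  static bool is_retpoline_name(const struct symbol *sym)
  {
          return !strncmp(sym->name, "__x86_indirect_", 15);
  }

  /*
   * "__x86_indirect_thunk_rax"  -> true  (the existing thunks)
   * "__x86_indirect_<whatever>" -> true  (future helpers with the same stem)
   * "__x86_retpoline_rax"       -> false (old name, already removed above)
   */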
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151259.630296706@infradead.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch.h | 2 ++ tools/objtool/arch/x86/decode.c | 5 +++++ tools/objtool/check.c | 9 +++++++-- 3 files changed, 14 insertions(+), 2 deletions(-) diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h index 5e3f3ea8bb89..82896d7615f5 100644 --- a/tools/objtool/arch.h +++ b/tools/objtool/arch.h @@ -86,4 +86,6 @@ const char *arch_nop_insn(int len); int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg); +bool arch_is_retpoline(struct symbol *sym); + #endif /* _ARCH_H */ diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 6f5a45754ad8..a540c8cda8e1 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -620,3 +620,8 @@ int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg) return 0; } + +bool arch_is_retpoline(struct symbol *sym) +{ + return !strncmp(sym->name, "__x86_indirect_", 15); +} diff --git a/tools/objtool/check.c b/tools/objtool/check.c index da78811ffda5..5e407ec95949 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -778,6 +778,11 @@ static int add_ignore_alternatives(struct objtool_file *file) return 0; } +__weak bool arch_is_retpoline(struct symbol *sym) +{ + return false; +} + /* * Find the destination instructions for all jumps. */ @@ -800,7 +805,7 @@ static int add_jump_destinations(struct objtool_file *file) } else if (reloc->sym->type == STT_SECTION) { dest_sec = reloc->sym->sec; dest_off = arch_dest_reloc_offset(reloc->addend); - } else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21)) { + } else if (arch_is_retpoline(reloc->sym)) { /* * Retpoline jumps are really dynamic jumps in * disguise, so convert them accordingly. @@ -954,7 +959,7 @@ static int add_call_destinations(struct objtool_file *file) return -1; } - } else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21)) { + } else if (arch_is_retpoline(reloc->sym)) { /* * Retpoline calls are really dynamic calls in * disguise, so convert them accordingly. From c9049cf4804ab6f2b73d4cc244c3e2f6e0a9f10e Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:06 +0100 Subject: [PATCH 1454/1842] objtool: Rework the elf_rebuild_reloc_section() logic commit 3a647607b57ad8346e659ddd3b951ac292c83690 upstream. Instead of manually calling elf_rebuild_reloc_section() on sections we've called elf_add_reloc() on, have elf_write() DTRT. This makes it easier to add random relocations in places without carefully tracking when we're done and need to flush what section. 
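Roughly, the calling convention this leaves behind looks as follows (a sketch with an assumed caller, not code from the tree):

  /* Any pass that emits relocations now only queues them ... */
  static void emit_reloc(struct objtool_file *file, struct reloc *reloc)
  {
          elf_add_reloc(file->elf, reloc);  /* also marks the reloc section ->changed */
  }

  /*
   * ... and the single elf_write(file->elf) call at the end of the run
   * rebuilds every changed relocation section before writing the object,
   * so no caller needs an explicit elf_rebuild_reloc_section() any more.
   */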
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151259.754213408@infradead.org [bwh: Backported to 5.10: drop changes in create_mcount_loc_sections()] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 3 --- tools/objtool/elf.c | 20 ++++++++++++++------ tools/objtool/elf.h | 1 - tools/objtool/orc_gen.c | 3 --- 4 files changed, 14 insertions(+), 13 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 5e407ec95949..fb9be8ec893d 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -542,9 +542,6 @@ static int create_static_call_sections(struct objtool_file *file) idx++; } - if (elf_rebuild_reloc_section(file->elf, reloc_sec)) - return -1; - return 0; } diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index d8421e1d06be..ce08aafa7da0 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -530,6 +530,8 @@ void elf_add_reloc(struct elf *elf, struct reloc *reloc) list_add_tail(&reloc->list, &sec->reloc_list); elf_hash_add(elf->reloc_hash, &reloc->hash, reloc_hash(reloc)); + + sec->changed = true; } static int read_rel_reloc(struct section *sec, int i, struct reloc *reloc, unsigned int *symndx) @@ -609,7 +611,9 @@ static int read_relocs(struct elf *elf) return -1; } - elf_add_reloc(elf, reloc); + list_add_tail(&reloc->list, &sec->reloc_list); + elf_hash_add(elf->reloc_hash, &reloc->hash, reloc_hash(reloc)); + nr_reloc++; } max_reloc = max(max_reloc, nr_reloc); @@ -920,14 +924,11 @@ static int elf_rebuild_rela_reloc_section(struct section *sec, int nr) return 0; } -int elf_rebuild_reloc_section(struct elf *elf, struct section *sec) +static int elf_rebuild_reloc_section(struct elf *elf, struct section *sec) { struct reloc *reloc; int nr; - sec->changed = true; - elf->changed = true; - nr = 0; list_for_each_entry(reloc, &sec->reloc_list, list) nr++; @@ -991,9 +992,15 @@ int elf_write(struct elf *elf) struct section *sec; Elf_Scn *s; - /* Update section headers for changed sections: */ + /* Update changed relocation sections and section headers: */ list_for_each_entry(sec, &elf->sections, list) { if (sec->changed) { + if (sec->base && + elf_rebuild_reloc_section(elf, sec)) { + WARN("elf_rebuild_reloc_section"); + return -1; + } + s = elf_getscn(elf->elf, sec->idx); if (!s) { WARN_ELF("elf_getscn"); @@ -1005,6 +1012,7 @@ int elf_write(struct elf *elf) } sec->changed = false; + elf->changed = true; } } diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index e6890cc70a25..fc576edacfc4 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -142,7 +142,6 @@ struct reloc *find_reloc_by_dest_range(const struct elf *elf, struct section *se struct symbol *find_func_containing(struct section *sec, unsigned long offset); void insn_to_reloc_sym_addend(struct section *sec, unsigned long offset, struct reloc *reloc); -int elf_rebuild_reloc_section(struct elf *elf, struct section *sec); #define for_each_sec(file, sec) \ list_for_each_entry(sec, &file->elf->sections, list) diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c index 0d62f958dbea..02e02fc49bbd 100644 --- a/tools/objtool/orc_gen.c +++ b/tools/objtool/orc_gen.c @@ -251,8 +251,5 @@ int orc_create(struct objtool_file *file) return -1; } - if (elf_rebuild_reloc_section(file->elf, ip_rsec)) - return -1; - return 0; } From fcdb7926d399910ee847856b28d7bde5437f77f0 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 
16:12:07 +0100 Subject: [PATCH 1455/1842] objtool: Add elf_create_reloc() helper commit ef47cc01cb4abcd760d8ac66b9361d6ade4d0846 upstream. We have 4 instances of adding a relocation. Create a common helper to avoid growing even more. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151259.817438847@infradead.org [bwh: Backported to 5.10: drop changes in create_mcount_loc_sections()] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 43 +++++---------------- tools/objtool/elf.c | 86 +++++++++++++++++++++++++++-------------- tools/objtool/elf.h | 10 +++-- tools/objtool/orc_gen.c | 30 +++----------- 4 files changed, 79 insertions(+), 90 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index fb9be8ec893d..76a63294d64e 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -433,8 +433,7 @@ static int add_dead_ends(struct objtool_file *file) static int create_static_call_sections(struct objtool_file *file) { - struct section *sec, *reloc_sec; - struct reloc *reloc; + struct section *sec; struct static_call_site *site; struct instruction *insn; struct symbol *key_sym; @@ -460,8 +459,7 @@ static int create_static_call_sections(struct objtool_file *file) if (!sec) return -1; - reloc_sec = elf_create_reloc_section(file->elf, sec, SHT_RELA); - if (!reloc_sec) + if (!elf_create_reloc_section(file->elf, sec, SHT_RELA)) return -1; idx = 0; @@ -471,25 +469,11 @@ static int create_static_call_sections(struct objtool_file *file) memset(site, 0, sizeof(struct static_call_site)); /* populate reloc for 'addr' */ - reloc = malloc(sizeof(*reloc)); - - if (!reloc) { - perror("malloc"); + if (elf_add_reloc_to_insn(file->elf, sec, + idx * sizeof(struct static_call_site), + R_X86_64_PC32, + insn->sec, insn->offset)) return -1; - } - memset(reloc, 0, sizeof(*reloc)); - - insn_to_reloc_sym_addend(insn->sec, insn->offset, reloc); - if (!reloc->sym) { - WARN_FUNC("static call tramp: missing containing symbol", - insn->sec, insn->offset); - return -1; - } - - reloc->type = R_X86_64_PC32; - reloc->offset = idx * sizeof(struct static_call_site); - reloc->sec = reloc_sec; - elf_add_reloc(file->elf, reloc); /* find key symbol */ key_name = strdup(insn->call_dest->name); @@ -526,18 +510,11 @@ static int create_static_call_sections(struct objtool_file *file) free(key_name); /* populate reloc for 'key' */ - reloc = malloc(sizeof(*reloc)); - if (!reloc) { - perror("malloc"); + if (elf_add_reloc(file->elf, sec, + idx * sizeof(struct static_call_site) + 4, + R_X86_64_PC32, key_sym, + is_sibling_call(insn) * STATIC_CALL_SITE_TAIL)) return -1; - } - memset(reloc, 0, sizeof(*reloc)); - reloc->sym = key_sym; - reloc->addend = is_sibling_call(insn) ? 
STATIC_CALL_SITE_TAIL : 0; - reloc->type = R_X86_64_PC32; - reloc->offset = idx * sizeof(struct static_call_site) + 4; - reloc->sec = reloc_sec; - elf_add_reloc(file->elf, reloc); idx++; } diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index ce08aafa7da0..973f0c481bac 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -262,32 +262,6 @@ struct reloc *find_reloc_by_dest(const struct elf *elf, struct section *sec, uns return find_reloc_by_dest_range(elf, sec, offset, 1); } -void insn_to_reloc_sym_addend(struct section *sec, unsigned long offset, - struct reloc *reloc) -{ - if (sec->sym) { - reloc->sym = sec->sym; - reloc->addend = offset; - return; - } - - /* - * The Clang assembler strips section symbols, so we have to reference - * the function symbol instead: - */ - reloc->sym = find_symbol_containing(sec, offset); - if (!reloc->sym) { - /* - * Hack alert. This happens when we need to reference the NOP - * pad insn immediately after the function. - */ - reloc->sym = find_symbol_containing(sec, offset - 1); - } - - if (reloc->sym) - reloc->addend = offset - reloc->sym->offset; -} - static int read_sections(struct elf *elf) { Elf_Scn *s = NULL; @@ -524,14 +498,66 @@ static int read_symbols(struct elf *elf) return -1; } -void elf_add_reloc(struct elf *elf, struct reloc *reloc) +int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset, + unsigned int type, struct symbol *sym, int addend) { - struct section *sec = reloc->sec; + struct reloc *reloc; - list_add_tail(&reloc->list, &sec->reloc_list); + reloc = malloc(sizeof(*reloc)); + if (!reloc) { + perror("malloc"); + return -1; + } + memset(reloc, 0, sizeof(*reloc)); + + reloc->sec = sec->reloc; + reloc->offset = offset; + reloc->type = type; + reloc->sym = sym; + reloc->addend = addend; + + list_add_tail(&reloc->list, &sec->reloc->reloc_list); elf_hash_add(elf->reloc_hash, &reloc->hash, reloc_hash(reloc)); - sec->changed = true; + sec->reloc->changed = true; + + return 0; +} + +int elf_add_reloc_to_insn(struct elf *elf, struct section *sec, + unsigned long offset, unsigned int type, + struct section *insn_sec, unsigned long insn_off) +{ + struct symbol *sym; + int addend; + + if (insn_sec->sym) { + sym = insn_sec->sym; + addend = insn_off; + + } else { + /* + * The Clang assembler strips section symbols, so we have to + * reference the function symbol instead: + */ + sym = find_symbol_containing(insn_sec, insn_off); + if (!sym) { + /* + * Hack alert. This happens when we need to reference + * the NOP pad insn immediately after the function. 
+ */ + sym = find_symbol_containing(insn_sec, insn_off - 1); + } + + if (!sym) { + WARN("can't find symbol containing %s+0x%lx", insn_sec->name, insn_off); + return -1; + } + + addend = insn_off - sym->offset; + } + + return elf_add_reloc(elf, sec, offset, type, sym, addend); } static int read_rel_reloc(struct section *sec, int i, struct reloc *reloc, unsigned int *symndx) diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index fc576edacfc4..825ad326201d 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -123,7 +123,13 @@ static inline u32 reloc_hash(struct reloc *reloc) struct elf *elf_open_read(const char *name, int flags); struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr); struct section *elf_create_reloc_section(struct elf *elf, struct section *base, int reltype); -void elf_add_reloc(struct elf *elf, struct reloc *reloc); + +int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset, + unsigned int type, struct symbol *sym, int addend); +int elf_add_reloc_to_insn(struct elf *elf, struct section *sec, + unsigned long offset, unsigned int type, + struct section *insn_sec, unsigned long insn_off); + int elf_write_insn(struct elf *elf, struct section *sec, unsigned long offset, unsigned int len, const char *insn); @@ -140,8 +146,6 @@ struct reloc *find_reloc_by_dest(const struct elf *elf, struct section *sec, uns struct reloc *find_reloc_by_dest_range(const struct elf *elf, struct section *sec, unsigned long offset, unsigned int len); struct symbol *find_func_containing(struct section *sec, unsigned long offset); -void insn_to_reloc_sym_addend(struct section *sec, unsigned long offset, - struct reloc *reloc); #define for_each_sec(file, sec) \ list_for_each_entry(sec, &file->elf->sections, list) diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c index 02e02fc49bbd..57de202c51b4 100644 --- a/tools/objtool/orc_gen.c +++ b/tools/objtool/orc_gen.c @@ -81,37 +81,20 @@ static int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi) } static int write_orc_entry(struct elf *elf, struct section *orc_sec, - struct section *ip_rsec, unsigned int idx, + struct section *ip_sec, unsigned int idx, struct section *insn_sec, unsigned long insn_off, struct orc_entry *o) { struct orc_entry *orc; - struct reloc *reloc; /* populate ORC data */ orc = (struct orc_entry *)orc_sec->data->d_buf + idx; memcpy(orc, o, sizeof(*orc)); /* populate reloc for ip */ - reloc = malloc(sizeof(*reloc)); - if (!reloc) { - perror("malloc"); + if (elf_add_reloc_to_insn(elf, ip_sec, idx * sizeof(int), R_X86_64_PC32, + insn_sec, insn_off)) return -1; - } - memset(reloc, 0, sizeof(*reloc)); - - insn_to_reloc_sym_addend(insn_sec, insn_off, reloc); - if (!reloc->sym) { - WARN("missing symbol for insn at offset 0x%lx", - insn_off); - return -1; - } - - reloc->type = R_X86_64_PC32; - reloc->offset = idx * sizeof(int); - reloc->sec = ip_rsec; - - elf_add_reloc(elf, reloc); return 0; } @@ -150,7 +133,7 @@ static unsigned long alt_group_len(struct alt_group *alt_group) int orc_create(struct objtool_file *file) { - struct section *sec, *ip_rsec, *orc_sec; + struct section *sec, *orc_sec; unsigned int nr = 0, idx = 0; struct orc_list_entry *entry; struct list_head orc_list; @@ -239,13 +222,12 @@ int orc_create(struct objtool_file *file) sec = elf_create_section(file->elf, ".orc_unwind_ip", 0, sizeof(int), nr); if (!sec) return -1; - ip_rsec = elf_create_reloc_section(file->elf, sec, SHT_RELA); - if (!ip_rsec) + if 
(!elf_create_reloc_section(file->elf, sec, SHT_RELA)) return -1; /* Write ORC entries to sections: */ list_for_each_entry(entry, &orc_list, list) { - if (write_orc_entry(file->elf, orc_sec, ip_rsec, idx++, + if (write_orc_entry(file->elf, orc_sec, sec, idx++, entry->insn_sec, entry->insn_off, &entry->orc)) return -1; From b37c439250118f6fecfd6436d8b218a452ab6fa8 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:08 +0100 Subject: [PATCH 1456/1842] objtool: Create reloc sections implicitly commit d0c5c4cc73da0b05b0d9e5f833f2d859e1b45f8e upstream. Have elf_add_reloc() create the relocation section implicitly. Suggested-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151259.880174448@infradead.org [bwh: Backported to 5.10: drop changes in create_mcount_loc_sections()] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 3 --- tools/objtool/elf.c | 9 ++++++++- tools/objtool/elf.h | 1 - tools/objtool/orc_gen.c | 2 -- 4 files changed, 8 insertions(+), 7 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 76a63294d64e..2cb409859081 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -459,9 +459,6 @@ static int create_static_call_sections(struct objtool_file *file) if (!sec) return -1; - if (!elf_create_reloc_section(file->elf, sec, SHT_RELA)) - return -1; - idx = 0; list_for_each_entry(insn, &file->static_call_list, static_call_node) { diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index 973f0c481bac..b76d687bc962 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -498,11 +498,18 @@ static int read_symbols(struct elf *elf) return -1; } +static struct section *elf_create_reloc_section(struct elf *elf, + struct section *base, + int reltype); + int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset, unsigned int type, struct symbol *sym, int addend) { struct reloc *reloc; + if (!sec->reloc && !elf_create_reloc_section(elf, sec, SHT_RELA)) + return -1; + reloc = malloc(sizeof(*reloc)); if (!reloc) { perror("malloc"); @@ -880,7 +887,7 @@ static struct section *elf_create_rela_reloc_section(struct elf *elf, struct sec return sec; } -struct section *elf_create_reloc_section(struct elf *elf, +static struct section *elf_create_reloc_section(struct elf *elf, struct section *base, int reltype) { diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index 825ad326201d..463f329f1c66 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -122,7 +122,6 @@ static inline u32 reloc_hash(struct reloc *reloc) struct elf *elf_open_read(const char *name, int flags); struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr); -struct section *elf_create_reloc_section(struct elf *elf, struct section *base, int reltype); int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset, unsigned int type, struct symbol *sym, int addend); diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c index 57de202c51b4..671f3b296c1a 100644 --- a/tools/objtool/orc_gen.c +++ b/tools/objtool/orc_gen.c @@ -222,8 +222,6 @@ int orc_create(struct objtool_file *file) sec = elf_create_section(file->elf, ".orc_unwind_ip", 0, sizeof(int), nr); if (!sec) return -1; - if (!elf_create_reloc_section(file->elf, sec, SHT_RELA)) - return -1; /* Write ORC entries to sections: */ 
list_for_each_entry(entry, &orc_list, list) { From da962cd0a2fe2e2c29c75f425fb29fc09b4233cc Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:09 +0100 Subject: [PATCH 1457/1842] objtool: Extract elf_strtab_concat() commit 417a4dc91e559f92404c2544f785b02ce75784c3 upstream. Create a common helper to append strings to a strtab. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151259.941474004@infradead.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/elf.c | 60 ++++++++++++++++++++++++++++----------------- 1 file changed, 38 insertions(+), 22 deletions(-) diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index b76d687bc962..07805b13abe0 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -724,13 +724,48 @@ struct elf *elf_open_read(const char *name, int flags) return NULL; } +static int elf_add_string(struct elf *elf, struct section *strtab, char *str) +{ + Elf_Data *data; + Elf_Scn *s; + int len; + + if (!strtab) + strtab = find_section_by_name(elf, ".strtab"); + if (!strtab) { + WARN("can't find .strtab section"); + return -1; + } + + s = elf_getscn(elf->elf, strtab->idx); + if (!s) { + WARN_ELF("elf_getscn"); + return -1; + } + + data = elf_newdata(s); + if (!data) { + WARN_ELF("elf_newdata"); + return -1; + } + + data->d_buf = str; + data->d_size = strlen(str) + 1; + data->d_align = 1; + + len = strtab->len; + strtab->len += data->d_size; + strtab->changed = true; + + return len; +} + struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr) { struct section *sec, *shstrtab; size_t size = entsize * nr; Elf_Scn *s; - Elf_Data *data; sec = malloc(sizeof(*sec)); if (!sec) { @@ -787,7 +822,6 @@ struct section *elf_create_section(struct elf *elf, const char *name, sec->sh.sh_addralign = 1; sec->sh.sh_flags = SHF_ALLOC | sh_flags; - /* Add section name to .shstrtab (or .strtab for Clang) */ shstrtab = find_section_by_name(elf, ".shstrtab"); if (!shstrtab) @@ -796,27 +830,9 @@ struct section *elf_create_section(struct elf *elf, const char *name, WARN("can't find .shstrtab or .strtab section"); return NULL; } - - s = elf_getscn(elf->elf, shstrtab->idx); - if (!s) { - WARN_ELF("elf_getscn"); + sec->sh.sh_name = elf_add_string(elf, shstrtab, sec->name); + if (sec->sh.sh_name == -1) return NULL; - } - - data = elf_newdata(s); - if (!data) { - WARN_ELF("elf_newdata"); - return NULL; - } - - data->d_buf = sec->name; - data->d_size = strlen(name) + 1; - data->d_align = 1; - - sec->sh.sh_name = shstrtab->len; - - shstrtab->len += strlen(name) + 1; - shstrtab->changed = true; list_add_tail(&sec->list, &elf->sections); elf_hash_add(elf->section_hash, &sec->hash, sec->idx); From b69e1b4b689faa1af25a0a76cd1ef8c612770608 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:10 +0100 Subject: [PATCH 1458/1842] objtool: Extract elf_symbol_add() commit 9a7827b7789c630c1efdb121daa42c6e77dce97f upstream. Create a common helper to add symbols. 
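For context, a sketch of the caller shape this enables; the wrapper name below is hypothetical and the field setup is abbreviated (the very next patch uses essentially this pattern for undefined symbols):

  static struct symbol *add_named_symbol(struct elf *elf, const char *name)
  {
          struct symbol *sym;

          sym = malloc(sizeof(*sym));
          if (!sym) {
                  perror("malloc");
                  return NULL;
          }
          memset(sym, 0, sizeof(*sym));

          sym->name = strdup(name);
          sym->sec  = find_section_by_index(elf, 0);
          /* ... fill in sym->sym and sym->idx as the caller requires ... */

          elf_add_symbol(elf, sym);       /* rbtree, symbol list and hashes in one place */
          return sym;
  }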
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151300.003468981@infradead.org [bwh: Backported to 5.10: rb_add() parameter order is different] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/elf.c | 56 +++++++++++++++++++++++++-------------------- 1 file changed, 31 insertions(+), 25 deletions(-) diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index 07805b13abe0..74f4b3bb7961 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -341,12 +341,39 @@ static int read_sections(struct elf *elf) return 0; } +static void elf_add_symbol(struct elf *elf, struct symbol *sym) +{ + struct list_head *entry; + struct rb_node *pnode; + + sym->type = GELF_ST_TYPE(sym->sym.st_info); + sym->bind = GELF_ST_BIND(sym->sym.st_info); + + sym->offset = sym->sym.st_value; + sym->len = sym->sym.st_size; + + rb_add(&sym->sec->symbol_tree, &sym->node, symbol_to_offset); + pnode = rb_prev(&sym->node); + if (pnode) + entry = &rb_entry(pnode, struct symbol, node)->list; + else + entry = &sym->sec->symbol_list; + list_add(&sym->list, entry); + elf_hash_add(elf->symbol_hash, &sym->hash, sym->idx); + elf_hash_add(elf->symbol_name_hash, &sym->name_hash, str_hash(sym->name)); + + /* + * Don't store empty STT_NOTYPE symbols in the rbtree. They + * can exist within a function, confusing the sorting. + */ + if (!sym->len) + rb_erase(&sym->node, &sym->sec->symbol_tree); +} + static int read_symbols(struct elf *elf) { struct section *symtab, *symtab_shndx, *sec; struct symbol *sym, *pfunc; - struct list_head *entry; - struct rb_node *pnode; int symbols_nr, i; char *coldstr; Elf_Data *shndx_data = NULL; @@ -391,9 +418,6 @@ static int read_symbols(struct elf *elf) goto err; } - sym->type = GELF_ST_TYPE(sym->sym.st_info); - sym->bind = GELF_ST_BIND(sym->sym.st_info); - if ((sym->sym.st_shndx > SHN_UNDEF && sym->sym.st_shndx < SHN_LORESERVE) || (shndx_data && sym->sym.st_shndx == SHN_XINDEX)) { @@ -406,32 +430,14 @@ static int read_symbols(struct elf *elf) sym->name); goto err; } - if (sym->type == STT_SECTION) { + if (GELF_ST_TYPE(sym->sym.st_info) == STT_SECTION) { sym->name = sym->sec->name; sym->sec->sym = sym; } } else sym->sec = find_section_by_index(elf, 0); - sym->offset = sym->sym.st_value; - sym->len = sym->sym.st_size; - - rb_add(&sym->sec->symbol_tree, &sym->node, symbol_to_offset); - pnode = rb_prev(&sym->node); - if (pnode) - entry = &rb_entry(pnode, struct symbol, node)->list; - else - entry = &sym->sec->symbol_list; - list_add(&sym->list, entry); - elf_hash_add(elf->symbol_hash, &sym->hash, sym->idx); - elf_hash_add(elf->symbol_name_hash, &sym->name_hash, str_hash(sym->name)); - - /* - * Don't store empty STT_NOTYPE symbols in the rbtree. They - * can exist within a function, confusing the sorting. - */ - if (!sym->len) - rb_erase(&sym->node, &sym->sec->symbol_tree); + elf_add_symbol(elf, sym); } if (stats) From 8a6d73f7db7f8486918d144e457e3b1d2cd22dba Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:11 +0100 Subject: [PATCH 1459/1842] objtool: Add elf_create_undef_symbol() commit 2f2f7e47f0525cbaad5dd9675fd9d8aa8da12046 upstream. Allow objtool to create undefined symbols; this allows creating relocations to symbols not currently in the symbol table. 
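In other words, a later pass can now do something like the following (a hedged sketch; the wrapper name, section, offset and relocation type are assumptions chosen for illustration):

  static int add_reloc_to_external(struct elf *elf, struct section *sec,
                                   unsigned long offset, const char *name)
  {
          struct symbol *sym;

          /* Creates a GLOBAL/NOTYPE symbol with no section, value or size. */
          sym = elf_create_undef_symbol(elf, name);
          if (!sym)
                  return -1;

          /* A relocation against it can now be emitted like any other. */
          return elf_add_reloc(elf, sec, offset, R_X86_64_PC32, sym, 0);
  }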
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151300.064743095@infradead.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/elf.c | 60 +++++++++++++++++++++++++++++++++++++++++++++ tools/objtool/elf.h | 1 + 2 files changed, 61 insertions(+) diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index 74f4b3bb7961..fff4c5587e3e 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -766,6 +766,66 @@ static int elf_add_string(struct elf *elf, struct section *strtab, char *str) return len; } +struct symbol *elf_create_undef_symbol(struct elf *elf, const char *name) +{ + struct section *symtab; + struct symbol *sym; + Elf_Data *data; + Elf_Scn *s; + + sym = malloc(sizeof(*sym)); + if (!sym) { + perror("malloc"); + return NULL; + } + memset(sym, 0, sizeof(*sym)); + + sym->name = strdup(name); + + sym->sym.st_name = elf_add_string(elf, NULL, sym->name); + if (sym->sym.st_name == -1) + return NULL; + + sym->sym.st_info = GELF_ST_INFO(STB_GLOBAL, STT_NOTYPE); + // st_other 0 + // st_shndx 0 + // st_value 0 + // st_size 0 + + symtab = find_section_by_name(elf, ".symtab"); + if (!symtab) { + WARN("can't find .symtab"); + return NULL; + } + + s = elf_getscn(elf->elf, symtab->idx); + if (!s) { + WARN_ELF("elf_getscn"); + return NULL; + } + + data = elf_newdata(s); + if (!data) { + WARN_ELF("elf_newdata"); + return NULL; + } + + data->d_buf = &sym->sym; + data->d_size = sizeof(sym->sym); + data->d_align = 1; + + sym->idx = symtab->len / sizeof(sym->sym); + + symtab->len += data->d_size; + symtab->changed = true; + + sym->sec = find_section_by_index(elf, 0); + + elf_add_symbol(elf, sym); + + return sym; +} + struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr) { diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index 463f329f1c66..45e5ede363b0 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -133,6 +133,7 @@ int elf_write_insn(struct elf *elf, struct section *sec, unsigned long offset, unsigned int len, const char *insn); int elf_write_reloc(struct elf *elf, struct reloc *reloc); +struct symbol *elf_create_undef_symbol(struct elf *elf, const char *name); int elf_write(struct elf *elf); void elf_close(struct elf *elf); From 33092b486686c31432c5354dbb18651e44200668 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:12 +0100 Subject: [PATCH 1460/1842] objtool: Keep track of retpoline call sites commit 43d5430ad74ef5156353af7aec352426ec7a8e57 upstream. Provide infrastructure for architectures to rewrite/augment compiler generated retpoline calls. Similar to what we do for static_call()s, keep track of the instructions that are retpoline calls. Use the same list_head, since a retpoline call cannot also be a static_call. 
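A tiny stand-alone illustration of that last point — not objtool's own list code, and the instruction strings are made up — showing why one embedded node can safely serve two lists when membership is mutually exclusive:

    #include <stddef.h>
    #include <stdio.h>

    /* Minimal intrusive list, standing in for the kernel-style list_head. */
    struct node { struct node *next, *prev; };

    static void list_init(struct node *h)
    {
            h->next = h->prev = h;
    }

    static void list_add_tail(struct node *n, struct node *h)
    {
            n->prev = h->prev;
            n->next = h;
            h->prev->next = n;
            h->prev = n;
    }

    struct insn {
            const char *text;
            int retpoline_call;     /* an insn is a retpoline call ... */
            int static_call;        /* ... or a static call, never both */
            struct node call_node;  /* single membership anchor, shared */
    };

    int main(void)
    {
            struct node retpoline_call_list, static_call_list;
            struct insn a = { "call __x86_indirect_thunk_rax", 1, 0 };
            struct insn b = { "call __SCT__some_trampoline",   0, 1 };

            list_init(&retpoline_call_list);
            list_init(&static_call_list);

            /* Every insn lands on at most one list, so the shared node
             * can never be linked into both at once. */
            list_add_tail(&a.call_node, a.retpoline_call ?
                          &retpoline_call_list : &static_call_list);
            list_add_tail(&b.call_node, b.retpoline_call ?
                          &retpoline_call_list : &static_call_list);

            /* Open-coded container_of(): from the node back to the insn. */
            printf("first retpoline call: %s\n",
                   ((struct insn *)((char *)retpoline_call_list.next -
                                    offsetof(struct insn, call_node)))->text);
            return 0;
    }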
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151300.130805730@infradead.org [bwh: Backported to 5.10: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch.h | 2 ++ tools/objtool/check.c | 34 +++++++++++++++++++++++++++++----- tools/objtool/check.h | 2 +- tools/objtool/objtool.c | 1 + tools/objtool/objtool.h | 1 + 5 files changed, 34 insertions(+), 6 deletions(-) diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h index 82896d7615f5..8918de4f8189 100644 --- a/tools/objtool/arch.h +++ b/tools/objtool/arch.h @@ -88,4 +88,6 @@ int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg); bool arch_is_retpoline(struct symbol *sym); +int arch_rewrite_retpolines(struct objtool_file *file); + #endif /* _ARCH_H */ diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 2cb409859081..f87a8782c817 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -451,7 +451,7 @@ static int create_static_call_sections(struct objtool_file *file) return 0; idx = 0; - list_for_each_entry(insn, &file->static_call_list, static_call_node) + list_for_each_entry(insn, &file->static_call_list, call_node) idx++; sec = elf_create_section(file->elf, ".static_call_sites", SHF_WRITE, @@ -460,7 +460,7 @@ static int create_static_call_sections(struct objtool_file *file) return -1; idx = 0; - list_for_each_entry(insn, &file->static_call_list, static_call_node) { + list_for_each_entry(insn, &file->static_call_list, call_node) { site = (struct static_call_site *)sec->data->d_buf + idx; memset(site, 0, sizeof(struct static_call_site)); @@ -786,13 +786,16 @@ static int add_jump_destinations(struct objtool_file *file) else insn->type = INSN_JUMP_DYNAMIC_CONDITIONAL; + list_add_tail(&insn->call_node, + &file->retpoline_call_list); + insn->retpoline_safe = true; continue; } else if (insn->func) { /* internal or external sibling call (with reloc) */ insn->call_dest = reloc->sym; if (insn->call_dest->static_call_tramp) { - list_add_tail(&insn->static_call_node, + list_add_tail(&insn->call_node, &file->static_call_list); } continue; @@ -854,7 +857,7 @@ static int add_jump_destinations(struct objtool_file *file) /* internal sibling call (without reloc) */ insn->call_dest = insn->jump_dest->func; if (insn->call_dest->static_call_tramp) { - list_add_tail(&insn->static_call_node, + list_add_tail(&insn->call_node, &file->static_call_list); } } @@ -938,6 +941,9 @@ static int add_call_destinations(struct objtool_file *file) insn->type = INSN_CALL_DYNAMIC; insn->retpoline_safe = true; + list_add_tail(&insn->call_node, + &file->retpoline_call_list); + remove_insn_ops(insn); continue; @@ -945,7 +951,7 @@ static int add_call_destinations(struct objtool_file *file) insn->call_dest = reloc->sym; if (insn->call_dest && insn->call_dest->static_call_tramp) { - list_add_tail(&insn->static_call_node, + list_add_tail(&insn->call_node, &file->static_call_list); } @@ -1655,6 +1661,11 @@ static void mark_rodata(struct objtool_file *file) file->rodata = found; } +__weak int arch_rewrite_retpolines(struct objtool_file *file) +{ + return 0; +} + static int decode_sections(struct objtool_file *file) { int ret; @@ -1683,6 +1694,10 @@ static int decode_sections(struct objtool_file *file) if (ret) return ret; + /* + * Must be before add_special_section_alts() as that depends on + * jump_dest being set. 
+ */ ret = add_jump_destinations(file); if (ret) return ret; @@ -1719,6 +1734,15 @@ static int decode_sections(struct objtool_file *file) if (ret) return ret; + /* + * Must be after add_special_section_alts(), since this will emit + * alternatives. Must be after add_{jump,call}_destination(), since + * those create the call insn lists. + */ + ret = arch_rewrite_retpolines(file); + if (ret) + return ret; + return 0; } diff --git a/tools/objtool/check.h b/tools/objtool/check.h index d255c43b134a..e0ae2fc13e67 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -39,7 +39,7 @@ struct alt_group { struct instruction { struct list_head list; struct hlist_node hash; - struct list_head static_call_node; + struct list_head call_node; struct section *sec; unsigned long offset; unsigned int len; diff --git a/tools/objtool/objtool.c b/tools/objtool/objtool.c index 9df0cd86d310..c0b1f5a99df9 100644 --- a/tools/objtool/objtool.c +++ b/tools/objtool/objtool.c @@ -61,6 +61,7 @@ struct objtool_file *objtool_open_read(const char *_objname) INIT_LIST_HEAD(&file.insn_list); hash_init(file.insn_hash); + INIT_LIST_HEAD(&file.retpoline_call_list); INIT_LIST_HEAD(&file.static_call_list); file.c_file = !vmlinux && find_section_by_name(file.elf, ".comment"); file.ignore_unreachables = no_unreachable; diff --git a/tools/objtool/objtool.h b/tools/objtool/objtool.h index 5e58d3537e2f..0b57150ea0fe 100644 --- a/tools/objtool/objtool.h +++ b/tools/objtool/objtool.h @@ -18,6 +18,7 @@ struct objtool_file { struct elf *elf; struct list_head insn_list; DECLARE_HASHTABLE(insn_hash, 20); + struct list_head retpoline_call_list; struct list_head static_call_list; bool ignore_unreachables, c_file, hints, rodata; }; From e87c18c4a951bba1d69c7acaf401d613cf9a828c Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:13 +0100 Subject: [PATCH 1461/1842] objtool: Cache instruction relocs commit 7bd2a600f3e9d27286bbf23c83d599e9cc7cf245 upstream. Track the reloc of instructions in the new instruction->reloc field to avoid having to look them up again later. ( Technically x86 instructions can have two relocations, but not jumps and calls, for which we're using this. ) Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151300.195441549@infradead.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 28 ++++++++++++++++++++++------ tools/objtool/check.h | 1 + 2 files changed, 23 insertions(+), 6 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index f87a8782c817..6d62ad940fce 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -754,6 +754,25 @@ __weak bool arch_is_retpoline(struct symbol *sym) return false; } +#define NEGATIVE_RELOC ((void *)-1L) + +static struct reloc *insn_reloc(struct objtool_file *file, struct instruction *insn) +{ + if (insn->reloc == NEGATIVE_RELOC) + return NULL; + + if (!insn->reloc) { + insn->reloc = find_reloc_by_dest_range(file->elf, insn->sec, + insn->offset, insn->len); + if (!insn->reloc) { + insn->reloc = NEGATIVE_RELOC; + return NULL; + } + } + + return insn->reloc; +} + /* * Find the destination instructions for all jumps. 
*/ @@ -768,8 +787,7 @@ static int add_jump_destinations(struct objtool_file *file) if (!is_static_jump(insn)) continue; - reloc = find_reloc_by_dest_range(file->elf, insn->sec, - insn->offset, insn->len); + reloc = insn_reloc(file, insn); if (!reloc) { dest_sec = insn->sec; dest_off = arch_jump_destination(insn); @@ -901,8 +919,7 @@ static int add_call_destinations(struct objtool_file *file) if (insn->type != INSN_CALL) continue; - reloc = find_reloc_by_dest_range(file->elf, insn->sec, - insn->offset, insn->len); + reloc = insn_reloc(file, insn); if (!reloc) { dest_off = arch_jump_destination(insn); insn->call_dest = find_call_destination(insn->sec, dest_off); @@ -1085,8 +1102,7 @@ static int handle_group_alt(struct objtool_file *file, * alternatives code can adjust the relative offsets * accordingly. */ - alt_reloc = find_reloc_by_dest_range(file->elf, insn->sec, - insn->offset, insn->len); + alt_reloc = insn_reloc(file, insn); if (alt_reloc && !arch_support_alt_relocation(special_alt, insn, alt_reloc)) { diff --git a/tools/objtool/check.h b/tools/objtool/check.h index e0ae2fc13e67..ca3259eebac8 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -55,6 +55,7 @@ struct instruction { struct instruction *jump_dest; struct instruction *first_jump_src; struct reloc *jump_table; + struct reloc *reloc; struct list_head alts; struct symbol *func; struct list_head stack_ops; From ed7783dca5baff4103c214214abf0a3aeb27a79f Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:14 +0100 Subject: [PATCH 1462/1842] objtool: Skip magical retpoline .altinstr_replacement commit 50e7b4a1a1b264fc7df0698f2defb93cadf19a7b upstream. When the .altinstr_replacement is a retpoline, skip the alternative. We already special case retpolines anyway. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151300.259429287@infradead.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/special.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/tools/objtool/special.c b/tools/objtool/special.c index 1a2420febd08..272e29cf8a0c 100644 --- a/tools/objtool/special.c +++ b/tools/objtool/special.c @@ -104,6 +104,14 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry, return -1; } + /* + * Skip retpoline .altinstr_replacement... we already rewrite the + * instructions for retpolines anyway, see arch_is_retpoline() + * usage in add_{call,jump}_destinations(). + */ + if (arch_is_retpoline(new_reloc->sym)) + return 1; + alt->new_sec = new_reloc->sym->sec; alt->new_off = (unsigned int)new_reloc->addend; @@ -152,7 +160,9 @@ int special_get_alts(struct elf *elf, struct list_head *alts) memset(alt, 0, sizeof(*alt)); ret = get_alt_entry(elf, entry, sec, idx, alt); - if (ret) + if (ret > 0) + continue; + if (ret < 0) return ret; list_add_tail(&alt->list, alts); From 0b2c8bf4983bdefbb69337c7d68cbe7c2d47a61a Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 26 Mar 2021 16:12:15 +0100 Subject: [PATCH 1463/1842] objtool/x86: Rewrite retpoline thunk calls commit 9bc0bb50727c8ac69fbb33fb937431cf3518ff37 upstream. 
When the compiler emits: "CALL __x86_indirect_thunk_\reg" for an indirect call, have objtool rewrite it to: ALTERNATIVE "call __x86_indirect_thunk_\reg", "call *%reg", ALT_NOT(X86_FEATURE_RETPOLINE) Additionally, in order to not emit endless identical .altinst_replacement chunks, use a global symbol for them, see __x86_indirect_alt_*. This also avoids objtool from having to do code generation. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Ingo Molnar Reviewed-by: Miroslav Benes Link: https://lkml.kernel.org/r/20210326151300.320177914@infradead.org [bwh: Backported to 5.10: include "arch_elf.h" instead of "arch/elf.h"] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/asm-prototypes.h | 12 ++- arch/x86/lib/retpoline.S | 41 ++++++++- tools/objtool/arch/x86/decode.c | 117 ++++++++++++++++++++++++++ 3 files changed, 167 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h index 0545b07895a5..4cb726c71ed8 100644 --- a/arch/x86/include/asm/asm-prototypes.h +++ b/arch/x86/include/asm/asm-prototypes.h @@ -19,11 +19,19 @@ extern void cmpxchg8b_emu(void); #ifdef CONFIG_RETPOLINE -#define DECL_INDIRECT_THUNK(reg) \ +#undef GEN +#define GEN(reg) \ extern asmlinkage void __x86_indirect_thunk_ ## reg (void); +#include #undef GEN -#define GEN(reg) DECL_INDIRECT_THUNK(reg) +#define GEN(reg) \ + extern asmlinkage void __x86_indirect_alt_call_ ## reg (void); +#include + +#undef GEN +#define GEN(reg) \ + extern asmlinkage void __x86_indirect_alt_jmp_ ## reg (void); #include #endif /* CONFIG_RETPOLINE */ diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index b22075d6e38d..fa30f2326547 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -10,6 +10,8 @@ #include #include + .section .text.__x86.indirect_thunk + .macro RETPOLINE reg ANNOTATE_INTRA_FUNCTION_CALL call .Ldo_rop_\@ @@ -25,9 +27,9 @@ .endm .macro THUNK reg - .section .text.__x86.indirect_thunk .align 32 + SYM_FUNC_START(__x86_indirect_thunk_\reg) ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \ @@ -38,6 +40,32 @@ SYM_FUNC_END(__x86_indirect_thunk_\reg) .endm +/* + * This generates .altinstr_replacement symbols for use by objtool. They, + * however, must not actually live in .altinstr_replacement since that will be + * discarded after init, but module alternatives will also reference these + * symbols. + * + * Their names matches the "__x86_indirect_" prefix to mark them as retpolines. 
+ */ +.macro ALT_THUNK reg + + .align 1 + +SYM_FUNC_START_NOALIGN(__x86_indirect_alt_call_\reg) + ANNOTATE_RETPOLINE_SAFE +1: call *%\reg +2: .skip 5-(2b-1b), 0x90 +SYM_FUNC_END(__x86_indirect_alt_call_\reg) + +SYM_FUNC_START_NOALIGN(__x86_indirect_alt_jmp_\reg) + ANNOTATE_RETPOLINE_SAFE +1: jmp *%\reg +2: .skip 5-(2b-1b), 0x90 +SYM_FUNC_END(__x86_indirect_alt_jmp_\reg) + +.endm + /* * Despite being an assembler file we can't just use .irp here * because __KSYM_DEPS__ only uses the C preprocessor and would @@ -61,3 +89,14 @@ SYM_FUNC_END(__x86_indirect_thunk_\reg) #define GEN(reg) EXPORT_THUNK(reg) #include +#undef GEN +#define GEN(reg) ALT_THUNK reg +#include + +#undef GEN +#define GEN(reg) __EXPORT_THUNK(__x86_indirect_alt_call_ ## reg) +#include + +#undef GEN +#define GEN(reg) __EXPORT_THUNK(__x86_indirect_alt_jmp_ ## reg) +#include diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index a540c8cda8e1..4154921291d0 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -16,6 +16,7 @@ #include "../../arch.h" #include "../../warn.h" #include +#include "arch_elf.h" static unsigned char op_to_cfi_reg[][2] = { {CFI_AX, CFI_R8}, @@ -585,6 +586,122 @@ const char *arch_nop_insn(int len) return nops[len-1]; } +/* asm/alternative.h ? */ + +#define ALTINSTR_FLAG_INV (1 << 15) +#define ALT_NOT(feat) ((feat) | ALTINSTR_FLAG_INV) + +struct alt_instr { + s32 instr_offset; /* original instruction */ + s32 repl_offset; /* offset to replacement instruction */ + u16 cpuid; /* cpuid bit set for replacement */ + u8 instrlen; /* length of original instruction */ + u8 replacementlen; /* length of new instruction */ +} __packed; + +static int elf_add_alternative(struct elf *elf, + struct instruction *orig, struct symbol *sym, + int cpuid, u8 orig_len, u8 repl_len) +{ + const int size = sizeof(struct alt_instr); + struct alt_instr *alt; + struct section *sec; + Elf_Scn *s; + + sec = find_section_by_name(elf, ".altinstructions"); + if (!sec) { + sec = elf_create_section(elf, ".altinstructions", + SHF_WRITE, size, 0); + + if (!sec) { + WARN_ELF("elf_create_section"); + return -1; + } + } + + s = elf_getscn(elf->elf, sec->idx); + if (!s) { + WARN_ELF("elf_getscn"); + return -1; + } + + sec->data = elf_newdata(s); + if (!sec->data) { + WARN_ELF("elf_newdata"); + return -1; + } + + sec->data->d_size = size; + sec->data->d_align = 1; + + alt = sec->data->d_buf = malloc(size); + if (!sec->data->d_buf) { + perror("malloc"); + return -1; + } + memset(sec->data->d_buf, 0, size); + + if (elf_add_reloc_to_insn(elf, sec, sec->sh.sh_size, + R_X86_64_PC32, orig->sec, orig->offset)) { + WARN("elf_create_reloc: alt_instr::instr_offset"); + return -1; + } + + if (elf_add_reloc(elf, sec, sec->sh.sh_size + 4, + R_X86_64_PC32, sym, 0)) { + WARN("elf_create_reloc: alt_instr::repl_offset"); + return -1; + } + + alt->cpuid = cpuid; + alt->instrlen = orig_len; + alt->replacementlen = repl_len; + + sec->sh.sh_size += size; + sec->changed = true; + + return 0; +} + +#define X86_FEATURE_RETPOLINE ( 7*32+12) + +int arch_rewrite_retpolines(struct objtool_file *file) +{ + struct instruction *insn; + struct reloc *reloc; + struct symbol *sym; + char name[32] = ""; + + list_for_each_entry(insn, &file->retpoline_call_list, call_node) { + + if (!strcmp(insn->sec->name, ".text.__x86.indirect_thunk")) + continue; + + reloc = insn->reloc; + + sprintf(name, "__x86_indirect_alt_%s_%s", + insn->type == INSN_JUMP_DYNAMIC ? 
"jmp" : "call", + reloc->sym->name + 21); + + sym = find_symbol_by_name(file->elf, name); + if (!sym) { + sym = elf_create_undef_symbol(file->elf, name); + if (!sym) { + WARN("elf_create_undef_symbol"); + return -1; + } + } + + if (elf_add_alternative(file->elf, insn, sym, + ALT_NOT(X86_FEATURE_RETPOLINE), 5, 5)) { + WARN("elf_add_alternative"); + return -1; + } + } + + return 0; +} + int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg) { struct cfi_reg *cfa = &insn->cfi.cfa; From f3fe1b141d2cd956ca59d142ef3b5f5cf4e5149c Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Wed, 24 Feb 2021 10:29:14 -0600 Subject: [PATCH 1464/1842] objtool: Support asm jump tables commit 99033461e685b48549ec77608b4bda75ddf772ce upstream. Objtool detection of asm jump tables would normally just work, except for the fact that asm retpolines use alternatives. Objtool thinks the alternative code path (a jump to the retpoline) is a sibling call. Don't treat alternative indirect branches as sibling calls when the original instruction has a jump table. Signed-off-by: Josh Poimboeuf Tested-by: Ard Biesheuvel Acked-by: Ard Biesheuvel Tested-by: Sami Tolvanen Acked-by: Peter Zijlstra (Intel) Acked-by: Herbert Xu Link: https://lore.kernel.org/r/460cf4dc675d64e1124146562cabd2c05aa322e8.1614182415.git.jpoimboe@redhat.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 6d62ad940fce..eb180c78f440 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -107,6 +107,18 @@ static struct instruction *prev_insn_same_sym(struct objtool_file *file, for (insn = next_insn_same_sec(file, insn); insn; \ insn = next_insn_same_sec(file, insn)) +static bool is_jump_table_jump(struct instruction *insn) +{ + struct alt_group *alt_group = insn->alt_group; + + if (insn->jump_table) + return true; + + /* Retpoline alternative for a jump table? */ + return alt_group && alt_group->orig_group && + alt_group->orig_group->first_insn->jump_table; +} + static bool is_sibling_call(struct instruction *insn) { /* @@ -119,7 +131,7 @@ static bool is_sibling_call(struct instruction *insn) /* An indirect jump is either a sibling call or a jump to a table. */ if (insn->type == INSN_JUMP_DYNAMIC) - return list_empty(&insn->alts); + return !is_jump_table_jump(insn); /* add_jump_destinations() sets insn->call_dest for sibling calls. */ return (is_static_jump(insn) && insn->call_dest); From 76474a9dd34a7a33abc82952b82f7092750f3bdc Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Tue, 1 Jun 2021 17:51:22 +0200 Subject: [PATCH 1465/1842] x86/alternative: Optimize single-byte NOPs at an arbitrary position commit 2b31e8ed96b260ce2c22bd62ecbb9458399e3b62 upstream. Up until now the assumption was that an alternative patching site would have some instructions at the beginning and trailing single-byte NOPs (0x90) padding. Therefore, the patching machinery would go and optimize those single-byte NOPs into longer ones. However, this assumption is broken on 32-bit when code like hv_do_hypercall() in hyperv_init() would use the ratpoline speculation killer CALL_NOSPEC. The 32-bit version of that macro would align certain insns to 16 bytes, leading to the compiler issuing a one or more single-byte NOPs, depending on the holes it needs to fill for alignment. 
That would lead to the warning in optimize_nops() to fire: ------------[ cut here ]------------ Not a NOP at 0xc27fb598 WARNING: CPU: 0 PID: 0 at arch/x86/kernel/alternative.c:211 optimize_nops.isra.13 due to that function verifying whether all of the following bytes really are single-byte NOPs. Therefore, carve out the NOP padding into a separate function and call it for each NOP range beginning with a single-byte NOP. Fixes: 23c1ad538f4f ("x86/alternatives: Optimize optimize_nops()") Reported-by: Richard Narron Signed-off-by: Borislav Petkov Acked-by: Peter Zijlstra (Intel) Link: https://bugzilla.kernel.org/show_bug.cgi?id=213301 Link: https://lkml.kernel.org/r/20210601212125.17145-1-bp@alien8.de Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/alternative.c | 64 +++++++++++++++++++++++++---------- 1 file changed, 46 insertions(+), 18 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index a655a9ebb2e2..0c0c4c8704e5 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -337,42 +337,70 @@ recompute_jump(struct alt_instr *a, u8 *orig_insn, u8 *repl_insn, u8 *insn_buff) n_dspl, (unsigned long)orig_insn + n_dspl + repl_len); } +/* + * optimize_nops_range() - Optimize a sequence of single byte NOPs (0x90) + * + * @instr: instruction byte stream + * @instrlen: length of the above + * @off: offset within @instr where the first NOP has been detected + * + * Return: number of NOPs found (and replaced). + */ +static __always_inline int optimize_nops_range(u8 *instr, u8 instrlen, int off) +{ + unsigned long flags; + int i = off, nnops; + + while (i < instrlen) { + if (instr[i] != 0x90) + break; + + i++; + } + + nnops = i - off; + + if (nnops <= 1) + return nnops; + + local_irq_save(flags); + add_nops(instr + off, nnops); + local_irq_restore(flags); + + DUMP_BYTES(instr, instrlen, "%px: [%d:%d) optimized NOPs: ", instr, off, i); + + return nnops; +} + /* * "noinline" to cause control flow change and thus invalidate I$ and * cause refetch after modification. */ static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *instr) { - unsigned long flags; struct insn insn; - int nop, i = 0; + int i = 0; /* - * Jump over the non-NOP insns, the remaining bytes must be single-byte - * NOPs, optimize them. + * Jump over the non-NOP insns and optimize single-byte NOPs into bigger + * ones. */ for (;;) { if (insn_decode_kernel(&insn, &instr[i])) return; + /* + * See if this and any potentially following NOPs can be + * optimized. + */ if (insn.length == 1 && insn.opcode.bytes[0] == 0x90) - break; + i += optimize_nops_range(instr, a->instrlen, i); + else + i += insn.length; - if ((i += insn.length) >= a->instrlen) + if (i >= a->instrlen) return; } - - for (nop = i; i < a->instrlen; i++) { - if (WARN_ONCE(instr[i] != 0x90, "Not a NOP at 0x%px\n", &instr[i])) - return; - } - - local_irq_save(flags); - add_nops(instr + nop, i - nop); - local_irq_restore(flags); - - DUMP_BYTES(instr, a->instrlen, "%px: [%d:%d) optimized NOPs: ", - instr, nop, a->instrlen); } /* From a0319253825ebd8c5b19e31902c4f35f85e93285 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 7 Jun 2021 11:45:58 +0200 Subject: [PATCH 1466/1842] objtool: Fix .symtab_shndx handling for elf_create_undef_symbol() commit 584fd3b31889852d0d6f3dd1e3d8e9619b660d2c upstream. When an ELF object uses extended symbol section indexes (IOW it has a .symtab_shndx section), these must be kept in sync with the regular symbol table (.symtab). 
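The reason the two tables must stay the same length is that consumers index .symtab_shndx with the very same symbol index they use for .symtab. A small sketch of how the real section index is resolved — illustrative only, not objtool's reader, and it assumes the two Elf_Data buffers were already fetched with elf_getdata():

    #include <gelf.h>

    static size_t real_shndx(Elf_Data *symdata, Elf_Data *shndxdata, int i)
    {
            GElf_Sym sym;
            Elf32_Word xshndx;

            if (!gelf_getsymshndx(symdata, shndxdata, i, &sym, &xshndx))
                    return SHN_UNDEF;

            /* SHN_XINDEX means "the real index lives in .symtab_shndx[i]". */
            if (sym.st_shndx == SHN_XINDEX)
                    return xshndx;

            return sym.st_shndx;
    }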
So for every new symbol we emit, make sure to also emit a .symtab_shndx value to keep the arrays of equal size. Note: since we're writing an UNDEF symbol, most GElf_Sym fields will be 0 and we can repurpose one (st_size) to host the 0 for the xshndx value. Fixes: 2f2f7e47f052 ("objtool: Add elf_create_undef_symbol()") Reported-by: Nick Desaulniers Suggested-by: Fangrui Song Signed-off-by: Peter Zijlstra (Intel) Tested-by: Nick Desaulniers Link: https://lkml.kernel.org/r/YL3q1qFO9QIRL/BA@hirez.programming.kicks-ass.net Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/elf.c | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index fff4c5587e3e..9c7d395f0db1 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -768,7 +768,7 @@ static int elf_add_string(struct elf *elf, struct section *strtab, char *str) struct symbol *elf_create_undef_symbol(struct elf *elf, const char *name) { - struct section *symtab; + struct section *symtab, *symtab_shndx; struct symbol *sym; Elf_Data *data; Elf_Scn *s; @@ -819,6 +819,29 @@ struct symbol *elf_create_undef_symbol(struct elf *elf, const char *name) symtab->len += data->d_size; symtab->changed = true; + symtab_shndx = find_section_by_name(elf, ".symtab_shndx"); + if (symtab_shndx) { + s = elf_getscn(elf->elf, symtab_shndx->idx); + if (!s) { + WARN_ELF("elf_getscn"); + return NULL; + } + + data = elf_newdata(s); + if (!data) { + WARN_ELF("elf_newdata"); + return NULL; + } + + data->d_buf = &sym->sym.st_size; /* conveniently 0 */ + data->d_size = sizeof(Elf32_Word); + data->d_align = 4; + data->d_type = ELF_T_WORD; + + symtab_shndx->len += 4; + symtab_shndx->changed = true; + } + sym->sec = find_section_by_index(elf, 0); elf_add_symbol(elf, sym); From e32542e9ed362bf8ea48941d965495e1593b5cef Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Thu, 10 Jun 2021 09:04:29 +0200 Subject: [PATCH 1467/1842] objtool: Only rewrite unconditional retpoline thunk calls commit 2d49b721dc18c113d5221f4cf5a6104eb66cb7f2 upstream. It turns out that the compilers generate conditional branches to the retpoline thunks like: 5d5: 0f 85 00 00 00 00 jne 5db 5d7: R_X86_64_PLT32 __x86_indirect_thunk_r11-0x4 while the rewrite can only handle JMP/CALL to the thunks. The result is the alternative wrecking the code. Make sure to skip writing the alternatives for conditional branches. Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls") Reported-by: Lukasz Majczak Reported-by: Nathan Chancellor Signed-off-by: Peter Zijlstra (Intel) Tested-by: Nathan Chancellor Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch/x86/decode.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 4154921291d0..09035e28f472 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -674,6 +674,10 @@ int arch_rewrite_retpolines(struct objtool_file *file) list_for_each_entry(insn, &file->retpoline_call_list, call_node) { + if (insn->type != INSN_JUMP_DYNAMIC && + insn->type != INSN_CALL_DYNAMIC) + continue; + if (!strcmp(insn->sec->name, ".text.__x86.indirect_thunk")) continue; From f231b2ee8533d79b93bd163e93caf578d991726b Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 21 Jun 2021 16:13:55 +0200 Subject: [PATCH 1468/1842] objtool/x86: Ignore __x86_indirect_alt_* symbols commit 31197d3a0f1caeb60fb01f6755e28347e4f44037 upstream. 
Because the __x86_indirect_alt* symbols are just that, objtool will try and validate them as regular symbols, instead of the alternative replacements that they are. This goes sideways for FRAME_POINTER=y builds; which generate a fair amount of warnings. Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls") Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Ingo Molnar Link: https://lore.kernel.org/r/YNCgxwLBiK9wclYJ@hirez.programming.kicks-ass.net Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/lib/retpoline.S | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index fa30f2326547..5385d26af6e4 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -58,12 +58,16 @@ SYM_FUNC_START_NOALIGN(__x86_indirect_alt_call_\reg) 2: .skip 5-(2b-1b), 0x90 SYM_FUNC_END(__x86_indirect_alt_call_\reg) +STACK_FRAME_NON_STANDARD(__x86_indirect_alt_call_\reg) + SYM_FUNC_START_NOALIGN(__x86_indirect_alt_jmp_\reg) ANNOTATE_RETPOLINE_SAFE 1: jmp *%\reg 2: .skip 5-(2b-1b), 0x90 SYM_FUNC_END(__x86_indirect_alt_jmp_\reg) +STACK_FRAME_NON_STANDARD(__x86_indirect_alt_jmp_\reg) + .endm /* From 364e463097a7c949e19fa93283f1a888ba96ed21 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Wed, 23 Jun 2021 10:42:28 -0500 Subject: [PATCH 1469/1842] objtool: Don't make .altinstructions writable commit e31694e0a7a709293319475d8001e05e31f2178c upstream. When objtool creates the .altinstructions section, it sets the SHF_WRITE flag to make the section writable -- unless the section had already been previously created by the kernel. The mismatch between kernel-created and objtool-created section flags can cause failures with external tooling (kpatch-build). And the section doesn't need to be writable anyway. Make the section flags consistent with the kernel's. Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls") Reported-by: Joe Lawrence Signed-off-by: Josh Poimboeuf Signed-off-by: Ingo Molnar Link: https://lore.kernel.org/r/6c284ae89717889ea136f9f0064d914cd8329d31.1624462939.git.jpoimboe@redhat.com Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch/x86/decode.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 09035e28f472..b1feb45cf215 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -611,7 +611,7 @@ static int elf_add_alternative(struct elf *elf, sec = find_section_by_name(elf, ".altinstructions"); if (!sec) { sec = elf_create_section(elf, ".altinstructions", - SHF_WRITE, size, 0); + SHF_ALLOC, size, 0); if (!sec) { WARN_ELF("elf_create_section"); From 7ea073195745a8db3cd561faba5cd9870a862045 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Thu, 30 Sep 2021 12:43:10 +0200 Subject: [PATCH 1470/1842] objtool: Teach get_alt_entry() about more relocation types commit 24ff652573754fe4c03213ebd26b17e86842feb3 upstream. Occasionally objtool encounters symbol (as opposed to section) relocations in .altinstructions. Typically they are the alternatives written by elf_add_alternative() as encountered on a noinstr validation run on vmlinux after having already ran objtool on the individual .o files. 
Basically this is the counterpart of commit 44f6a7c0755d ("objtool: Fix seg fault with Clang non-section symbols"), because when these new assemblers (binutils now also does this) strip the section symbols, elf_add_reloc_to_insn() is forced to emit symbol based relocations. As such, teach get_alt_entry() about different relocation types. Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls") Reported-by: Stephen Rothwell Reported-by: Borislav Petkov Signed-off-by: Peter Zijlstra (Intel) Acked-by: Josh Poimboeuf Tested-by: Nathan Chancellor Link: https://lore.kernel.org/r/YVWUvknIEVNkPvnP@hirez.programming.kicks-ass.net Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/special.c | 32 +++++++++++++++++++++++++------- 1 file changed, 25 insertions(+), 7 deletions(-) diff --git a/tools/objtool/special.c b/tools/objtool/special.c index 272e29cf8a0c..81d501613389 100644 --- a/tools/objtool/special.c +++ b/tools/objtool/special.c @@ -55,6 +55,24 @@ void __weak arch_handle_alternative(unsigned short feature, struct special_alt * { } +static bool reloc2sec_off(struct reloc *reloc, struct section **sec, unsigned long *off) +{ + switch (reloc->sym->type) { + case STT_FUNC: + *sec = reloc->sym->sec; + *off = reloc->sym->offset + reloc->addend; + return true; + + case STT_SECTION: + *sec = reloc->sym->sec; + *off = reloc->addend; + return true; + + default: + return false; + } +} + static int get_alt_entry(struct elf *elf, struct special_entry *entry, struct section *sec, int idx, struct special_alt *alt) @@ -87,15 +105,12 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry, WARN_FUNC("can't find orig reloc", sec, offset + entry->orig); return -1; } - if (orig_reloc->sym->type != STT_SECTION) { - WARN_FUNC("don't know how to handle non-section reloc symbol %s", + if (!reloc2sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off)) { + WARN_FUNC("don't know how to handle reloc symbol type: %s", sec, offset + entry->orig, orig_reloc->sym->name); return -1; } - alt->orig_sec = orig_reloc->sym->sec; - alt->orig_off = orig_reloc->addend; - if (!entry->group || alt->new_len) { new_reloc = find_reloc_by_dest(elf, sec, offset + entry->new); if (!new_reloc) { @@ -112,8 +127,11 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry, if (arch_is_retpoline(new_reloc->sym)) return 1; - alt->new_sec = new_reloc->sym->sec; - alt->new_off = (unsigned int)new_reloc->addend; + if (!reloc2sec_off(new_reloc, &alt->new_sec, &alt->new_off)) { + WARN_FUNC("don't know how to handle reloc symbol type: %s", + sec, offset + entry->new, new_reloc->sym->name); + return -1; + } /* _ASM_EXTABLE_EX hack */ if (alt->new_off >= 0x7ffffff0) From e7118a25a87f6b456c70f6a216b1b5042709cee7 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Sun, 3 Oct 2021 13:45:48 -0700 Subject: [PATCH 1471/1842] objtool: print out the symbol type when complaining about it commit 7fab1c12bde926c5a8c7d5984c551d0854d7e0b3 upstream. The objtool warning that the kvm instruction emulation code triggered wasn't very useful: arch/x86/kvm/emulate.o: warning: objtool: __ex_table+0x4: don't know how to handle reloc symbol type: kvm_fastop_exception in that it helpfully tells you which symbol name it had trouble figuring out the relocation for, but it doesn't actually say what the unknown symbol type was that triggered it all. In this case it was because of missing type information (type 0, aka STT_NOTYPE), but on the whole it really should just have printed that out as part of the message. 
Because if this warning triggers, that's very much the first thing you want to know - why did reloc2sec_off() return failure for that symbol? So rather than just saying you can't handle some type of symbol without saying what the type _was_, just print out the type number too. Fixes: 24ff65257375 ("objtool: Teach get_alt_entry() about more relocation types") Link: https://lore.kernel.org/lkml/CAHk-=wiZwq-0LknKhXN4M+T8jbxn_2i9mcKpO+OaBSSq_Eh7tg@mail.gmail.com/ Signed-off-by: Linus Torvalds Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/special.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/tools/objtool/special.c b/tools/objtool/special.c index 81d501613389..5f6205fa19d6 100644 --- a/tools/objtool/special.c +++ b/tools/objtool/special.c @@ -106,8 +106,10 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry, return -1; } if (!reloc2sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off)) { - WARN_FUNC("don't know how to handle reloc symbol type: %s", - sec, offset + entry->orig, orig_reloc->sym->name); + WARN_FUNC("don't know how to handle reloc symbol type %d: %s", + sec, offset + entry->orig, + orig_reloc->sym->type, + orig_reloc->sym->name); return -1; } @@ -128,8 +130,10 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry, return 1; if (!reloc2sec_off(new_reloc, &alt->new_sec, &alt->new_off)) { - WARN_FUNC("don't know how to handle reloc symbol type: %s", - sec, offset + entry->new, new_reloc->sym->name); + WARN_FUNC("don't know how to handle reloc symbol type %d: %s", + sec, offset + entry->new, + new_reloc->sym->type, + new_reloc->sym->name); return -1; } From 1afa44480b62ef3928cad4e7dea0fe076ae163c8 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Mon, 4 Oct 2021 10:07:50 -0700 Subject: [PATCH 1472/1842] objtool: Remove reloc symbol type checks in get_alt_entry() commit 4d8b35968bbf9e42b6b202eedb510e2c82ad8b38 upstream. Converting a special section's relocation reference to a symbol is straightforward. No need for objtool to complain that it doesn't know how to handle it. Just handle it. 
This fixes the following warning: arch/x86/kvm/emulate.o: warning: objtool: __ex_table+0x4: don't know how to handle reloc symbol type: kvm_fastop_exception Fixes: 24ff65257375 ("objtool: Teach get_alt_entry() about more relocation types") Reported-by: Linus Torvalds Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/feadbc3dfb3440d973580fad8d3db873cbfe1694.1633367242.git.jpoimboe@redhat.com Cc: Peter Zijlstra Cc: x86@kernel.org Cc: Miroslav Benes Cc: linux-kernel@vger.kernel.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/special.c | 36 +++++++----------------------------- 1 file changed, 7 insertions(+), 29 deletions(-) diff --git a/tools/objtool/special.c b/tools/objtool/special.c index 5f6205fa19d6..99a3a35772a5 100644 --- a/tools/objtool/special.c +++ b/tools/objtool/special.c @@ -55,22 +55,11 @@ void __weak arch_handle_alternative(unsigned short feature, struct special_alt * { } -static bool reloc2sec_off(struct reloc *reloc, struct section **sec, unsigned long *off) +static void reloc_to_sec_off(struct reloc *reloc, struct section **sec, + unsigned long *off) { - switch (reloc->sym->type) { - case STT_FUNC: - *sec = reloc->sym->sec; - *off = reloc->sym->offset + reloc->addend; - return true; - - case STT_SECTION: - *sec = reloc->sym->sec; - *off = reloc->addend; - return true; - - default: - return false; - } + *sec = reloc->sym->sec; + *off = reloc->sym->offset + reloc->addend; } static int get_alt_entry(struct elf *elf, struct special_entry *entry, @@ -105,13 +94,8 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry, WARN_FUNC("can't find orig reloc", sec, offset + entry->orig); return -1; } - if (!reloc2sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off)) { - WARN_FUNC("don't know how to handle reloc symbol type %d: %s", - sec, offset + entry->orig, - orig_reloc->sym->type, - orig_reloc->sym->name); - return -1; - } + + reloc_to_sec_off(orig_reloc, &alt->orig_sec, &alt->orig_off); if (!entry->group || alt->new_len) { new_reloc = find_reloc_by_dest(elf, sec, offset + entry->new); @@ -129,13 +113,7 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry, if (arch_is_retpoline(new_reloc->sym)) return 1; - if (!reloc2sec_off(new_reloc, &alt->new_sec, &alt->new_off)) { - WARN_FUNC("don't know how to handle reloc symbol type %d: %s", - sec, offset + entry->new, - new_reloc->sym->type, - new_reloc->sym->name); - return -1; - } + reloc_to_sec_off(new_reloc, &alt->new_sec, &alt->new_off); /* _ASM_EXTABLE_EX hack */ if (alt->new_off >= 0x7ffffff0) From e8b1128fb0d6aa97420d5012ab9f62ad262f8f77 Mon Sep 17 00:00:00 2001 From: Joe Lawrence Date: Sun, 22 Aug 2021 18:50:36 -0400 Subject: [PATCH 1473/1842] objtool: Make .altinstructions section entry size consistent commit dc02368164bd0ec603e3f5b3dd8252744a667b8a upstream. Commit e31694e0a7a7 ("objtool: Don't make .altinstructions writable") aligned objtool-created and kernel-created .altinstructions section flags, but there remains a minor discrepency in their use of a section entry size: objtool sets one while the kernel build does not. While sh_entsize of sizeof(struct alt_instr) seems intuitive, this small deviation can cause failures with external tooling (kpatch-build). Fix this by creating new .altinstructions sections with sh_entsize of 0 and then later updating sec->sh_size as alternatives are added to the section. An added benefit is avoiding the data descriptor and buffer created by elf_create_section(), but previously unused by elf_add_alternative(). 
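One plausible way such a mismatch bites external tooling — illustration only, this is not kpatch-build's actual check — is a comparison of the .altinstructions section header between a kernel-built object and an objtool-rewritten one, where sh_entsize differs even though the payload is identical:

    #include <err.h>
    #include <fcntl.h>
    #include <gelf.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Dump sh_size/sh_entsize of .altinstructions for one object file. */
    int main(int argc, char **argv)
    {
            Elf_Scn *scn = NULL;
            GElf_Shdr shdr;
            size_t shstrndx;
            Elf *elf;
            int fd;

            if (argc < 2)
                    errx(1, "usage: %s <object.o>", argv[0]);
            if (elf_version(EV_CURRENT) == EV_NONE)
                    errx(1, "libelf too old");

            fd = open(argv[1], O_RDONLY);
            if (fd < 0)
                    err(1, "open %s", argv[1]);
            elf = elf_begin(fd, ELF_C_READ, NULL);
            if (!elf || elf_getshdrstrndx(elf, &shstrndx))
                    errx(1, "elf_begin/elf_getshdrstrndx: %s", elf_errmsg(-1));

            while ((scn = elf_nextscn(elf, scn))) {
                    const char *name;

                    if (!gelf_getshdr(scn, &shdr))
                            continue;
                    name = elf_strptr(elf, shstrndx, shdr.sh_name);
                    if (!name || strcmp(name, ".altinstructions"))
                            continue;
                    printf("%s: .altinstructions sh_size=%llu sh_entsize=%llu\n",
                           argv[1], (unsigned long long)shdr.sh_size,
                           (unsigned long long)shdr.sh_entsize);
            }

            elf_end(elf);
            close(fd);
            return 0;
    }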
Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls") Signed-off-by: Joe Lawrence Reviewed-by: Miroslav Benes Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/20210822225037.54620-2-joe.lawrence@redhat.com Cc: Andy Lavr Cc: Peter Zijlstra Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch/x86/decode.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index b1feb45cf215..99b47720dce9 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -611,7 +611,7 @@ static int elf_add_alternative(struct elf *elf, sec = find_section_by_name(elf, ".altinstructions"); if (!sec) { sec = elf_create_section(elf, ".altinstructions", - SHF_ALLOC, size, 0); + SHF_ALLOC, 0, 0); if (!sec) { WARN_ELF("elf_create_section"); From 9d7ec2418a3a6669a69d96927bf9ce8f2efea444 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Thu, 24 Jun 2021 11:41:01 +0200 Subject: [PATCH 1474/1842] objtool: Introduce CFI hash commit 8b946cc38e063f0f7bb67789478c38f6d7d457c9 upstream. Andi reported that objtool on vmlinux.o consumes more memory than his system has, leading to horrific performance. This is in part because we keep a struct instruction for every instruction in the file in-memory. Shrink struct instruction by removing the CFI state (which includes full register state) from it and demand allocating it. Given most instructions don't actually change CFI state, there's lots of repetition there, so add a hash table to find previous CFI instances. Reduces memory consumption (and runtime) for processing an x86_64-allyesconfig: pre: 4:40.84 real, 143.99 user, 44.18 sys, 30624988 mem post: 2:14.61 real, 108.58 user, 25.04 sys, 16396184 mem Suggested-by: Andi Kleen Signed-off-by: Peter Zijlstra (Intel) Link: https://lore.kernel.org/r/20210624095147.756759107@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: - Don't use bswap_if_needed() since we don't have any of the other fixes for mixed-endian cross-compilation - Since we don't have "objtool: Rewrite hashtable sizing", make cfi_hash_alloc() set the number of bits similarly to elf_hash_bits() - objtool doesn't have any mcount handling - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch.h | 2 +- tools/objtool/arch/x86/decode.c | 20 ++--- tools/objtool/cfi.h | 2 + tools/objtool/check.c | 154 ++++++++++++++++++++++++++++---- tools/objtool/check.h | 2 +- tools/objtool/orc_gen.c | 15 +++- 6 files changed, 160 insertions(+), 35 deletions(-) diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h index 8918de4f8189..35e3f94413eb 100644 --- a/tools/objtool/arch.h +++ b/tools/objtool/arch.h @@ -84,7 +84,7 @@ unsigned long arch_dest_reloc_offset(int addend); const char *arch_nop_insn(int len); -int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg); +int arch_decode_hint_reg(u8 sp_reg, int *base); bool arch_is_retpoline(struct symbol *sym); diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 99b47720dce9..5005a4811157 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -706,34 +706,32 @@ int arch_rewrite_retpolines(struct objtool_file *file) return 0; } -int arch_decode_hint_reg(struct instruction *insn, u8 sp_reg) +int arch_decode_hint_reg(u8 sp_reg, int *base) { - struct cfi_reg *cfa = &insn->cfi.cfa; - switch 
(sp_reg) { case ORC_REG_UNDEFINED: - cfa->base = CFI_UNDEFINED; + *base = CFI_UNDEFINED; break; case ORC_REG_SP: - cfa->base = CFI_SP; + *base = CFI_SP; break; case ORC_REG_BP: - cfa->base = CFI_BP; + *base = CFI_BP; break; case ORC_REG_SP_INDIRECT: - cfa->base = CFI_SP_INDIRECT; + *base = CFI_SP_INDIRECT; break; case ORC_REG_R10: - cfa->base = CFI_R10; + *base = CFI_R10; break; case ORC_REG_R13: - cfa->base = CFI_R13; + *base = CFI_R13; break; case ORC_REG_DI: - cfa->base = CFI_DI; + *base = CFI_DI; break; case ORC_REG_DX: - cfa->base = CFI_DX; + *base = CFI_DX; break; default: return -1; diff --git a/tools/objtool/cfi.h b/tools/objtool/cfi.h index c7c59c6a44ee..f579802d7ec2 100644 --- a/tools/objtool/cfi.h +++ b/tools/objtool/cfi.h @@ -7,6 +7,7 @@ #define _OBJTOOL_CFI_H #include "cfi_regs.h" +#include #define CFI_UNDEFINED -1 #define CFI_CFA -2 @@ -24,6 +25,7 @@ struct cfi_init_state { }; struct cfi_state { + struct hlist_node hash; /* must be first, cficmp() */ struct cfi_reg regs[CFI_NUM_REGS]; struct cfi_reg vals[CFI_NUM_REGS]; struct cfi_reg cfa; diff --git a/tools/objtool/check.c b/tools/objtool/check.c index eb180c78f440..b6c56c7bb837 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -5,6 +5,7 @@ #include #include +#include #include "builtin.h" #include "cfi.h" @@ -25,7 +26,11 @@ struct alternative { bool skip_orig; }; -struct cfi_init_state initial_func_cfi; +static unsigned long nr_cfi, nr_cfi_reused, nr_cfi_cache; + +static struct cfi_init_state initial_func_cfi; +static struct cfi_state init_cfi; +static struct cfi_state func_cfi; struct instruction *find_insn(struct objtool_file *file, struct section *sec, unsigned long offset) @@ -265,6 +270,78 @@ static void init_insn_state(struct insn_state *state, struct section *sec) state->noinstr = sec->noinstr; } +static struct cfi_state *cfi_alloc(void) +{ + struct cfi_state *cfi = calloc(sizeof(struct cfi_state), 1); + if (!cfi) { + WARN("calloc failed"); + exit(1); + } + nr_cfi++; + return cfi; +} + +static int cfi_bits; +static struct hlist_head *cfi_hash; + +static inline bool cficmp(struct cfi_state *cfi1, struct cfi_state *cfi2) +{ + return memcmp((void *)cfi1 + sizeof(cfi1->hash), + (void *)cfi2 + sizeof(cfi2->hash), + sizeof(struct cfi_state) - sizeof(struct hlist_node)); +} + +static inline u32 cfi_key(struct cfi_state *cfi) +{ + return jhash((void *)cfi + sizeof(cfi->hash), + sizeof(*cfi) - sizeof(cfi->hash), 0); +} + +static struct cfi_state *cfi_hash_find_or_add(struct cfi_state *cfi) +{ + struct hlist_head *head = &cfi_hash[hash_min(cfi_key(cfi), cfi_bits)]; + struct cfi_state *obj; + + hlist_for_each_entry(obj, head, hash) { + if (!cficmp(cfi, obj)) { + nr_cfi_cache++; + return obj; + } + } + + obj = cfi_alloc(); + *obj = *cfi; + hlist_add_head(&obj->hash, head); + + return obj; +} + +static void cfi_hash_add(struct cfi_state *cfi) +{ + struct hlist_head *head = &cfi_hash[hash_min(cfi_key(cfi), cfi_bits)]; + + hlist_add_head(&cfi->hash, head); +} + +static void *cfi_hash_alloc(void) +{ + cfi_bits = vmlinux ? 
ELF_HASH_BITS - 3 : 13; + cfi_hash = mmap(NULL, sizeof(struct hlist_head) << cfi_bits, + PROT_READ|PROT_WRITE, + MAP_PRIVATE|MAP_ANON, -1, 0); + if (cfi_hash == (void *)-1L) { + WARN("mmap fail cfi_hash"); + cfi_hash = NULL; + } else if (stats) { + printf("cfi_bits: %d\n", cfi_bits); + } + + return cfi_hash; +} + +static unsigned long nr_insns; +static unsigned long nr_insns_visited; + /* * Call the arch-specific instruction decoder for all the instructions and add * them to the global instruction list. @@ -275,7 +352,6 @@ static int decode_instructions(struct objtool_file *file) struct symbol *func; unsigned long offset; struct instruction *insn; - unsigned long nr_insns = 0; int ret; for_each_sec(file, sec) { @@ -301,7 +377,6 @@ static int decode_instructions(struct objtool_file *file) memset(insn, 0, sizeof(*insn)); INIT_LIST_HEAD(&insn->alts); INIT_LIST_HEAD(&insn->stack_ops); - init_cfi_state(&insn->cfi); insn->sec = sec; insn->offset = offset; @@ -1077,7 +1152,6 @@ static int handle_group_alt(struct objtool_file *file, memset(nop, 0, sizeof(*nop)); INIT_LIST_HEAD(&nop->alts); INIT_LIST_HEAD(&nop->stack_ops); - init_cfi_state(&nop->cfi); nop->sec = special_alt->new_sec; nop->offset = special_alt->new_off + special_alt->new_len; @@ -1454,10 +1528,11 @@ static void set_func_state(struct cfi_state *state) static int read_unwind_hints(struct objtool_file *file) { + struct cfi_state cfi = init_cfi; struct section *sec, *relocsec; - struct reloc *reloc; struct unwind_hint *hint; struct instruction *insn; + struct reloc *reloc; int i; sec = find_section_by_name(file->elf, ".discard.unwind_hints"); @@ -1495,19 +1570,24 @@ static int read_unwind_hints(struct objtool_file *file) insn->hint = true; if (hint->type == UNWIND_HINT_TYPE_FUNC) { - set_func_state(&insn->cfi); + insn->cfi = &func_cfi; continue; } - if (arch_decode_hint_reg(insn, hint->sp_reg)) { + if (insn->cfi) + cfi = *(insn->cfi); + + if (arch_decode_hint_reg(hint->sp_reg, &cfi.cfa.base)) { WARN_FUNC("unsupported unwind_hint sp base reg %d", insn->sec, insn->offset, hint->sp_reg); return -1; } - insn->cfi.cfa.offset = hint->sp_offset; - insn->cfi.type = hint->type; - insn->cfi.end = hint->end; + cfi.cfa.offset = hint->sp_offset; + cfi.type = hint->type; + cfi.end = hint->end; + + insn->cfi = cfi_hash_find_or_add(&cfi); } return 0; @@ -2283,13 +2363,18 @@ static int propagate_alt_cfi(struct objtool_file *file, struct instruction *insn if (!insn->alt_group) return 0; + if (!insn->cfi) { + WARN("CFI missing"); + return -1; + } + alt_cfi = insn->alt_group->cfi; group_off = insn->offset - insn->alt_group->first_insn->offset; if (!alt_cfi[group_off]) { - alt_cfi[group_off] = &insn->cfi; + alt_cfi[group_off] = insn->cfi; } else { - if (memcmp(alt_cfi[group_off], &insn->cfi, sizeof(struct cfi_state))) { + if (cficmp(alt_cfi[group_off], insn->cfi)) { WARN_FUNC("stack layout conflict in alternatives", insn->sec, insn->offset); return -1; @@ -2335,9 +2420,14 @@ static int handle_insn_ops(struct instruction *insn, struct insn_state *state) static bool insn_cfi_match(struct instruction *insn, struct cfi_state *cfi2) { - struct cfi_state *cfi1 = &insn->cfi; + struct cfi_state *cfi1 = insn->cfi; int i; + if (!cfi1) { + WARN("CFI missing"); + return false; + } + if (memcmp(&cfi1->cfa, &cfi2->cfa, sizeof(cfi1->cfa))) { WARN_FUNC("stack state mismatch: cfa1=%d%+d cfa2=%d%+d", @@ -2522,7 +2612,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, struct instruction *insn, struct insn_state state) { struct alternative *alt; - 
struct instruction *next_insn; + struct instruction *next_insn, *prev_insn = NULL; struct section *sec; u8 visited; int ret; @@ -2551,15 +2641,25 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, if (insn->visited & visited) return 0; + } else { + nr_insns_visited++; } if (state.noinstr) state.instr += insn->instr; - if (insn->hint) - state.cfi = insn->cfi; - else - insn->cfi = state.cfi; + if (insn->hint) { + state.cfi = *insn->cfi; + } else { + /* XXX track if we actually changed state.cfi */ + + if (prev_insn && !cficmp(prev_insn->cfi, &state.cfi)) { + insn->cfi = prev_insn->cfi; + nr_cfi_reused++; + } else { + insn->cfi = cfi_hash_find_or_add(&state.cfi); + } + } insn->visited |= visited; @@ -2709,6 +2809,7 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, return 1; } + prev_insn = insn; insn = next_insn; } @@ -2964,10 +3065,20 @@ int check(struct objtool_file *file) int ret, warnings = 0; arch_initial_func_cfi_state(&initial_func_cfi); + init_cfi_state(&init_cfi); + init_cfi_state(&func_cfi); + set_func_state(&func_cfi); + + if (!cfi_hash_alloc()) + goto out; + + cfi_hash_add(&init_cfi); + cfi_hash_add(&func_cfi); ret = decode_sections(file); if (ret < 0) goto out; + warnings += ret; if (list_empty(&file->insn_list)) @@ -3011,6 +3122,13 @@ int check(struct objtool_file *file) goto out; warnings += ret; + if (stats) { + printf("nr_insns_visited: %ld\n", nr_insns_visited); + printf("nr_cfi: %ld\n", nr_cfi); + printf("nr_cfi_reused: %ld\n", nr_cfi_reused); + printf("nr_cfi_cache: %ld\n", nr_cfi_cache); + } + out: /* * For now, don't fail the kernel build on fatal warnings. These diff --git a/tools/objtool/check.h b/tools/objtool/check.h index ca3259eebac8..508f712deba1 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -59,7 +59,7 @@ struct instruction { struct list_head alts; struct symbol *func; struct list_head stack_ops; - struct cfi_state cfi; + struct cfi_state *cfi; }; static inline bool is_static_jump(struct instruction *insn) diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c index 671f3b296c1a..812b33ed9f65 100644 --- a/tools/objtool/orc_gen.c +++ b/tools/objtool/orc_gen.c @@ -12,13 +12,19 @@ #include "check.h" #include "warn.h" -static int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi) +static int init_orc_entry(struct orc_entry *orc, struct cfi_state *cfi, + struct instruction *insn) { - struct instruction *insn = container_of(cfi, struct instruction, cfi); struct cfi_reg *bp = &cfi->regs[CFI_BP]; memset(orc, 0, sizeof(*orc)); + if (!cfi) { + orc->end = 0; + orc->sp_reg = ORC_REG_UNDEFINED; + return 0; + } + orc->end = cfi->end; if (cfi->cfa.base == CFI_UNDEFINED) { @@ -159,7 +165,7 @@ int orc_create(struct objtool_file *file) int i; if (!alt_group) { - if (init_orc_entry(&orc, &insn->cfi)) + if (init_orc_entry(&orc, insn->cfi, insn)) return -1; if (!memcmp(&prev_orc, &orc, sizeof(orc))) continue; @@ -183,7 +189,8 @@ int orc_create(struct objtool_file *file) struct cfi_state *cfi = alt_group->cfi[i]; if (!cfi) continue; - if (init_orc_entry(&orc, cfi)) + /* errors are reported on the original insn */ + if (init_orc_entry(&orc, cfi, insn)) return -1; if (!memcmp(&prev_orc, &orc, sizeof(orc))) continue; From acc0be56b4152046aac56b48a70729925036b187 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Thu, 24 Jun 2021 11:41:02 +0200 Subject: [PATCH 1475/1842] objtool: Handle __sanitize_cov*() tail calls commit f56dae88a81fded66adf2bea9922d1d98d1da14f upstream. 
Turns out the compilers also generate tail calls to __sanitize_cov*(), make sure to also patch those out in noinstr code. Fixes: 0f1441b44e82 ("objtool: Fix noinstr vs KCOV") Signed-off-by: Peter Zijlstra (Intel) Acked-by: Marco Elver Link: https://lore.kernel.org/r/20210624095147.818783799@infradead.org Signed-off-by: Sasha Levin [bwh: Backported to 5.10: - objtool doesn't have any mcount handling - Write the NOPs as hex literals since we can't use ] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch.h | 1 + tools/objtool/arch/x86/decode.c | 20 ++++++ tools/objtool/check.c | 123 +++++++++++++++++--------------- 3 files changed, 86 insertions(+), 58 deletions(-) diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h index 35e3f94413eb..0031a27b6ad0 100644 --- a/tools/objtool/arch.h +++ b/tools/objtool/arch.h @@ -83,6 +83,7 @@ unsigned long arch_jump_destination(struct instruction *insn); unsigned long arch_dest_reloc_offset(int addend); const char *arch_nop_insn(int len); +const char *arch_ret_insn(int len); int arch_decode_hint_reg(u8 sp_reg, int *base); diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 5005a4811157..4fb50239b82a 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -586,6 +586,26 @@ const char *arch_nop_insn(int len) return nops[len-1]; } +#define BYTE_RET 0xC3 + +const char *arch_ret_insn(int len) +{ + static const char ret[5][5] = { + { BYTE_RET }, + { BYTE_RET, 0x90 }, + { BYTE_RET, 0x66, 0x90 }, + { BYTE_RET, 0x0f, 0x1f, 0x00 }, + { BYTE_RET, 0x0f, 0x1f, 0x40, 0x00 }, + }; + + if (len < 1 || len > 5) { + WARN("invalid RET size: %d\n", len); + return NULL; + } + + return ret[len-1]; +} + /* asm/alternative.h ? */ #define ALTINSTR_FLAG_INV (1 << 15) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index b6c56c7bb837..b397868aa6f2 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -860,6 +860,60 @@ static struct reloc *insn_reloc(struct objtool_file *file, struct instruction *i return insn->reloc; } +static void remove_insn_ops(struct instruction *insn) +{ + struct stack_op *op, *tmp; + + list_for_each_entry_safe(op, tmp, &insn->stack_ops, list) { + list_del(&op->list); + free(op); + } +} + +static void add_call_dest(struct objtool_file *file, struct instruction *insn, + struct symbol *dest, bool sibling) +{ + struct reloc *reloc = insn_reloc(file, insn); + + insn->call_dest = dest; + if (!dest) + return; + + if (insn->call_dest->static_call_tramp) { + list_add_tail(&insn->call_node, + &file->static_call_list); + } + + /* + * Many compilers cannot disable KCOV with a function attribute + * so they need a little help, NOP out any KCOV calls from noinstr + * text. + */ + if (insn->sec->noinstr && + !strncmp(insn->call_dest->name, "__sanitizer_cov_", 16)) { + if (reloc) { + reloc->type = R_NONE; + elf_write_reloc(file->elf, reloc); + } + + elf_write_insn(file->elf, insn->sec, + insn->offset, insn->len, + sibling ? arch_ret_insn(insn->len) + : arch_nop_insn(insn->len)); + + insn->type = sibling ? INSN_RETURN : INSN_NOP; + } + + /* + * Whatever stack impact regular CALLs have, should be undone + * by the RETURN of the called function. + * + * Annotated intra-function calls retain the stack_ops but + * are converted to JUMP, see read_intra_function_calls(). + */ + remove_insn_ops(insn); +} + /* * Find the destination instructions for all jumps. 
*/ @@ -898,11 +952,7 @@ static int add_jump_destinations(struct objtool_file *file) continue; } else if (insn->func) { /* internal or external sibling call (with reloc) */ - insn->call_dest = reloc->sym; - if (insn->call_dest->static_call_tramp) { - list_add_tail(&insn->call_node, - &file->static_call_list); - } + add_call_dest(file, insn, reloc->sym, true); continue; } else if (reloc->sym->sec->idx) { dest_sec = reloc->sym->sec; @@ -958,13 +1008,8 @@ static int add_jump_destinations(struct objtool_file *file) } else if (insn->jump_dest->func->pfunc != insn->func->pfunc && insn->jump_dest->offset == insn->jump_dest->func->offset) { - /* internal sibling call (without reloc) */ - insn->call_dest = insn->jump_dest->func; - if (insn->call_dest->static_call_tramp) { - list_add_tail(&insn->call_node, - &file->static_call_list); - } + add_call_dest(file, insn, insn->jump_dest->func, true); } } } @@ -972,16 +1017,6 @@ static int add_jump_destinations(struct objtool_file *file) return 0; } -static void remove_insn_ops(struct instruction *insn) -{ - struct stack_op *op, *tmp; - - list_for_each_entry_safe(op, tmp, &insn->stack_ops, list) { - list_del(&op->list); - free(op); - } -} - static struct symbol *find_call_destination(struct section *sec, unsigned long offset) { struct symbol *call_dest; @@ -1000,6 +1035,7 @@ static int add_call_destinations(struct objtool_file *file) { struct instruction *insn; unsigned long dest_off; + struct symbol *dest; struct reloc *reloc; for_each_insn(file, insn) { @@ -1009,7 +1045,9 @@ static int add_call_destinations(struct objtool_file *file) reloc = insn_reloc(file, insn); if (!reloc) { dest_off = arch_jump_destination(insn); - insn->call_dest = find_call_destination(insn->sec, dest_off); + dest = find_call_destination(insn->sec, dest_off); + + add_call_dest(file, insn, dest, false); if (insn->ignore) continue; @@ -1027,9 +1065,8 @@ static int add_call_destinations(struct objtool_file *file) } else if (reloc->sym->type == STT_SECTION) { dest_off = arch_dest_reloc_offset(reloc->addend); - insn->call_dest = find_call_destination(reloc->sym->sec, - dest_off); - if (!insn->call_dest) { + dest = find_call_destination(reloc->sym->sec, dest_off); + if (!dest) { WARN_FUNC("can't find call dest symbol at %s+0x%lx", insn->sec, insn->offset, reloc->sym->sec->name, @@ -1037,6 +1074,8 @@ static int add_call_destinations(struct objtool_file *file) return -1; } + add_call_dest(file, insn, dest, false); + } else if (arch_is_retpoline(reloc->sym)) { /* * Retpoline calls are really dynamic calls in @@ -1052,39 +1091,7 @@ static int add_call_destinations(struct objtool_file *file) continue; } else - insn->call_dest = reloc->sym; - - if (insn->call_dest && insn->call_dest->static_call_tramp) { - list_add_tail(&insn->call_node, - &file->static_call_list); - } - - /* - * Many compilers cannot disable KCOV with a function attribute - * so they need a little help, NOP out any KCOV calls from noinstr - * text. - */ - if (insn->sec->noinstr && - !strncmp(insn->call_dest->name, "__sanitizer_cov_", 16)) { - if (reloc) { - reloc->type = R_NONE; - elf_write_reloc(file->elf, reloc); - } - - elf_write_insn(file->elf, insn->sec, - insn->offset, insn->len, - arch_nop_insn(insn->len)); - insn->type = INSN_NOP; - } - - /* - * Whatever stack impact regular CALLs have, should be undone - * by the RETURN of the called function. - * - * Annotated intra-function calls retain the stack_ops but - * are converted to JUMP, see read_intra_function_calls(). 
- */ - remove_insn_ops(insn); + add_call_dest(file, insn, reloc->sym, false); } return 0; From 6e4676f438f8a454d85c12ffa2abf613f9a8f75c Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:33 +0200 Subject: [PATCH 1476/1842] objtool: Classify symbols commit 1739c66eb7bd5f27f1b69a5a26e10e8327d1e136 upstream. In order to avoid calling str*cmp() on symbol names, over and over, do them all once upfront and store the result. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120309.658539311@infradead.org [cascardo: no pv_target on struct symbol, because of missing db2b0c5d7b6f19b3c2cab08c531b65342eb5252b] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: objtool doesn't have any mcount handling] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 32 +++++++++++++++++++++----------- tools/objtool/elf.h | 7 +++++-- 2 files changed, 26 insertions(+), 13 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index b397868aa6f2..01b064b6182d 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -889,8 +889,7 @@ static void add_call_dest(struct objtool_file *file, struct instruction *insn, * so they need a little help, NOP out any KCOV calls from noinstr * text. */ - if (insn->sec->noinstr && - !strncmp(insn->call_dest->name, "__sanitizer_cov_", 16)) { + if (insn->sec->noinstr && insn->call_dest->kcov) { if (reloc) { reloc->type = R_NONE; elf_write_reloc(file->elf, reloc); @@ -935,7 +934,7 @@ static int add_jump_destinations(struct objtool_file *file) } else if (reloc->sym->type == STT_SECTION) { dest_sec = reloc->sym->sec; dest_off = arch_dest_reloc_offset(reloc->addend); - } else if (arch_is_retpoline(reloc->sym)) { + } else if (reloc->sym->retpoline_thunk) { /* * Retpoline jumps are really dynamic jumps in * disguise, so convert them accordingly. @@ -1076,7 +1075,7 @@ static int add_call_destinations(struct objtool_file *file) add_call_dest(file, insn, dest, false); - } else if (arch_is_retpoline(reloc->sym)) { + } else if (reloc->sym->retpoline_thunk) { /* * Retpoline calls are really dynamic calls in * disguise, so convert them accordingly. @@ -1733,17 +1732,28 @@ static int read_intra_function_calls(struct objtool_file *file) return 0; } -static int read_static_call_tramps(struct objtool_file *file) +static int classify_symbols(struct objtool_file *file) { struct section *sec; struct symbol *func; for_each_sec(file, sec) { list_for_each_entry(func, &sec->symbol_list, list) { - if (func->bind == STB_GLOBAL && - !strncmp(func->name, STATIC_CALL_TRAMP_PREFIX_STR, + if (func->bind != STB_GLOBAL) + continue; + + if (!strncmp(func->name, STATIC_CALL_TRAMP_PREFIX_STR, strlen(STATIC_CALL_TRAMP_PREFIX_STR))) func->static_call_tramp = true; + + if (arch_is_retpoline(func)) + func->retpoline_thunk = true; + + if (!strcmp(func->name, "__fentry__")) + func->fentry = true; + + if (!strncmp(func->name, "__sanitizer_cov_", 16)) + func->kcov = true; } } @@ -1805,7 +1815,7 @@ static int decode_sections(struct objtool_file *file) /* * Must be before add_{jump_call}_destination. 
*/ - ret = read_static_call_tramps(file); + ret = classify_symbols(file); if (ret) return ret; @@ -1863,9 +1873,9 @@ static int decode_sections(struct objtool_file *file) static bool is_fentry_call(struct instruction *insn) { - if (insn->type == INSN_CALL && insn->call_dest && - insn->call_dest->type == STT_NOTYPE && - !strcmp(insn->call_dest->name, "__fentry__")) + if (insn->type == INSN_CALL && + insn->call_dest && + insn->call_dest->fentry) return true; return false; diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index 45e5ede363b0..b51e35825dad 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -55,8 +55,11 @@ struct symbol { unsigned long offset; unsigned int len; struct symbol *pfunc, *cfunc, *alias; - bool uaccess_safe; - bool static_call_tramp; + u8 uaccess_safe : 1; + u8 static_call_tramp : 1; + u8 retpoline_thunk : 1; + u8 fentry : 1; + u8 kcov : 1; }; struct reloc { From 023e78bbf13c15a55013ca25641509a6697170e1 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:34 +0200 Subject: [PATCH 1477/1842] objtool: Explicitly avoid self modifying code in .altinstr_replacement commit dd003edeffa3cb87bc9862582004f405d77d7670 upstream. Assume ALTERNATIVE()s know what they're doing and do not change, or cause to change, instructions in .altinstr_replacement sections. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120309.722511775@infradead.org [cascardo: context adjustment] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: objtool doesn't have any mcount handling] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 36 ++++++++++++++++++++++++++++-------- 1 file changed, 28 insertions(+), 8 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 01b064b6182d..a70f3811918b 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -870,18 +870,27 @@ static void remove_insn_ops(struct instruction *insn) } } -static void add_call_dest(struct objtool_file *file, struct instruction *insn, - struct symbol *dest, bool sibling) +static void annotate_call_site(struct objtool_file *file, + struct instruction *insn, bool sibling) { struct reloc *reloc = insn_reloc(file, insn); + struct symbol *sym = insn->call_dest; - insn->call_dest = dest; - if (!dest) + if (!sym) + sym = reloc->sym; + + /* + * Alternative replacement code is just template code which is + * sometimes copied to the original instruction. For now, don't + * annotate it. (In the future we might consider annotating the + * original instruction if/when it ever makes sense to do so.) + */ + if (!strcmp(insn->sec->name, ".altinstr_replacement")) return; - if (insn->call_dest->static_call_tramp) { - list_add_tail(&insn->call_node, - &file->static_call_list); + if (sym->static_call_tramp) { + list_add_tail(&insn->call_node, &file->static_call_list); + return; } /* @@ -889,7 +898,7 @@ static void add_call_dest(struct objtool_file *file, struct instruction *insn, * so they need a little help, NOP out any KCOV calls from noinstr * text. */ - if (insn->sec->noinstr && insn->call_dest->kcov) { + if (insn->sec->noinstr && sym->kcov) { if (reloc) { reloc->type = R_NONE; elf_write_reloc(file->elf, reloc); @@ -901,7 +910,16 @@ static void add_call_dest(struct objtool_file *file, struct instruction *insn, : arch_nop_insn(insn->len)); insn->type = sibling ? 
INSN_RETURN : INSN_NOP; + return; } +} + +static void add_call_dest(struct objtool_file *file, struct instruction *insn, + struct symbol *dest, bool sibling) +{ + insn->call_dest = dest; + if (!dest) + return; /* * Whatever stack impact regular CALLs have, should be undone @@ -911,6 +929,8 @@ static void add_call_dest(struct objtool_file *file, struct instruction *insn, * are converted to JUMP, see read_intra_function_calls(). */ remove_insn_ops(insn); + + annotate_call_site(file, insn, sibling); } /* From 908bd980a80ea23ff834c3fff828c3d37ada6cc2 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:36 +0200 Subject: [PATCH 1478/1842] objtool,x86: Replace alternatives with .retpoline_sites commit 134ab5bd1883312d7a4b3033b05c6b5a1bb8889b upstream. Instead of writing complete alternatives, simply provide a list of all the retpoline thunk calls. Then the kernel is free to do with them as it pleases. Simpler code all-round. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120309.850007165@infradead.org [cascardo: fixed conflict because of missing 8b946cc38e063f0f7bb67789478c38f6d7d457c9] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: deleted functions had slightly different code] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/vmlinux.lds.S | 14 ++++ tools/objtool/arch/x86/decode.c | 120 ----------------------------- tools/objtool/check.c | 132 ++++++++++++++++++++++---------- tools/objtool/elf.c | 83 -------------------- tools/objtool/elf.h | 1 - tools/objtool/special.c | 8 -- 6 files changed, 107 insertions(+), 251 deletions(-) diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index bf9e0adb5b7e..d7085a94cfed 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -272,6 +272,20 @@ SECTIONS __parainstructions_end = .; } +#ifdef CONFIG_RETPOLINE + /* + * List of instructions that call/jmp/jcc to retpoline thunks + * __x86_indirect_thunk_*(). These instructions can be patched along + * with alternatives, after which the section can be freed. + */ + . = ALIGN(8); + .retpoline_sites : AT(ADDR(.retpoline_sites) - LOAD_OFFSET) { + __retpoline_sites = .; + *(.retpoline_sites) + __retpoline_sites_end = .; + } +#endif + /* * struct alt_inst entries. From the header (alternative.h): * "Alternative instructions for different CPU types or capabilities" diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 4fb50239b82a..32a810429306 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -606,126 +606,6 @@ const char *arch_ret_insn(int len) return ret[len-1]; } -/* asm/alternative.h ? 
*/ - -#define ALTINSTR_FLAG_INV (1 << 15) -#define ALT_NOT(feat) ((feat) | ALTINSTR_FLAG_INV) - -struct alt_instr { - s32 instr_offset; /* original instruction */ - s32 repl_offset; /* offset to replacement instruction */ - u16 cpuid; /* cpuid bit set for replacement */ - u8 instrlen; /* length of original instruction */ - u8 replacementlen; /* length of new instruction */ -} __packed; - -static int elf_add_alternative(struct elf *elf, - struct instruction *orig, struct symbol *sym, - int cpuid, u8 orig_len, u8 repl_len) -{ - const int size = sizeof(struct alt_instr); - struct alt_instr *alt; - struct section *sec; - Elf_Scn *s; - - sec = find_section_by_name(elf, ".altinstructions"); - if (!sec) { - sec = elf_create_section(elf, ".altinstructions", - SHF_ALLOC, 0, 0); - - if (!sec) { - WARN_ELF("elf_create_section"); - return -1; - } - } - - s = elf_getscn(elf->elf, sec->idx); - if (!s) { - WARN_ELF("elf_getscn"); - return -1; - } - - sec->data = elf_newdata(s); - if (!sec->data) { - WARN_ELF("elf_newdata"); - return -1; - } - - sec->data->d_size = size; - sec->data->d_align = 1; - - alt = sec->data->d_buf = malloc(size); - if (!sec->data->d_buf) { - perror("malloc"); - return -1; - } - memset(sec->data->d_buf, 0, size); - - if (elf_add_reloc_to_insn(elf, sec, sec->sh.sh_size, - R_X86_64_PC32, orig->sec, orig->offset)) { - WARN("elf_create_reloc: alt_instr::instr_offset"); - return -1; - } - - if (elf_add_reloc(elf, sec, sec->sh.sh_size + 4, - R_X86_64_PC32, sym, 0)) { - WARN("elf_create_reloc: alt_instr::repl_offset"); - return -1; - } - - alt->cpuid = cpuid; - alt->instrlen = orig_len; - alt->replacementlen = repl_len; - - sec->sh.sh_size += size; - sec->changed = true; - - return 0; -} - -#define X86_FEATURE_RETPOLINE ( 7*32+12) - -int arch_rewrite_retpolines(struct objtool_file *file) -{ - struct instruction *insn; - struct reloc *reloc; - struct symbol *sym; - char name[32] = ""; - - list_for_each_entry(insn, &file->retpoline_call_list, call_node) { - - if (insn->type != INSN_JUMP_DYNAMIC && - insn->type != INSN_CALL_DYNAMIC) - continue; - - if (!strcmp(insn->sec->name, ".text.__x86.indirect_thunk")) - continue; - - reloc = insn->reloc; - - sprintf(name, "__x86_indirect_alt_%s_%s", - insn->type == INSN_JUMP_DYNAMIC ? 
"jmp" : "call", - reloc->sym->name + 21); - - sym = find_symbol_by_name(file->elf, name); - if (!sym) { - sym = elf_create_undef_symbol(file->elf, name); - if (!sym) { - WARN("elf_create_undef_symbol"); - return -1; - } - } - - if (elf_add_alternative(file->elf, insn, sym, - ALT_NOT(X86_FEATURE_RETPOLINE), 5, 5)) { - WARN("elf_add_alternative"); - return -1; - } - } - - return 0; -} - int arch_decode_hint_reg(u8 sp_reg, int *base) { switch (sp_reg) { diff --git a/tools/objtool/check.c b/tools/objtool/check.c index a70f3811918b..09e7807f83ee 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -606,6 +606,52 @@ static int create_static_call_sections(struct objtool_file *file) return 0; } +static int create_retpoline_sites_sections(struct objtool_file *file) +{ + struct instruction *insn; + struct section *sec; + int idx; + + sec = find_section_by_name(file->elf, ".retpoline_sites"); + if (sec) { + WARN("file already has .retpoline_sites, skipping"); + return 0; + } + + idx = 0; + list_for_each_entry(insn, &file->retpoline_call_list, call_node) + idx++; + + if (!idx) + return 0; + + sec = elf_create_section(file->elf, ".retpoline_sites", 0, + sizeof(int), idx); + if (!sec) { + WARN("elf_create_section: .retpoline_sites"); + return -1; + } + + idx = 0; + list_for_each_entry(insn, &file->retpoline_call_list, call_node) { + + int *site = (int *)sec->data->d_buf + idx; + *site = 0; + + if (elf_add_reloc_to_insn(file->elf, sec, + idx * sizeof(int), + R_X86_64_PC32, + insn->sec, insn->offset)) { + WARN("elf_add_reloc_to_insn: .retpoline_sites"); + return -1; + } + + idx++; + } + + return 0; +} + /* * Warnings shouldn't be reported for ignored functions. */ @@ -893,6 +939,11 @@ static void annotate_call_site(struct objtool_file *file, return; } + if (sym->retpoline_thunk) { + list_add_tail(&insn->call_node, &file->retpoline_call_list); + return; + } + /* * Many compilers cannot disable KCOV with a function attribute * so they need a little help, NOP out any KCOV calls from noinstr @@ -933,6 +984,39 @@ static void add_call_dest(struct objtool_file *file, struct instruction *insn, annotate_call_site(file, insn, sibling); } +static void add_retpoline_call(struct objtool_file *file, struct instruction *insn) +{ + /* + * Retpoline calls/jumps are really dynamic calls/jumps in disguise, + * so convert them accordingly. + */ + switch (insn->type) { + case INSN_CALL: + insn->type = INSN_CALL_DYNAMIC; + break; + case INSN_JUMP_UNCONDITIONAL: + insn->type = INSN_JUMP_DYNAMIC; + break; + case INSN_JUMP_CONDITIONAL: + insn->type = INSN_JUMP_DYNAMIC_CONDITIONAL; + break; + default: + return; + } + + insn->retpoline_safe = true; + + /* + * Whatever stack impact regular CALLs have, should be undone + * by the RETURN of the called function. + * + * Annotated intra-function calls retain the stack_ops but + * are converted to JUMP, see read_intra_function_calls(). + */ + remove_insn_ops(insn); + + annotate_call_site(file, insn, false); +} /* * Find the destination instructions for all jumps. */ @@ -955,19 +1039,7 @@ static int add_jump_destinations(struct objtool_file *file) dest_sec = reloc->sym->sec; dest_off = arch_dest_reloc_offset(reloc->addend); } else if (reloc->sym->retpoline_thunk) { - /* - * Retpoline jumps are really dynamic jumps in - * disguise, so convert them accordingly. 
- */ - if (insn->type == INSN_JUMP_UNCONDITIONAL) - insn->type = INSN_JUMP_DYNAMIC; - else - insn->type = INSN_JUMP_DYNAMIC_CONDITIONAL; - - list_add_tail(&insn->call_node, - &file->retpoline_call_list); - - insn->retpoline_safe = true; + add_retpoline_call(file, insn); continue; } else if (insn->func) { /* internal or external sibling call (with reloc) */ @@ -1096,18 +1168,7 @@ static int add_call_destinations(struct objtool_file *file) add_call_dest(file, insn, dest, false); } else if (reloc->sym->retpoline_thunk) { - /* - * Retpoline calls are really dynamic calls in - * disguise, so convert them accordingly. - */ - insn->type = INSN_CALL_DYNAMIC; - insn->retpoline_safe = true; - - list_add_tail(&insn->call_node, - &file->retpoline_call_list); - - remove_insn_ops(insn); - continue; + add_retpoline_call(file, insn); } else add_call_dest(file, insn, reloc->sym, false); @@ -1806,11 +1867,6 @@ static void mark_rodata(struct objtool_file *file) file->rodata = found; } -__weak int arch_rewrite_retpolines(struct objtool_file *file) -{ - return 0; -} - static int decode_sections(struct objtool_file *file) { int ret; @@ -1879,15 +1935,6 @@ static int decode_sections(struct objtool_file *file) if (ret) return ret; - /* - * Must be after add_special_section_alts(), since this will emit - * alternatives. Must be after add_{jump,call}_destination(), since - * those create the call insn lists. - */ - ret = arch_rewrite_retpolines(file); - if (ret) - return ret; - return 0; } @@ -3159,6 +3206,13 @@ int check(struct objtool_file *file) goto out; warnings += ret; + if (retpoline) { + ret = create_retpoline_sites_sections(file); + if (ret < 0) + goto out; + warnings += ret; + } + if (stats) { printf("nr_insns_visited: %ld\n", nr_insns_visited); printf("nr_cfi: %ld\n", nr_cfi); diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index 9c7d395f0db1..74f4b3bb7961 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -766,89 +766,6 @@ static int elf_add_string(struct elf *elf, struct section *strtab, char *str) return len; } -struct symbol *elf_create_undef_symbol(struct elf *elf, const char *name) -{ - struct section *symtab, *symtab_shndx; - struct symbol *sym; - Elf_Data *data; - Elf_Scn *s; - - sym = malloc(sizeof(*sym)); - if (!sym) { - perror("malloc"); - return NULL; - } - memset(sym, 0, sizeof(*sym)); - - sym->name = strdup(name); - - sym->sym.st_name = elf_add_string(elf, NULL, sym->name); - if (sym->sym.st_name == -1) - return NULL; - - sym->sym.st_info = GELF_ST_INFO(STB_GLOBAL, STT_NOTYPE); - // st_other 0 - // st_shndx 0 - // st_value 0 - // st_size 0 - - symtab = find_section_by_name(elf, ".symtab"); - if (!symtab) { - WARN("can't find .symtab"); - return NULL; - } - - s = elf_getscn(elf->elf, symtab->idx); - if (!s) { - WARN_ELF("elf_getscn"); - return NULL; - } - - data = elf_newdata(s); - if (!data) { - WARN_ELF("elf_newdata"); - return NULL; - } - - data->d_buf = &sym->sym; - data->d_size = sizeof(sym->sym); - data->d_align = 1; - - sym->idx = symtab->len / sizeof(sym->sym); - - symtab->len += data->d_size; - symtab->changed = true; - - symtab_shndx = find_section_by_name(elf, ".symtab_shndx"); - if (symtab_shndx) { - s = elf_getscn(elf->elf, symtab_shndx->idx); - if (!s) { - WARN_ELF("elf_getscn"); - return NULL; - } - - data = elf_newdata(s); - if (!data) { - WARN_ELF("elf_newdata"); - return NULL; - } - - data->d_buf = &sym->sym.st_size; /* conveniently 0 */ - data->d_size = sizeof(Elf32_Word); - data->d_align = 4; - data->d_type = ELF_T_WORD; - - symtab_shndx->len += 4; - 
symtab_shndx->changed = true; - } - - sym->sec = find_section_by_index(elf, 0); - - elf_add_symbol(elf, sym); - - return sym; -} - struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr) { diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index b51e35825dad..de00cac9aede 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -136,7 +136,6 @@ int elf_write_insn(struct elf *elf, struct section *sec, unsigned long offset, unsigned int len, const char *insn); int elf_write_reloc(struct elf *elf, struct reloc *reloc); -struct symbol *elf_create_undef_symbol(struct elf *elf, const char *name); int elf_write(struct elf *elf); void elf_close(struct elf *elf); diff --git a/tools/objtool/special.c b/tools/objtool/special.c index 99a3a35772a5..aff0cee7bac1 100644 --- a/tools/objtool/special.c +++ b/tools/objtool/special.c @@ -105,14 +105,6 @@ static int get_alt_entry(struct elf *elf, struct special_entry *entry, return -1; } - /* - * Skip retpoline .altinstr_replacement... we already rewrite the - * instructions for retpolines anyway, see arch_is_retpoline() - * usage in add_{call,jump}_destinations(). - */ - if (arch_is_retpoline(new_reloc->sym)) - return 1; - reloc_to_sec_off(new_reloc, &alt->new_sec, &alt->new_off); /* _ASM_EXTABLE_EX hack */ From ccb8fc65a3e89815e43abc4f263b3a47aa7397dd Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:37 +0200 Subject: [PATCH 1479/1842] x86/retpoline: Remove unused replacement symbols commit 4fe79e710d9574a14993f8b4e16b7252da72d5e8 upstream. Now that objtool no longer creates alternatives, these replacement symbols are no longer needed, remove them. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120309.915051744@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/asm-prototypes.h | 10 ------- arch/x86/lib/retpoline.S | 42 --------------------------- 2 files changed, 52 deletions(-) diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h index 4cb726c71ed8..a28c5cab893d 100644 --- a/arch/x86/include/asm/asm-prototypes.h +++ b/arch/x86/include/asm/asm-prototypes.h @@ -24,14 +24,4 @@ extern void cmpxchg8b_emu(void); extern asmlinkage void __x86_indirect_thunk_ ## reg (void); #include -#undef GEN -#define GEN(reg) \ - extern asmlinkage void __x86_indirect_alt_call_ ## reg (void); -#include - -#undef GEN -#define GEN(reg) \ - extern asmlinkage void __x86_indirect_alt_jmp_ ## reg (void); -#include - #endif /* CONFIG_RETPOLINE */ diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 5385d26af6e4..18c093cb2113 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -40,36 +40,6 @@ SYM_FUNC_END(__x86_indirect_thunk_\reg) .endm -/* - * This generates .altinstr_replacement symbols for use by objtool. They, - * however, must not actually live in .altinstr_replacement since that will be - * discarded after init, but module alternatives will also reference these - * symbols. - * - * Their names matches the "__x86_indirect_" prefix to mark them as retpolines. 
- */ -.macro ALT_THUNK reg - - .align 1 - -SYM_FUNC_START_NOALIGN(__x86_indirect_alt_call_\reg) - ANNOTATE_RETPOLINE_SAFE -1: call *%\reg -2: .skip 5-(2b-1b), 0x90 -SYM_FUNC_END(__x86_indirect_alt_call_\reg) - -STACK_FRAME_NON_STANDARD(__x86_indirect_alt_call_\reg) - -SYM_FUNC_START_NOALIGN(__x86_indirect_alt_jmp_\reg) - ANNOTATE_RETPOLINE_SAFE -1: jmp *%\reg -2: .skip 5-(2b-1b), 0x90 -SYM_FUNC_END(__x86_indirect_alt_jmp_\reg) - -STACK_FRAME_NON_STANDARD(__x86_indirect_alt_jmp_\reg) - -.endm - /* * Despite being an assembler file we can't just use .irp here * because __KSYM_DEPS__ only uses the C preprocessor and would @@ -92,15 +62,3 @@ STACK_FRAME_NON_STANDARD(__x86_indirect_alt_jmp_\reg) #undef GEN #define GEN(reg) EXPORT_THUNK(reg) #include - -#undef GEN -#define GEN(reg) ALT_THUNK reg -#include - -#undef GEN -#define GEN(reg) __EXPORT_THUNK(__x86_indirect_alt_call_ ## reg) -#include - -#undef GEN -#define GEN(reg) __EXPORT_THUNK(__x86_indirect_alt_jmp_ ## reg) -#include From 8ef808b3f406ed920adea8ffc949f63c059bf3a7 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:38 +0200 Subject: [PATCH 1480/1842] x86/asm: Fix register order commit a92ede2d584a2e070def59c7e47e6b6f6341c55c upstream. Ensure the register order is correct; this allows for easy translation between register number and trampoline and vice-versa. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120309.978573921@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/GEN-for-each-reg.h | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/GEN-for-each-reg.h b/arch/x86/include/asm/GEN-for-each-reg.h index 1b07fb102c4e..07949102a08d 100644 --- a/arch/x86/include/asm/GEN-for-each-reg.h +++ b/arch/x86/include/asm/GEN-for-each-reg.h @@ -1,11 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * These are in machine order; things rely on that. + */ #ifdef CONFIG_64BIT GEN(rax) -GEN(rbx) GEN(rcx) GEN(rdx) +GEN(rbx) +GEN(rsp) +GEN(rbp) GEN(rsi) GEN(rdi) -GEN(rbp) GEN(r8) GEN(r9) GEN(r10) @@ -16,10 +21,11 @@ GEN(r14) GEN(r15) #else GEN(eax) -GEN(ebx) GEN(ecx) GEN(edx) +GEN(ebx) +GEN(esp) +GEN(ebp) GEN(esi) GEN(edi) -GEN(ebp) #endif From 41ef958070000770758531247db417523aa6977f Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:39 +0200 Subject: [PATCH 1481/1842] x86/asm: Fixup odd GEN-for-each-reg.h usage commit b6d3d9944bd7c9e8c06994ead3c9952f673f2a66 upstream. Currently GEN-for-each-reg.h usage leaves GEN defined, relying on any subsequent usage to start with #undef, which is rude. 
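
A minimal sketch of the convention this settles on (the angle-bracket
include targets are stripped in this archive, so the header name below
is assumed from context to be asm/GEN-for-each-reg.h):

	/* sketch only, not the literal hunk */
	#define GEN(reg) \
		extern asmlinkage void __x86_indirect_thunk_ ## reg (void);
	#include <asm/GEN-for-each-reg.h>	/* assumed include target */
	#undef GEN	/* leave GEN undefined for the next user of the header */
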
Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.041792350@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/asm-prototypes.h | 2 +- arch/x86/lib/retpoline.S | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h index a28c5cab893d..a2bed09d3c11 100644 --- a/arch/x86/include/asm/asm-prototypes.h +++ b/arch/x86/include/asm/asm-prototypes.h @@ -19,9 +19,9 @@ extern void cmpxchg8b_emu(void); #ifdef CONFIG_RETPOLINE -#undef GEN #define GEN(reg) \ extern asmlinkage void __x86_indirect_thunk_ ## reg (void); #include +#undef GEN #endif /* CONFIG_RETPOLINE */ diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 18c093cb2113..e71bfbd5faee 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -55,10 +55,10 @@ SYM_FUNC_END(__x86_indirect_thunk_\reg) #define __EXPORT_THUNK(sym) _ASM_NOKPROBE(sym); EXPORT_SYMBOL(sym) #define EXPORT_THUNK(reg) __EXPORT_THUNK(__x86_indirect_thunk_ ## reg) -#undef GEN #define GEN(reg) THUNK reg #include - #undef GEN + #define GEN(reg) EXPORT_THUNK(reg) #include +#undef GEN From 0de47ad5b9d57e52b81c3ce0faa91c8b7749affe Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:40 +0200 Subject: [PATCH 1482/1842] x86/retpoline: Move the retpoline thunk declarations to nospec-branch.h commit 6fda8a38865607db739be3e567a2387376222dbd upstream. Because it makes no sense to split the retpoline gunk over multiple headers. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.106290934@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/asm-prototypes.h | 8 -------- arch/x86/include/asm/nospec-branch.h | 7 +++++++ arch/x86/net/bpf_jit_comp.c | 1 - 3 files changed, 7 insertions(+), 9 deletions(-) diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h index a2bed09d3c11..8f80de627c60 100644 --- a/arch/x86/include/asm/asm-prototypes.h +++ b/arch/x86/include/asm/asm-prototypes.h @@ -17,11 +17,3 @@ extern void cmpxchg8b_emu(void); #endif -#ifdef CONFIG_RETPOLINE - -#define GEN(reg) \ - extern asmlinkage void __x86_indirect_thunk_ ## reg (void); -#include -#undef GEN - -#endif /* CONFIG_RETPOLINE */ diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 838c15e1b775..08ea9888fb5b 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -5,6 +5,7 @@ #include #include +#include #include #include @@ -118,6 +119,12 @@ ".popsection\n\t" #ifdef CONFIG_RETPOLINE + +#define GEN(reg) \ + extern asmlinkage void __x86_indirect_thunk_ ## reg (void); +#include +#undef GEN + #ifdef CONFIG_X86_64 /* diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 1714e85eb26d..b3fafb9ba8a3 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -15,7 +15,6 @@ #include #include #include -#include static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len) { From 6eb95718f3ea92affae94401f273b580f6b97372 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:41 +0200 Subject: 
[PATCH 1483/1842] x86/retpoline: Create a retpoline thunk array commit 1a6f74429c42a3854980359a758e222005712aee upstream. Stick all the retpolines in a single symbol and have the individual thunks as inner labels, this should guarantee thunk order and layout. Previously there were 16 (or rather 15 without rsp) separate symbols and a toolchain might reasonably expect it could displace them however it liked, with disregard for their relative position. However, now they're part of a larger symbol. Any change to their relative position would disrupt this larger _array symbol and thus not be sound. This is the same reasoning used for data symbols. On their own there is no guarantee about their relative position wrt to one aonther, but we're still able to do arrays because an array as a whole is a single larger symbol. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.169659320@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 8 +++++++- arch/x86/lib/retpoline.S | 14 +++++++++----- 2 files changed, 16 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 08ea9888fb5b..a2c73ae30c9e 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -12,6 +12,8 @@ #include #include +#define RETPOLINE_THUNK_SIZE 32 + /* * Fill the CPU return stack buffer. * @@ -120,11 +122,15 @@ #ifdef CONFIG_RETPOLINE +typedef u8 retpoline_thunk_t[RETPOLINE_THUNK_SIZE]; + #define GEN(reg) \ - extern asmlinkage void __x86_indirect_thunk_ ## reg (void); + extern retpoline_thunk_t __x86_indirect_thunk_ ## reg; #include #undef GEN +extern retpoline_thunk_t __x86_indirect_thunk_array[]; + #ifdef CONFIG_X86_64 /* diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index e71bfbd5faee..6565c6f86160 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -28,16 +28,14 @@ .macro THUNK reg - .align 32 - -SYM_FUNC_START(__x86_indirect_thunk_\reg) + .align RETPOLINE_THUNK_SIZE +SYM_INNER_LABEL(__x86_indirect_thunk_\reg, SYM_L_GLOBAL) + UNWIND_HINT_EMPTY ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \ __stringify(RETPOLINE \reg), X86_FEATURE_RETPOLINE, \ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE -SYM_FUNC_END(__x86_indirect_thunk_\reg) - .endm /* @@ -55,10 +53,16 @@ SYM_FUNC_END(__x86_indirect_thunk_\reg) #define __EXPORT_THUNK(sym) _ASM_NOKPROBE(sym); EXPORT_SYMBOL(sym) #define EXPORT_THUNK(reg) __EXPORT_THUNK(__x86_indirect_thunk_ ## reg) + .align RETPOLINE_THUNK_SIZE +SYM_CODE_START(__x86_indirect_thunk_array) + #define GEN(reg) THUNK reg #include #undef GEN + .align RETPOLINE_THUNK_SIZE +SYM_CODE_END(__x86_indirect_thunk_array) + #define GEN(reg) EXPORT_THUNK(reg) #include #undef GEN From 381fd04c97b469106d83a48343512c5312347155 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:42 +0200 Subject: [PATCH 1484/1842] x86/alternative: Implement .retpoline_sites support commit 7508500900814d14e2e085cdc4e28142721abbdf upstream. Rewrite retpoline thunk call sites to be indirect calls for spectre_v2=off. This ensures spectre_v2=off is as near to a RETPOLINE=n build as possible. 
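
A minimal sketch of the rewrite described above (byte values taken from
the "orig:"/"repl:" debug example added later in this series; the rel32
displacement is elided):

	/* sketch only: what a patched call site looks like for spectre_v2=off */
	static const unsigned char thunk_call[5] = {
		0xe8, 0x00, 0x00, 0x00, 0x00,	/* call __x86_indirect_thunk_rax */
	};
	static const unsigned char inline_call[5] = {
		0xff, 0xd0,			/* call *%rax */
		0x0f, 0x1f, 0x00,		/* NOP padding keeps the 5-byte length */
	};
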
This is the replacement for objtool writing alternative entries to ensure the same and achieves feature-parity with the previous approach. One noteworthy feature is that it relies on the thunks to be in machine order to compute the register index. Specifically, this does not yet address the Jcc __x86_indirect_thunk_* calls generated by clang, a future patch will add this. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.232495794@infradead.org [cascardo: small conflict fixup at arch/x86/kernel/module.c] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: - Use hex literal instead of BYTES_NOP1 - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/um/kernel/um_arch.c | 4 + arch/x86/include/asm/alternative.h | 1 + arch/x86/kernel/alternative.c | 141 ++++++++++++++++++++++++++++- arch/x86/kernel/module.c | 9 +- 4 files changed, 150 insertions(+), 5 deletions(-) diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c index 76b37297b7d4..2676ac322146 100644 --- a/arch/um/kernel/um_arch.c +++ b/arch/um/kernel/um_arch.c @@ -358,6 +358,10 @@ void __init check_bugs(void) os_check_bugs(); } +void apply_retpolines(s32 *start, s32 *end) +{ +} + void apply_alternatives(struct alt_instr *start, struct alt_instr *end) { } diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index 9965638d184f..3ac7460933af 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -75,6 +75,7 @@ extern int alternatives_patched; extern void alternative_instructions(void); extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end); +extern void apply_retpolines(s32 *start, s32 *end); struct module; diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 0c0c4c8704e5..fc14a833423b 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -28,6 +28,7 @@ #include #include #include +#include int __read_mostly alternatives_patched; @@ -268,6 +269,7 @@ static void __init_or_module add_nops(void *insns, unsigned int len) } } +extern s32 __retpoline_sites[], __retpoline_sites_end[]; extern struct alt_instr __alt_instructions[], __alt_instructions_end[]; extern s32 __smp_locks[], __smp_locks_end[]; void text_poke_early(void *addr, const void *opcode, size_t len); @@ -376,7 +378,7 @@ static __always_inline int optimize_nops_range(u8 *instr, u8 instrlen, int off) * "noinline" to cause control flow change and thus invalidate I$ and * cause refetch after modification. */ -static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *instr) +static void __init_or_module noinline optimize_nops(u8 *instr, size_t len) { struct insn insn; int i = 0; @@ -394,11 +396,11 @@ static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *ins * optimized. 
*/ if (insn.length == 1 && insn.opcode.bytes[0] == 0x90) - i += optimize_nops_range(instr, a->instrlen, i); + i += optimize_nops_range(instr, len, i); else i += insn.length; - if (i >= a->instrlen) + if (i >= len) return; } } @@ -486,10 +488,135 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, text_poke_early(instr, insn_buff, insn_buff_sz); next: - optimize_nops(a, instr); + optimize_nops(instr, a->instrlen); } } +#if defined(CONFIG_RETPOLINE) && defined(CONFIG_STACK_VALIDATION) + +/* + * CALL/JMP *%\reg + */ +static int emit_indirect(int op, int reg, u8 *bytes) +{ + int i = 0; + u8 modrm; + + switch (op) { + case CALL_INSN_OPCODE: + modrm = 0x10; /* Reg = 2; CALL r/m */ + break; + + case JMP32_INSN_OPCODE: + modrm = 0x20; /* Reg = 4; JMP r/m */ + break; + + default: + WARN_ON_ONCE(1); + return -1; + } + + if (reg >= 8) { + bytes[i++] = 0x41; /* REX.B prefix */ + reg -= 8; + } + + modrm |= 0xc0; /* Mod = 3 */ + modrm += reg; + + bytes[i++] = 0xff; /* opcode */ + bytes[i++] = modrm; + + return i; +} + +/* + * Rewrite the compiler generated retpoline thunk calls. + * + * For spectre_v2=off (!X86_FEATURE_RETPOLINE), rewrite them into immediate + * indirect instructions, avoiding the extra indirection. + * + * For example, convert: + * + * CALL __x86_indirect_thunk_\reg + * + * into: + * + * CALL *%\reg + * + */ +static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) +{ + retpoline_thunk_t *target; + int reg, i = 0; + + target = addr + insn->length + insn->immediate.value; + reg = target - __x86_indirect_thunk_array; + + if (WARN_ON_ONCE(reg & ~0xf)) + return -1; + + /* If anyone ever does: CALL/JMP *%rsp, we're in deep trouble. */ + BUG_ON(reg == 4); + + if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) + return -1; + + i = emit_indirect(insn->opcode.bytes[0], reg, bytes); + if (i < 0) + return i; + + for (; i < insn->length;) + bytes[i++] = 0x90; + + return i; +} + +/* + * Generated by 'objtool --retpoline'. + */ +void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) +{ + s32 *s; + + for (s = start; s < end; s++) { + void *addr = (void *)s + *s; + struct insn insn; + int len, ret; + u8 bytes[16]; + u8 op1, op2; + + ret = insn_decode_kernel(&insn, addr); + if (WARN_ON_ONCE(ret < 0)) + continue; + + op1 = insn.opcode.bytes[0]; + op2 = insn.opcode.bytes[1]; + + switch (op1) { + case CALL_INSN_OPCODE: + case JMP32_INSN_OPCODE: + break; + + default: + WARN_ON_ONCE(1); + continue; + } + + len = patch_retpoline(addr, &insn, bytes); + if (len == insn.length) { + optimize_nops(bytes, len); + text_poke_early(addr, bytes, len); + } + } +} + +#else /* !RETPOLINES || !CONFIG_STACK_VALIDATION */ + +void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { } + +#endif /* CONFIG_RETPOLINE && CONFIG_STACK_VALIDATION */ + #ifdef CONFIG_SMP static void alternatives_smp_lock(const s32 *start, const s32 *end, u8 *text, u8 *text_end) @@ -774,6 +901,12 @@ void __init alternative_instructions(void) * patching. */ + /* + * Rewrite the retpolines, must be done before alternatives since + * those can rewrite the retpoline thunks. 
+ */ + apply_retpolines(__retpoline_sites, __retpoline_sites_end); + apply_alternatives(__alt_instructions, __alt_instructions_end); #ifdef CONFIG_SMP diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c index 5e9a34b5bd74..169fb6f4cd2e 100644 --- a/arch/x86/kernel/module.c +++ b/arch/x86/kernel/module.c @@ -251,7 +251,8 @@ int module_finalize(const Elf_Ehdr *hdr, struct module *me) { const Elf_Shdr *s, *text = NULL, *alt = NULL, *locks = NULL, - *para = NULL, *orc = NULL, *orc_ip = NULL; + *para = NULL, *orc = NULL, *orc_ip = NULL, + *retpolines = NULL; char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) { @@ -267,8 +268,14 @@ int module_finalize(const Elf_Ehdr *hdr, orc = s; if (!strcmp(".orc_unwind_ip", secstrings + s->sh_name)) orc_ip = s; + if (!strcmp(".retpoline_sites", secstrings + s->sh_name)) + retpolines = s; } + if (retpolines) { + void *rseg = (void *)retpolines->sh_addr; + apply_retpolines(rseg, rseg + retpolines->sh_size); + } if (alt) { /* patch .altinstructions */ void *aseg = (void *)alt->sh_addr; From b0e2dc950654162bc68cec530156251e7ad3f03a Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:43 +0200 Subject: [PATCH 1485/1842] x86/alternative: Handle Jcc __x86_indirect_thunk_\reg commit 2f0cbb2a8e5bbf101e9de118fc0eb168111a5e1e upstream. Handle the rare cases where the compiler (clang) does an indirect conditional tail-call using: Jcc __x86_indirect_thunk_\reg For the !RETPOLINE case this can be rewritten to fit the original (6 byte) instruction like: Jncc.d8 1f JMP *%\reg NOP 1: Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.296470217@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/alternative.c | 40 +++++++++++++++++++++++++++++++---- 1 file changed, 36 insertions(+), 4 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index fc14a833423b..c8d61b310d15 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -548,7 +548,8 @@ static int emit_indirect(int op, int reg, u8 *bytes) static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) { retpoline_thunk_t *target; - int reg, i = 0; + int reg, ret, i = 0; + u8 op, cc; target = addr + insn->length + insn->immediate.value; reg = target - __x86_indirect_thunk_array; @@ -562,9 +563,36 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) return -1; - i = emit_indirect(insn->opcode.bytes[0], reg, bytes); - if (i < 0) - return i; + op = insn->opcode.bytes[0]; + + /* + * Convert: + * + * Jcc.d32 __x86_indirect_thunk_\reg + * + * into: + * + * Jncc.d8 1f + * JMP *%\reg + * NOP + * 1: + */ + /* Jcc.d32 second opcode byte is in the range: 0x80-0x8f */ + if (op == 0x0f && (insn->opcode.bytes[1] & 0xf0) == 0x80) { + cc = insn->opcode.bytes[1] & 0xf; + cc ^= 1; /* invert condition */ + + bytes[i++] = 0x70 + cc; /* Jcc.d8 */ + bytes[i++] = insn->length - 2; /* sizeof(Jcc.d8) == 2 */ + + /* Continue as if: JMP.d32 __x86_indirect_thunk_\reg */ + op = JMP32_INSN_OPCODE; + } + + ret = emit_indirect(op, reg, bytes + i); + if (ret < 0) + return ret; + i += ret; for (; i < insn->length;) bytes[i++] = 0x90; @@ -598,6 +626,10 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 
*end) case JMP32_INSN_OPCODE: break; + case 0x0f: /* escape */ + if (op2 >= 0x80 && op2 <= 0x8f) + break; + fallthrough; default: WARN_ON_ONCE(1); continue; From 3d13ee0d411a078ca1538d823c2c759b8b266fb1 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:44 +0200 Subject: [PATCH 1486/1842] x86/alternative: Try inline spectre_v2=retpoline,amd commit bbe2df3f6b6da7848398d55b1311d58a16ec21e4 upstream. Try and replace retpoline thunk calls with: LFENCE CALL *%\reg for spectre_v2=retpoline,amd. Specifically, the sequence above is 5 bytes for the low 8 registers, but 6 bytes for the high 8 registers. This means that unless the compilers prefix stuff the call with higher registers this replacement will fail. Luckily GCC strongly favours RAX for the indirect calls and most (95%+ for defconfig-x86_64) will be converted. OTOH clang strongly favours R11 and almost nothing gets converted. Note: it will also generate a correct replacement for the Jcc.d32 case, except unless the compilers start to prefix stuff that, it'll never fit. Specifically: Jncc.d8 1f LFENCE JMP *%\reg 1: is 7-8 bytes long, where the original instruction in unpadded form is only 6 bytes. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.359986601@infradead.org [cascardo: RETPOLINE_AMD was renamed to RETPOLINE_LFENCE] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/alternative.c | 16 ++++++++++++++-- 1 file changed, 14 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index c8d61b310d15..be85bf03c137 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -544,6 +544,7 @@ static int emit_indirect(int op, int reg, u8 *bytes) * * CALL *%\reg * + * It also tries to inline spectre_v2=retpoline,amd when size permits. */ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) { @@ -560,7 +561,8 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) /* If anyone ever does: CALL/JMP *%rsp, we're in deep trouble. */ BUG_ON(reg == 4); - if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) + if (cpu_feature_enabled(X86_FEATURE_RETPOLINE) && + !cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) return -1; op = insn->opcode.bytes[0]; @@ -573,8 +575,9 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) * into: * * Jncc.d8 1f + * [ LFENCE ] * JMP *%\reg - * NOP + * [ NOP ] * 1: */ /* Jcc.d32 second opcode byte is in the range: 0x80-0x8f */ @@ -589,6 +592,15 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) op = JMP32_INSN_OPCODE; } + /* + * For RETPOLINE_AMD: prepend the indirect CALL/JMP with an LFENCE. + */ + if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) { + bytes[i++] = 0x0f; + bytes[i++] = 0xae; + bytes[i++] = 0xe8; /* LFENCE */ + } + ret = emit_indirect(op, reg, bytes + i); if (ret < 0) return ret; From 38a80a3ca2cb069dd5608703b015a206a672aae5 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:45 +0200 Subject: [PATCH 1487/1842] x86/alternative: Add debug prints to apply_retpolines() commit d4b5a5c993009ffeb5febe3b701da3faab6adb96 upstream. Make sure we can see the text changes when booting with 'debug-alternative'. 
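
A minimal sketch of the two-pass offset scheme (illustrative names, not
the kernel's; it mirrors the tail_call_*_label bookkeeping in the hunks
below):

	struct jit_ctx {
		int out_label;	/* offset of "out:" recorded on the previous pass */
	};

	static int emit_body(unsigned char *buf, struct jit_ctx *ctx)
	{
		unsigned char *prog = buf;
		int offset;

		/* ja out: uses last pass's label; stale on pass 1, exact on pass 2 */
		offset = ctx->out_label - (int)((prog + 2) - buf);
		*prog++ = 0x77;				/* ja rel8 */
		*prog++ = (unsigned char)offset;

		/* ... rest of the tail-call sequence ... */

		ctx->out_label = (int)(prog - buf);	/* out: record for next pass */
		*prog++ = 0xc3;				/* ret */
		return (int)(prog - buf);
	}
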
Example output: [ ] SMP alternatives: retpoline at: __traceiter_initcall_level+0x1f/0x30 (ffffffff8100066f) len: 5 to: __x86_indirect_thunk_rax+0x0/0x20 [ ] SMP alternatives: ffffffff82603e58: [2:5) optimized NOPs: ff d0 0f 1f 00 [ ] SMP alternatives: ffffffff8100066f: orig: e8 cc 30 00 01 [ ] SMP alternatives: ffffffff8100066f: repl: ff d0 0f 1f 00 Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.422273830@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/alternative.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index be85bf03c137..e2865dac6865 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -647,9 +647,15 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) continue; } + DPRINTK("retpoline at: %pS (%px) len: %d to: %pS", + addr, addr, insn.length, + addr + insn.length + insn.immediate.value); + len = patch_retpoline(addr, &insn, bytes); if (len == insn.length) { optimize_nops(bytes, len); + DUMP_BYTES(((u8*)addr), len, "%px: orig: ", addr); + DUMP_BYTES(((u8*)bytes), len, "%px: repl: ", addr); text_poke_early(addr, bytes, len); } } From 1713e5c4f8527c9ca5327d98ab6fd3c40df788aa Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:47 +0200 Subject: [PATCH 1488/1842] bpf,x86: Simplify computing label offsets commit dceba0817ca329868a15e2e1dd46eb6340b69206 upstream. Take an idea from the 32bit JIT, which uses the multi-pass nature of the JIT to compute the instruction offsets on a prior pass in order to compute the relative jump offsets on a later pass. Application to the x86_64 JIT is slightly more involved because the offsets depend on program variables (such as callee_regs_used and stack_depth) and hence the computed offsets need to be kept in the context of the JIT. This removes, IMO quite fragile, code that hard-codes the offsets and tries to compute the length of variable parts of it. Convert both emit_bpf_tail_call_*() functions which have an out: label at the end. Additionally emit_bpt_tail_call_direct() also has a poke table entry, for which it computes the offset from the end (and thus already relies on the previous pass to have computed addrs[i]), also convert this to be a forward based offset. 
Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Alexei Starovoitov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.552304864@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: keep the cnt variable in emit_bpf_tail_call_{,in}direct()] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/net/bpf_jit_comp.c | 125 ++++++++++++------------------------ 1 file changed, 42 insertions(+), 83 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index b3fafb9ba8a3..cea8dd3f920a 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -212,6 +212,14 @@ static void jit_fill_hole(void *area, unsigned int size) struct jit_context { int cleanup_addr; /* Epilogue code offset */ + + /* + * Program specific offsets of labels in the code; these rely on the + * JIT doing at least 2 passes, recording the position on the first + * pass, only to generate the correct offset on the second pass. + */ + int tail_call_direct_label; + int tail_call_indirect_label; }; /* Maximum number of bytes emitted while JITing one eBPF insn */ @@ -371,22 +379,6 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t, return __bpf_arch_text_poke(ip, t, old_addr, new_addr, true); } -static int get_pop_bytes(bool *callee_regs_used) -{ - int bytes = 0; - - if (callee_regs_used[3]) - bytes += 2; - if (callee_regs_used[2]) - bytes += 2; - if (callee_regs_used[1]) - bytes += 2; - if (callee_regs_used[0]) - bytes += 1; - - return bytes; -} - /* * Generate the following code: * @@ -402,30 +394,12 @@ static int get_pop_bytes(bool *callee_regs_used) * out: */ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used, - u32 stack_depth) + u32 stack_depth, u8 *ip, + struct jit_context *ctx) { int tcc_off = -4 - round_up(stack_depth, 8); - u8 *prog = *pprog; - int pop_bytes = 0; - int off1 = 42; - int off2 = 31; - int off3 = 9; - int cnt = 0; - - /* count the additional bytes used for popping callee regs from stack - * that need to be taken into account for each of the offsets that - * are used for bailing out of the tail call - */ - pop_bytes = get_pop_bytes(callee_regs_used); - off1 += pop_bytes; - off2 += pop_bytes; - off3 += pop_bytes; - - if (stack_depth) { - off1 += 7; - off2 += 7; - off3 += 7; - } + u8 *prog = *pprog, *start = *pprog; + int cnt = 0, offset; /* * rdi - pointer to ctx @@ -440,8 +414,9 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used, EMIT2(0x89, 0xD2); /* mov edx, edx */ EMIT3(0x39, 0x56, /* cmp dword ptr [rsi + 16], edx */ offsetof(struct bpf_array, map.max_entries)); -#define OFFSET1 (off1 + RETPOLINE_RCX_BPF_JIT_SIZE) /* Number of bytes to jump */ - EMIT2(X86_JBE, OFFSET1); /* jbe out */ + + offset = ctx->tail_call_indirect_label - (prog + 2 - start); + EMIT2(X86_JBE, offset); /* jbe out */ /* * if (tail_call_cnt > MAX_TAIL_CALL_CNT) @@ -449,8 +424,9 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used, */ EMIT2_off32(0x8B, 0x85, tcc_off); /* mov eax, dword ptr [rbp - tcc_off] */ EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */ -#define OFFSET2 (off2 + RETPOLINE_RCX_BPF_JIT_SIZE) - EMIT2(X86_JA, OFFSET2); /* ja out */ + + offset = ctx->tail_call_indirect_label - (prog + 2 - start); + EMIT2(X86_JA, offset); /* ja out */ EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */ EMIT2_off32(0x89, 0x85, tcc_off); /* mov dword ptr [rbp - tcc_off], eax */ 
@@ -463,12 +439,11 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used, * goto out; */ EMIT3(0x48, 0x85, 0xC9); /* test rcx,rcx */ -#define OFFSET3 (off3 + RETPOLINE_RCX_BPF_JIT_SIZE) - EMIT2(X86_JE, OFFSET3); /* je out */ - *pprog = prog; - pop_callee_regs(pprog, callee_regs_used); - prog = *pprog; + offset = ctx->tail_call_indirect_label - (prog + 2 - start); + EMIT2(X86_JE, offset); /* je out */ + + pop_callee_regs(&prog, callee_regs_used); EMIT1(0x58); /* pop rax */ if (stack_depth) @@ -488,39 +463,18 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used, RETPOLINE_RCX_BPF_JIT(); /* out: */ + ctx->tail_call_indirect_label = prog - start; *pprog = prog; } static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke, - u8 **pprog, int addr, u8 *image, - bool *callee_regs_used, u32 stack_depth) + u8 **pprog, u8 *ip, + bool *callee_regs_used, u32 stack_depth, + struct jit_context *ctx) { int tcc_off = -4 - round_up(stack_depth, 8); - u8 *prog = *pprog; - int pop_bytes = 0; - int off1 = 20; - int poke_off; - int cnt = 0; - - /* count the additional bytes used for popping callee regs to stack - * that need to be taken into account for jump offset that is used for - * bailing out from of the tail call when limit is reached - */ - pop_bytes = get_pop_bytes(callee_regs_used); - off1 += pop_bytes; - - /* - * total bytes for: - * - nop5/ jmpq $off - * - pop callee regs - * - sub rsp, $val if depth > 0 - * - pop rax - */ - poke_off = X86_PATCH_SIZE + pop_bytes + 1; - if (stack_depth) { - poke_off += 7; - off1 += 7; - } + u8 *prog = *pprog, *start = *pprog; + int cnt = 0, offset; /* * if (tail_call_cnt > MAX_TAIL_CALL_CNT) @@ -528,28 +482,30 @@ static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke, */ EMIT2_off32(0x8B, 0x85, tcc_off); /* mov eax, dword ptr [rbp - tcc_off] */ EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */ - EMIT2(X86_JA, off1); /* ja out */ + + offset = ctx->tail_call_direct_label - (prog + 2 - start); + EMIT2(X86_JA, offset); /* ja out */ EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */ EMIT2_off32(0x89, 0x85, tcc_off); /* mov dword ptr [rbp - tcc_off], eax */ - poke->tailcall_bypass = image + (addr - poke_off - X86_PATCH_SIZE); + poke->tailcall_bypass = ip + (prog - start); poke->adj_off = X86_TAIL_CALL_OFFSET; - poke->tailcall_target = image + (addr - X86_PATCH_SIZE); + poke->tailcall_target = ip + ctx->tail_call_direct_label - X86_PATCH_SIZE; poke->bypass_addr = (u8 *)poke->tailcall_target + X86_PATCH_SIZE; emit_jump(&prog, (u8 *)poke->tailcall_target + X86_PATCH_SIZE, poke->tailcall_bypass); - *pprog = prog; - pop_callee_regs(pprog, callee_regs_used); - prog = *pprog; + pop_callee_regs(&prog, callee_regs_used); EMIT1(0x58); /* pop rax */ if (stack_depth) EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8)); memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE); prog += X86_PATCH_SIZE; + /* out: */ + ctx->tail_call_direct_label = prog - start; *pprog = prog; } @@ -1274,13 +1230,16 @@ xadd: if (is_imm8(insn->off)) case BPF_JMP | BPF_TAIL_CALL: if (imm32) emit_bpf_tail_call_direct(&bpf_prog->aux->poke_tab[imm32 - 1], - &prog, addrs[i], image, + &prog, image + addrs[i - 1], callee_regs_used, - bpf_prog->aux->stack_depth); + bpf_prog->aux->stack_depth, + ctx); else emit_bpf_tail_call_indirect(&prog, callee_regs_used, - bpf_prog->aux->stack_depth); + bpf_prog->aux->stack_depth, + image + addrs[i - 1], + ctx); break; /* cond jump */ From c2746d567dcda6df41b1c3c1e930f51429f5a364 Mon 
Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 26 Oct 2021 14:01:48 +0200 Subject: [PATCH 1489/1842] bpf,x86: Respect X86_FEATURE_RETPOLINE* commit 87c87ecd00c54ecd677798cb49ef27329e0fab41 upstream. Current BPF codegen doesn't respect X86_FEATURE_RETPOLINE* flags and unconditionally emits a thunk call, this is sub-optimal and doesn't match the regular, compiler generated, code. Update the i386 JIT to emit code equal to what the compiler emits for the regular kernel text (IOW. a plain THUNK call). Update the x86_64 JIT to emit code similar to the result of compiler and kernel rewrites as according to X86_FEATURE_RETPOLINE* flags. Inlining RETPOLINE_AMD (lfence; jmp *%reg) and !RETPOLINE (jmp *%reg), while doing a THUNK call for RETPOLINE. This removes the hard-coded retpoline thunks and shrinks the generated code. Leaving a single retpoline thunk definition in the kernel. Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Borislav Petkov Acked-by: Alexei Starovoitov Acked-by: Josh Poimboeuf Tested-by: Alexei Starovoitov Link: https://lore.kernel.org/r/20211026120310.614772675@infradead.org [cascardo: RETPOLINE_AMD was renamed to RETPOLINE_LFENCE] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: add the necessary cnt variable to emit_indirect_jump()] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 59 ---------------------------- arch/x86/net/bpf_jit_comp.c | 49 +++++++++++------------ arch/x86/net/bpf_jit_comp32.c | 22 +++++++++-- 3 files changed, 42 insertions(+), 88 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index a2c73ae30c9e..1bb47df51c26 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -318,63 +318,4 @@ static inline void mds_idle_clear_cpu_buffers(void) #endif /* __ASSEMBLY__ */ -/* - * Below is used in the eBPF JIT compiler and emits the byte sequence - * for the following assembly: - * - * With retpolines configured: - * - * callq do_rop - * spec_trap: - * pause - * lfence - * jmp spec_trap - * do_rop: - * mov %rcx,(%rsp) for x86_64 - * mov %edx,(%esp) for x86_32 - * retq - * - * Without retpolines configured: - * - * jmp *%rcx for x86_64 - * jmp *%edx for x86_32 - */ -#ifdef CONFIG_RETPOLINE -# ifdef CONFIG_X86_64 -# define RETPOLINE_RCX_BPF_JIT_SIZE 17 -# define RETPOLINE_RCX_BPF_JIT() \ -do { \ - EMIT1_off32(0xE8, 7); /* callq do_rop */ \ - /* spec_trap: */ \ - EMIT2(0xF3, 0x90); /* pause */ \ - EMIT3(0x0F, 0xAE, 0xE8); /* lfence */ \ - EMIT2(0xEB, 0xF9); /* jmp spec_trap */ \ - /* do_rop: */ \ - EMIT4(0x48, 0x89, 0x0C, 0x24); /* mov %rcx,(%rsp) */ \ - EMIT1(0xC3); /* retq */ \ -} while (0) -# else /* !CONFIG_X86_64 */ -# define RETPOLINE_EDX_BPF_JIT() \ -do { \ - EMIT1_off32(0xE8, 7); /* call do_rop */ \ - /* spec_trap: */ \ - EMIT2(0xF3, 0x90); /* pause */ \ - EMIT3(0x0F, 0xAE, 0xE8); /* lfence */ \ - EMIT2(0xEB, 0xF9); /* jmp spec_trap */ \ - /* do_rop: */ \ - EMIT3(0x89, 0x14, 0x24); /* mov %edx,(%esp) */ \ - EMIT1(0xC3); /* ret */ \ -} while (0) -# endif -#else /* !CONFIG_RETPOLINE */ -# ifdef CONFIG_X86_64 -# define RETPOLINE_RCX_BPF_JIT_SIZE 2 -# define RETPOLINE_RCX_BPF_JIT() \ - EMIT2(0xFF, 0xE1); /* jmp *%rcx */ -# else /* !CONFIG_X86_64 */ -# define RETPOLINE_EDX_BPF_JIT() \ - EMIT2(0xFF, 0xE2) /* jmp *%edx */ -# endif -#endif - #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */ diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index cea8dd3f920a..2563aef35bc0 100644 --- 
a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -379,6 +379,26 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t, return __bpf_arch_text_poke(ip, t, old_addr, new_addr, true); } +#define EMIT_LFENCE() EMIT3(0x0F, 0xAE, 0xE8) + +static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip) +{ + u8 *prog = *pprog; + int cnt = 0; + +#ifdef CONFIG_RETPOLINE + if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) { + EMIT_LFENCE(); + EMIT2(0xFF, 0xE0 + reg); + } else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) { + emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip); + } else +#endif + EMIT2(0xFF, 0xE0 + reg); + + *pprog = prog; +} + /* * Generate the following code: * @@ -460,7 +480,7 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used, * rdi == ctx (1st arg) * rcx == prog->bpf_func + X86_TAIL_CALL_OFFSET */ - RETPOLINE_RCX_BPF_JIT(); + emit_indirect_jump(&prog, 1 /* rcx */, ip + (prog - start)); /* out: */ ctx->tail_call_indirect_label = prog - start; @@ -1099,8 +1119,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, /* speculation barrier */ case BPF_ST | BPF_NOSPEC: if (boot_cpu_has(X86_FEATURE_XMM2)) - /* Emit 'lfence' */ - EMIT3(0x0F, 0xAE, 0xE8); + EMIT_LFENCE(); break; /* ST: *(u8*)(dst_reg + off) = imm */ @@ -1878,26 +1897,6 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i return ret; } -static int emit_fallback_jump(u8 **pprog) -{ - u8 *prog = *pprog; - int err = 0; - -#ifdef CONFIG_RETPOLINE - /* Note that this assumes the the compiler uses external - * thunks for indirect calls. Both clang and GCC use the same - * naming convention for external thunks. - */ - err = emit_jump(&prog, __x86_indirect_thunk_rdx, prog); -#else - int cnt = 0; - - EMIT2(0xFF, 0xE2); /* jmp rdx */ -#endif - *pprog = prog; - return err; -} - static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs) { u8 *jg_reloc, *prog = *pprog; @@ -1919,9 +1918,7 @@ static int emit_bpf_dispatcher(u8 **pprog, int a, int b, s64 *progs) if (err) return err; - err = emit_fallback_jump(&prog); /* jmp thunk/indirect */ - if (err) - return err; + emit_indirect_jump(&prog, 2 /* rdx */, prog); *pprog = prog; return 0; diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c index 4bd0f98df700..622af951220c 100644 --- a/arch/x86/net/bpf_jit_comp32.c +++ b/arch/x86/net/bpf_jit_comp32.c @@ -15,6 +15,7 @@ #include #include #include +#include #include /* @@ -1267,6 +1268,21 @@ static void emit_epilogue(u8 **pprog, u32 stack_depth) *pprog = prog; } +static int emit_jmp_edx(u8 **pprog, u8 *ip) +{ + u8 *prog = *pprog; + int cnt = 0; + +#ifdef CONFIG_RETPOLINE + EMIT1_off32(0xE9, (u8 *)__x86_indirect_thunk_edx - (ip + 5)); +#else + EMIT2(0xFF, 0xE2); +#endif + *pprog = prog; + + return cnt; +} + /* * Generate the following code: * ... bpf_tail_call(void *ctx, struct bpf_array *array, u64 index) ... 
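A note on what the new emit_indirect_jump() helper above actually produces: depending on the X86_FEATURE_RETPOLINE* flags it emits one of three sequences for an indirect jump through a register. The sketch below is an illustration only (plain user-space C, not kernel code); the byte values are taken from the EMIT*() lines in the diff, and the thunk displacement is left symbolic.

    /* Rough illustration of the three encodings emit_indirect_jump()
     * chooses between for "jmp *%rcx" (reg == 1); not kernel code. */
    #include <stdio.h>

    enum retpoline_mode { MODE_NONE, MODE_RETPOLINE_LFENCE, MODE_RETPOLINE_THUNK };

    static void show_indirect_jmp_rcx(enum retpoline_mode mode)
    {
            if (mode == MODE_RETPOLINE_LFENCE)      /* X86_FEATURE_RETPOLINE_LFENCE */
                    printf("0f ae e8    lfence\n"
                           "ff e1       jmp *%%rcx\n");
            else if (mode == MODE_RETPOLINE_THUNK)  /* X86_FEATURE_RETPOLINE */
                    printf("e9 <rel32>  jmp __x86_indirect_thunk_rcx\n");
            else                                    /* no retpoline configured */
                    printf("ff e1       jmp *%%rcx\n");
    }

    int main(void)
    {
            show_indirect_jmp_rcx(MODE_RETPOLINE_LFENCE);
            return 0;
    }

The same helper also replaces the removed emit_fallback_jump() in the BPF dispatcher, via emit_indirect_jump(&prog, 2 /* rdx */, prog).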
@@ -1280,7 +1296,7 @@ static void emit_epilogue(u8 **pprog, u32 stack_depth) * goto *(prog->bpf_func + prologue_size); * out: */ -static void emit_bpf_tail_call(u8 **pprog) +static void emit_bpf_tail_call(u8 **pprog, u8 *ip) { u8 *prog = *pprog; int cnt = 0; @@ -1362,7 +1378,7 @@ static void emit_bpf_tail_call(u8 **pprog) * eax == ctx (1st arg) * edx == prog->bpf_func + prologue_size */ - RETPOLINE_EDX_BPF_JIT(); + cnt += emit_jmp_edx(&prog, ip + cnt); if (jmp_label1 == -1) jmp_label1 = cnt; @@ -1929,7 +1945,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, break; } case BPF_JMP | BPF_TAIL_CALL: - emit_bpf_tail_call(&prog); + emit_bpf_tail_call(&prog, image + addrs[i - 1]); break; /* cond jump */ From a512fcd881c1ae4ed607d5fed54248be06fd3478 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Sat, 4 Dec 2021 14:43:39 +0100 Subject: [PATCH 1490/1842] x86/lib/atomic64_386_32: Rename things commit 22da5a07c75e1104caf6a42f189c97b83d070073 upstream. Principally, in order to get rid of #define RET in this code to make place for a new RET, but also to clarify the code, rename a bunch of things: s/UNLOCK/IRQ_RESTORE/ s/LOCK/IRQ_SAVE/ s/BEGIN/BEGIN_IRQ_SAVE/ s/\<RET\>/RET_IRQ_RESTORE/ s/RET_ENDP/\tRET_IRQ_RESTORE\rENDP/ which then leaves RET unused so it can be removed. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20211204134907.841623970@infradead.org Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/lib/atomic64_386_32.S | 84 +++++++++++++++++++--------------- 1 file changed, 46 insertions(+), 38 deletions(-) diff --git a/arch/x86/lib/atomic64_386_32.S b/arch/x86/lib/atomic64_386_32.S index 16bc9130e7a5..4ad6b97fdb6f 100644 --- a/arch/x86/lib/atomic64_386_32.S +++ b/arch/x86/lib/atomic64_386_32.S @@ -9,81 +9,83 @@ #include /* if you want SMP support, implement these with real spinlocks */ -.macro LOCK reg +.macro IRQ_SAVE reg pushfl cli .endm -.macro UNLOCK reg +.macro IRQ_RESTORE reg popfl .endm -#define BEGIN(op) \ +#define BEGIN_IRQ_SAVE(op) \ .macro endp; \ SYM_FUNC_END(atomic64_##op##_386); \ .purgem endp; \ .endm; \ SYM_FUNC_START(atomic64_##op##_386); \ - LOCK v; + IRQ_SAVE v; #define ENDP endp -#define RET \ - UNLOCK v; \ +#define RET_IRQ_RESTORE \ + IRQ_RESTORE v; \ ret -#define RET_ENDP \ - RET; \ - ENDP - #define v %ecx -BEGIN(read) +BEGIN_IRQ_SAVE(read) movl (v), %eax movl 4(v), %edx -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %esi -BEGIN(set) +BEGIN_IRQ_SAVE(set) movl %ebx, (v) movl %ecx, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %esi -BEGIN(xchg) +BEGIN_IRQ_SAVE(xchg) movl (v), %eax movl 4(v), %edx movl %ebx, (v) movl %ecx, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %ecx -BEGIN(add) +BEGIN_IRQ_SAVE(add) addl %eax, (v) adcl %edx, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %ecx -BEGIN(add_return) +BEGIN_IRQ_SAVE(add_return) addl (v), %eax adcl 4(v), %edx movl %eax, (v) movl %edx, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %ecx -BEGIN(sub) +BEGIN_IRQ_SAVE(sub) subl %eax, (v) sbbl %edx, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %ecx -BEGIN(sub_return) +BEGIN_IRQ_SAVE(sub_return) negl %edx negl %eax sbbl $0, %edx @@ -91,47 +93,52 @@ BEGIN(sub_return) adcl 4(v), %edx movl %eax, (v) movl %edx, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %esi -BEGIN(inc) +BEGIN_IRQ_SAVE(inc) addl $1, (v) adcl $0, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP
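For readers following the .S changes: on 386-class CPUs without cmpxchg8b these atomic64_*_386 helpers simply run the 64-bit access with interrupts disabled, which is exactly what the new names spell out. A loose C analogue, illustration only (the stub_* helpers are hypothetical stand-ins for the pushfl/cli and popfl sequences, not kernel APIs):

    typedef struct { long long counter; } atomic64_386_like_t;

    static unsigned long stub_irq_save(void) { return 0; }             /* IRQ_SAVE: pushfl; cli */
    static void stub_irq_restore(unsigned long flags) { (void)flags; } /* IRQ_RESTORE: popfl */

    static long long atomic64_read_386_like(const atomic64_386_like_t *v)
    {
            unsigned long flags = stub_irq_save();  /* was LOCK, now IRQ_SAVE        */
            long long val = v->counter;             /* movl (v),%eax; movl 4(v),%edx */
            stub_irq_restore(flags);                /* was RET, now RET_IRQ_RESTORE  */
            return val;
    }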
#undef v #define v %esi -BEGIN(inc_return) +BEGIN_IRQ_SAVE(inc_return) movl (v), %eax movl 4(v), %edx addl $1, %eax adcl $0, %edx movl %eax, (v) movl %edx, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %esi -BEGIN(dec) +BEGIN_IRQ_SAVE(dec) subl $1, (v) sbbl $0, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %esi -BEGIN(dec_return) +BEGIN_IRQ_SAVE(dec_return) movl (v), %eax movl 4(v), %edx subl $1, %eax sbbl $0, %edx movl %eax, (v) movl %edx, 4(v) -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v #define v %esi -BEGIN(add_unless) +BEGIN_IRQ_SAVE(add_unless) addl %eax, %ecx adcl %edx, %edi addl (v), %eax @@ -143,7 +150,7 @@ BEGIN(add_unless) movl %edx, 4(v) movl $1, %eax 2: - RET + RET_IRQ_RESTORE 3: cmpl %edx, %edi jne 1b @@ -153,7 +160,7 @@ ENDP #undef v #define v %esi -BEGIN(inc_not_zero) +BEGIN_IRQ_SAVE(inc_not_zero) movl (v), %eax movl 4(v), %edx testl %eax, %eax @@ -165,7 +172,7 @@ BEGIN(inc_not_zero) movl %edx, 4(v) movl $1, %eax 2: - RET + RET_IRQ_RESTORE 3: testl %edx, %edx jne 1b @@ -174,7 +181,7 @@ ENDP #undef v #define v %esi -BEGIN(dec_if_positive) +BEGIN_IRQ_SAVE(dec_if_positive) movl (v), %eax movl 4(v), %edx subl $1, %eax @@ -183,5 +190,6 @@ BEGIN(dec_if_positive) movl %eax, (v) movl %edx, 4(v) 1: -RET_ENDP + RET_IRQ_RESTORE +ENDP #undef v From 3c91e2257622c169bf74d587e48b96923285ed74 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Sat, 4 Dec 2021 14:43:40 +0100 Subject: [PATCH 1491/1842] x86: Prepare asm files for straight-line-speculation commit f94909ceb1ed4bfdb2ada72f93236305e6d6951f upstream. Replace all ret/retq instructions with RET in preparation of making RET a macro. Since AS is case insensitive it's a big no-op without RET defined. find arch/x86/ -name \*.S | while read file do sed -i 's/\<ret[q]*\>/RET/' $file done Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20211204134907.905503893@infradead.org [bwh: Backported to 5.10: ran the above command] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/boot/compressed/efi_thunk_64.S | 2 +- arch/x86/boot/compressed/head_64.S | 4 +- arch/x86/boot/compressed/mem_encrypt.S | 4 +- arch/x86/crypto/aegis128-aesni-asm.S | 48 +++++++++--------- arch/x86/crypto/aes_ctrby8_avx-x86_64.S | 2 +- arch/x86/crypto/aesni-intel_asm.S | 52 ++++++++++---------- arch/x86/crypto/aesni-intel_avx-x86_64.S | 40 +++++++-------- arch/x86/crypto/blake2s-core.S | 4 +- arch/x86/crypto/blowfish-x86_64-asm_64.S | 12 ++--- arch/x86/crypto/camellia-aesni-avx-asm_64.S | 18 +++---- arch/x86/crypto/camellia-aesni-avx2-asm_64.S | 18 +++---- arch/x86/crypto/camellia-x86_64-asm_64.S | 12 ++--- arch/x86/crypto/cast5-avx-x86_64-asm_64.S | 12 ++--- arch/x86/crypto/cast6-avx-x86_64-asm_64.S | 16 +++--- arch/x86/crypto/chacha-avx2-x86_64.S | 6 +-- arch/x86/crypto/chacha-avx512vl-x86_64.S | 6 +-- arch/x86/crypto/chacha-ssse3-x86_64.S | 8 +-- arch/x86/crypto/crc32-pclmul_asm.S | 2 +- arch/x86/crypto/crc32c-pcl-intel-asm_64.S | 2 +- arch/x86/crypto/crct10dif-pcl-asm_64.S | 2 +- arch/x86/crypto/des3_ede-asm_64.S | 4 +- arch/x86/crypto/ghash-clmulni-intel_asm.S | 6 +-- arch/x86/crypto/nh-avx2-x86_64.S | 2 +- arch/x86/crypto/nh-sse2-x86_64.S | 2 +- arch/x86/crypto/serpent-avx-x86_64-asm_64.S | 16 +++--- arch/x86/crypto/serpent-avx2-asm_64.S | 16 +++--- arch/x86/crypto/serpent-sse2-i586-asm_32.S | 6 +-- arch/x86/crypto/serpent-sse2-x86_64-asm_64.S | 6 +-- arch/x86/crypto/sha1_avx2_x86_64_asm.S | 2 +- arch/x86/crypto/sha1_ni_asm.S | 2 +- arch/x86/crypto/sha1_ssse3_asm.S | 2 +-
arch/x86/crypto/sha256-avx-asm.S | 2 +- arch/x86/crypto/sha256-avx2-asm.S | 2 +- arch/x86/crypto/sha256-ssse3-asm.S | 2 +- arch/x86/crypto/sha256_ni_asm.S | 2 +- arch/x86/crypto/sha512-avx-asm.S | 2 +- arch/x86/crypto/sha512-avx2-asm.S | 2 +- arch/x86/crypto/sha512-ssse3-asm.S | 2 +- arch/x86/crypto/twofish-avx-x86_64-asm_64.S | 16 +++--- arch/x86/crypto/twofish-i586-asm_32.S | 4 +- arch/x86/crypto/twofish-x86_64-asm_64-3way.S | 6 +-- arch/x86/crypto/twofish-x86_64-asm_64.S | 4 +- arch/x86/entry/entry_32.S | 2 +- arch/x86/entry/entry_64.S | 12 ++--- arch/x86/entry/thunk_32.S | 2 +- arch/x86/entry/thunk_64.S | 2 +- arch/x86/entry/vdso/vdso32/system_call.S | 2 +- arch/x86/entry/vsyscall/vsyscall_emu_64.S | 6 +-- arch/x86/kernel/acpi/wakeup_32.S | 6 +-- arch/x86/kernel/ftrace_32.S | 6 +-- arch/x86/kernel/ftrace_64.S | 10 ++-- arch/x86/kernel/head_32.S | 2 +- arch/x86/kernel/irqflags.S | 4 +- arch/x86/kernel/relocate_kernel_32.S | 10 ++-- arch/x86/kernel/relocate_kernel_64.S | 10 ++-- arch/x86/kernel/sev_verify_cbit.S | 2 +- arch/x86/kernel/verify_cpu.S | 4 +- arch/x86/kvm/svm/vmenter.S | 2 +- arch/x86/kvm/vmx/vmenter.S | 14 +++--- arch/x86/lib/atomic64_386_32.S | 2 +- arch/x86/lib/atomic64_cx8_32.S | 16 +++--- arch/x86/lib/checksum_32.S | 8 +-- arch/x86/lib/clear_page_64.S | 6 +-- arch/x86/lib/cmpxchg16b_emu.S | 4 +- arch/x86/lib/cmpxchg8b_emu.S | 4 +- arch/x86/lib/copy_mc_64.S | 6 +-- arch/x86/lib/copy_page_64.S | 4 +- arch/x86/lib/copy_user_64.S | 12 ++--- arch/x86/lib/csum-copy_64.S | 2 +- arch/x86/lib/getuser.S | 22 ++++----- arch/x86/lib/hweight.S | 6 +-- arch/x86/lib/iomap_copy_64.S | 2 +- arch/x86/lib/memcpy_64.S | 12 ++--- arch/x86/lib/memmove_64.S | 4 +- arch/x86/lib/memset_64.S | 6 +-- arch/x86/lib/msr-reg.S | 4 +- arch/x86/lib/putuser.S | 6 +-- arch/x86/lib/retpoline.S | 2 +- arch/x86/math-emu/div_Xsig.S | 2 +- arch/x86/math-emu/div_small.S | 2 +- arch/x86/math-emu/mul_Xsig.S | 6 +-- arch/x86/math-emu/polynom_Xsig.S | 2 +- arch/x86/math-emu/reg_norm.S | 6 +-- arch/x86/math-emu/reg_round.S | 2 +- arch/x86/math-emu/reg_u_add.S | 2 +- arch/x86/math-emu/reg_u_div.S | 2 +- arch/x86/math-emu/reg_u_mul.S | 2 +- arch/x86/math-emu/reg_u_sub.S | 2 +- arch/x86/math-emu/round_Xsig.S | 4 +- arch/x86/math-emu/shr_Xsig.S | 8 +-- arch/x86/math-emu/wm_shrx.S | 16 +++--- arch/x86/mm/mem_encrypt_boot.S | 4 +- arch/x86/platform/efi/efi_stub_32.S | 2 +- arch/x86/platform/efi/efi_stub_64.S | 2 +- arch/x86/platform/efi/efi_thunk_64.S | 2 +- arch/x86/platform/olpc/xo1-wakeup.S | 6 +-- arch/x86/power/hibernate_asm_32.S | 4 +- arch/x86/power/hibernate_asm_64.S | 4 +- arch/x86/um/checksum_32.S | 4 +- arch/x86/um/setjmp_32.S | 2 +- arch/x86/um/setjmp_64.S | 2 +- arch/x86/xen/xen-asm.S | 14 +++--- arch/x86/xen/xen-head.S | 2 +- 103 files changed, 353 insertions(+), 353 deletions(-) diff --git a/arch/x86/boot/compressed/efi_thunk_64.S b/arch/x86/boot/compressed/efi_thunk_64.S index c4bb0f9363f5..c8052a141b08 100644 --- a/arch/x86/boot/compressed/efi_thunk_64.S +++ b/arch/x86/boot/compressed/efi_thunk_64.S @@ -89,7 +89,7 @@ SYM_FUNC_START(__efi64_thunk) pop %rbx pop %rbp - ret + RET SYM_FUNC_END(__efi64_thunk) .code32 diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S index 72f655c238cf..b55e2007b30c 100644 --- a/arch/x86/boot/compressed/head_64.S +++ b/arch/x86/boot/compressed/head_64.S @@ -786,7 +786,7 @@ SYM_FUNC_START(efi32_pe_entry) 2: popl %edi // restore callee-save registers popl %ebx leave - ret + RET SYM_FUNC_END(efi32_pe_entry) .section ".rodata" @@ -868,7 +868,7 @@ 
SYM_FUNC_START(startup32_check_sev_cbit) popl %ebx popl %eax #endif - ret + RET SYM_FUNC_END(startup32_check_sev_cbit) /* diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S index a6dea4e8a082..484a9c06f41d 100644 --- a/arch/x86/boot/compressed/mem_encrypt.S +++ b/arch/x86/boot/compressed/mem_encrypt.S @@ -58,7 +58,7 @@ SYM_FUNC_START(get_sev_encryption_bit) #endif /* CONFIG_AMD_MEM_ENCRYPT */ - ret + RET SYM_FUNC_END(get_sev_encryption_bit) .code64 @@ -99,7 +99,7 @@ SYM_FUNC_START(set_sev_encryption_mask) #endif xor %rax, %rax - ret + RET SYM_FUNC_END(set_sev_encryption_mask) .data diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S index 51d46d93efbc..b48ddebb4748 100644 --- a/arch/x86/crypto/aegis128-aesni-asm.S +++ b/arch/x86/crypto/aegis128-aesni-asm.S @@ -122,7 +122,7 @@ SYM_FUNC_START_LOCAL(__load_partial) pxor T0, MSG .Lld_partial_8: - ret + RET SYM_FUNC_END(__load_partial) /* @@ -180,7 +180,7 @@ SYM_FUNC_START_LOCAL(__store_partial) mov %r10b, (%r9) .Lst_partial_1: - ret + RET SYM_FUNC_END(__store_partial) /* @@ -225,7 +225,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_init) movdqu STATE4, 0x40(STATEP) FRAME_END - ret + RET SYM_FUNC_END(crypto_aegis128_aesni_init) /* @@ -337,7 +337,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad) movdqu STATE3, 0x30(STATEP) movdqu STATE4, 0x40(STATEP) FRAME_END - ret + RET .Lad_out_1: movdqu STATE4, 0x00(STATEP) @@ -346,7 +346,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad) movdqu STATE2, 0x30(STATEP) movdqu STATE3, 0x40(STATEP) FRAME_END - ret + RET .Lad_out_2: movdqu STATE3, 0x00(STATEP) @@ -355,7 +355,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad) movdqu STATE1, 0x30(STATEP) movdqu STATE2, 0x40(STATEP) FRAME_END - ret + RET .Lad_out_3: movdqu STATE2, 0x00(STATEP) @@ -364,7 +364,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad) movdqu STATE0, 0x30(STATEP) movdqu STATE1, 0x40(STATEP) FRAME_END - ret + RET .Lad_out_4: movdqu STATE1, 0x00(STATEP) @@ -373,11 +373,11 @@ SYM_FUNC_START(crypto_aegis128_aesni_ad) movdqu STATE4, 0x30(STATEP) movdqu STATE0, 0x40(STATEP) FRAME_END - ret + RET .Lad_out: FRAME_END - ret + RET SYM_FUNC_END(crypto_aegis128_aesni_ad) .macro encrypt_block a s0 s1 s2 s3 s4 i @@ -452,7 +452,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc) movdqu STATE2, 0x30(STATEP) movdqu STATE3, 0x40(STATEP) FRAME_END - ret + RET .Lenc_out_1: movdqu STATE3, 0x00(STATEP) @@ -461,7 +461,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc) movdqu STATE1, 0x30(STATEP) movdqu STATE2, 0x40(STATEP) FRAME_END - ret + RET .Lenc_out_2: movdqu STATE2, 0x00(STATEP) @@ -470,7 +470,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc) movdqu STATE0, 0x30(STATEP) movdqu STATE1, 0x40(STATEP) FRAME_END - ret + RET .Lenc_out_3: movdqu STATE1, 0x00(STATEP) @@ -479,7 +479,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc) movdqu STATE4, 0x30(STATEP) movdqu STATE0, 0x40(STATEP) FRAME_END - ret + RET .Lenc_out_4: movdqu STATE0, 0x00(STATEP) @@ -488,11 +488,11 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc) movdqu STATE3, 0x30(STATEP) movdqu STATE4, 0x40(STATEP) FRAME_END - ret + RET .Lenc_out: FRAME_END - ret + RET SYM_FUNC_END(crypto_aegis128_aesni_enc) /* @@ -532,7 +532,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_enc_tail) movdqu STATE3, 0x40(STATEP) FRAME_END - ret + RET SYM_FUNC_END(crypto_aegis128_aesni_enc_tail) .macro decrypt_block a s0 s1 s2 s3 s4 i @@ -606,7 +606,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec) movdqu STATE2, 0x30(STATEP) movdqu STATE3, 0x40(STATEP) FRAME_END - ret + RET .Ldec_out_1: movdqu STATE3, 
0x00(STATEP) @@ -615,7 +615,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec) movdqu STATE1, 0x30(STATEP) movdqu STATE2, 0x40(STATEP) FRAME_END - ret + RET .Ldec_out_2: movdqu STATE2, 0x00(STATEP) @@ -624,7 +624,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec) movdqu STATE0, 0x30(STATEP) movdqu STATE1, 0x40(STATEP) FRAME_END - ret + RET .Ldec_out_3: movdqu STATE1, 0x00(STATEP) @@ -633,7 +633,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec) movdqu STATE4, 0x30(STATEP) movdqu STATE0, 0x40(STATEP) FRAME_END - ret + RET .Ldec_out_4: movdqu STATE0, 0x00(STATEP) @@ -642,11 +642,11 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec) movdqu STATE3, 0x30(STATEP) movdqu STATE4, 0x40(STATEP) FRAME_END - ret + RET .Ldec_out: FRAME_END - ret + RET SYM_FUNC_END(crypto_aegis128_aesni_dec) /* @@ -696,7 +696,7 @@ SYM_FUNC_START(crypto_aegis128_aesni_dec_tail) movdqu STATE3, 0x40(STATEP) FRAME_END - ret + RET SYM_FUNC_END(crypto_aegis128_aesni_dec_tail) /* @@ -743,5 +743,5 @@ SYM_FUNC_START(crypto_aegis128_aesni_final) movdqu MSG, (%rsi) FRAME_END - ret + RET SYM_FUNC_END(crypto_aegis128_aesni_final) diff --git a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S index 3f0fc7dd87d7..c799838242a6 100644 --- a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S +++ b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S @@ -525,7 +525,7 @@ ddq_add_8: /* return updated IV */ vpshufb xbyteswap, xcounter, xcounter vmovdqu xcounter, (p_iv) - ret + RET .endm /* diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S index 57aef3f5a81e..69c7c0dc22ea 100644 --- a/arch/x86/crypto/aesni-intel_asm.S +++ b/arch/x86/crypto/aesni-intel_asm.S @@ -1598,7 +1598,7 @@ SYM_FUNC_START(aesni_gcm_dec) GCM_ENC_DEC dec GCM_COMPLETE arg10, arg11 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_dec) @@ -1687,7 +1687,7 @@ SYM_FUNC_START(aesni_gcm_enc) GCM_COMPLETE arg10, arg11 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_enc) /***************************************************************************** @@ -1705,7 +1705,7 @@ SYM_FUNC_START(aesni_gcm_init) FUNC_SAVE GCM_INIT %arg3, %arg4,%arg5, %arg6 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_init) /***************************************************************************** @@ -1720,7 +1720,7 @@ SYM_FUNC_START(aesni_gcm_enc_update) FUNC_SAVE GCM_ENC_DEC enc FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_enc_update) /***************************************************************************** @@ -1735,7 +1735,7 @@ SYM_FUNC_START(aesni_gcm_dec_update) FUNC_SAVE GCM_ENC_DEC dec FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_dec_update) /***************************************************************************** @@ -1750,7 +1750,7 @@ SYM_FUNC_START(aesni_gcm_finalize) FUNC_SAVE GCM_COMPLETE %arg3 %arg4 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_finalize) #endif @@ -1766,7 +1766,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_256a) pxor %xmm1, %xmm0 movaps %xmm0, (TKEYP) add $0x10, TKEYP - ret + RET SYM_FUNC_END(_key_expansion_256a) SYM_FUNC_END_ALIAS(_key_expansion_128) @@ -1791,7 +1791,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_192a) shufps $0b01001110, %xmm2, %xmm1 movaps %xmm1, 0x10(TKEYP) add $0x20, TKEYP - ret + RET SYM_FUNC_END(_key_expansion_192a) SYM_FUNC_START_LOCAL(_key_expansion_192b) @@ -1810,7 +1810,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_192b) movaps %xmm0, (TKEYP) add $0x10, TKEYP - ret + RET SYM_FUNC_END(_key_expansion_192b) SYM_FUNC_START_LOCAL(_key_expansion_256b) @@ -1822,7 +1822,7 @@ SYM_FUNC_START_LOCAL(_key_expansion_256b) pxor %xmm1, %xmm2 
movaps %xmm2, (TKEYP) add $0x10, TKEYP - ret + RET SYM_FUNC_END(_key_expansion_256b) /* @@ -1937,7 +1937,7 @@ SYM_FUNC_START(aesni_set_key) popl KEYP #endif FRAME_END - ret + RET SYM_FUNC_END(aesni_set_key) /* @@ -1961,7 +1961,7 @@ SYM_FUNC_START(aesni_enc) popl KEYP #endif FRAME_END - ret + RET SYM_FUNC_END(aesni_enc) /* @@ -2018,7 +2018,7 @@ SYM_FUNC_START_LOCAL(_aesni_enc1) aesenc KEY, STATE movaps 0x70(TKEYP), KEY aesenclast KEY, STATE - ret + RET SYM_FUNC_END(_aesni_enc1) /* @@ -2126,7 +2126,7 @@ SYM_FUNC_START_LOCAL(_aesni_enc4) aesenclast KEY, STATE2 aesenclast KEY, STATE3 aesenclast KEY, STATE4 - ret + RET SYM_FUNC_END(_aesni_enc4) /* @@ -2151,7 +2151,7 @@ SYM_FUNC_START(aesni_dec) popl KEYP #endif FRAME_END - ret + RET SYM_FUNC_END(aesni_dec) /* @@ -2208,7 +2208,7 @@ SYM_FUNC_START_LOCAL(_aesni_dec1) aesdec KEY, STATE movaps 0x70(TKEYP), KEY aesdeclast KEY, STATE - ret + RET SYM_FUNC_END(_aesni_dec1) /* @@ -2316,7 +2316,7 @@ SYM_FUNC_START_LOCAL(_aesni_dec4) aesdeclast KEY, STATE2 aesdeclast KEY, STATE3 aesdeclast KEY, STATE4 - ret + RET SYM_FUNC_END(_aesni_dec4) /* @@ -2376,7 +2376,7 @@ SYM_FUNC_START(aesni_ecb_enc) popl LEN #endif FRAME_END - ret + RET SYM_FUNC_END(aesni_ecb_enc) /* @@ -2437,7 +2437,7 @@ SYM_FUNC_START(aesni_ecb_dec) popl LEN #endif FRAME_END - ret + RET SYM_FUNC_END(aesni_ecb_dec) /* @@ -2481,7 +2481,7 @@ SYM_FUNC_START(aesni_cbc_enc) popl IVP #endif FRAME_END - ret + RET SYM_FUNC_END(aesni_cbc_enc) /* @@ -2574,7 +2574,7 @@ SYM_FUNC_START(aesni_cbc_dec) popl IVP #endif FRAME_END - ret + RET SYM_FUNC_END(aesni_cbc_dec) #ifdef __x86_64__ @@ -2602,7 +2602,7 @@ SYM_FUNC_START_LOCAL(_aesni_inc_init) mov $1, TCTR_LOW movq TCTR_LOW, INC movq CTR, TCTR_LOW - ret + RET SYM_FUNC_END(_aesni_inc_init) /* @@ -2630,7 +2630,7 @@ SYM_FUNC_START_LOCAL(_aesni_inc) .Linc_low: movaps CTR, IV pshufb BSWAP_MASK, IV - ret + RET SYM_FUNC_END(_aesni_inc) /* @@ -2693,7 +2693,7 @@ SYM_FUNC_START(aesni_ctr_enc) movups IV, (IVP) .Lctr_enc_just_ret: FRAME_END - ret + RET SYM_FUNC_END(aesni_ctr_enc) /* @@ -2778,7 +2778,7 @@ SYM_FUNC_START(aesni_xts_encrypt) movups IV, (IVP) FRAME_END - ret + RET SYM_FUNC_END(aesni_xts_encrypt) /* @@ -2846,7 +2846,7 @@ SYM_FUNC_START(aesni_xts_decrypt) movups IV, (IVP) FRAME_END - ret + RET SYM_FUNC_END(aesni_xts_decrypt) #endif diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S index 2cf8e94d986a..4d9b2f887064 100644 --- a/arch/x86/crypto/aesni-intel_avx-x86_64.S +++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S @@ -1777,7 +1777,7 @@ SYM_FUNC_START(aesni_gcm_init_avx_gen2) FUNC_SAVE INIT GHASH_MUL_AVX, PRECOMPUTE_AVX FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_init_avx_gen2) ############################################################################### @@ -1798,15 +1798,15 @@ SYM_FUNC_START(aesni_gcm_enc_update_avx_gen2) # must be 192 GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 11 FUNC_RESTORE - ret + RET key_128_enc_update: GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 9 FUNC_RESTORE - ret + RET key_256_enc_update: GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, ENC, 13 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_enc_update_avx_gen2) ############################################################################### @@ -1827,15 +1827,15 @@ SYM_FUNC_START(aesni_gcm_dec_update_avx_gen2) # must be 192 GCM_ENC_DEC INITIAL_BLOCKS_AVX, 
GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 11 FUNC_RESTORE - ret + RET key_128_dec_update: GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 9 FUNC_RESTORE - ret + RET key_256_dec_update: GCM_ENC_DEC INITIAL_BLOCKS_AVX, GHASH_8_ENCRYPT_8_PARALLEL_AVX, GHASH_LAST_8_AVX, GHASH_MUL_AVX, DEC, 13 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_dec_update_avx_gen2) ############################################################################### @@ -1856,15 +1856,15 @@ SYM_FUNC_START(aesni_gcm_finalize_avx_gen2) # must be 192 GCM_COMPLETE GHASH_MUL_AVX, 11, arg3, arg4 FUNC_RESTORE - ret + RET key_128_finalize: GCM_COMPLETE GHASH_MUL_AVX, 9, arg3, arg4 FUNC_RESTORE - ret + RET key_256_finalize: GCM_COMPLETE GHASH_MUL_AVX, 13, arg3, arg4 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_finalize_avx_gen2) ############################################################################### @@ -2745,7 +2745,7 @@ SYM_FUNC_START(aesni_gcm_init_avx_gen4) FUNC_SAVE INIT GHASH_MUL_AVX2, PRECOMPUTE_AVX2 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_init_avx_gen4) ############################################################################### @@ -2766,15 +2766,15 @@ SYM_FUNC_START(aesni_gcm_enc_update_avx_gen4) # must be 192 GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 11 FUNC_RESTORE - ret + RET key_128_enc_update4: GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 9 FUNC_RESTORE - ret + RET key_256_enc_update4: GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, ENC, 13 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_enc_update_avx_gen4) ############################################################################### @@ -2795,15 +2795,15 @@ SYM_FUNC_START(aesni_gcm_dec_update_avx_gen4) # must be 192 GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 11 FUNC_RESTORE - ret + RET key_128_dec_update4: GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 9 FUNC_RESTORE - ret + RET key_256_dec_update4: GCM_ENC_DEC INITIAL_BLOCKS_AVX2, GHASH_8_ENCRYPT_8_PARALLEL_AVX2, GHASH_LAST_8_AVX2, GHASH_MUL_AVX2, DEC, 13 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_dec_update_avx_gen4) ############################################################################### @@ -2824,13 +2824,13 @@ SYM_FUNC_START(aesni_gcm_finalize_avx_gen4) # must be 192 GCM_COMPLETE GHASH_MUL_AVX2, 11, arg3, arg4 FUNC_RESTORE - ret + RET key_128_finalize4: GCM_COMPLETE GHASH_MUL_AVX2, 9, arg3, arg4 FUNC_RESTORE - ret + RET key_256_finalize4: GCM_COMPLETE GHASH_MUL_AVX2, 13, arg3, arg4 FUNC_RESTORE - ret + RET SYM_FUNC_END(aesni_gcm_finalize_avx_gen4) diff --git a/arch/x86/crypto/blake2s-core.S b/arch/x86/crypto/blake2s-core.S index 2ca79974f819..b50b35ff1fdb 100644 --- a/arch/x86/crypto/blake2s-core.S +++ b/arch/x86/crypto/blake2s-core.S @@ -171,7 +171,7 @@ SYM_FUNC_START(blake2s_compress_ssse3) movdqu %xmm1,0x10(%rdi) movdqu %xmm14,0x20(%rdi) .Lendofloop: - ret + RET SYM_FUNC_END(blake2s_compress_ssse3) #ifdef CONFIG_AS_AVX512 @@ -251,6 +251,6 @@ SYM_FUNC_START(blake2s_compress_avx512) vmovdqu %xmm1,0x10(%rdi) vmovdqu %xmm4,0x20(%rdi) vzeroupper - retq + RET SYM_FUNC_END(blake2s_compress_avx512) #endif /* CONFIG_AS_AVX512 */ diff --git a/arch/x86/crypto/blowfish-x86_64-asm_64.S 
b/arch/x86/crypto/blowfish-x86_64-asm_64.S index 4222ac6d6584..802d71582689 100644 --- a/arch/x86/crypto/blowfish-x86_64-asm_64.S +++ b/arch/x86/crypto/blowfish-x86_64-asm_64.S @@ -135,10 +135,10 @@ SYM_FUNC_START(__blowfish_enc_blk) jnz .L__enc_xor; write_block(); - ret; + RET; .L__enc_xor: xor_block(); - ret; + RET; SYM_FUNC_END(__blowfish_enc_blk) SYM_FUNC_START(blowfish_dec_blk) @@ -170,7 +170,7 @@ SYM_FUNC_START(blowfish_dec_blk) movq %r11, %r12; - ret; + RET; SYM_FUNC_END(blowfish_dec_blk) /********************************************************************** @@ -322,14 +322,14 @@ SYM_FUNC_START(__blowfish_enc_blk_4way) popq %rbx; popq %r12; - ret; + RET; .L__enc_xor4: xor_block4(); popq %rbx; popq %r12; - ret; + RET; SYM_FUNC_END(__blowfish_enc_blk_4way) SYM_FUNC_START(blowfish_dec_blk_4way) @@ -364,5 +364,5 @@ SYM_FUNC_START(blowfish_dec_blk_4way) popq %rbx; popq %r12; - ret; + RET; SYM_FUNC_END(blowfish_dec_blk_4way) diff --git a/arch/x86/crypto/camellia-aesni-avx-asm_64.S b/arch/x86/crypto/camellia-aesni-avx-asm_64.S index ecc0a9a905c4..297b1a7ed2bc 100644 --- a/arch/x86/crypto/camellia-aesni-avx-asm_64.S +++ b/arch/x86/crypto/camellia-aesni-avx-asm_64.S @@ -193,7 +193,7 @@ SYM_FUNC_START_LOCAL(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_c roundsm16(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7, %xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15, %rcx, (%r9)); - ret; + RET; SYM_FUNC_END(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd) .align 8 @@ -201,7 +201,7 @@ SYM_FUNC_START_LOCAL(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_a roundsm16(%xmm4, %xmm5, %xmm6, %xmm7, %xmm0, %xmm1, %xmm2, %xmm3, %xmm12, %xmm13, %xmm14, %xmm15, %xmm8, %xmm9, %xmm10, %xmm11, %rax, (%r9)); - ret; + RET; SYM_FUNC_END(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) /* @@ -787,7 +787,7 @@ SYM_FUNC_START_LOCAL(__camellia_enc_blk16) %xmm15, (key_table)(CTX, %r8, 8), (%rax), 1 * 16(%rax)); FRAME_END - ret; + RET; .align 8 .Lenc_max32: @@ -874,7 +874,7 @@ SYM_FUNC_START_LOCAL(__camellia_dec_blk16) %xmm15, (key_table)(CTX), (%rax), 1 * 16(%rax)); FRAME_END - ret; + RET; .align 8 .Ldec_max32: @@ -915,7 +915,7 @@ SYM_FUNC_START(camellia_ecb_enc_16way) %xmm8, %rsi); FRAME_END - ret; + RET; SYM_FUNC_END(camellia_ecb_enc_16way) SYM_FUNC_START(camellia_ecb_dec_16way) @@ -945,7 +945,7 @@ SYM_FUNC_START(camellia_ecb_dec_16way) %xmm8, %rsi); FRAME_END - ret; + RET; SYM_FUNC_END(camellia_ecb_dec_16way) SYM_FUNC_START(camellia_cbc_dec_16way) @@ -996,7 +996,7 @@ SYM_FUNC_START(camellia_cbc_dec_16way) %xmm8, %rsi); FRAME_END - ret; + RET; SYM_FUNC_END(camellia_cbc_dec_16way) #define inc_le128(x, minus_one, tmp) \ @@ -1109,7 +1109,7 @@ SYM_FUNC_START(camellia_ctr_16way) %xmm8, %rsi); FRAME_END - ret; + RET; SYM_FUNC_END(camellia_ctr_16way) #define gf128mul_x_ble(iv, mask, tmp) \ @@ -1253,7 +1253,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_16way) %xmm8, %rsi); FRAME_END - ret; + RET; SYM_FUNC_END(camellia_xts_crypt_16way) SYM_FUNC_START(camellia_xts_enc_16way) diff --git a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S index 0907243c501c..288cd246da20 100644 --- a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S +++ b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S @@ -227,7 +227,7 @@ SYM_FUNC_START_LOCAL(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_c roundsm32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7, %ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, %ymm15, %rcx, (%r9)); - ret; + 
RET; SYM_FUNC_END(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd) .align 8 @@ -235,7 +235,7 @@ SYM_FUNC_START_LOCAL(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_a roundsm32(%ymm4, %ymm5, %ymm6, %ymm7, %ymm0, %ymm1, %ymm2, %ymm3, %ymm12, %ymm13, %ymm14, %ymm15, %ymm8, %ymm9, %ymm10, %ymm11, %rax, (%r9)); - ret; + RET; SYM_FUNC_END(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab) /* @@ -825,7 +825,7 @@ SYM_FUNC_START_LOCAL(__camellia_enc_blk32) %ymm15, (key_table)(CTX, %r8, 8), (%rax), 1 * 32(%rax)); FRAME_END - ret; + RET; .align 8 .Lenc_max32: @@ -912,7 +912,7 @@ SYM_FUNC_START_LOCAL(__camellia_dec_blk32) %ymm15, (key_table)(CTX), (%rax), 1 * 32(%rax)); FRAME_END - ret; + RET; .align 8 .Ldec_max32: @@ -957,7 +957,7 @@ SYM_FUNC_START(camellia_ecb_enc_32way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(camellia_ecb_enc_32way) SYM_FUNC_START(camellia_ecb_dec_32way) @@ -991,7 +991,7 @@ SYM_FUNC_START(camellia_ecb_dec_32way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(camellia_ecb_dec_32way) SYM_FUNC_START(camellia_cbc_dec_32way) @@ -1059,7 +1059,7 @@ SYM_FUNC_START(camellia_cbc_dec_32way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(camellia_cbc_dec_32way) #define inc_le128(x, minus_one, tmp) \ @@ -1199,7 +1199,7 @@ SYM_FUNC_START(camellia_ctr_32way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(camellia_ctr_32way) #define gf128mul_x_ble(iv, mask, tmp) \ @@ -1366,7 +1366,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_32way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(camellia_xts_crypt_32way) SYM_FUNC_START(camellia_xts_enc_32way) diff --git a/arch/x86/crypto/camellia-x86_64-asm_64.S b/arch/x86/crypto/camellia-x86_64-asm_64.S index 1372e6408850..347c059f5940 100644 --- a/arch/x86/crypto/camellia-x86_64-asm_64.S +++ b/arch/x86/crypto/camellia-x86_64-asm_64.S @@ -213,13 +213,13 @@ SYM_FUNC_START(__camellia_enc_blk) enc_outunpack(mov, RT1); movq RR12, %r12; - ret; + RET; .L__enc_xor: enc_outunpack(xor, RT1); movq RR12, %r12; - ret; + RET; SYM_FUNC_END(__camellia_enc_blk) SYM_FUNC_START(camellia_dec_blk) @@ -257,7 +257,7 @@ SYM_FUNC_START(camellia_dec_blk) dec_outunpack(); movq RR12, %r12; - ret; + RET; SYM_FUNC_END(camellia_dec_blk) /********************************************************************** @@ -448,14 +448,14 @@ SYM_FUNC_START(__camellia_enc_blk_2way) movq RR12, %r12; popq %rbx; - ret; + RET; .L__enc2_xor: enc_outunpack2(xor, RT2); movq RR12, %r12; popq %rbx; - ret; + RET; SYM_FUNC_END(__camellia_enc_blk_2way) SYM_FUNC_START(camellia_dec_blk_2way) @@ -495,5 +495,5 @@ SYM_FUNC_START(camellia_dec_blk_2way) movq RR12, %r12; movq RXOR, %rbx; - ret; + RET; SYM_FUNC_END(camellia_dec_blk_2way) diff --git a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S index 8a6181b08b59..b258af420c92 100644 --- a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S +++ b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S @@ -279,7 +279,7 @@ SYM_FUNC_START_LOCAL(__cast5_enc_blk16) outunpack_blocks(RR3, RL3, RTMP, RX, RKM); outunpack_blocks(RR4, RL4, RTMP, RX, RKM); - ret; + RET; SYM_FUNC_END(__cast5_enc_blk16) .align 16 @@ -352,7 +352,7 @@ SYM_FUNC_START_LOCAL(__cast5_dec_blk16) outunpack_blocks(RR3, RL3, RTMP, RX, RKM); outunpack_blocks(RR4, RL4, RTMP, RX, RKM); - ret; + RET; .L__skip_dec: vpsrldq $4, RKR, RKR; @@ -393,7 +393,7 @@ SYM_FUNC_START(cast5_ecb_enc_16way) popq %r15; FRAME_END - ret; + RET; SYM_FUNC_END(cast5_ecb_enc_16way) SYM_FUNC_START(cast5_ecb_dec_16way) @@ -431,7 +431,7 @@ SYM_FUNC_START(cast5_ecb_dec_16way) popq 
%r15; FRAME_END - ret; + RET; SYM_FUNC_END(cast5_ecb_dec_16way) SYM_FUNC_START(cast5_cbc_dec_16way) @@ -483,7 +483,7 @@ SYM_FUNC_START(cast5_cbc_dec_16way) popq %r15; popq %r12; FRAME_END - ret; + RET; SYM_FUNC_END(cast5_cbc_dec_16way) SYM_FUNC_START(cast5_ctr_16way) @@ -559,5 +559,5 @@ SYM_FUNC_START(cast5_ctr_16way) popq %r15; popq %r12; FRAME_END - ret; + RET; SYM_FUNC_END(cast5_ctr_16way) diff --git a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S index 932a3ce32a88..6eccaf1fb4c6 100644 --- a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S +++ b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S @@ -291,7 +291,7 @@ SYM_FUNC_START_LOCAL(__cast6_enc_blk8) outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM); outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM); - ret; + RET; SYM_FUNC_END(__cast6_enc_blk8) .align 8 @@ -338,7 +338,7 @@ SYM_FUNC_START_LOCAL(__cast6_dec_blk8) outunpack_blocks(RA1, RB1, RC1, RD1, RTMP, RX, RKRF, RKM); outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM); - ret; + RET; SYM_FUNC_END(__cast6_dec_blk8) SYM_FUNC_START(cast6_ecb_enc_8way) @@ -361,7 +361,7 @@ SYM_FUNC_START(cast6_ecb_enc_8way) popq %r15; FRAME_END - ret; + RET; SYM_FUNC_END(cast6_ecb_enc_8way) SYM_FUNC_START(cast6_ecb_dec_8way) @@ -384,7 +384,7 @@ SYM_FUNC_START(cast6_ecb_dec_8way) popq %r15; FRAME_END - ret; + RET; SYM_FUNC_END(cast6_ecb_dec_8way) SYM_FUNC_START(cast6_cbc_dec_8way) @@ -410,7 +410,7 @@ SYM_FUNC_START(cast6_cbc_dec_8way) popq %r15; popq %r12; FRAME_END - ret; + RET; SYM_FUNC_END(cast6_cbc_dec_8way) SYM_FUNC_START(cast6_ctr_8way) @@ -438,7 +438,7 @@ SYM_FUNC_START(cast6_ctr_8way) popq %r15; popq %r12; FRAME_END - ret; + RET; SYM_FUNC_END(cast6_ctr_8way) SYM_FUNC_START(cast6_xts_enc_8way) @@ -465,7 +465,7 @@ SYM_FUNC_START(cast6_xts_enc_8way) popq %r15; FRAME_END - ret; + RET; SYM_FUNC_END(cast6_xts_enc_8way) SYM_FUNC_START(cast6_xts_dec_8way) @@ -492,5 +492,5 @@ SYM_FUNC_START(cast6_xts_dec_8way) popq %r15; FRAME_END - ret; + RET; SYM_FUNC_END(cast6_xts_dec_8way) diff --git a/arch/x86/crypto/chacha-avx2-x86_64.S b/arch/x86/crypto/chacha-avx2-x86_64.S index ee9a40ab4109..f3d8fc018249 100644 --- a/arch/x86/crypto/chacha-avx2-x86_64.S +++ b/arch/x86/crypto/chacha-avx2-x86_64.S @@ -193,7 +193,7 @@ SYM_FUNC_START(chacha_2block_xor_avx2) .Ldone2: vzeroupper - ret + RET .Lxorpart2: # xor remaining bytes from partial register into output @@ -498,7 +498,7 @@ SYM_FUNC_START(chacha_4block_xor_avx2) .Ldone4: vzeroupper - ret + RET .Lxorpart4: # xor remaining bytes from partial register into output @@ -992,7 +992,7 @@ SYM_FUNC_START(chacha_8block_xor_avx2) .Ldone8: vzeroupper lea -8(%r10),%rsp - ret + RET .Lxorpart8: # xor remaining bytes from partial register into output diff --git a/arch/x86/crypto/chacha-avx512vl-x86_64.S b/arch/x86/crypto/chacha-avx512vl-x86_64.S index 8713c16c2501..259383e1ad44 100644 --- a/arch/x86/crypto/chacha-avx512vl-x86_64.S +++ b/arch/x86/crypto/chacha-avx512vl-x86_64.S @@ -166,7 +166,7 @@ SYM_FUNC_START(chacha_2block_xor_avx512vl) .Ldone2: vzeroupper - ret + RET .Lxorpart2: # xor remaining bytes from partial register into output @@ -432,7 +432,7 @@ SYM_FUNC_START(chacha_4block_xor_avx512vl) .Ldone4: vzeroupper - ret + RET .Lxorpart4: # xor remaining bytes from partial register into output @@ -812,7 +812,7 @@ SYM_FUNC_START(chacha_8block_xor_avx512vl) .Ldone8: vzeroupper - ret + RET .Lxorpart8: # xor remaining bytes from partial register into output diff --git a/arch/x86/crypto/chacha-ssse3-x86_64.S 
b/arch/x86/crypto/chacha-ssse3-x86_64.S index ca1788bfee16..7111949cd5b9 100644 --- a/arch/x86/crypto/chacha-ssse3-x86_64.S +++ b/arch/x86/crypto/chacha-ssse3-x86_64.S @@ -108,7 +108,7 @@ SYM_FUNC_START_LOCAL(chacha_permute) sub $2,%r8d jnz .Ldoubleround - ret + RET SYM_FUNC_END(chacha_permute) SYM_FUNC_START(chacha_block_xor_ssse3) @@ -166,7 +166,7 @@ SYM_FUNC_START(chacha_block_xor_ssse3) .Ldone: FRAME_END - ret + RET .Lxorpart: # xor remaining bytes from partial register into output @@ -217,7 +217,7 @@ SYM_FUNC_START(hchacha_block_ssse3) movdqu %xmm3,0x10(%rsi) FRAME_END - ret + RET SYM_FUNC_END(hchacha_block_ssse3) SYM_FUNC_START(chacha_4block_xor_ssse3) @@ -762,7 +762,7 @@ SYM_FUNC_START(chacha_4block_xor_ssse3) .Ldone4: lea -8(%r10),%rsp - ret + RET .Lxorpart4: # xor remaining bytes from partial register into output diff --git a/arch/x86/crypto/crc32-pclmul_asm.S b/arch/x86/crypto/crc32-pclmul_asm.S index 6e7d4c4d3208..c392a6edbfff 100644 --- a/arch/x86/crypto/crc32-pclmul_asm.S +++ b/arch/x86/crypto/crc32-pclmul_asm.S @@ -236,5 +236,5 @@ fold_64: pxor %xmm2, %xmm1 pextrd $0x01, %xmm1, %eax - ret + RET SYM_FUNC_END(crc32_pclmul_le_16) diff --git a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S index 884dc767b051..f6e3568fbd63 100644 --- a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S +++ b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S @@ -309,7 +309,7 @@ do_return: popq %rsi popq %rdi popq %rbx - ret + RET SYM_FUNC_END(crc_pcl) .section .rodata, "a", @progbits diff --git a/arch/x86/crypto/crct10dif-pcl-asm_64.S b/arch/x86/crypto/crct10dif-pcl-asm_64.S index b2533d63030e..721474abfb71 100644 --- a/arch/x86/crypto/crct10dif-pcl-asm_64.S +++ b/arch/x86/crypto/crct10dif-pcl-asm_64.S @@ -257,7 +257,7 @@ SYM_FUNC_START(crc_t10dif_pcl) # Final CRC value (x^16 * M(x)) mod G(x) is in low 16 bits of xmm0. 
pextrw $0, %xmm0, %eax - ret + RET .align 16 .Lless_than_256_bytes: diff --git a/arch/x86/crypto/des3_ede-asm_64.S b/arch/x86/crypto/des3_ede-asm_64.S index fac0fdc3f25d..f4c760f4cade 100644 --- a/arch/x86/crypto/des3_ede-asm_64.S +++ b/arch/x86/crypto/des3_ede-asm_64.S @@ -243,7 +243,7 @@ SYM_FUNC_START(des3_ede_x86_64_crypt_blk) popq %r12; popq %rbx; - ret; + RET; SYM_FUNC_END(des3_ede_x86_64_crypt_blk) /*********************************************************************** @@ -528,7 +528,7 @@ SYM_FUNC_START(des3_ede_x86_64_crypt_blk_3way) popq %r12; popq %rbx; - ret; + RET; SYM_FUNC_END(des3_ede_x86_64_crypt_blk_3way) .section .rodata, "a", @progbits diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S index 99ac25e18e09..2bf871899920 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_asm.S +++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S @@ -85,7 +85,7 @@ SYM_FUNC_START_LOCAL(__clmul_gf128mul_ble) psrlq $1, T2 pxor T2, T1 pxor T1, DATA - ret + RET SYM_FUNC_END(__clmul_gf128mul_ble) /* void clmul_ghash_mul(char *dst, const u128 *shash) */ @@ -99,7 +99,7 @@ SYM_FUNC_START(clmul_ghash_mul) pshufb BSWAP, DATA movups DATA, (%rdi) FRAME_END - ret + RET SYM_FUNC_END(clmul_ghash_mul) /* @@ -128,5 +128,5 @@ SYM_FUNC_START(clmul_ghash_update) movups DATA, (%rdi) .Lupdate_just_ret: FRAME_END - ret + RET SYM_FUNC_END(clmul_ghash_update) diff --git a/arch/x86/crypto/nh-avx2-x86_64.S b/arch/x86/crypto/nh-avx2-x86_64.S index b22c7b936272..6a0b15e7196a 100644 --- a/arch/x86/crypto/nh-avx2-x86_64.S +++ b/arch/x86/crypto/nh-avx2-x86_64.S @@ -153,5 +153,5 @@ SYM_FUNC_START(nh_avx2) vpaddq T1, T0, T0 vpaddq T4, T0, T0 vmovdqu T0, (HASH) - ret + RET SYM_FUNC_END(nh_avx2) diff --git a/arch/x86/crypto/nh-sse2-x86_64.S b/arch/x86/crypto/nh-sse2-x86_64.S index d7ae22dd6683..34c567bbcb4f 100644 --- a/arch/x86/crypto/nh-sse2-x86_64.S +++ b/arch/x86/crypto/nh-sse2-x86_64.S @@ -119,5 +119,5 @@ SYM_FUNC_START(nh_sse2) paddq PASS2_SUMS, T1 movdqu T0, 0x00(HASH) movdqu T1, 0x10(HASH) - ret + RET SYM_FUNC_END(nh_sse2) diff --git a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S index ba9e4c1e7f5c..c985bc15304b 100644 --- a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S +++ b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S @@ -605,7 +605,7 @@ SYM_FUNC_START_LOCAL(__serpent_enc_blk8_avx) write_blocks(RA1, RB1, RC1, RD1, RK0, RK1, RK2); write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2); - ret; + RET; SYM_FUNC_END(__serpent_enc_blk8_avx) .align 8 @@ -659,7 +659,7 @@ SYM_FUNC_START_LOCAL(__serpent_dec_blk8_avx) write_blocks(RC1, RD1, RB1, RE1, RK0, RK1, RK2); write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2); - ret; + RET; SYM_FUNC_END(__serpent_dec_blk8_avx) SYM_FUNC_START(serpent_ecb_enc_8way_avx) @@ -677,7 +677,7 @@ SYM_FUNC_START(serpent_ecb_enc_8way_avx) store_8way(%rsi, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2); FRAME_END - ret; + RET; SYM_FUNC_END(serpent_ecb_enc_8way_avx) SYM_FUNC_START(serpent_ecb_dec_8way_avx) @@ -695,7 +695,7 @@ SYM_FUNC_START(serpent_ecb_dec_8way_avx) store_8way(%rsi, RC1, RD1, RB1, RE1, RC2, RD2, RB2, RE2); FRAME_END - ret; + RET; SYM_FUNC_END(serpent_ecb_dec_8way_avx) SYM_FUNC_START(serpent_cbc_dec_8way_avx) @@ -713,7 +713,7 @@ SYM_FUNC_START(serpent_cbc_dec_8way_avx) store_cbc_8way(%rdx, %rsi, RC1, RD1, RB1, RE1, RC2, RD2, RB2, RE2); FRAME_END - ret; + RET; SYM_FUNC_END(serpent_cbc_dec_8way_avx) SYM_FUNC_START(serpent_ctr_8way_avx) @@ -733,7 +733,7 @@ SYM_FUNC_START(serpent_ctr_8way_avx) store_ctr_8way(%rdx, %rsi, RA1, 
RB1, RC1, RD1, RA2, RB2, RC2, RD2); FRAME_END - ret; + RET; SYM_FUNC_END(serpent_ctr_8way_avx) SYM_FUNC_START(serpent_xts_enc_8way_avx) @@ -755,7 +755,7 @@ SYM_FUNC_START(serpent_xts_enc_8way_avx) store_xts_8way(%rsi, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2); FRAME_END - ret; + RET; SYM_FUNC_END(serpent_xts_enc_8way_avx) SYM_FUNC_START(serpent_xts_dec_8way_avx) @@ -777,5 +777,5 @@ SYM_FUNC_START(serpent_xts_dec_8way_avx) store_xts_8way(%rsi, RC1, RD1, RB1, RE1, RC2, RD2, RB2, RE2); FRAME_END - ret; + RET; SYM_FUNC_END(serpent_xts_dec_8way_avx) diff --git a/arch/x86/crypto/serpent-avx2-asm_64.S b/arch/x86/crypto/serpent-avx2-asm_64.S index c9648aeae705..ca18948964f4 100644 --- a/arch/x86/crypto/serpent-avx2-asm_64.S +++ b/arch/x86/crypto/serpent-avx2-asm_64.S @@ -611,7 +611,7 @@ SYM_FUNC_START_LOCAL(__serpent_enc_blk16) write_blocks(RA1, RB1, RC1, RD1, RK0, RK1, RK2); write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2); - ret; + RET; SYM_FUNC_END(__serpent_enc_blk16) .align 8 @@ -665,7 +665,7 @@ SYM_FUNC_START_LOCAL(__serpent_dec_blk16) write_blocks(RC1, RD1, RB1, RE1, RK0, RK1, RK2); write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2); - ret; + RET; SYM_FUNC_END(__serpent_dec_blk16) SYM_FUNC_START(serpent_ecb_enc_16way) @@ -687,7 +687,7 @@ SYM_FUNC_START(serpent_ecb_enc_16way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(serpent_ecb_enc_16way) SYM_FUNC_START(serpent_ecb_dec_16way) @@ -709,7 +709,7 @@ SYM_FUNC_START(serpent_ecb_dec_16way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(serpent_ecb_dec_16way) SYM_FUNC_START(serpent_cbc_dec_16way) @@ -732,7 +732,7 @@ SYM_FUNC_START(serpent_cbc_dec_16way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(serpent_cbc_dec_16way) SYM_FUNC_START(serpent_ctr_16way) @@ -757,7 +757,7 @@ SYM_FUNC_START(serpent_ctr_16way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(serpent_ctr_16way) SYM_FUNC_START(serpent_xts_enc_16way) @@ -783,7 +783,7 @@ SYM_FUNC_START(serpent_xts_enc_16way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(serpent_xts_enc_16way) SYM_FUNC_START(serpent_xts_dec_16way) @@ -809,5 +809,5 @@ SYM_FUNC_START(serpent_xts_dec_16way) vzeroupper; FRAME_END - ret; + RET; SYM_FUNC_END(serpent_xts_dec_16way) diff --git a/arch/x86/crypto/serpent-sse2-i586-asm_32.S b/arch/x86/crypto/serpent-sse2-i586-asm_32.S index 6379b99cb722..8ccb03ad7cef 100644 --- a/arch/x86/crypto/serpent-sse2-i586-asm_32.S +++ b/arch/x86/crypto/serpent-sse2-i586-asm_32.S @@ -553,12 +553,12 @@ SYM_FUNC_START(__serpent_enc_blk_4way) write_blocks(%eax, RA, RB, RC, RD, RT0, RT1, RE); - ret; + RET; .L__enc_xor4: xor_blocks(%eax, RA, RB, RC, RD, RT0, RT1, RE); - ret; + RET; SYM_FUNC_END(__serpent_enc_blk_4way) SYM_FUNC_START(serpent_dec_blk_4way) @@ -612,5 +612,5 @@ SYM_FUNC_START(serpent_dec_blk_4way) movl arg_dst(%esp), %eax; write_blocks(%eax, RC, RD, RB, RE, RT0, RT1, RA); - ret; + RET; SYM_FUNC_END(serpent_dec_blk_4way) diff --git a/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S b/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S index efb6dc17dc90..e0998a011d1d 100644 --- a/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S +++ b/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S @@ -675,13 +675,13 @@ SYM_FUNC_START(__serpent_enc_blk_8way) write_blocks(%rsi, RA1, RB1, RC1, RD1, RK0, RK1, RK2); write_blocks(%rax, RA2, RB2, RC2, RD2, RK0, RK1, RK2); - ret; + RET; .L__enc_xor8: xor_blocks(%rsi, RA1, RB1, RC1, RD1, RK0, RK1, RK2); xor_blocks(%rax, RA2, RB2, RC2, RD2, RK0, RK1, RK2); - ret; + RET; SYM_FUNC_END(__serpent_enc_blk_8way) SYM_FUNC_START(serpent_dec_blk_8way) @@ -735,5 +735,5 @@ 
SYM_FUNC_START(serpent_dec_blk_8way) write_blocks(%rsi, RC1, RD1, RB1, RE1, RK0, RK1, RK2); write_blocks(%rax, RC2, RD2, RB2, RE2, RK0, RK1, RK2); - ret; + RET; SYM_FUNC_END(serpent_dec_blk_8way) diff --git a/arch/x86/crypto/sha1_avx2_x86_64_asm.S b/arch/x86/crypto/sha1_avx2_x86_64_asm.S index 1e594d60afa5..6fa622652bfc 100644 --- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S +++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S @@ -674,7 +674,7 @@ _loop3: pop %r12 pop %rbx - ret + RET SYM_FUNC_END(\name) .endm diff --git a/arch/x86/crypto/sha1_ni_asm.S b/arch/x86/crypto/sha1_ni_asm.S index 11efe3a45a1f..b59f3ca62837 100644 --- a/arch/x86/crypto/sha1_ni_asm.S +++ b/arch/x86/crypto/sha1_ni_asm.S @@ -290,7 +290,7 @@ SYM_FUNC_START(sha1_ni_transform) .Ldone_hash: mov RSPSAVE, %rsp - ret + RET SYM_FUNC_END(sha1_ni_transform) .section .rodata.cst16.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 16 diff --git a/arch/x86/crypto/sha1_ssse3_asm.S b/arch/x86/crypto/sha1_ssse3_asm.S index d25668d2a1e9..263f916362e0 100644 --- a/arch/x86/crypto/sha1_ssse3_asm.S +++ b/arch/x86/crypto/sha1_ssse3_asm.S @@ -99,7 +99,7 @@ pop %rbp pop %r12 pop %rbx - ret + RET SYM_FUNC_END(\name) .endm diff --git a/arch/x86/crypto/sha256-avx-asm.S b/arch/x86/crypto/sha256-avx-asm.S index 4739cd31b9db..3baa1ec39097 100644 --- a/arch/x86/crypto/sha256-avx-asm.S +++ b/arch/x86/crypto/sha256-avx-asm.S @@ -458,7 +458,7 @@ done_hash: popq %r13 popq %r12 popq %rbx - ret + RET SYM_FUNC_END(sha256_transform_avx) .section .rodata.cst256.K256, "aM", @progbits, 256 diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S index 11ff60c29c8b..3439aaf4295d 100644 --- a/arch/x86/crypto/sha256-avx2-asm.S +++ b/arch/x86/crypto/sha256-avx2-asm.S @@ -711,7 +711,7 @@ done_hash: popq %r13 popq %r12 popq %rbx - ret + RET SYM_FUNC_END(sha256_transform_rorx) .section .rodata.cst512.K256, "aM", @progbits, 512 diff --git a/arch/x86/crypto/sha256-ssse3-asm.S b/arch/x86/crypto/sha256-ssse3-asm.S index ddfa863b4ee3..c4a5db612c32 100644 --- a/arch/x86/crypto/sha256-ssse3-asm.S +++ b/arch/x86/crypto/sha256-ssse3-asm.S @@ -472,7 +472,7 @@ done_hash: popq %r12 popq %rbx - ret + RET SYM_FUNC_END(sha256_transform_ssse3) .section .rodata.cst256.K256, "aM", @progbits, 256 diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S index 7abade04a3a3..94d50dd27cb5 100644 --- a/arch/x86/crypto/sha256_ni_asm.S +++ b/arch/x86/crypto/sha256_ni_asm.S @@ -326,7 +326,7 @@ SYM_FUNC_START(sha256_ni_transform) .Ldone_hash: - ret + RET SYM_FUNC_END(sha256_ni_transform) .section .rodata.cst256.K256, "aM", @progbits, 256 diff --git a/arch/x86/crypto/sha512-avx-asm.S b/arch/x86/crypto/sha512-avx-asm.S index 63470fd6ae32..34fc71c3524e 100644 --- a/arch/x86/crypto/sha512-avx-asm.S +++ b/arch/x86/crypto/sha512-avx-asm.S @@ -364,7 +364,7 @@ updateblock: mov frame_RSPSAVE(%rsp), %rsp nowork: - ret + RET SYM_FUNC_END(sha512_transform_avx) ######################################################################## diff --git a/arch/x86/crypto/sha512-avx2-asm.S b/arch/x86/crypto/sha512-avx2-asm.S index 3a44bdcfd583..399fa74ab3d3 100644 --- a/arch/x86/crypto/sha512-avx2-asm.S +++ b/arch/x86/crypto/sha512-avx2-asm.S @@ -681,7 +681,7 @@ done_hash: # Restore Stack Pointer mov frame_RSPSAVE(%rsp), %rsp - ret + RET SYM_FUNC_END(sha512_transform_rorx) ######################################################################## diff --git a/arch/x86/crypto/sha512-ssse3-asm.S b/arch/x86/crypto/sha512-ssse3-asm.S index 7946a1bee85b..e9b460a54503 100644 --- 
a/arch/x86/crypto/sha512-ssse3-asm.S +++ b/arch/x86/crypto/sha512-ssse3-asm.S @@ -366,7 +366,7 @@ updateblock: mov frame_RSPSAVE(%rsp), %rsp nowork: - ret + RET SYM_FUNC_END(sha512_transform_ssse3) ######################################################################## diff --git a/arch/x86/crypto/twofish-avx-x86_64-asm_64.S b/arch/x86/crypto/twofish-avx-x86_64-asm_64.S index a5151393bb2f..c3707a71ef72 100644 --- a/arch/x86/crypto/twofish-avx-x86_64-asm_64.S +++ b/arch/x86/crypto/twofish-avx-x86_64-asm_64.S @@ -272,7 +272,7 @@ SYM_FUNC_START_LOCAL(__twofish_enc_blk8) outunpack_blocks(RC1, RD1, RA1, RB1, RK1, RX0, RY0, RK2); outunpack_blocks(RC2, RD2, RA2, RB2, RK1, RX0, RY0, RK2); - ret; + RET; SYM_FUNC_END(__twofish_enc_blk8) .align 8 @@ -312,7 +312,7 @@ SYM_FUNC_START_LOCAL(__twofish_dec_blk8) outunpack_blocks(RA1, RB1, RC1, RD1, RK1, RX0, RY0, RK2); outunpack_blocks(RA2, RB2, RC2, RD2, RK1, RX0, RY0, RK2); - ret; + RET; SYM_FUNC_END(__twofish_dec_blk8) SYM_FUNC_START(twofish_ecb_enc_8way) @@ -332,7 +332,7 @@ SYM_FUNC_START(twofish_ecb_enc_8way) store_8way(%r11, RC1, RD1, RA1, RB1, RC2, RD2, RA2, RB2); FRAME_END - ret; + RET; SYM_FUNC_END(twofish_ecb_enc_8way) SYM_FUNC_START(twofish_ecb_dec_8way) @@ -352,7 +352,7 @@ SYM_FUNC_START(twofish_ecb_dec_8way) store_8way(%r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2); FRAME_END - ret; + RET; SYM_FUNC_END(twofish_ecb_dec_8way) SYM_FUNC_START(twofish_cbc_dec_8way) @@ -377,7 +377,7 @@ SYM_FUNC_START(twofish_cbc_dec_8way) popq %r12; FRAME_END - ret; + RET; SYM_FUNC_END(twofish_cbc_dec_8way) SYM_FUNC_START(twofish_ctr_8way) @@ -404,7 +404,7 @@ SYM_FUNC_START(twofish_ctr_8way) popq %r12; FRAME_END - ret; + RET; SYM_FUNC_END(twofish_ctr_8way) SYM_FUNC_START(twofish_xts_enc_8way) @@ -428,7 +428,7 @@ SYM_FUNC_START(twofish_xts_enc_8way) store_xts_8way(%r11, RC1, RD1, RA1, RB1, RC2, RD2, RA2, RB2); FRAME_END - ret; + RET; SYM_FUNC_END(twofish_xts_enc_8way) SYM_FUNC_START(twofish_xts_dec_8way) @@ -452,5 +452,5 @@ SYM_FUNC_START(twofish_xts_dec_8way) store_xts_8way(%r11, RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2); FRAME_END - ret; + RET; SYM_FUNC_END(twofish_xts_dec_8way) diff --git a/arch/x86/crypto/twofish-i586-asm_32.S b/arch/x86/crypto/twofish-i586-asm_32.S index a6f09e4f2e46..3abcad661884 100644 --- a/arch/x86/crypto/twofish-i586-asm_32.S +++ b/arch/x86/crypto/twofish-i586-asm_32.S @@ -260,7 +260,7 @@ SYM_FUNC_START(twofish_enc_blk) pop %ebx pop %ebp mov $1, %eax - ret + RET SYM_FUNC_END(twofish_enc_blk) SYM_FUNC_START(twofish_dec_blk) @@ -317,5 +317,5 @@ SYM_FUNC_START(twofish_dec_blk) pop %ebx pop %ebp mov $1, %eax - ret + RET SYM_FUNC_END(twofish_dec_blk) diff --git a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S index fc23552afe37..b12d916f08d6 100644 --- a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S +++ b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S @@ -258,7 +258,7 @@ SYM_FUNC_START(__twofish_enc_blk_3way) popq %rbx; popq %r12; popq %r13; - ret; + RET; .L__enc_xor3: outunpack_enc3(xor); @@ -266,7 +266,7 @@ SYM_FUNC_START(__twofish_enc_blk_3way) popq %rbx; popq %r12; popq %r13; - ret; + RET; SYM_FUNC_END(__twofish_enc_blk_3way) SYM_FUNC_START(twofish_dec_blk_3way) @@ -301,5 +301,5 @@ SYM_FUNC_START(twofish_dec_blk_3way) popq %rbx; popq %r12; popq %r13; - ret; + RET; SYM_FUNC_END(twofish_dec_blk_3way) diff --git a/arch/x86/crypto/twofish-x86_64-asm_64.S b/arch/x86/crypto/twofish-x86_64-asm_64.S index d2e56232494a..775af290cd19 100644 --- a/arch/x86/crypto/twofish-x86_64-asm_64.S +++ 
b/arch/x86/crypto/twofish-x86_64-asm_64.S @@ -252,7 +252,7 @@ SYM_FUNC_START(twofish_enc_blk) popq R1 movl $1,%eax - ret + RET SYM_FUNC_END(twofish_enc_blk) SYM_FUNC_START(twofish_dec_blk) @@ -304,5 +304,5 @@ SYM_FUNC_START(twofish_dec_blk) popq R1 movl $1,%eax - ret + RET SYM_FUNC_END(twofish_dec_blk) diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index 4e079f250962..4a4896ba4db3 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -821,7 +821,7 @@ SYM_FUNC_START(schedule_tail_wrapper) popl %eax FRAME_END - ret + RET SYM_FUNC_END(schedule_tail_wrapper) .popsection diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 2f2d52729e17..fb77ef8ecde5 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -740,7 +740,7 @@ SYM_FUNC_START(asm_load_gs_index) 2: ALTERNATIVE "", "mfence", X86_BUG_SWAPGS_FENCE swapgs FRAME_END - ret + RET SYM_FUNC_END(asm_load_gs_index) EXPORT_SYMBOL(asm_load_gs_index) @@ -799,7 +799,7 @@ SYM_INNER_LABEL(asm_call_irq_on_stack, SYM_L_GLOBAL) /* Restore the previous stack pointer from RBP. */ leaveq - ret + RET SYM_FUNC_END(asm_call_on_stack) #ifdef CONFIG_XEN_PV @@ -932,7 +932,7 @@ SYM_CODE_START_LOCAL(paranoid_entry) * is needed here. */ SAVE_AND_SET_GSBASE scratch_reg=%rax save_reg=%rbx - ret + RET .Lparanoid_entry_checkgs: /* EBX = 1 -> kernel GSBASE active, no restore required */ @@ -953,7 +953,7 @@ SYM_CODE_START_LOCAL(paranoid_entry) .Lparanoid_kernel_gsbase: FENCE_SWAPGS_KERNEL_ENTRY - ret + RET SYM_CODE_END(paranoid_entry) /* @@ -1032,7 +1032,7 @@ SYM_CODE_START_LOCAL(error_entry) movq %rax, %rsp /* switch stack */ ENCODE_FRAME_POINTER pushq %r12 - ret + RET /* * There are two places in the kernel that can potentially fault with @@ -1063,7 +1063,7 @@ SYM_CODE_START_LOCAL(error_entry) */ .Lerror_entry_done_lfence: FENCE_SWAPGS_KERNEL_ENTRY - ret + RET .Lbstep_iret: /* Fix truncated RIP */ diff --git a/arch/x86/entry/thunk_32.S b/arch/x86/entry/thunk_32.S index f1f96d4d8cd6..7591bab060f7 100644 --- a/arch/x86/entry/thunk_32.S +++ b/arch/x86/entry/thunk_32.S @@ -24,7 +24,7 @@ SYM_CODE_START_NOALIGN(\name) popl %edx popl %ecx popl %eax - ret + RET _ASM_NOKPROBE(\name) SYM_CODE_END(\name) .endm diff --git a/arch/x86/entry/thunk_64.S b/arch/x86/entry/thunk_64.S index c9a9fbf1655f..1b5044ad8cd0 100644 --- a/arch/x86/entry/thunk_64.S +++ b/arch/x86/entry/thunk_64.S @@ -55,7 +55,7 @@ SYM_CODE_START_LOCAL_NOALIGN(__thunk_restore) popq %rsi popq %rdi popq %rbp - ret + RET _ASM_NOKPROBE(__thunk_restore) SYM_CODE_END(__thunk_restore) #endif diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S index d6a6080bade0..c92a8fef2ed2 100644 --- a/arch/x86/entry/vdso/vdso32/system_call.S +++ b/arch/x86/entry/vdso/vdso32/system_call.S @@ -78,7 +78,7 @@ SYM_INNER_LABEL(int80_landing_pad, SYM_L_GLOBAL) popl %ecx CFI_RESTORE ecx CFI_ADJUST_CFA_OFFSET -4 - ret + RET CFI_ENDPROC .size __kernel_vsyscall,.-__kernel_vsyscall diff --git a/arch/x86/entry/vsyscall/vsyscall_emu_64.S b/arch/x86/entry/vsyscall/vsyscall_emu_64.S index 2e203f3a25a7..15e35159ebb6 100644 --- a/arch/x86/entry/vsyscall/vsyscall_emu_64.S +++ b/arch/x86/entry/vsyscall/vsyscall_emu_64.S @@ -19,17 +19,17 @@ __vsyscall_page: mov $__NR_gettimeofday, %rax syscall - ret + RET .balign 1024, 0xcc mov $__NR_time, %rax syscall - ret + RET .balign 1024, 0xcc mov $__NR_getcpu, %rax syscall - ret + RET .balign 4096, 0xcc diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S index 
daf88f8143c5..cf69081073b5 100644 --- a/arch/x86/kernel/acpi/wakeup_32.S +++ b/arch/x86/kernel/acpi/wakeup_32.S @@ -60,7 +60,7 @@ save_registers: popl saved_context_eflags movl $ret_point, saved_eip - ret + RET restore_registers: @@ -70,7 +70,7 @@ restore_registers: movl saved_context_edi, %edi pushl saved_context_eflags popfl - ret + RET SYM_CODE_START(do_suspend_lowlevel) call save_processor_state @@ -86,7 +86,7 @@ SYM_CODE_START(do_suspend_lowlevel) ret_point: call restore_registers call restore_processor_state - ret + RET SYM_CODE_END(do_suspend_lowlevel) .data diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S index e405fe1a8bf4..a0ed0e4a2c0c 100644 --- a/arch/x86/kernel/ftrace_32.S +++ b/arch/x86/kernel/ftrace_32.S @@ -19,7 +19,7 @@ #endif SYM_FUNC_START(__fentry__) - ret + RET SYM_FUNC_END(__fentry__) EXPORT_SYMBOL(__fentry__) @@ -84,7 +84,7 @@ ftrace_graph_call: /* This is weak to keep gas from relaxing the jumps */ SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK) - ret + RET SYM_CODE_END(ftrace_caller) SYM_CODE_START(ftrace_regs_caller) @@ -177,7 +177,7 @@ SYM_CODE_START(ftrace_graph_caller) popl %edx popl %ecx popl %eax - ret + RET SYM_CODE_END(ftrace_graph_caller) .globl return_to_handler diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S index ad1da5e85aeb..b656c11521a1 100644 --- a/arch/x86/kernel/ftrace_64.S +++ b/arch/x86/kernel/ftrace_64.S @@ -132,7 +132,7 @@ #ifdef CONFIG_DYNAMIC_FTRACE SYM_FUNC_START(__fentry__) - retq + RET SYM_FUNC_END(__fentry__) EXPORT_SYMBOL(__fentry__) @@ -170,10 +170,10 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) /* * This is weak to keep gas from relaxing the jumps. - * It is also used to copy the retq for trampolines. + * It is also used to copy the RET for trampolines. 
*/ SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK) - retq + RET SYM_FUNC_END(ftrace_epilogue) SYM_FUNC_START(ftrace_regs_caller) @@ -287,7 +287,7 @@ fgraph_trace: #endif SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL) - retq + RET trace: /* save_mcount_regs fills in first two parameters */ @@ -319,7 +319,7 @@ SYM_FUNC_START(ftrace_graph_caller) restore_mcount_regs - retq + RET SYM_FUNC_END(ftrace_graph_caller) SYM_CODE_START(return_to_handler) diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S index 7ed84c282233..9b2b1ac7e8c9 100644 --- a/arch/x86/kernel/head_32.S +++ b/arch/x86/kernel/head_32.S @@ -354,7 +354,7 @@ setup_once: #endif andl $0,setup_once_ref /* Once is enough, thanks */ - ret + RET SYM_FUNC_START(early_idt_handler_array) # 36(%esp) %eflags diff --git a/arch/x86/kernel/irqflags.S b/arch/x86/kernel/irqflags.S index 0db0375235b4..a9c36400bdf8 100644 --- a/arch/x86/kernel/irqflags.S +++ b/arch/x86/kernel/irqflags.S @@ -10,7 +10,7 @@ SYM_FUNC_START(native_save_fl) pushf pop %_ASM_AX - ret + RET SYM_FUNC_END(native_save_fl) EXPORT_SYMBOL(native_save_fl) @@ -21,6 +21,6 @@ EXPORT_SYMBOL(native_save_fl) SYM_FUNC_START(native_restore_fl) push %_ASM_ARG1 popf - ret + RET SYM_FUNC_END(native_restore_fl) EXPORT_SYMBOL(native_restore_fl) diff --git a/arch/x86/kernel/relocate_kernel_32.S b/arch/x86/kernel/relocate_kernel_32.S index 94b33885f8d2..8190136cc63a 100644 --- a/arch/x86/kernel/relocate_kernel_32.S +++ b/arch/x86/kernel/relocate_kernel_32.S @@ -91,7 +91,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel) movl %edi, %eax addl $(identity_mapped - relocate_kernel), %eax pushl %eax - ret + RET SYM_CODE_END(relocate_kernel) SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) @@ -159,7 +159,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) xorl %edx, %edx xorl %esi, %esi xorl %ebp, %ebp - ret + RET 1: popl %edx movl CP_PA_SWAP_PAGE(%edi), %esp @@ -190,7 +190,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) movl %edi, %eax addl $(virtual_mapped - relocate_kernel), %eax pushl %eax - ret + RET SYM_CODE_END(identity_mapped) SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped) @@ -208,7 +208,7 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped) popl %edi popl %esi popl %ebx - ret + RET SYM_CODE_END(virtual_mapped) /* Do the copies */ @@ -271,7 +271,7 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages) popl %edi popl %ebx popl %ebp - ret + RET SYM_CODE_END(swap_pages) .globl kexec_control_code_size diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S index a4d9a261425b..a0ae167b1aeb 100644 --- a/arch/x86/kernel/relocate_kernel_64.S +++ b/arch/x86/kernel/relocate_kernel_64.S @@ -104,7 +104,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel) /* jump to identity mapped page */ addq $(identity_mapped - relocate_kernel), %r8 pushq %r8 - ret + RET SYM_CODE_END(relocate_kernel) SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) @@ -191,7 +191,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) xorl %r14d, %r14d xorl %r15d, %r15d - ret + RET 1: popq %rdx @@ -210,7 +210,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) call swap_pages movq $virtual_mapped, %rax pushq %rax - ret + RET SYM_CODE_END(identity_mapped) SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped) @@ -231,7 +231,7 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped) popq %r12 popq %rbp popq %rbx - ret + RET SYM_CODE_END(virtual_mapped) /* Do the copies */ @@ -288,7 +288,7 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages) lea PAGE_SIZE(%rax), %rsi jmp 0b 3: - ret + RET SYM_CODE_END(swap_pages) .globl kexec_control_code_size diff --git 
a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S index ee04941a6546..3355e27c69eb 100644 --- a/arch/x86/kernel/sev_verify_cbit.S +++ b/arch/x86/kernel/sev_verify_cbit.S @@ -85,5 +85,5 @@ SYM_FUNC_START(sev_verify_cbit) #endif /* Return page-table pointer */ movq %rdi, %rax - ret + RET SYM_FUNC_END(sev_verify_cbit) diff --git a/arch/x86/kernel/verify_cpu.S b/arch/x86/kernel/verify_cpu.S index 641f0fe1e5b4..1258a5872d12 100644 --- a/arch/x86/kernel/verify_cpu.S +++ b/arch/x86/kernel/verify_cpu.S @@ -132,9 +132,9 @@ SYM_FUNC_START_LOCAL(verify_cpu) .Lverify_cpu_no_longmode: popf # Restore caller passed flags movl $1,%eax - ret + RET .Lverify_cpu_sse_ok: popf # Restore caller passed flags xorl %eax, %eax - ret + RET SYM_FUNC_END(verify_cpu) diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S index 1ec1ac40e328..6df5950d6967 100644 --- a/arch/x86/kvm/svm/vmenter.S +++ b/arch/x86/kvm/svm/vmenter.S @@ -166,5 +166,5 @@ SYM_FUNC_START(__svm_vcpu_run) pop %edi #endif pop %_ASM_BP - ret + RET SYM_FUNC_END(__svm_vcpu_run) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 3a6461694fc2..435c187927c4 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -49,14 +49,14 @@ SYM_FUNC_START_LOCAL(vmx_vmenter) je 2f 1: vmresume - ret + RET 2: vmlaunch - ret + RET 3: cmpb $0, kvm_rebooting je 4f - ret + RET 4: ud2 _ASM_EXTABLE(1b, 3b) @@ -89,7 +89,7 @@ SYM_FUNC_START(vmx_vmexit) pop %_ASM_AX .Lvmexit_skip_rsb: #endif - ret + RET SYM_FUNC_END(vmx_vmexit) /** @@ -228,7 +228,7 @@ SYM_FUNC_START(__vmx_vcpu_run) pop %edi #endif pop %_ASM_BP - ret + RET /* VM-Fail. Out-of-line to avoid a taken Jcc after VM-Exit. */ 2: mov $1, %eax @@ -293,7 +293,7 @@ SYM_FUNC_START(vmread_error_trampoline) pop %_ASM_AX pop %_ASM_BP - ret + RET SYM_FUNC_END(vmread_error_trampoline) SYM_FUNC_START(vmx_do_interrupt_nmi_irqoff) @@ -326,5 +326,5 @@ SYM_FUNC_START(vmx_do_interrupt_nmi_irqoff) */ mov %_ASM_BP, %_ASM_SP pop %_ASM_BP - ret + RET SYM_FUNC_END(vmx_do_interrupt_nmi_irqoff) diff --git a/arch/x86/lib/atomic64_386_32.S b/arch/x86/lib/atomic64_386_32.S index 4ad6b97fdb6f..e768815e58ae 100644 --- a/arch/x86/lib/atomic64_386_32.S +++ b/arch/x86/lib/atomic64_386_32.S @@ -30,7 +30,7 @@ SYM_FUNC_START(atomic64_##op##_386); \ #define RET_IRQ_RESTORE \ IRQ_RESTORE v; \ - ret + RET #define v %ecx BEGIN_IRQ_SAVE(read) diff --git a/arch/x86/lib/atomic64_cx8_32.S b/arch/x86/lib/atomic64_cx8_32.S index ce6935690766..90afb488b396 100644 --- a/arch/x86/lib/atomic64_cx8_32.S +++ b/arch/x86/lib/atomic64_cx8_32.S @@ -18,7 +18,7 @@ SYM_FUNC_START(atomic64_read_cx8) read64 %ecx - ret + RET SYM_FUNC_END(atomic64_read_cx8) SYM_FUNC_START(atomic64_set_cx8) @@ -28,7 +28,7 @@ SYM_FUNC_START(atomic64_set_cx8) cmpxchg8b (%esi) jne 1b - ret + RET SYM_FUNC_END(atomic64_set_cx8) SYM_FUNC_START(atomic64_xchg_cx8) @@ -37,7 +37,7 @@ SYM_FUNC_START(atomic64_xchg_cx8) cmpxchg8b (%esi) jne 1b - ret + RET SYM_FUNC_END(atomic64_xchg_cx8) .macro addsub_return func ins insc @@ -68,7 +68,7 @@ SYM_FUNC_START(atomic64_\func\()_return_cx8) popl %esi popl %ebx popl %ebp - ret + RET SYM_FUNC_END(atomic64_\func\()_return_cx8) .endm @@ -93,7 +93,7 @@ SYM_FUNC_START(atomic64_\func\()_return_cx8) movl %ebx, %eax movl %ecx, %edx popl %ebx - ret + RET SYM_FUNC_END(atomic64_\func\()_return_cx8) .endm @@ -118,7 +118,7 @@ SYM_FUNC_START(atomic64_dec_if_positive_cx8) movl %ebx, %eax movl %ecx, %edx popl %ebx - ret + RET SYM_FUNC_END(atomic64_dec_if_positive_cx8) SYM_FUNC_START(atomic64_add_unless_cx8) 
@@ -149,7 +149,7 @@ SYM_FUNC_START(atomic64_add_unless_cx8) addl $8, %esp popl %ebx popl %ebp - ret + RET 4: cmpl %edx, 4(%esp) jne 2b @@ -176,5 +176,5 @@ SYM_FUNC_START(atomic64_inc_not_zero_cx8) movl $1, %eax 3: popl %ebx - ret + RET SYM_FUNC_END(atomic64_inc_not_zero_cx8) diff --git a/arch/x86/lib/checksum_32.S b/arch/x86/lib/checksum_32.S index 4304320e51f4..929ad1747dea 100644 --- a/arch/x86/lib/checksum_32.S +++ b/arch/x86/lib/checksum_32.S @@ -127,7 +127,7 @@ SYM_FUNC_START(csum_partial) 8: popl %ebx popl %esi - ret + RET SYM_FUNC_END(csum_partial) #else @@ -245,7 +245,7 @@ SYM_FUNC_START(csum_partial) 90: popl %ebx popl %esi - ret + RET SYM_FUNC_END(csum_partial) #endif @@ -371,7 +371,7 @@ EXC( movb %cl, (%edi) ) popl %esi popl %edi popl %ecx # equivalent to addl $4,%esp - ret + RET SYM_FUNC_END(csum_partial_copy_generic) #else @@ -447,7 +447,7 @@ EXC( movb %dl, (%edi) ) popl %esi popl %edi popl %ebx - ret + RET SYM_FUNC_END(csum_partial_copy_generic) #undef ROUND diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S index c4c7dd115953..fe59b8ac4fcc 100644 --- a/arch/x86/lib/clear_page_64.S +++ b/arch/x86/lib/clear_page_64.S @@ -17,7 +17,7 @@ SYM_FUNC_START(clear_page_rep) movl $4096/8,%ecx xorl %eax,%eax rep stosq - ret + RET SYM_FUNC_END(clear_page_rep) EXPORT_SYMBOL_GPL(clear_page_rep) @@ -39,7 +39,7 @@ SYM_FUNC_START(clear_page_orig) leaq 64(%rdi),%rdi jnz .Lloop nop - ret + RET SYM_FUNC_END(clear_page_orig) EXPORT_SYMBOL_GPL(clear_page_orig) @@ -47,6 +47,6 @@ SYM_FUNC_START(clear_page_erms) movl $4096,%ecx xorl %eax,%eax rep stosb - ret + RET SYM_FUNC_END(clear_page_erms) EXPORT_SYMBOL_GPL(clear_page_erms) diff --git a/arch/x86/lib/cmpxchg16b_emu.S b/arch/x86/lib/cmpxchg16b_emu.S index 3542502faa3b..33c70c0160ea 100644 --- a/arch/x86/lib/cmpxchg16b_emu.S +++ b/arch/x86/lib/cmpxchg16b_emu.S @@ -37,11 +37,11 @@ SYM_FUNC_START(this_cpu_cmpxchg16b_emu) popfq mov $1, %al - ret + RET .Lnot_same: popfq xor %al,%al - ret + RET SYM_FUNC_END(this_cpu_cmpxchg16b_emu) diff --git a/arch/x86/lib/cmpxchg8b_emu.S b/arch/x86/lib/cmpxchg8b_emu.S index ca01ed6029f4..6a912d58fecc 100644 --- a/arch/x86/lib/cmpxchg8b_emu.S +++ b/arch/x86/lib/cmpxchg8b_emu.S @@ -32,7 +32,7 @@ SYM_FUNC_START(cmpxchg8b_emu) movl %ecx, 4(%esi) popfl - ret + RET .Lnot_same: movl (%esi), %eax @@ -40,7 +40,7 @@ SYM_FUNC_START(cmpxchg8b_emu) movl 4(%esi), %edx popfl - ret + RET SYM_FUNC_END(cmpxchg8b_emu) EXPORT_SYMBOL(cmpxchg8b_emu) diff --git a/arch/x86/lib/copy_mc_64.S b/arch/x86/lib/copy_mc_64.S index 892d8915f609..fdd1929b1f9d 100644 --- a/arch/x86/lib/copy_mc_64.S +++ b/arch/x86/lib/copy_mc_64.S @@ -86,7 +86,7 @@ SYM_FUNC_START(copy_mc_fragile) .L_done_memcpy_trap: xorl %eax, %eax .L_done: - ret + RET SYM_FUNC_END(copy_mc_fragile) EXPORT_SYMBOL_GPL(copy_mc_fragile) @@ -142,7 +142,7 @@ SYM_FUNC_START(copy_mc_enhanced_fast_string) rep movsb /* Copy successful. Return zero */ xorl %eax, %eax - ret + RET SYM_FUNC_END(copy_mc_enhanced_fast_string) .section .fixup, "ax" @@ -155,7 +155,7 @@ SYM_FUNC_END(copy_mc_enhanced_fast_string) * user-copy routines. 
*/ movq %rcx, %rax - ret + RET .previous diff --git a/arch/x86/lib/copy_page_64.S b/arch/x86/lib/copy_page_64.S index db4b4f9197c7..30ea644bf446 100644 --- a/arch/x86/lib/copy_page_64.S +++ b/arch/x86/lib/copy_page_64.S @@ -17,7 +17,7 @@ SYM_FUNC_START(copy_page) ALTERNATIVE "jmp copy_page_regs", "", X86_FEATURE_REP_GOOD movl $4096/8, %ecx rep movsq - ret + RET SYM_FUNC_END(copy_page) EXPORT_SYMBOL(copy_page) @@ -85,5 +85,5 @@ SYM_FUNC_START_LOCAL(copy_page_regs) movq (%rsp), %rbx movq 1*8(%rsp), %r12 addq $2*8, %rsp - ret + RET SYM_FUNC_END(copy_page_regs) diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S index 57b79c577496..84cee84fc658 100644 --- a/arch/x86/lib/copy_user_64.S +++ b/arch/x86/lib/copy_user_64.S @@ -105,7 +105,7 @@ SYM_FUNC_START(copy_user_generic_unrolled) jnz 21b 23: xor %eax,%eax ASM_CLAC - ret + RET .section .fixup,"ax" 30: shll $6,%ecx @@ -173,7 +173,7 @@ SYM_FUNC_START(copy_user_generic_string) movsb xorl %eax,%eax ASM_CLAC - ret + RET .section .fixup,"ax" 11: leal (%rdx,%rcx,8),%ecx @@ -207,7 +207,7 @@ SYM_FUNC_START(copy_user_enhanced_fast_string) movsb xorl %eax,%eax ASM_CLAC - ret + RET .section .fixup,"ax" 12: movl %ecx,%edx /* ecx is zerorest also */ @@ -239,7 +239,7 @@ SYM_CODE_START_LOCAL(.Lcopy_user_handle_tail) 1: rep movsb 2: mov %ecx,%eax ASM_CLAC - ret + RET /* * Return zero to pretend that this copy succeeded. This @@ -250,7 +250,7 @@ SYM_CODE_START_LOCAL(.Lcopy_user_handle_tail) */ 3: xorl %eax,%eax ASM_CLAC - ret + RET _ASM_EXTABLE_CPY(1b, 2b) SYM_CODE_END(.Lcopy_user_handle_tail) @@ -361,7 +361,7 @@ SYM_FUNC_START(__copy_user_nocache) xorl %eax,%eax ASM_CLAC sfence - ret + RET .section .fixup,"ax" .L_fixup_4x8b_copy: diff --git a/arch/x86/lib/csum-copy_64.S b/arch/x86/lib/csum-copy_64.S index 1fbd8ee9642d..d9e16a2cf285 100644 --- a/arch/x86/lib/csum-copy_64.S +++ b/arch/x86/lib/csum-copy_64.S @@ -201,7 +201,7 @@ SYM_FUNC_START(csum_partial_copy_generic) movq 3*8(%rsp), %r13 movq 4*8(%rsp), %r15 addq $5*8, %rsp - ret + RET .Lshort: movl %ecx, %r10d jmp .L1 diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S index fa1bc2104b32..b70d98d79a9d 100644 --- a/arch/x86/lib/getuser.S +++ b/arch/x86/lib/getuser.S @@ -57,7 +57,7 @@ SYM_FUNC_START(__get_user_1) 1: movzbl (%_ASM_AX),%edx xor %eax,%eax ASM_CLAC - ret + RET SYM_FUNC_END(__get_user_1) EXPORT_SYMBOL(__get_user_1) @@ -71,7 +71,7 @@ SYM_FUNC_START(__get_user_2) 2: movzwl (%_ASM_AX),%edx xor %eax,%eax ASM_CLAC - ret + RET SYM_FUNC_END(__get_user_2) EXPORT_SYMBOL(__get_user_2) @@ -85,7 +85,7 @@ SYM_FUNC_START(__get_user_4) 3: movl (%_ASM_AX),%edx xor %eax,%eax ASM_CLAC - ret + RET SYM_FUNC_END(__get_user_4) EXPORT_SYMBOL(__get_user_4) @@ -100,7 +100,7 @@ SYM_FUNC_START(__get_user_8) 4: movq (%_ASM_AX),%rdx xor %eax,%eax ASM_CLAC - ret + RET #else LOAD_TASK_SIZE_MINUS_N(7) cmp %_ASM_DX,%_ASM_AX @@ -112,7 +112,7 @@ SYM_FUNC_START(__get_user_8) 5: movl 4(%_ASM_AX),%ecx xor %eax,%eax ASM_CLAC - ret + RET #endif SYM_FUNC_END(__get_user_8) EXPORT_SYMBOL(__get_user_8) @@ -124,7 +124,7 @@ SYM_FUNC_START(__get_user_nocheck_1) 6: movzbl (%_ASM_AX),%edx xor %eax,%eax ASM_CLAC - ret + RET SYM_FUNC_END(__get_user_nocheck_1) EXPORT_SYMBOL(__get_user_nocheck_1) @@ -134,7 +134,7 @@ SYM_FUNC_START(__get_user_nocheck_2) 7: movzwl (%_ASM_AX),%edx xor %eax,%eax ASM_CLAC - ret + RET SYM_FUNC_END(__get_user_nocheck_2) EXPORT_SYMBOL(__get_user_nocheck_2) @@ -144,7 +144,7 @@ SYM_FUNC_START(__get_user_nocheck_4) 8: movl (%_ASM_AX),%edx xor %eax,%eax ASM_CLAC - ret + RET 
SYM_FUNC_END(__get_user_nocheck_4) EXPORT_SYMBOL(__get_user_nocheck_4) @@ -159,7 +159,7 @@ SYM_FUNC_START(__get_user_nocheck_8) #endif xor %eax,%eax ASM_CLAC - ret + RET SYM_FUNC_END(__get_user_nocheck_8) EXPORT_SYMBOL(__get_user_nocheck_8) @@ -169,7 +169,7 @@ SYM_CODE_START_LOCAL(.Lbad_get_user_clac) bad_get_user: xor %edx,%edx mov $(-EFAULT),%_ASM_AX - ret + RET SYM_CODE_END(.Lbad_get_user_clac) #ifdef CONFIG_X86_32 @@ -179,7 +179,7 @@ bad_get_user_8: xor %edx,%edx xor %ecx,%ecx mov $(-EFAULT),%_ASM_AX - ret + RET SYM_CODE_END(.Lbad_get_user_8_clac) #endif diff --git a/arch/x86/lib/hweight.S b/arch/x86/lib/hweight.S index dbf8cc97b7f5..12c16c6aa44a 100644 --- a/arch/x86/lib/hweight.S +++ b/arch/x86/lib/hweight.S @@ -32,7 +32,7 @@ SYM_FUNC_START(__sw_hweight32) imull $0x01010101, %eax, %eax # w_tmp *= 0x01010101 shrl $24, %eax # w = w_tmp >> 24 __ASM_SIZE(pop,) %__ASM_REG(dx) - ret + RET SYM_FUNC_END(__sw_hweight32) EXPORT_SYMBOL(__sw_hweight32) @@ -65,7 +65,7 @@ SYM_FUNC_START(__sw_hweight64) popq %rdx popq %rdi - ret + RET #else /* CONFIG_X86_32 */ /* We're getting an u64 arg in (%eax,%edx): unsigned long hweight64(__u64 w) */ pushl %ecx @@ -77,7 +77,7 @@ SYM_FUNC_START(__sw_hweight64) addl %ecx, %eax # result popl %ecx - ret + RET #endif SYM_FUNC_END(__sw_hweight64) EXPORT_SYMBOL(__sw_hweight64) diff --git a/arch/x86/lib/iomap_copy_64.S b/arch/x86/lib/iomap_copy_64.S index cb5a1964506b..a1f9416bf67a 100644 --- a/arch/x86/lib/iomap_copy_64.S +++ b/arch/x86/lib/iomap_copy_64.S @@ -11,5 +11,5 @@ SYM_FUNC_START(__iowrite32_copy) movl %edx,%ecx rep movsd - ret + RET SYM_FUNC_END(__iowrite32_copy) diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S index 1cc9da6e29c7..59cf2343f3d9 100644 --- a/arch/x86/lib/memcpy_64.S +++ b/arch/x86/lib/memcpy_64.S @@ -39,7 +39,7 @@ SYM_FUNC_START_WEAK(memcpy) rep movsq movl %edx, %ecx rep movsb - ret + RET SYM_FUNC_END(memcpy) SYM_FUNC_END_ALIAS(__memcpy) EXPORT_SYMBOL(memcpy) @@ -53,7 +53,7 @@ SYM_FUNC_START_LOCAL(memcpy_erms) movq %rdi, %rax movq %rdx, %rcx rep movsb - ret + RET SYM_FUNC_END(memcpy_erms) SYM_FUNC_START_LOCAL(memcpy_orig) @@ -137,7 +137,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig) movq %r9, 1*8(%rdi) movq %r10, -2*8(%rdi, %rdx) movq %r11, -1*8(%rdi, %rdx) - retq + RET .p2align 4 .Lless_16bytes: cmpl $8, %edx @@ -149,7 +149,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig) movq -1*8(%rsi, %rdx), %r9 movq %r8, 0*8(%rdi) movq %r9, -1*8(%rdi, %rdx) - retq + RET .p2align 4 .Lless_8bytes: cmpl $4, %edx @@ -162,7 +162,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig) movl -4(%rsi, %rdx), %r8d movl %ecx, (%rdi) movl %r8d, -4(%rdi, %rdx) - retq + RET .p2align 4 .Lless_3bytes: subl $1, %edx @@ -180,7 +180,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig) movb %cl, (%rdi) .Lend: - retq + RET SYM_FUNC_END(memcpy_orig) .popsection diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S index 64801010d312..e84d649620c4 100644 --- a/arch/x86/lib/memmove_64.S +++ b/arch/x86/lib/memmove_64.S @@ -40,7 +40,7 @@ SYM_FUNC_START(__memmove) /* FSRM implies ERMS => no length checks, do the copy directly */ .Lmemmove_begin_forward: ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM - ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_ERMS + ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; RET", X86_FEATURE_ERMS /* * movsq instruction have many startup latency @@ -205,7 +205,7 @@ SYM_FUNC_START(__memmove) movb (%rsi), %r11b movb %r11b, (%rdi) 13: - retq + RET SYM_FUNC_END(__memmove) SYM_FUNC_END_ALIAS(memmove) EXPORT_SYMBOL(__memmove) diff --git 
a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S index 9827ae267f96..d624f2bc42f1 100644 --- a/arch/x86/lib/memset_64.S +++ b/arch/x86/lib/memset_64.S @@ -40,7 +40,7 @@ SYM_FUNC_START(__memset) movl %edx,%ecx rep stosb movq %r9,%rax - ret + RET SYM_FUNC_END(__memset) SYM_FUNC_END_ALIAS(memset) EXPORT_SYMBOL(memset) @@ -63,7 +63,7 @@ SYM_FUNC_START_LOCAL(memset_erms) movq %rdx,%rcx rep stosb movq %r9,%rax - ret + RET SYM_FUNC_END(memset_erms) SYM_FUNC_START_LOCAL(memset_orig) @@ -125,7 +125,7 @@ SYM_FUNC_START_LOCAL(memset_orig) .Lende: movq %r10,%rax - ret + RET .Lbad_alignment: cmpq $7,%rdx diff --git a/arch/x86/lib/msr-reg.S b/arch/x86/lib/msr-reg.S index a2b9caa5274c..ebd259f31496 100644 --- a/arch/x86/lib/msr-reg.S +++ b/arch/x86/lib/msr-reg.S @@ -35,7 +35,7 @@ SYM_FUNC_START(\op\()_safe_regs) movl %edi, 28(%r10) popq %r12 popq %rbx - ret + RET 3: movl $-EIO, %r11d jmp 2b @@ -77,7 +77,7 @@ SYM_FUNC_START(\op\()_safe_regs) popl %esi popl %ebp popl %ebx - ret + RET 3: movl $-EIO, 4(%esp) jmp 2b diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S index 0ea344c5ea43..ecb2049c1273 100644 --- a/arch/x86/lib/putuser.S +++ b/arch/x86/lib/putuser.S @@ -52,7 +52,7 @@ SYM_INNER_LABEL(__put_user_nocheck_1, SYM_L_GLOBAL) 1: movb %al,(%_ASM_CX) xor %ecx,%ecx ASM_CLAC - ret + RET SYM_FUNC_END(__put_user_1) EXPORT_SYMBOL(__put_user_1) EXPORT_SYMBOL(__put_user_nocheck_1) @@ -66,7 +66,7 @@ SYM_INNER_LABEL(__put_user_nocheck_2, SYM_L_GLOBAL) 2: movw %ax,(%_ASM_CX) xor %ecx,%ecx ASM_CLAC - ret + RET SYM_FUNC_END(__put_user_2) EXPORT_SYMBOL(__put_user_2) EXPORT_SYMBOL(__put_user_nocheck_2) @@ -80,7 +80,7 @@ SYM_INNER_LABEL(__put_user_nocheck_4, SYM_L_GLOBAL) 3: movl %eax,(%_ASM_CX) xor %ecx,%ecx ASM_CLAC - ret + RET SYM_FUNC_END(__put_user_4) EXPORT_SYMBOL(__put_user_4) EXPORT_SYMBOL(__put_user_nocheck_4) diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 6565c6f86160..8904c076a1df 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -23,7 +23,7 @@ .Ldo_rop_\@: mov %\reg, (%_ASM_SP) UNWIND_HINT_FUNC - ret + RET .endm .macro THUNK reg diff --git a/arch/x86/math-emu/div_Xsig.S b/arch/x86/math-emu/div_Xsig.S index 951da2ad54bb..8c270ab415be 100644 --- a/arch/x86/math-emu/div_Xsig.S +++ b/arch/x86/math-emu/div_Xsig.S @@ -341,7 +341,7 @@ L_exit: popl %esi leave - ret + RET #ifdef PARANOID diff --git a/arch/x86/math-emu/div_small.S b/arch/x86/math-emu/div_small.S index d047d1816abe..637439bfefa4 100644 --- a/arch/x86/math-emu/div_small.S +++ b/arch/x86/math-emu/div_small.S @@ -44,5 +44,5 @@ SYM_FUNC_START(FPU_div_small) popl %esi leave - ret + RET SYM_FUNC_END(FPU_div_small) diff --git a/arch/x86/math-emu/mul_Xsig.S b/arch/x86/math-emu/mul_Xsig.S index 4afc7b1fa6e9..54a031b66142 100644 --- a/arch/x86/math-emu/mul_Xsig.S +++ b/arch/x86/math-emu/mul_Xsig.S @@ -62,7 +62,7 @@ SYM_FUNC_START(mul32_Xsig) popl %esi leave - ret + RET SYM_FUNC_END(mul32_Xsig) @@ -115,7 +115,7 @@ SYM_FUNC_START(mul64_Xsig) popl %esi leave - ret + RET SYM_FUNC_END(mul64_Xsig) @@ -175,5 +175,5 @@ SYM_FUNC_START(mul_Xsig_Xsig) popl %esi leave - ret + RET SYM_FUNC_END(mul_Xsig_Xsig) diff --git a/arch/x86/math-emu/polynom_Xsig.S b/arch/x86/math-emu/polynom_Xsig.S index 702315eecb86..35fd723fc0df 100644 --- a/arch/x86/math-emu/polynom_Xsig.S +++ b/arch/x86/math-emu/polynom_Xsig.S @@ -133,5 +133,5 @@ L_accum_done: popl %edi popl %esi leave - ret + RET SYM_FUNC_END(polynomial_Xsig) diff --git a/arch/x86/math-emu/reg_norm.S b/arch/x86/math-emu/reg_norm.S index cad1d60b1e84..594936eeed67 100644 
--- a/arch/x86/math-emu/reg_norm.S +++ b/arch/x86/math-emu/reg_norm.S @@ -72,7 +72,7 @@ L_exit_valid: L_exit: popl %ebx leave - ret + RET L_zero: @@ -138,7 +138,7 @@ L_exit_nuo_valid: popl %ebx leave - ret + RET L_exit_nuo_zero: movl TAG_Zero,%eax @@ -146,5 +146,5 @@ L_exit_nuo_zero: popl %ebx leave - ret + RET SYM_FUNC_END(FPU_normalize_nuo) diff --git a/arch/x86/math-emu/reg_round.S b/arch/x86/math-emu/reg_round.S index 11a1f798451b..8bdbdcee7456 100644 --- a/arch/x86/math-emu/reg_round.S +++ b/arch/x86/math-emu/reg_round.S @@ -437,7 +437,7 @@ fpu_Arith_exit: popl %edi popl %esi leave - ret + RET /* diff --git a/arch/x86/math-emu/reg_u_add.S b/arch/x86/math-emu/reg_u_add.S index 9c9e2c810afe..07247287a3af 100644 --- a/arch/x86/math-emu/reg_u_add.S +++ b/arch/x86/math-emu/reg_u_add.S @@ -164,6 +164,6 @@ L_exit: popl %edi popl %esi leave - ret + RET #endif /* PARANOID */ SYM_FUNC_END(FPU_u_add) diff --git a/arch/x86/math-emu/reg_u_div.S b/arch/x86/math-emu/reg_u_div.S index e2fb5c2644c5..b5a41e2fc484 100644 --- a/arch/x86/math-emu/reg_u_div.S +++ b/arch/x86/math-emu/reg_u_div.S @@ -468,7 +468,7 @@ L_exit: popl %esi leave - ret + RET #endif /* PARANOID */ SYM_FUNC_END(FPU_u_div) diff --git a/arch/x86/math-emu/reg_u_mul.S b/arch/x86/math-emu/reg_u_mul.S index 0c779c87ac5b..e2588b24b8c2 100644 --- a/arch/x86/math-emu/reg_u_mul.S +++ b/arch/x86/math-emu/reg_u_mul.S @@ -144,7 +144,7 @@ L_exit: popl %edi popl %esi leave - ret + RET #endif /* PARANOID */ SYM_FUNC_END(FPU_u_mul) diff --git a/arch/x86/math-emu/reg_u_sub.S b/arch/x86/math-emu/reg_u_sub.S index e9bb7c248649..4c900c29e4ff 100644 --- a/arch/x86/math-emu/reg_u_sub.S +++ b/arch/x86/math-emu/reg_u_sub.S @@ -270,5 +270,5 @@ L_exit: popl %edi popl %esi leave - ret + RET SYM_FUNC_END(FPU_u_sub) diff --git a/arch/x86/math-emu/round_Xsig.S b/arch/x86/math-emu/round_Xsig.S index d9d7de8dbd7b..126c40473bad 100644 --- a/arch/x86/math-emu/round_Xsig.S +++ b/arch/x86/math-emu/round_Xsig.S @@ -78,7 +78,7 @@ L_exit: popl %esi popl %ebx leave - ret + RET SYM_FUNC_END(round_Xsig) @@ -138,5 +138,5 @@ L_n_exit: popl %esi popl %ebx leave - ret + RET SYM_FUNC_END(norm_Xsig) diff --git a/arch/x86/math-emu/shr_Xsig.S b/arch/x86/math-emu/shr_Xsig.S index 726af985f758..f726bf6f6396 100644 --- a/arch/x86/math-emu/shr_Xsig.S +++ b/arch/x86/math-emu/shr_Xsig.S @@ -45,7 +45,7 @@ SYM_FUNC_START(shr_Xsig) popl %ebx popl %esi leave - ret + RET L_more_than_31: cmpl $64,%ecx @@ -61,7 +61,7 @@ L_more_than_31: movl $0,8(%esi) popl %esi leave - ret + RET L_more_than_63: cmpl $96,%ecx @@ -76,7 +76,7 @@ L_more_than_63: movl %edx,8(%esi) popl %esi leave - ret + RET L_more_than_95: xorl %eax,%eax @@ -85,5 +85,5 @@ L_more_than_95: movl %eax,8(%esi) popl %esi leave - ret + RET SYM_FUNC_END(shr_Xsig) diff --git a/arch/x86/math-emu/wm_shrx.S b/arch/x86/math-emu/wm_shrx.S index 4fc89174caf0..f608a28a4c43 100644 --- a/arch/x86/math-emu/wm_shrx.S +++ b/arch/x86/math-emu/wm_shrx.S @@ -55,7 +55,7 @@ SYM_FUNC_START(FPU_shrx) popl %ebx popl %esi leave - ret + RET L_more_than_31: cmpl $64,%ecx @@ -70,7 +70,7 @@ L_more_than_31: movl $0,4(%esi) popl %esi leave - ret + RET L_more_than_63: cmpl $96,%ecx @@ -84,7 +84,7 @@ L_more_than_63: movl %edx,4(%esi) popl %esi leave - ret + RET L_more_than_95: xorl %eax,%eax @@ -92,7 +92,7 @@ L_more_than_95: movl %eax,4(%esi) popl %esi leave - ret + RET SYM_FUNC_END(FPU_shrx) @@ -146,7 +146,7 @@ SYM_FUNC_START(FPU_shrxs) popl %ebx popl %esi leave - ret + RET /* Shift by [0..31] bits */ Ls_less_than_32: @@ -163,7 +163,7 @@ Ls_less_than_32: popl %ebx popl 
%esi leave - ret + RET /* Shift by [64..95] bits */ Ls_more_than_63: @@ -189,7 +189,7 @@ Ls_more_than_63: popl %ebx popl %esi leave - ret + RET Ls_more_than_95: /* Shift by [96..inf) bits */ @@ -203,5 +203,5 @@ Ls_more_than_95: popl %ebx popl %esi leave - ret + RET SYM_FUNC_END(FPU_shrxs) diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S index 7a84fc8bc5c3..3c2bc5562fc2 100644 --- a/arch/x86/mm/mem_encrypt_boot.S +++ b/arch/x86/mm/mem_encrypt_boot.S @@ -65,7 +65,7 @@ SYM_FUNC_START(sme_encrypt_execute) movq %rbp, %rsp /* Restore original stack pointer */ pop %rbp - ret + RET SYM_FUNC_END(sme_encrypt_execute) SYM_FUNC_START(__enc_copy) @@ -151,6 +151,6 @@ SYM_FUNC_START(__enc_copy) pop %r12 pop %r15 - ret + RET .L__enc_copy_end: SYM_FUNC_END(__enc_copy) diff --git a/arch/x86/platform/efi/efi_stub_32.S b/arch/x86/platform/efi/efi_stub_32.S index 09ec84f6ef51..f3cfdb1c9a35 100644 --- a/arch/x86/platform/efi/efi_stub_32.S +++ b/arch/x86/platform/efi/efi_stub_32.S @@ -56,5 +56,5 @@ SYM_FUNC_START(efi_call_svam) movl 16(%esp), %ebx leave - ret + RET SYM_FUNC_END(efi_call_svam) diff --git a/arch/x86/platform/efi/efi_stub_64.S b/arch/x86/platform/efi/efi_stub_64.S index 90380a17ab23..2206b8bc47b8 100644 --- a/arch/x86/platform/efi/efi_stub_64.S +++ b/arch/x86/platform/efi/efi_stub_64.S @@ -23,5 +23,5 @@ SYM_FUNC_START(__efi_call) mov %rsi, %rcx CALL_NOSPEC rdi leave - ret + RET SYM_FUNC_END(__efi_call) diff --git a/arch/x86/platform/efi/efi_thunk_64.S b/arch/x86/platform/efi/efi_thunk_64.S index 26f0da238c1c..063eb27c0a12 100644 --- a/arch/x86/platform/efi/efi_thunk_64.S +++ b/arch/x86/platform/efi/efi_thunk_64.S @@ -63,7 +63,7 @@ SYM_CODE_START(__efi64_thunk) 1: movq 24(%rsp), %rsp pop %rbx pop %rbp - retq + RET .code32 2: pushl $__KERNEL_CS diff --git a/arch/x86/platform/olpc/xo1-wakeup.S b/arch/x86/platform/olpc/xo1-wakeup.S index 75f4faff8468..3a5abffe5660 100644 --- a/arch/x86/platform/olpc/xo1-wakeup.S +++ b/arch/x86/platform/olpc/xo1-wakeup.S @@ -77,7 +77,7 @@ save_registers: pushfl popl saved_context_eflags - ret + RET restore_registers: movl saved_context_ebp, %ebp @@ -88,7 +88,7 @@ restore_registers: pushl saved_context_eflags popfl - ret + RET SYM_CODE_START(do_olpc_suspend_lowlevel) call save_processor_state @@ -109,7 +109,7 @@ ret_point: call restore_registers call restore_processor_state - ret + RET SYM_CODE_END(do_olpc_suspend_lowlevel) .data diff --git a/arch/x86/power/hibernate_asm_32.S b/arch/x86/power/hibernate_asm_32.S index 8786653ad3c0..5606a15cf9a1 100644 --- a/arch/x86/power/hibernate_asm_32.S +++ b/arch/x86/power/hibernate_asm_32.S @@ -32,7 +32,7 @@ SYM_FUNC_START(swsusp_arch_suspend) FRAME_BEGIN call swsusp_save FRAME_END - ret + RET SYM_FUNC_END(swsusp_arch_suspend) SYM_CODE_START(restore_image) @@ -108,5 +108,5 @@ SYM_FUNC_START(restore_registers) /* tell the hibernation core that we've just restored the memory */ movl %eax, in_suspend - ret + RET SYM_FUNC_END(restore_registers) diff --git a/arch/x86/power/hibernate_asm_64.S b/arch/x86/power/hibernate_asm_64.S index 7918b8415f13..3ae7a3d7d61e 100644 --- a/arch/x86/power/hibernate_asm_64.S +++ b/arch/x86/power/hibernate_asm_64.S @@ -49,7 +49,7 @@ SYM_FUNC_START(swsusp_arch_suspend) FRAME_BEGIN call swsusp_save FRAME_END - ret + RET SYM_FUNC_END(swsusp_arch_suspend) SYM_CODE_START(restore_image) @@ -143,5 +143,5 @@ SYM_FUNC_START(restore_registers) /* tell the hibernation core that we've just restored the memory */ movq %rax, in_suspend(%rip) - ret + RET SYM_FUNC_END(restore_registers) diff 
--git a/arch/x86/um/checksum_32.S b/arch/x86/um/checksum_32.S index 13f118dec74f..aed782ab7721 100644 --- a/arch/x86/um/checksum_32.S +++ b/arch/x86/um/checksum_32.S @@ -110,7 +110,7 @@ csum_partial: 7: popl %ebx popl %esi - ret + RET #else @@ -208,7 +208,7 @@ csum_partial: 80: popl %ebx popl %esi - ret + RET #endif EXPORT_SYMBOL(csum_partial) diff --git a/arch/x86/um/setjmp_32.S b/arch/x86/um/setjmp_32.S index 62eaf8c80e04..2d991ddbcca5 100644 --- a/arch/x86/um/setjmp_32.S +++ b/arch/x86/um/setjmp_32.S @@ -34,7 +34,7 @@ kernel_setjmp: movl %esi,12(%edx) movl %edi,16(%edx) movl %ecx,20(%edx) # Return address - ret + RET .size kernel_setjmp,.-kernel_setjmp diff --git a/arch/x86/um/setjmp_64.S b/arch/x86/um/setjmp_64.S index 1b5d40d4ff46..b46acb6a8ebd 100644 --- a/arch/x86/um/setjmp_64.S +++ b/arch/x86/um/setjmp_64.S @@ -33,7 +33,7 @@ kernel_setjmp: movq %r14,40(%rdi) movq %r15,48(%rdi) movq %rsi,56(%rdi) # Return address - ret + RET .size kernel_setjmp,.-kernel_setjmp diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S index e75f60a06056..e8854f3b9a9f 100644 --- a/arch/x86/xen/xen-asm.S +++ b/arch/x86/xen/xen-asm.S @@ -45,7 +45,7 @@ SYM_FUNC_START(xen_irq_enable_direct) call check_events 1: FRAME_END - ret + RET SYM_FUNC_END(xen_irq_enable_direct) @@ -55,7 +55,7 @@ SYM_FUNC_END(xen_irq_enable_direct) */ SYM_FUNC_START(xen_irq_disable_direct) movb $1, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask - ret + RET SYM_FUNC_END(xen_irq_disable_direct) /* @@ -71,7 +71,7 @@ SYM_FUNC_START(xen_save_fl_direct) testb $0xff, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask setz %ah addb %ah, %ah - ret + RET SYM_FUNC_END(xen_save_fl_direct) @@ -98,7 +98,7 @@ SYM_FUNC_START(xen_restore_fl_direct) call check_events 1: FRAME_END - ret + RET SYM_FUNC_END(xen_restore_fl_direct) @@ -128,7 +128,7 @@ SYM_FUNC_START(check_events) pop %rcx pop %rax FRAME_END - ret + RET SYM_FUNC_END(check_events) SYM_FUNC_START(xen_read_cr2) @@ -136,14 +136,14 @@ SYM_FUNC_START(xen_read_cr2) _ASM_MOV PER_CPU_VAR(xen_vcpu), %_ASM_AX _ASM_MOV XEN_vcpu_info_arch_cr2(%_ASM_AX), %_ASM_AX FRAME_END - ret + RET SYM_FUNC_END(xen_read_cr2); SYM_FUNC_START(xen_read_cr2_direct) FRAME_BEGIN _ASM_MOV PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_arch_cr2, %_ASM_AX FRAME_END - ret + RET SYM_FUNC_END(xen_read_cr2_direct); .macro xen_pv_trap name diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S index cb6538ae2fe0..565062932ef1 100644 --- a/arch/x86/xen/xen-head.S +++ b/arch/x86/xen/xen-head.S @@ -70,7 +70,7 @@ SYM_CODE_START(hypercall_page) .rept (PAGE_SIZE / 32) UNWIND_HINT_FUNC .skip 31, 0x90 - ret + RET .endr #define HYPERCALL(n) \ From 277f4ddc36c578691678b8ae59b60d76ad15fa1b Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Sat, 4 Dec 2021 14:43:41 +0100 Subject: [PATCH 1492/1842] x86: Prepare inline-asm for straight-line-speculation commit b17c2baa305cccbd16bafa289fd743cc2db77966 upstream. Replace all ret/retq instructions with ASM_RET in preparation of making it more than a single instruction. 
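The pattern is easier to follow pulled out of the kernel tree. Below is a minimal, stand-alone sketch (the file and function names are invented for illustration and built as ordinary user-space C, not part of the patch): the return instruction inside the inline-asm string comes from the ASM_RET macro rather than a literal "ret", so a later change to the macro, such as appending int3 under CONFIG_SLS, takes effect in every converted site at once.

    /* sls_asm_ret_demo.c -- illustrative sketch only, not part of the patch */
    #define ASM_RET "ret\n\t"   /* the later SLS patch turns this into "ret; int3\n\t" */

    /* File-scope asm, mirroring the kernel's inline-asm thunks. */
    asm(".pushsection .text, \"ax\"\n"
        ".globl example_return_thunk\n"
        "example_return_thunk:\n"
        "	xor %eax, %eax\n"
        "	" ASM_RET
        ".size example_return_thunk, .-example_return_thunk\n"
        ".popsection\n");

    extern int example_return_thunk(void);

    int main(void)
    {
    	return example_return_thunk();	/* returns 0 */
    }
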
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20211204134907.964635458@infradead.org Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman [bwh: Backported to 5.10: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/linkage.h | 4 ++++ arch/x86/include/asm/paravirt.h | 2 +- arch/x86/include/asm/qspinlock_paravirt.h | 4 ++-- arch/x86/kernel/alternative.c | 2 +- arch/x86/kernel/kprobes/core.c | 2 +- arch/x86/kernel/paravirt.c | 2 +- arch/x86/kvm/emulate.c | 4 ++-- arch/x86/lib/error-inject.c | 3 ++- samples/ftrace/ftrace-direct-modify.c | 4 ++-- samples/ftrace/ftrace-direct-too.c | 2 +- samples/ftrace/ftrace-direct.c | 2 +- 11 files changed, 18 insertions(+), 13 deletions(-) diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h index 365111789cc6..ebddec2f3ba8 100644 --- a/arch/x86/include/asm/linkage.h +++ b/arch/x86/include/asm/linkage.h @@ -18,6 +18,10 @@ #define __ALIGN_STR __stringify(__ALIGN) #endif +#else /* __ASSEMBLY__ */ + +#define ASM_RET "ret\n\t" + #endif /* __ASSEMBLY__ */ #endif /* _ASM_X86_LINKAGE_H */ diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h index 5647bcdba776..4a32b0d34376 100644 --- a/arch/x86/include/asm/paravirt.h +++ b/arch/x86/include/asm/paravirt.h @@ -630,7 +630,7 @@ bool __raw_callee_save___native_vcpu_is_preempted(long cpu); "call " #func ";" \ PV_RESTORE_ALL_CALLER_REGS \ FRAME_END \ - "ret;" \ + ASM_RET \ ".size " PV_THUNK_NAME(func) ", .-" PV_THUNK_NAME(func) ";" \ ".popsection") diff --git a/arch/x86/include/asm/qspinlock_paravirt.h b/arch/x86/include/asm/qspinlock_paravirt.h index 159622ee0674..1474cf96251d 100644 --- a/arch/x86/include/asm/qspinlock_paravirt.h +++ b/arch/x86/include/asm/qspinlock_paravirt.h @@ -48,7 +48,7 @@ asm (".pushsection .text;" "jne .slowpath;" "pop %rdx;" FRAME_END - "ret;" + ASM_RET ".slowpath: " "push %rsi;" "movzbl %al,%esi;" @@ -56,7 +56,7 @@ asm (".pushsection .text;" "pop %rsi;" "pop %rdx;" FRAME_END - "ret;" + ASM_RET ".size " PV_UNLOCK ", .-" PV_UNLOCK ";" ".popsection"); diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index e2865dac6865..349fc4c747b0 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -869,7 +869,7 @@ asm ( " .type int3_magic, @function\n" "int3_magic:\n" " movl $1, (%" _ASM_ARG1 ")\n" -" ret\n" + ASM_RET " .size int3_magic, .-int3_magic\n" " .popsection\n" ); diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index 535da74c124e..ee85f1b258d0 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -768,7 +768,7 @@ asm( RESTORE_REGS_STRING " popfl\n" #endif - " ret\n" + ASM_RET ".size kretprobe_trampoline, .-kretprobe_trampoline\n" ); NOKPROBE_SYMBOL(kretprobe_trampoline); diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c index 5e5fcf5c376d..e21937680d1f 100644 --- a/arch/x86/kernel/paravirt.c +++ b/arch/x86/kernel/paravirt.c @@ -40,7 +40,7 @@ extern void _paravirt_nop(void); asm (".pushsection .entry.text, \"ax\"\n" ".global _paravirt_nop\n" "_paravirt_nop:\n\t" - "ret\n\t" + ASM_RET ".size _paravirt_nop, . 
- _paravirt_nop\n\t" ".type _paravirt_nop, @function\n\t" ".popsection"); diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 71e1a2d39f21..325697840275 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -316,7 +316,7 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); __FOP_FUNC(#name) #define __FOP_RET(name) \ - "ret \n\t" \ + ASM_RET \ ".size " name ", .-" name "\n\t" #define FOP_RET(name) \ @@ -437,7 +437,7 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); asm(".pushsection .fixup, \"ax\"\n" ".global kvm_fastop_exception \n" - "kvm_fastop_exception: xor %esi, %esi; ret\n" + "kvm_fastop_exception: xor %esi, %esi; " ASM_RET ".popsection"); FOP_START(setcc) diff --git a/arch/x86/lib/error-inject.c b/arch/x86/lib/error-inject.c index be5b5fb1598b..520897061ee0 100644 --- a/arch/x86/lib/error-inject.c +++ b/arch/x86/lib/error-inject.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0 +#include #include #include @@ -10,7 +11,7 @@ asm( ".type just_return_func, @function\n" ".globl just_return_func\n" "just_return_func:\n" - " ret\n" + ASM_RET ".size just_return_func, .-just_return_func\n" ); diff --git a/samples/ftrace/ftrace-direct-modify.c b/samples/ftrace/ftrace-direct-modify.c index 89e6bf27cd9f..d620f3da086f 100644 --- a/samples/ftrace/ftrace-direct-modify.c +++ b/samples/ftrace/ftrace-direct-modify.c @@ -31,7 +31,7 @@ asm ( " call my_direct_func1\n" " leave\n" " .size my_tramp1, .-my_tramp1\n" -" ret\n" + ASM_RET " .type my_tramp2, @function\n" " .globl my_tramp2\n" " my_tramp2:" @@ -39,7 +39,7 @@ asm ( " movq %rsp, %rbp\n" " call my_direct_func2\n" " leave\n" -" ret\n" + ASM_RET " .size my_tramp2, .-my_tramp2\n" " .popsection\n" ); diff --git a/samples/ftrace/ftrace-direct-too.c b/samples/ftrace/ftrace-direct-too.c index 11b99325f3db..3927cb880d1a 100644 --- a/samples/ftrace/ftrace-direct-too.c +++ b/samples/ftrace/ftrace-direct-too.c @@ -31,7 +31,7 @@ asm ( " popq %rsi\n" " popq %rdi\n" " leave\n" -" ret\n" + ASM_RET " .size my_tramp, .-my_tramp\n" " .popsection\n" ); diff --git a/samples/ftrace/ftrace-direct.c b/samples/ftrace/ftrace-direct.c index 642c50b5f716..1e901bb8d729 100644 --- a/samples/ftrace/ftrace-direct.c +++ b/samples/ftrace/ftrace-direct.c @@ -24,7 +24,7 @@ asm ( " call my_direct_func\n" " popq %rdi\n" " leave\n" -" ret\n" + ASM_RET " .size my_tramp, .-my_tramp\n" " .popsection\n" ); From 1f6e6683c46612baf01555e214d863aaa03c399b Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Sat, 4 Dec 2021 14:43:43 +0100 Subject: [PATCH 1493/1842] x86/alternative: Relax text_poke_bp() constraint commit 26c44b776dba4ac692a0bf5a3836feb8a63fea6b upstream. Currently, text_poke_bp() is very strict to only allow patching a single instruction; however with straight-line-speculation it will be required to patch: ret; int3, which is two instructions. As such, relax the constraints a little to allow int3 padding for all instructions that do not imply the execution of the next instruction, ie: RET, JMP.d8 and JMP.d32. While there, rename the text_poke_loc::rel32 field to ::disp. Note: this fills up the text_poke_loc structure which is now a round 16 bytes big. [ bp: Put comments ontop instead of on the side. 
] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20211204134908.082342723@infradead.org Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/alternative.c | 49 ++++++++++++++++++++++++----------- 1 file changed, 34 insertions(+), 15 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 349fc4c747b0..8412048bc037 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -1243,10 +1243,13 @@ void text_poke_sync(void) } struct text_poke_loc { - s32 rel_addr; /* addr := _stext + rel_addr */ - s32 rel32; + /* addr := _stext + rel_addr */ + s32 rel_addr; + s32 disp; + u8 len; u8 opcode; const u8 text[POKE_MAX_OPCODE_SIZE]; + /* see text_poke_bp_batch() */ u8 old; }; @@ -1261,7 +1264,8 @@ static struct bp_patching_desc *bp_desc; static __always_inline struct bp_patching_desc *try_get_desc(struct bp_patching_desc **descp) { - struct bp_patching_desc *desc = __READ_ONCE(*descp); /* rcu_dereference */ + /* rcu_dereference */ + struct bp_patching_desc *desc = __READ_ONCE(*descp); if (!desc || !arch_atomic_inc_not_zero(&desc->refs)) return NULL; @@ -1295,7 +1299,7 @@ noinstr int poke_int3_handler(struct pt_regs *regs) { struct bp_patching_desc *desc; struct text_poke_loc *tp; - int len, ret = 0; + int ret = 0; void *ip; if (user_mode(regs)) @@ -1335,8 +1339,7 @@ noinstr int poke_int3_handler(struct pt_regs *regs) goto out_put; } - len = text_opcode_size(tp->opcode); - ip += len; + ip += tp->len; switch (tp->opcode) { case INT3_INSN_OPCODE: @@ -1351,12 +1354,12 @@ noinstr int poke_int3_handler(struct pt_regs *regs) break; case CALL_INSN_OPCODE: - int3_emulate_call(regs, (long)ip + tp->rel32); + int3_emulate_call(regs, (long)ip + tp->disp); break; case JMP32_INSN_OPCODE: case JMP8_INSN_OPCODE: - int3_emulate_jmp(regs, (long)ip + tp->rel32); + int3_emulate_jmp(regs, (long)ip + tp->disp); break; default: @@ -1431,7 +1434,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries */ for (do_sync = 0, i = 0; i < nr_entries; i++) { u8 old[POKE_MAX_OPCODE_SIZE] = { tp[i].old, }; - int len = text_opcode_size(tp[i].opcode); + int len = tp[i].len; if (len - INT3_INSN_SIZE > 0) { memcpy(old + INT3_INSN_SIZE, @@ -1508,20 +1511,36 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr, const void *opcode, size_t len, const void *emulate) { struct insn insn; - int ret; + int ret, i; memcpy((void *)tp->text, opcode, len); if (!emulate) emulate = opcode; ret = insn_decode_kernel(&insn, emulate); - BUG_ON(ret < 0); - BUG_ON(len != insn.length); tp->rel_addr = addr - (void *)_stext; + tp->len = len; tp->opcode = insn.opcode.bytes[0]; + switch (tp->opcode) { + case RET_INSN_OPCODE: + case JMP32_INSN_OPCODE: + case JMP8_INSN_OPCODE: + /* + * Control flow instructions without implied execution of the + * next instruction can be padded with INT3. 
+ */ + for (i = insn.length; i < len; i++) + BUG_ON(tp->text[i] != INT3_INSN_OPCODE); + break; + + default: + BUG_ON(len != insn.length); + }; + + switch (tp->opcode) { case INT3_INSN_OPCODE: case RET_INSN_OPCODE: @@ -1530,7 +1549,7 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr, case CALL_INSN_OPCODE: case JMP32_INSN_OPCODE: case JMP8_INSN_OPCODE: - tp->rel32 = insn.immediate.value; + tp->disp = insn.immediate.value; break; default: /* assume NOP */ @@ -1538,13 +1557,13 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr, case 2: /* NOP2 -- emulate as JMP8+0 */ BUG_ON(memcmp(emulate, ideal_nops[len], len)); tp->opcode = JMP8_INSN_OPCODE; - tp->rel32 = 0; + tp->disp = 0; break; case 5: /* NOP5 -- emulate as JMP32+0 */ BUG_ON(memcmp(emulate, ideal_nops[NOP_ATOMIC5], len)); tp->opcode = JMP32_INSN_OPCODE; - tp->rel32 = 0; + tp->disp = 0; break; default: /* unknown instruction */ From 0f8532c2837793acdaa07c6b47fda0bf1fa61f40 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Sat, 4 Dec 2021 14:43:42 +0100 Subject: [PATCH 1494/1842] objtool: Add straight-line-speculation validation commit 1cc1e4c8aab4213bd4e6353dec2620476a233d6d upstream. Teach objtool to validate the straight-line-speculation constraints: - speculation trap after indirect calls - speculation trap after RET Notable: when an instruction is annotated RETPOLINE_SAFE, indicating speculation isn't a problem, also don't care about sls for that instruction. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20211204134908.023037659@infradead.org Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman [bwh: Backported to 5.10: adjust filenames, context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch.h | 1 + tools/objtool/arch/x86/decode.c | 13 +++++++++---- tools/objtool/builtin-check.c | 4 +++- tools/objtool/builtin.h | 3 ++- tools/objtool/check.c | 14 ++++++++++++++ 5 files changed, 29 insertions(+), 6 deletions(-) diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h index 0031a27b6ad0..e15f6e932b4f 100644 --- a/tools/objtool/arch.h +++ b/tools/objtool/arch.h @@ -26,6 +26,7 @@ enum insn_type { INSN_CLAC, INSN_STD, INSN_CLD, + INSN_TRAP, INSN_OTHER, }; diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index 32a810429306..f7154241a2a5 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -456,6 +456,11 @@ int arch_decode_instruction(const struct elf *elf, const struct section *sec, break; + case 0xcc: + /* int3 */ + *type = INSN_TRAP; + break; + case 0xe3: /* jecxz/jrcxz */ *type = INSN_JUMP_CONDITIONAL; @@ -592,10 +597,10 @@ const char *arch_ret_insn(int len) { static const char ret[5][5] = { { BYTE_RET }, - { BYTE_RET, 0x90 }, - { BYTE_RET, 0x66, 0x90 }, - { BYTE_RET, 0x0f, 0x1f, 0x00 }, - { BYTE_RET, 0x0f, 0x1f, 0x40, 0x00 }, + { BYTE_RET, 0xcc }, + { BYTE_RET, 0xcc, 0x90 }, + { BYTE_RET, 0xcc, 0x66, 0x90 }, + { BYTE_RET, 0xcc, 0x0f, 0x1f, 0x00 }, }; if (len < 1 || len > 5) { diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c index c6d199bfd0ae..758baf918d83 100644 --- a/tools/objtool/builtin-check.c +++ b/tools/objtool/builtin-check.c @@ -18,7 +18,8 @@ #include "builtin.h" #include "objtool.h" -bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux; +bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, + validate_dup, vmlinux, sls; static const 
char * const check_usage[] = { "objtool check [] file.o", @@ -35,6 +36,7 @@ const struct option check_options[] = { OPT_BOOLEAN('s', "stats", &stats, "print statistics"), OPT_BOOLEAN('d', "duplicate", &validate_dup, "duplicate validation for vmlinux.o"), OPT_BOOLEAN('l', "vmlinux", &vmlinux, "vmlinux.o validation"), + OPT_BOOLEAN('S', "sls", &sls, "validate straight-line-speculation"), OPT_END(), }; diff --git a/tools/objtool/builtin.h b/tools/objtool/builtin.h index 85c979caa367..33043fcb16db 100644 --- a/tools/objtool/builtin.h +++ b/tools/objtool/builtin.h @@ -8,7 +8,8 @@ #include extern const struct option check_options[]; -extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux; +extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, + validate_dup, vmlinux, sls; extern int cmd_check(int argc, const char **argv); extern int cmd_orc(int argc, const char **argv); diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 09e7807f83ee..9943987b24a9 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -2775,6 +2775,12 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, switch (insn->type) { case INSN_RETURN: + if (next_insn && next_insn->type == INSN_TRAP) { + next_insn->ignore = true; + } else if (sls && !insn->retpoline_safe) { + WARN_FUNC("missing int3 after ret", + insn->sec, insn->offset); + } return validate_return(func, insn, &state); case INSN_CALL: @@ -2818,6 +2824,14 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, break; case INSN_JUMP_DYNAMIC: + if (next_insn && next_insn->type == INSN_TRAP) { + next_insn->ignore = true; + } else if (sls && !insn->retpoline_safe) { + WARN_FUNC("missing int3 after indirect jump", + insn->sec, insn->offset); + } + + /* fallthrough */ case INSN_JUMP_DYNAMIC_CONDITIONAL: if (is_sibling_call(insn)) { ret = validate_sibling_call(insn, &state); From e9925a4584dc2dd1a5eb4ffc44cd42bb1117a797 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Sat, 4 Dec 2021 14:43:44 +0100 Subject: [PATCH 1495/1842] x86: Add straight-line-speculation mitigation commit e463a09af2f0677b9485a7e8e4e70b396b2ffb6f upstream. Make use of an upcoming GCC feature to mitigate straight-line-speculation for x86: https://gcc.gnu.org/g:53a643f8568067d7700a9f2facc8ba39974973d3 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102952 https://bugs.llvm.org/show_bug.cgi?id=52323 It's built tested on x86_64-allyesconfig using GCC-12 and GCC-11. Maintenance overhead of this should be fairly low due to objtool validation. Size overhead of all these additional int3 instructions comes to: text data bss dec hex filename 22267751 6933356 2011368 31212475 1dc43bb defconfig-build/vmlinux 22804126 6933356 1470696 31208178 1dc32f2 defconfig-build/vmlinux.sls Or roughly 2.4% additional text. 
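To see what the new option does to generated code, a minimal user-space experiment (hypothetical file name, outside the kernel build, relying on GCC 12's documented -mharden-sls behaviour) might look like this:

    /* sls_demo.c -- illustrative only; compare the two builds:
     *
     *   gcc -O2 -S sls_demo.c                   ->  ... ; ret
     *   gcc -O2 -mharden-sls=all -S sls_demo.c  ->  ... ; ret ; int3
     *
     * The trailing int3 is the speculation trap described above.  It is
     * never executed architecturally; the accumulated padding bytes are
     * where the roughly 2.4% text growth comes from.
     */
    int sls_demo(int x)
    {
    	return x * 2;
    }
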
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20211204134908.140103474@infradead.org Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman [bwh: Backported to 5.10: - In scripts/Makefile.build, add the objtool option with an ifdef block, same as for other options - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/Kconfig | 12 ++++++++++++ arch/x86/Makefile | 6 +++++- arch/x86/include/asm/linkage.h | 10 ++++++++++ arch/x86/include/asm/static_call.h | 2 +- arch/x86/kernel/ftrace.c | 2 +- arch/x86/kernel/static_call.c | 5 +++-- arch/x86/lib/memmove_64.S | 2 +- arch/x86/lib/retpoline.S | 2 +- scripts/Makefile.build | 3 +++ scripts/link-vmlinux.sh | 3 +++ 10 files changed, 40 insertions(+), 7 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index ed713840d469..68d46a648f6e 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -462,6 +462,18 @@ config RETPOLINE branches. Requires a compiler with -mindirect-branch=thunk-extern support for full protection. The kernel may run slower. +config CC_HAS_SLS + def_bool $(cc-option,-mharden-sls=all) + +config SLS + bool "Mitigate Straight-Line-Speculation" + depends on CC_HAS_SLS && X86_64 + default n + help + Compile the kernel with straight-line-speculation options to guard + against straight line speculation. The kernel image might be slightly + larger. + config X86_CPU_RESCTRL bool "x86 CPU resource control support" depends on X86 && (CPU_SUP_INTEL || CPU_SUP_AMD) diff --git a/arch/x86/Makefile b/arch/x86/Makefile index 8ed757d06f77..05f5d28b75eb 100644 --- a/arch/x86/Makefile +++ b/arch/x86/Makefile @@ -196,7 +196,11 @@ ifdef CONFIG_RETPOLINE endif endif -KBUILD_LDFLAGS := -m elf_$(UTS_MACHINE) +ifdef CONFIG_SLS + KBUILD_CFLAGS += -mharden-sls=all +endif + +KBUILD_LDFLAGS += -m elf_$(UTS_MACHINE) ifdef CONFIG_X86_NEED_RELOCS LDFLAGS_vmlinux := --emit-relocs --discard-none diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h index ebddec2f3ba8..030907922bd0 100644 --- a/arch/x86/include/asm/linkage.h +++ b/arch/x86/include/asm/linkage.h @@ -18,9 +18,19 @@ #define __ALIGN_STR __stringify(__ALIGN) #endif +#ifdef CONFIG_SLS +#define RET ret; int3 +#else +#define RET ret +#endif + #else /* __ASSEMBLY__ */ +#ifdef CONFIG_SLS +#define ASM_RET "ret; int3\n\t" +#else #define ASM_RET "ret\n\t" +#endif #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h index cbb67b6030f9..343234569392 100644 --- a/arch/x86/include/asm/static_call.h +++ b/arch/x86/include/asm/static_call.h @@ -35,7 +35,7 @@ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. 
+ 4)") #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \ - __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; nop; nop; nop; nop") + __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; int3; nop; nop; nop") #define ARCH_ADD_TRAMP_KEY(name) \ diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index 7edbd5ee5ed4..fbcd144260ac 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -308,7 +308,7 @@ union ftrace_op_code_union { } __attribute__((packed)); }; -#define RET_SIZE 1 +#define RET_SIZE 1 + IS_ENABLED(CONFIG_SLS) static unsigned long create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c index ca9a380d9c0b..32d673bbc3ef 100644 --- a/arch/x86/kernel/static_call.c +++ b/arch/x86/kernel/static_call.c @@ -11,6 +11,8 @@ enum insn_type { RET = 3, /* tramp / site cond-tail-call */ }; +static const u8 retinsn[] = { RET_INSN_OPCODE, 0xcc, 0xcc, 0xcc, 0xcc }; + static void __ref __static_call_transform(void *insn, enum insn_type type, void *func) { int size = CALL_INSN_SIZE; @@ -30,8 +32,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, void break; case RET: - code = text_gen_insn(RET_INSN_OPCODE, insn, func); - size = RET_INSN_SIZE; + code = &retinsn; break; } diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S index e84d649620c4..50ea390df712 100644 --- a/arch/x86/lib/memmove_64.S +++ b/arch/x86/lib/memmove_64.S @@ -40,7 +40,7 @@ SYM_FUNC_START(__memmove) /* FSRM implies ERMS => no length checks, do the copy directly */ .Lmemmove_begin_forward: ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM - ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; RET", X86_FEATURE_ERMS + ALTERNATIVE "", __stringify(movq %rdx, %rcx; rep movsb; RET), X86_FEATURE_ERMS /* * movsq instruction have many startup latency diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 8904c076a1df..afbdda539b80 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -34,7 +34,7 @@ SYM_INNER_LABEL(__x86_indirect_thunk_\reg, SYM_L_GLOBAL) ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \ __stringify(RETPOLINE \reg), X86_FEATURE_RETPOLINE, \ - __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE + __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg; int3), X86_FEATURE_RETPOLINE_LFENCE .endm diff --git a/scripts/Makefile.build b/scripts/Makefile.build index 8bd4e673383f..bea7e54b2ab9 100644 --- a/scripts/Makefile.build +++ b/scripts/Makefile.build @@ -230,6 +230,9 @@ endif ifdef CONFIG_X86_SMAP objtool_args += --uaccess endif +ifdef CONFIG_SLS + objtool_args += --sls +endif # 'OBJECT_FILES_NON_STANDARD := y': skip objtool checking for a directory # 'OBJECT_FILES_NON_STANDARD_foo.o := 'y': skip objtool checking for a file diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh index 6eded325c837..b184d94b9052 100755 --- a/scripts/link-vmlinux.sh +++ b/scripts/link-vmlinux.sh @@ -77,6 +77,9 @@ objtool_link() if [ -n "${CONFIG_X86_SMAP}" ]; then objtoolopt="${objtoolopt} --uaccess" fi + if [ -n "${CONFIG_SLS}" ]; then + objtoolopt="${objtoolopt} --sls" + fi info OBJTOOL ${1} tools/objtool/objtool ${objtoolopt} ${1} fi From 494ed76c1446b67621b1c55b8b2dceb5c6dfee64 Mon Sep 17 00:00:00 2001 From: Arnaldo Carvalho de Melo Date: Sun, 9 May 2021 10:19:37 -0300 Subject: [PATCH 1496/1842] tools arch: Update arch/x86/lib/mem{cpy,set}_64.S copies used in 'perf bench mem memcpy' commit 
35cb8c713a496e8c114eed5e2a5a30b359876df2 upstream. To bring in the change made in this cset: f94909ceb1ed4bfd ("x86: Prepare asm files for straight-line-speculation") It silences these perf tools build warnings, no change in the tools: Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S' diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S Warning: Kernel ABI header at 'tools/arch/x86/lib/memset_64.S' differs from latest version at 'arch/x86/lib/memset_64.S' diff -u tools/arch/x86/lib/memset_64.S arch/x86/lib/memset_64.S The code generated was checked before and after using 'objdump -d /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o', no changes. Cc: Borislav Petkov Cc: Peter Zijlstra Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/arch/x86/lib/memcpy_64.S | 12 ++++++------ tools/arch/x86/lib/memset_64.S | 6 +++--- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/tools/arch/x86/lib/memcpy_64.S b/tools/arch/x86/lib/memcpy_64.S index 1e299ac73c86..644da05e5e71 100644 --- a/tools/arch/x86/lib/memcpy_64.S +++ b/tools/arch/x86/lib/memcpy_64.S @@ -39,7 +39,7 @@ SYM_FUNC_START_WEAK(memcpy) rep movsq movl %edx, %ecx rep movsb - ret + RET SYM_FUNC_END(memcpy) SYM_FUNC_END_ALIAS(__memcpy) EXPORT_SYMBOL(memcpy) @@ -53,7 +53,7 @@ SYM_FUNC_START_LOCAL(memcpy_erms) movq %rdi, %rax movq %rdx, %rcx rep movsb - ret + RET SYM_FUNC_END(memcpy_erms) SYM_FUNC_START_LOCAL(memcpy_orig) @@ -137,7 +137,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig) movq %r9, 1*8(%rdi) movq %r10, -2*8(%rdi, %rdx) movq %r11, -1*8(%rdi, %rdx) - retq + RET .p2align 4 .Lless_16bytes: cmpl $8, %edx @@ -149,7 +149,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig) movq -1*8(%rsi, %rdx), %r9 movq %r8, 0*8(%rdi) movq %r9, -1*8(%rdi, %rdx) - retq + RET .p2align 4 .Lless_8bytes: cmpl $4, %edx @@ -162,7 +162,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig) movl -4(%rsi, %rdx), %r8d movl %ecx, (%rdi) movl %r8d, -4(%rdi, %rdx) - retq + RET .p2align 4 .Lless_3bytes: subl $1, %edx @@ -180,7 +180,7 @@ SYM_FUNC_START_LOCAL(memcpy_orig) movb %cl, (%rdi) .Lend: - retq + RET SYM_FUNC_END(memcpy_orig) .popsection diff --git a/tools/arch/x86/lib/memset_64.S b/tools/arch/x86/lib/memset_64.S index 0bfd26e4ca9e..4d793ff820e2 100644 --- a/tools/arch/x86/lib/memset_64.S +++ b/tools/arch/x86/lib/memset_64.S @@ -40,7 +40,7 @@ SYM_FUNC_START(__memset) movl %edx,%ecx rep stosb movq %r9,%rax - ret + RET SYM_FUNC_END(__memset) SYM_FUNC_END_ALIAS(memset) EXPORT_SYMBOL(memset) @@ -63,7 +63,7 @@ SYM_FUNC_START_LOCAL(memset_erms) movq %rdx,%rcx rep stosb movq %r9,%rax - ret + RET SYM_FUNC_END(memset_erms) SYM_FUNC_START_LOCAL(memset_orig) @@ -125,7 +125,7 @@ SYM_FUNC_START_LOCAL(memset_orig) .Lende: movq %r10,%rax - ret + RET .Lbad_alignment: cmpq $7,%rdx From bef21f88b47e399a76276ef1620fb816a0cc4e83 Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Wed, 16 Mar 2022 22:05:52 +0100 Subject: [PATCH 1497/1842] kvm/emulate: Fix SETcc emulation function offsets with SLS commit fe83f5eae432ccc8e90082d6ed506d5233547473 upstream. The commit in Fixes started adding INT3 after RETs as a mitigation against straight-line speculation. The fastop SETcc implementation in kvm's insn emulator uses macro magic to generate all possible SETcc functions and to jump to them when emulating the respective instruction. 
However, it hardcodes the size and alignment of those functions to 4: a three-byte SETcc insn and a single-byte RET. BUT, with SLS, there's an INT3 that gets slapped after the RET, which brings the whole scheme out of alignment: 15: 0f 90 c0 seto %al 18: c3 ret 19: cc int3 1a: 0f 1f 00 nopl (%rax) 1d: 0f 91 c0 setno %al 20: c3 ret 21: cc int3 22: 0f 1f 00 nopl (%rax) 25: 0f 92 c0 setb %al 28: c3 ret 29: cc int3 and this explodes like this: int3: 0000 [#1] PREEMPT SMP PTI CPU: 0 PID: 2435 Comm: qemu-system-x86 Not tainted 5.17.0-rc8-sls #1 Hardware name: Dell Inc. Precision WorkStation T3400 /0TP412, BIOS A14 04/30/2012 RIP: 0010:setc+0x5/0x8 [kvm] Code: 00 00 0f 1f 00 0f b6 05 43 24 06 00 c3 cc 0f 1f 80 00 00 00 00 0f 90 c0 c3 cc 0f \ 1f 00 0f 91 c0 c3 cc 0f 1f 00 0f 92 c0 c3 cc <0f> 1f 00 0f 93 c0 c3 cc 0f 1f 00 \ 0f 94 c0 c3 cc 0f 1f 00 0f 95 c0 Call Trace: ? x86_emulate_insn [kvm] ? x86_emulate_instruction [kvm] ? vmx_handle_exit [kvm_intel] ? kvm_arch_vcpu_ioctl_run [kvm] ? kvm_vcpu_ioctl [kvm] ? __x64_sys_ioctl ? do_syscall_64 ? entry_SYSCALL_64_after_hwframe Raise the alignment value when SLS is enabled and use a macro for that instead of hard-coding naked numbers. Fixes: e463a09af2f0 ("x86: Add straight-line-speculation mitigation") Reported-by: Jamie Heilman Signed-off-by: Borislav Petkov Acked-by: Peter Zijlstra (Intel) Tested-by: Jamie Heilman Link: https://lore.kernel.org/r/YjGzJwjrvxg5YZ0Z@audible.transient.net [Add a comment and a bit of safety checking, since this is going to be changed again for IBT support. - Paolo] Signed-off-by: Paolo Bonzini Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/emulate.c | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 325697840275..bdcbf23b8b1e 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -428,8 +428,23 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); FOP_END /* Special case for SETcc - 1 instruction per cc */ + +/* + * Depending on .config the SETcc functions look like: + * + * SETcc %al [3 bytes] + * RET [1 byte] + * INT3 [1 byte; CONFIG_SLS] + * + * Which gives possible sizes 4 or 5. When rounded up to the + * next power-of-two alignment they become 4 or 8. + */ +#define SETCC_LENGTH (4 + IS_ENABLED(CONFIG_SLS)) +#define SETCC_ALIGN (4 << IS_ENABLED(CONFIG_SLS)) +static_assert(SETCC_LENGTH <= SETCC_ALIGN); + #define FOP_SETCC(op) \ - ".align 4 \n\t" \ + ".align " __stringify(SETCC_ALIGN) " \n\t" \ ".type " #op ", @function \n\t" \ #op ": \n\t" \ #op " %al \n\t" \ @@ -1055,7 +1070,7 @@ static int em_bsr_c(struct x86_emulate_ctxt *ctxt) static __always_inline u8 test_cc(unsigned int condition, unsigned long flags) { u8 rc; - void (*fop)(void) = (void *)em_setcc + 4 * (condition & 0xf); + void (*fop)(void) = (void *)em_setcc + SETCC_ALIGN * (condition & 0xf); flags = (flags & EFLAGS_MASK) | X86_EFLAGS_IF; asm("push %[flags]; popf; " CALL_NOSPEC From 03c5c33e043e77a1a848c52f37c512efb412f2c3 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 8 Mar 2022 16:30:14 +0100 Subject: [PATCH 1498/1842] objtool: Default ignore INT3 for unreachable commit 1ffbe4e935f9b7308615c75be990aec07464d1e7 upstream. Ignore all INT3 instructions for unreachable code warnings, similar to NOP. This allows using INT3 for various paddings instead of NOPs. 
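As a rough illustration of the rule this adds (a standalone sketch with simplified stand-in types, not objtool's real data structures), the "is this padding ignorable?" check simply treats a trap instruction the same way it already treats a NOP:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for objtool's instruction types. */
enum insn_type { INSN_OTHER, INSN_NOP, INSN_TRAP /* INT3 */ };

struct instruction {
        enum insn_type type;
        bool ignore;
};

/* NOP or INT3 padding after unreachable code is not worth a warning. */
static bool ignorable_unreachable(const struct instruction *insn)
{
        return insn->ignore || insn->type == INSN_NOP || insn->type == INSN_TRAP;
}

int main(void)
{
        struct instruction int3_pad = { .type = INSN_TRAP, .ignore = false };

        printf("int3 padding ignorable: %d\n", ignorable_unreachable(&int3_pad));
        return 0;
}
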
Signed-off-by: Peter Zijlstra (Intel) Acked-by: Josh Poimboeuf Link: https://lore.kernel.org/r/20220308154317.343312938@infradead.org Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 9943987b24a9..fdf96eef6e00 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -2775,9 +2775,8 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, switch (insn->type) { case INSN_RETURN: - if (next_insn && next_insn->type == INSN_TRAP) { - next_insn->ignore = true; - } else if (sls && !insn->retpoline_safe) { + if (sls && !insn->retpoline_safe && + next_insn && next_insn->type != INSN_TRAP) { WARN_FUNC("missing int3 after ret", insn->sec, insn->offset); } @@ -2824,9 +2823,8 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, break; case INSN_JUMP_DYNAMIC: - if (next_insn && next_insn->type == INSN_TRAP) { - next_insn->ignore = true; - } else if (sls && !insn->retpoline_safe) { + if (sls && !insn->retpoline_safe && + next_insn && next_insn->type != INSN_TRAP) { WARN_FUNC("missing int3 after indirect jump", insn->sec, insn->offset); } @@ -2997,7 +2995,7 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio int i; struct instruction *prev_insn; - if (insn->ignore || insn->type == INSN_NOP) + if (insn->ignore || insn->type == INSN_NOP || insn->type == INSN_TRAP) return true; /* From 9728af8857dfd0bfed48ab137955d28afbc107e5 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Thu, 24 Mar 2022 00:05:55 +0100 Subject: [PATCH 1499/1842] crypto: x86/poly1305 - Fixup SLS commit 7ed7aa4de9421229be6d331ed52d5cd09c99f409 upstream. Due to being a perl generated asm file, it got missed by the mass conversion script.
arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_init_x86_64()+0x3a: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_blocks_x86_64()+0xf2: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_emit_x86_64()+0x37: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: __poly1305_block()+0x6d: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: __poly1305_init_avx()+0x1e8: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_blocks_avx()+0x18a: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_blocks_avx()+0xaf8: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_emit_avx()+0x99: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_blocks_avx2()+0x18a: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_blocks_avx2()+0x776: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_blocks_avx512()+0x18a: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_blocks_avx512()+0x796: missing int3 after ret arch/x86/crypto/poly1305-x86_64-cryptogams.o: warning: objtool: poly1305_blocks_avx512()+0x10bd: missing int3 after ret Fixes: f94909ceb1ed ("x86: Prepare asm files for straight-line-speculation") Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Herbert Xu Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/crypto/poly1305-x86_64-cryptogams.pl | 38 +++++++++---------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl index 7d568012cc15..58eaec958c86 100644 --- a/arch/x86/crypto/poly1305-x86_64-cryptogams.pl +++ b/arch/x86/crypto/poly1305-x86_64-cryptogams.pl @@ -297,7 +297,7 @@ ___ $code.=<<___; mov \$1,%eax .Lno_key: - ret + RET ___ &end_function("poly1305_init_x86_64"); @@ -373,7 +373,7 @@ $code.=<<___; .cfi_adjust_cfa_offset -48 .Lno_data: .Lblocks_epilogue: - ret + RET .cfi_endproc ___ &end_function("poly1305_blocks_x86_64"); @@ -399,7 +399,7 @@ $code.=<<___; mov %rax,0($mac) # write result mov %rcx,8($mac) - ret + RET ___ &end_function("poly1305_emit_x86_64"); if ($avx) { @@ -429,7 +429,7 @@ ___ &poly1305_iteration(); $code.=<<___; pop $ctx - ret + RET .size __poly1305_block,.-__poly1305_block .type __poly1305_init_avx,\@abi-omnipotent @@ -594,7 +594,7 @@ __poly1305_init_avx: lea -48-64($ctx),$ctx # size [de-]optimization pop %rbp - ret + RET .size __poly1305_init_avx,.-__poly1305_init_avx ___ @@ -747,7 +747,7 @@ $code.=<<___; .cfi_restore %rbp .Lno_data_avx: .Lblocks_avx_epilogue: - ret + RET .cfi_endproc .align 32 @@ -1452,7 +1452,7 @@ $code.=<<___ if (!$win64); ___ $code.=<<___; vzeroupper - ret + RET .cfi_endproc ___ &end_function("poly1305_blocks_avx"); @@ -1508,7 +1508,7 @@ $code.=<<___; mov %rax,0($mac) # write result mov %rcx,8($mac) - ret + RET ___ &end_function("poly1305_emit_avx"); @@ -1675,7 +1675,7 @@ $code.=<<___; .cfi_restore %rbp .Lno_data_avx2$suffix: .Lblocks_avx2_epilogue$suffix: - ret + RET .cfi_endproc .align 32 @@ -2201,7 +2201,7 @@ $code.=<<___ if (!$win64); ___ $code.=<<___; vzeroupper - ret + RET 
.cfi_endproc ___ if($avx > 2 && $avx512) { @@ -2792,7 +2792,7 @@ $code.=<<___ if (!$win64); .cfi_def_cfa_register %rsp ___ $code.=<<___; - ret + RET .cfi_endproc ___ @@ -2893,7 +2893,7 @@ $code.=<<___ if ($flavour =~ /elf32/); ___ $code.=<<___; mov \$1,%eax - ret + RET .size poly1305_init_base2_44,.-poly1305_init_base2_44 ___ { @@ -3010,7 +3010,7 @@ poly1305_blocks_vpmadd52: jnz .Lblocks_vpmadd52_4x .Lno_data_vpmadd52: - ret + RET .size poly1305_blocks_vpmadd52,.-poly1305_blocks_vpmadd52 ___ } @@ -3451,7 +3451,7 @@ poly1305_blocks_vpmadd52_4x: vzeroall .Lno_data_vpmadd52_4x: - ret + RET .size poly1305_blocks_vpmadd52_4x,.-poly1305_blocks_vpmadd52_4x ___ } @@ -3824,7 +3824,7 @@ $code.=<<___; vzeroall .Lno_data_vpmadd52_8x: - ret + RET .size poly1305_blocks_vpmadd52_8x,.-poly1305_blocks_vpmadd52_8x ___ } @@ -3861,7 +3861,7 @@ poly1305_emit_base2_44: mov %rax,0($mac) # write result mov %rcx,8($mac) - ret + RET .size poly1305_emit_base2_44,.-poly1305_emit_base2_44 ___ } } } @@ -3916,7 +3916,7 @@ xor128_encrypt_n_pad: .Ldone_enc: mov $otp,%rax - ret + RET .size xor128_encrypt_n_pad,.-xor128_encrypt_n_pad .globl xor128_decrypt_n_pad @@ -3967,7 +3967,7 @@ xor128_decrypt_n_pad: .Ldone_dec: mov $otp,%rax - ret + RET .size xor128_decrypt_n_pad,.-xor128_decrypt_n_pad ___ } @@ -4109,7 +4109,7 @@ avx_handler: pop %rbx pop %rdi pop %rsi - ret + RET .size avx_handler,.-avx_handler .section .pdata From 831d5c07b7e7c6eab9df64ae328bacb5a6a0313a Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Wed, 23 Mar 2022 23:35:01 +0100 Subject: [PATCH 1500/1842] objtool: Fix SLS validation for kcov tail-call replacement commit 7a53f408902d913cd541b4f8ad7dbcd4961f5b82 upstream. Since not all compilers have a function attribute to disable KCOV instrumentation, objtool can rewrite KCOV instrumentation in noinstr functions as per commit: f56dae88a81f ("objtool: Handle __sanitize_cov*() tail calls") However, this has subtle interaction with the SLS validation from commit: 1cc1e4c8aab4 ("objtool: Add straight-line-speculation validation") In that when a tail-call instrucion is replaced with a RET an additional INT3 instruction is also written, but is not represented in the decoded instruction stream. This then leads to false positive missing INT3 objtool warnings in noinstr code. Instead of adding additional struct instruction objects, mark the RET instruction with retpoline_safe to suppress the warning (since we know there really is an INT3). Fixes: 1cc1e4c8aab4 ("objtool: Add straight-line-speculation validation") Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20220323230712.GA8939@worktop.programming.kicks-ass.net Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index fdf96eef6e00..0ca4a3c2d86b 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -961,6 +961,17 @@ static void annotate_call_site(struct objtool_file *file, : arch_nop_insn(insn->len)); insn->type = sibling ? INSN_RETURN : INSN_NOP; + + if (sibling) { + /* + * We've replaced the tail-call JMP insn by two new + * insn: RET; INT3, except we only have a single struct + * insn here. Mark it retpoline_safe to avoid the SLS + * warning, instead of adding another insn. 
+ */ + insn->retpoline_safe = true; + } + return; } } From 42ec4d71353f4c6da1d94baf64acbb1badc16b11 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Sun, 17 Apr 2022 17:03:36 +0200 Subject: [PATCH 1501/1842] objtool: Fix code relocs vs weak symbols commit 4abff6d48dbcea8200c7ea35ba70c242d128ebf3 upstream. Occasionally objtool driven code patching (think .static_call_sites .retpoline_sites etc..) goes sideways and it tries to patch an instruction that doesn't match. Much head-scatching and cursing later the problem is as outlined below and affects every section that objtool generates for us, very much including the ORC data. The below uses .static_call_sites because it's convenient for demonstration purposes, but as mentioned the ORC sections, .retpoline_sites and __mount_loc are all similarly affected. Consider: foo-weak.c: extern void __SCT__foo(void); __attribute__((weak)) void foo(void) { return __SCT__foo(); } foo.c: extern void __SCT__foo(void); extern void my_foo(void); void foo(void) { my_foo(); return __SCT__foo(); } These generate the obvious code (gcc -O2 -fcf-protection=none -fno-asynchronous-unwind-tables -c foo*.c): foo-weak.o: 0000000000000000 : 0: e9 00 00 00 00 jmpq 5 1: R_X86_64_PLT32 __SCT__foo-0x4 foo.o: 0000000000000000 : 0: 48 83 ec 08 sub $0x8,%rsp 4: e8 00 00 00 00 callq 9 5: R_X86_64_PLT32 my_foo-0x4 9: 48 83 c4 08 add $0x8,%rsp d: e9 00 00 00 00 jmpq 12 e: R_X86_64_PLT32 __SCT__foo-0x4 Now, when we link these two files together, you get something like (ld -r -o foos.o foo-weak.o foo.o): foos.o: 0000000000000000 : 0: e9 00 00 00 00 jmpq 5 1: R_X86_64_PLT32 __SCT__foo-0x4 5: 66 2e 0f 1f 84 00 00 00 00 00 nopw %cs:0x0(%rax,%rax,1) f: 90 nop 0000000000000010 : 10: 48 83 ec 08 sub $0x8,%rsp 14: e8 00 00 00 00 callq 19 15: R_X86_64_PLT32 my_foo-0x4 19: 48 83 c4 08 add $0x8,%rsp 1d: e9 00 00 00 00 jmpq 22 1e: R_X86_64_PLT32 __SCT__foo-0x4 Noting that ld preserves the weak function text, but strips the symbol off of it (hence objdump doing that funny negative offset thing). This does lead to 'interesting' unused code issues with objtool when ran on linked objects, but that seems to be working (fingers crossed). So far so good.. Now lets consider the objtool static_call output section (readelf output, old binutils): foo-weak.o: Relocation section '.rela.static_call_sites' at offset 0x2c8 contains 1 entry: Offset Info Type Symbol's Value Symbol's Name + Addend 0000000000000000 0000000200000002 R_X86_64_PC32 0000000000000000 .text + 0 0000000000000004 0000000d00000002 R_X86_64_PC32 0000000000000000 __SCT__foo + 1 foo.o: Relocation section '.rela.static_call_sites' at offset 0x310 contains 2 entries: Offset Info Type Symbol's Value Symbol's Name + Addend 0000000000000000 0000000200000002 R_X86_64_PC32 0000000000000000 .text + d 0000000000000004 0000000d00000002 R_X86_64_PC32 0000000000000000 __SCT__foo + 1 foos.o: Relocation section '.rela.static_call_sites' at offset 0x430 contains 4 entries: Offset Info Type Symbol's Value Symbol's Name + Addend 0000000000000000 0000000100000002 R_X86_64_PC32 0000000000000000 .text + 0 0000000000000004 0000000d00000002 R_X86_64_PC32 0000000000000000 __SCT__foo + 1 0000000000000008 0000000100000002 R_X86_64_PC32 0000000000000000 .text + 1d 000000000000000c 0000000d00000002 R_X86_64_PC32 0000000000000000 __SCT__foo + 1 So we have two patch sites, one in the dead code of the weak foo and one in the real foo. All is well. 
*HOWEVER*, when the toolchain strips unused section symbols it generates things like this (using new enough binutils): foo-weak.o: Relocation section '.rela.static_call_sites' at offset 0x2c8 contains 1 entry: Offset Info Type Symbol's Value Symbol's Name + Addend 0000000000000000 0000000200000002 R_X86_64_PC32 0000000000000000 foo + 0 0000000000000004 0000000d00000002 R_X86_64_PC32 0000000000000000 __SCT__foo + 1 foo.o: Relocation section '.rela.static_call_sites' at offset 0x310 contains 2 entries: Offset Info Type Symbol's Value Symbol's Name + Addend 0000000000000000 0000000200000002 R_X86_64_PC32 0000000000000000 foo + d 0000000000000004 0000000d00000002 R_X86_64_PC32 0000000000000000 __SCT__foo + 1 foos.o: Relocation section '.rela.static_call_sites' at offset 0x430 contains 4 entries: Offset Info Type Symbol's Value Symbol's Name + Addend 0000000000000000 0000000100000002 R_X86_64_PC32 0000000000000000 foo + 0 0000000000000004 0000000d00000002 R_X86_64_PC32 0000000000000000 __SCT__foo + 1 0000000000000008 0000000100000002 R_X86_64_PC32 0000000000000000 foo + d 000000000000000c 0000000d00000002 R_X86_64_PC32 0000000000000000 __SCT__foo + 1 And now we can see how that foos.o .static_call_sites goes side-ways, we now have _two_ patch sites in foo. One for the weak symbol at foo+0 (which is no longer a static_call site!) and one at foo+d which is in fact the right location. This seems to happen when objtool cannot find a section symbol, in which case it falls back to any other symbol to key off of, however in this case that goes terribly wrong! As such, teach objtool to create a section symbol when there isn't one. Fixes: 44f6a7c0755d ("objtool: Fix seg fault with Clang non-section symbols") Signed-off-by: Peter Zijlstra (Intel) Acked-by: Josh Poimboeuf Link: https://lkml.kernel.org/r/20220419203807.655552918@infradead.org Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/elf.c | 187 ++++++++++++++++++++++++++++++++++++++------ 1 file changed, 165 insertions(+), 22 deletions(-) diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index 74f4b3bb7961..0a9a1cb222e0 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -537,37 +537,180 @@ int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset, return 0; } +/* + * Ensure that any reloc section containing references to @sym is marked + * changed such that it will get re-generated in elf_rebuild_reloc_sections() + * with the new symbol index. + */ +static void elf_dirty_reloc_sym(struct elf *elf, struct symbol *sym) +{ + struct section *sec; + + list_for_each_entry(sec, &elf->sections, list) { + struct reloc *reloc; + + if (sec->changed) + continue; + + list_for_each_entry(reloc, &sec->reloc_list, list) { + if (reloc->sym == sym) { + sec->changed = true; + break; + } + } + } +} + +/* + * Move the first global symbol, as per sh_info, into a new, higher symbol + * index. This fees up the shndx for a new local symbol. 
+ */ +static int elf_move_global_symbol(struct elf *elf, struct section *symtab, + struct section *symtab_shndx) +{ + Elf_Data *data, *shndx_data = NULL; + Elf32_Word first_non_local; + struct symbol *sym; + Elf_Scn *s; + + first_non_local = symtab->sh.sh_info; + + sym = find_symbol_by_index(elf, first_non_local); + if (!sym) { + WARN("no non-local symbols !?"); + return first_non_local; + } + + s = elf_getscn(elf->elf, symtab->idx); + if (!s) { + WARN_ELF("elf_getscn"); + return -1; + } + + data = elf_newdata(s); + if (!data) { + WARN_ELF("elf_newdata"); + return -1; + } + + data->d_buf = &sym->sym; + data->d_size = sizeof(sym->sym); + data->d_align = 1; + data->d_type = ELF_T_SYM; + + sym->idx = symtab->sh.sh_size / sizeof(sym->sym); + elf_dirty_reloc_sym(elf, sym); + + symtab->sh.sh_info += 1; + symtab->sh.sh_size += data->d_size; + symtab->changed = true; + + if (symtab_shndx) { + s = elf_getscn(elf->elf, symtab_shndx->idx); + if (!s) { + WARN_ELF("elf_getscn"); + return -1; + } + + shndx_data = elf_newdata(s); + if (!shndx_data) { + WARN_ELF("elf_newshndx_data"); + return -1; + } + + shndx_data->d_buf = &sym->sec->idx; + shndx_data->d_size = sizeof(Elf32_Word); + shndx_data->d_align = 4; + shndx_data->d_type = ELF_T_WORD; + + symtab_shndx->sh.sh_size += 4; + symtab_shndx->changed = true; + } + + return first_non_local; +} + +static struct symbol * +elf_create_section_symbol(struct elf *elf, struct section *sec) +{ + struct section *symtab, *symtab_shndx; + Elf_Data *shndx_data = NULL; + struct symbol *sym; + Elf32_Word shndx; + + symtab = find_section_by_name(elf, ".symtab"); + if (symtab) { + symtab_shndx = find_section_by_name(elf, ".symtab_shndx"); + if (symtab_shndx) + shndx_data = symtab_shndx->data; + } else { + WARN("no .symtab"); + return NULL; + } + + sym = malloc(sizeof(*sym)); + if (!sym) { + perror("malloc"); + return NULL; + } + memset(sym, 0, sizeof(*sym)); + + sym->idx = elf_move_global_symbol(elf, symtab, symtab_shndx); + if (sym->idx < 0) { + WARN("elf_move_global_symbol"); + return NULL; + } + + sym->name = sec->name; + sym->sec = sec; + + // st_name 0 + sym->sym.st_info = GELF_ST_INFO(STB_LOCAL, STT_SECTION); + // st_other 0 + // st_value 0 + // st_size 0 + shndx = sec->idx; + if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) { + sym->sym.st_shndx = shndx; + if (!shndx_data) + shndx = 0; + } else { + sym->sym.st_shndx = SHN_XINDEX; + if (!shndx_data) { + WARN("no .symtab_shndx"); + return NULL; + } + } + + if (!gelf_update_symshndx(symtab->data, shndx_data, sym->idx, &sym->sym, shndx)) { + WARN_ELF("gelf_update_symshndx"); + return NULL; + } + + elf_add_symbol(elf, sym); + + return sym; +} + int elf_add_reloc_to_insn(struct elf *elf, struct section *sec, unsigned long offset, unsigned int type, struct section *insn_sec, unsigned long insn_off) { - struct symbol *sym; - int addend; + struct symbol *sym = insn_sec->sym; + int addend = insn_off; - if (insn_sec->sym) { - sym = insn_sec->sym; - addend = insn_off; - - } else { + if (!sym) { /* - * The Clang assembler strips section symbols, so we have to - * reference the function symbol instead: + * Due to how weak functions work, we must use section based + * relocations. Symbol based relocations would result in the + * weak and non-weak function annotations being overlaid on the + * non-weak function after linking. */ - sym = find_symbol_containing(insn_sec, insn_off); - if (!sym) { - /* - * Hack alert. This happens when we need to reference - * the NOP pad insn immediately after the function. 
- */ - sym = find_symbol_containing(insn_sec, insn_off - 1); - } - - if (!sym) { - WARN("can't find symbol containing %s+0x%lx", insn_sec->name, insn_off); + sym = elf_create_section_symbol(elf, insn_sec); + if (!sym) return -1; - } - addend = insn_off - sym->offset; + insn_sec->sym = sym; } return elf_add_reloc(elf, sec, offset, type, sym, addend); From 3e8afd072d098958a507fb10251a201c9899150c Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Sun, 17 Apr 2022 17:03:40 +0200 Subject: [PATCH 1502/1842] objtool: Fix type of reloc::addend commit c087c6e7b551b7f208c0b852304f044954cf2bb3 upstream. Elf{32,64}_Rela::r_addend is of type: Elf{32,64}_Sword, that means that our reloc::addend needs to be long or face tuncation issues when we do elf_rebuild_reloc_section(): - 107: 48 b8 00 00 00 00 00 00 00 00 movabs $0x0,%rax 109: R_X86_64_64 level4_kernel_pgt+0x80000067 + 107: 48 b8 00 00 00 00 00 00 00 00 movabs $0x0,%rax 109: R_X86_64_64 level4_kernel_pgt-0x7fffff99 Fixes: 627fce14809b ("objtool: Add ORC unwind table generation") Signed-off-by: Peter Zijlstra (Intel) Acked-by: Josh Poimboeuf Link: https://lkml.kernel.org/r/20220419203807.596871927@infradead.org Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 8 ++++---- tools/objtool/elf.c | 2 +- tools/objtool/elf.h | 4 ++-- 3 files changed, 7 insertions(+), 7 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 0ca4a3c2d86b..e8fcbdeefb89 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -467,12 +467,12 @@ static int add_dead_ends(struct objtool_file *file) else if (reloc->addend == reloc->sym->sec->len) { insn = find_last_insn(file, reloc->sym->sec); if (!insn) { - WARN("can't find unreachable insn at %s+0x%x", + WARN("can't find unreachable insn at %s+0x%lx", reloc->sym->sec->name, reloc->addend); return -1; } } else { - WARN("can't find unreachable insn at %s+0x%x", + WARN("can't find unreachable insn at %s+0x%lx", reloc->sym->sec->name, reloc->addend); return -1; } @@ -502,12 +502,12 @@ static int add_dead_ends(struct objtool_file *file) else if (reloc->addend == reloc->sym->sec->len) { insn = find_last_insn(file, reloc->sym->sec); if (!insn) { - WARN("can't find reachable insn at %s+0x%x", + WARN("can't find reachable insn at %s+0x%lx", reloc->sym->sec->name, reloc->addend); return -1; } } else { - WARN("can't find reachable insn at %s+0x%x", + WARN("can't find reachable insn at %s+0x%lx", reloc->sym->sec->name, reloc->addend); return -1; } diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index 0a9a1cb222e0..cb7a0acfce46 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -509,7 +509,7 @@ static struct section *elf_create_reloc_section(struct elf *elf, int reltype); int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset, - unsigned int type, struct symbol *sym, int addend) + unsigned int type, struct symbol *sym, long addend) { struct reloc *reloc; diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index de00cac9aede..63bcfee82033 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -73,7 +73,7 @@ struct reloc { struct symbol *sym; unsigned long offset; unsigned int type; - int addend; + long addend; int idx; bool jump_table_start; }; @@ -127,7 +127,7 @@ struct elf *elf_open_read(const char *name, int flags); struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr); int elf_add_reloc(struct elf *elf, struct section 
*sec, unsigned long offset, - unsigned int type, struct symbol *sym, int addend); + unsigned int type, struct symbol *sym, long addend); int elf_add_reloc_to_insn(struct elf *elf, struct section *sec, unsigned long offset, unsigned int type, struct section *insn_sec, unsigned long insn_off); From e1db6c8a69ec74aa0ecff19857895d10f39c6d98 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 17 May 2022 17:42:04 +0200 Subject: [PATCH 1503/1842] objtool: Fix symbol creation commit ead165fa1042247b033afad7be4be9b815d04ade upstream. Nathan reported objtool failing with the following messages: warning: objtool: no non-local symbols !? warning: objtool: gelf_update_symshndx: invalid section index The problem is due to commit 4abff6d48dbc ("objtool: Fix code relocs vs weak symbols") failing to consider the case where an object would have no non-local symbols. The problem that commit tries to address is adding a STB_LOCAL symbol to the symbol table in light of the ELF spec's requirement that: In each symbol table, all symbols with STB_LOCAL binding precede the weak and global symbols. As ``Sections'' above describes, a symbol table section's sh_info section header member holds the symbol table index for the first non-local symbol. The approach taken is to find this first non-local symbol, move that to the end and then re-use the freed spot to insert a new local symbol and increment sh_info. Except it never considered the case of object files without global symbols and got a whole bunch of details wrong -- so many in fact that it is a wonder it ever worked :/ Specifically: - It failed to re-hash the symbol on the new index, so a subsequent find_symbol_by_index() would not find it at the new location and a query for the old location would now return a non-deterministic choice between the old and new symbol. - It failed to appreciate that the GElf wrappers are not a valid disk format (it works because GElf is basically Elf64 and we only support x86_64 atm.) - It failed to fully appreciate how horrible the libelf API really is and got the gelf_update_symshndx() call pretty much completely wrong; with the direct consequence that if inserting a second STB_LOCAL symbol would require moving the same STB_GLOBAL symbol again it would completely come unstuck. Write a new elf_update_symbol() function that wraps all the magic required to update or create a new symbol at a given index. Specifically, gelf_update_sym*() require an @ndx argument that is relative to the @data argument; this means you have to manually iterate the section data descriptor list and update @ndx.
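To make the libelf quirk concrete, here is a minimal stand-alone sketch (not the objtool code itself) of the index adjustment described above: walk the symbol table's Elf_Data descriptors, convert the table-wide index into a block-relative one, and only then call gelf_update_sym(). Handling of .symtab_shndx and of appending a brand-new entry is deliberately omitted.

#include <gelf.h>
#include <stdio.h>

/*
 * Write @sym at table-wide index @idx of the symbol table section @scn.
 * gelf_update_sym() takes an index relative to a single Elf_Data block,
 * so the blocks have to be walked and @idx adjusted along the way.
 */
int update_symbol(Elf_Scn *scn, size_t entsize, size_t idx, GElf_Sym *sym)
{
        Elf_Data *data = NULL;

        while ((data = elf_getdata(scn, data)) != NULL) {
                size_t nr = data->d_size / entsize;

                if (idx < nr) {
                        if (!gelf_update_sym(data, idx, sym)) {
                                fprintf(stderr, "gelf_update_sym: %s\n", elf_errmsg(-1));
                                return -1;
                        }
                        return 0;
                }
                idx -= nr;      /* not in this block, try the next one */
        }

        fprintf(stderr, "symbol index out of range\n");
        return -1;
}

Compile with -lelf; the same walk is what lets a caller append an entry by allowing an index exactly one past the end, as the changelog notes.
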
Fixes: 4abff6d48dbc ("objtool: Fix code relocs vs weak symbols") Reported-by: Nathan Chancellor Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Acked-by: Josh Poimboeuf Tested-by: Nathan Chancellor Cc: Link: https://lkml.kernel.org/r/YoPCTEYjoPqE4ZxB@hirez.programming.kicks-ass.net Signed-off-by: Greg Kroah-Hartman [bwh: Backported to 5.10: elf_hash_add() takes a hash table pointer, not just a name] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/elf.c | 202 ++++++++++++++++++++++++++++---------------- 1 file changed, 131 insertions(+), 71 deletions(-) diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index cb7a0acfce46..155e84875bd7 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -346,6 +346,8 @@ static void elf_add_symbol(struct elf *elf, struct symbol *sym) struct list_head *entry; struct rb_node *pnode; + sym->alias = sym; + sym->type = GELF_ST_TYPE(sym->sym.st_info); sym->bind = GELF_ST_BIND(sym->sym.st_info); @@ -401,7 +403,6 @@ static int read_symbols(struct elf *elf) return -1; } memset(sym, 0, sizeof(*sym)); - sym->alias = sym; sym->idx = i; @@ -562,24 +563,21 @@ static void elf_dirty_reloc_sym(struct elf *elf, struct symbol *sym) } /* - * Move the first global symbol, as per sh_info, into a new, higher symbol - * index. This fees up the shndx for a new local symbol. + * The libelf API is terrible; gelf_update_sym*() takes a data block relative + * index value, *NOT* the symbol index. As such, iterate the data blocks and + * adjust index until it fits. + * + * If no data block is found, allow adding a new data block provided the index + * is only one past the end. */ -static int elf_move_global_symbol(struct elf *elf, struct section *symtab, - struct section *symtab_shndx) +static int elf_update_symbol(struct elf *elf, struct section *symtab, + struct section *symtab_shndx, struct symbol *sym) { - Elf_Data *data, *shndx_data = NULL; - Elf32_Word first_non_local; - struct symbol *sym; - Elf_Scn *s; - - first_non_local = symtab->sh.sh_info; - - sym = find_symbol_by_index(elf, first_non_local); - if (!sym) { - WARN("no non-local symbols !?"); - return first_non_local; - } + Elf32_Word shndx = sym->sec ? 
sym->sec->idx : SHN_UNDEF; + Elf_Data *symtab_data = NULL, *shndx_data = NULL; + Elf64_Xword entsize = symtab->sh.sh_entsize; + int max_idx, idx = sym->idx; + Elf_Scn *s, *t = NULL; s = elf_getscn(elf->elf, symtab->idx); if (!s) { @@ -587,79 +585,124 @@ static int elf_move_global_symbol(struct elf *elf, struct section *symtab, return -1; } - data = elf_newdata(s); - if (!data) { - WARN_ELF("elf_newdata"); - return -1; - } - - data->d_buf = &sym->sym; - data->d_size = sizeof(sym->sym); - data->d_align = 1; - data->d_type = ELF_T_SYM; - - sym->idx = symtab->sh.sh_size / sizeof(sym->sym); - elf_dirty_reloc_sym(elf, sym); - - symtab->sh.sh_info += 1; - symtab->sh.sh_size += data->d_size; - symtab->changed = true; - if (symtab_shndx) { - s = elf_getscn(elf->elf, symtab_shndx->idx); - if (!s) { + t = elf_getscn(elf->elf, symtab_shndx->idx); + if (!t) { WARN_ELF("elf_getscn"); return -1; } + } - shndx_data = elf_newdata(s); - if (!shndx_data) { - WARN_ELF("elf_newshndx_data"); + for (;;) { + /* get next data descriptor for the relevant sections */ + symtab_data = elf_getdata(s, symtab_data); + if (t) + shndx_data = elf_getdata(t, shndx_data); + + /* end-of-list */ + if (!symtab_data) { + void *buf; + + if (idx) { + /* we don't do holes in symbol tables */ + WARN("index out of range"); + return -1; + } + + /* if @idx == 0, it's the next contiguous entry, create it */ + symtab_data = elf_newdata(s); + if (t) + shndx_data = elf_newdata(t); + + buf = calloc(1, entsize); + if (!buf) { + WARN("malloc"); + return -1; + } + + symtab_data->d_buf = buf; + symtab_data->d_size = entsize; + symtab_data->d_align = 1; + symtab_data->d_type = ELF_T_SYM; + + symtab->sh.sh_size += entsize; + symtab->changed = true; + + if (t) { + shndx_data->d_buf = &sym->sec->idx; + shndx_data->d_size = sizeof(Elf32_Word); + shndx_data->d_align = sizeof(Elf32_Word); + shndx_data->d_type = ELF_T_WORD; + + symtab_shndx->sh.sh_size += sizeof(Elf32_Word); + symtab_shndx->changed = true; + } + + break; + } + + /* empty blocks should not happen */ + if (!symtab_data->d_size) { + WARN("zero size data"); return -1; } - shndx_data->d_buf = &sym->sec->idx; - shndx_data->d_size = sizeof(Elf32_Word); - shndx_data->d_align = 4; - shndx_data->d_type = ELF_T_WORD; + /* is this the right block? 
*/ + max_idx = symtab_data->d_size / entsize; + if (idx < max_idx) + break; - symtab_shndx->sh.sh_size += 4; - symtab_shndx->changed = true; + /* adjust index and try again */ + idx -= max_idx; } - return first_non_local; + /* something went side-ways */ + if (idx < 0) { + WARN("negative index"); + return -1; + } + + /* setup extended section index magic and write the symbol */ + if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) { + sym->sym.st_shndx = shndx; + if (!shndx_data) + shndx = 0; + } else { + sym->sym.st_shndx = SHN_XINDEX; + if (!shndx_data) { + WARN("no .symtab_shndx"); + return -1; + } + } + + if (!gelf_update_symshndx(symtab_data, shndx_data, idx, &sym->sym, shndx)) { + WARN_ELF("gelf_update_symshndx"); + return -1; + } + + return 0; } static struct symbol * elf_create_section_symbol(struct elf *elf, struct section *sec) { struct section *symtab, *symtab_shndx; - Elf_Data *shndx_data = NULL; - struct symbol *sym; - Elf32_Word shndx; + Elf32_Word first_non_local, new_idx; + struct symbol *sym, *old; symtab = find_section_by_name(elf, ".symtab"); if (symtab) { symtab_shndx = find_section_by_name(elf, ".symtab_shndx"); - if (symtab_shndx) - shndx_data = symtab_shndx->data; } else { WARN("no .symtab"); return NULL; } - sym = malloc(sizeof(*sym)); + sym = calloc(1, sizeof(*sym)); if (!sym) { perror("malloc"); return NULL; } - memset(sym, 0, sizeof(*sym)); - - sym->idx = elf_move_global_symbol(elf, symtab, symtab_shndx); - if (sym->idx < 0) { - WARN("elf_move_global_symbol"); - return NULL; - } sym->name = sec->name; sym->sec = sec; @@ -669,24 +712,41 @@ elf_create_section_symbol(struct elf *elf, struct section *sec) // st_other 0 // st_value 0 // st_size 0 - shndx = sec->idx; - if (shndx >= SHN_UNDEF && shndx < SHN_LORESERVE) { - sym->sym.st_shndx = shndx; - if (!shndx_data) - shndx = 0; - } else { - sym->sym.st_shndx = SHN_XINDEX; - if (!shndx_data) { - WARN("no .symtab_shndx"); + + /* + * Move the first global symbol, as per sh_info, into a new, higher + * symbol index. This fees up a spot for a new local symbol. + */ + first_non_local = symtab->sh.sh_info; + new_idx = symtab->sh.sh_size / symtab->sh.sh_entsize; + old = find_symbol_by_index(elf, first_non_local); + if (old) { + old->idx = new_idx; + + hlist_del(&old->hash); + elf_hash_add(elf->symbol_hash, &old->hash, old->idx); + + elf_dirty_reloc_sym(elf, old); + + if (elf_update_symbol(elf, symtab, symtab_shndx, old)) { + WARN("elf_update_symbol move"); return NULL; } + + new_idx = first_non_local; } - if (!gelf_update_symshndx(symtab->data, shndx_data, sym->idx, &sym->sym, shndx)) { - WARN_ELF("gelf_update_symshndx"); + sym->idx = new_idx; + if (elf_update_symbol(elf, symtab, symtab_shndx, sym)) { + WARN("elf_update_symbol"); return NULL; } + /* + * Either way, we added a LOCAL symbol. + */ + symtab->sh.sh_info += 1; + elf_add_symbol(elf, sym); return sym; From 148811a84292467b0527eb789f24cd4fe004136c Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 6 May 2022 14:14:35 +0200 Subject: [PATCH 1504/1842] x86/entry: Remove skip_r11rcx commit 1b331eeea7b8676fc5dbdf80d0a07e41be226177 upstream. Yes, r11 and rcx have been restored previously, but since they're being popped anyway (into rsi) might as well pop them into their own regs -- setting them to the value they already are. Less magical code. 
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20220506121631.365070674@infradead.org [bwh: Backported to 5.10: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/calling.h | 10 +--------- arch/x86/entry/entry_64.S | 3 +-- 2 files changed, 2 insertions(+), 11 deletions(-) diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h index 07a9331d55e7..2be2595e4458 100644 --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -146,27 +146,19 @@ For 32-bit we have the following conventions - kernel is built with .endm -.macro POP_REGS pop_rdi=1 skip_r11rcx=0 +.macro POP_REGS pop_rdi=1 popq %r15 popq %r14 popq %r13 popq %r12 popq %rbp popq %rbx - .if \skip_r11rcx - popq %rsi - .else popq %r11 - .endif popq %r10 popq %r9 popq %r8 popq %rax - .if \skip_r11rcx - popq %rsi - .else popq %rcx - .endif popq %rdx popq %rsi .if \pop_rdi diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index fb77ef8ecde5..afdc4f5370c3 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -191,8 +191,7 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL) * perf profiles. Nothing jumps here. */ syscall_return_via_sysret: - /* rcx and r11 are already restored (see code above) */ - POP_REGS pop_rdi=0 skip_r11rcx=1 + POP_REGS pop_rdi=0 /* * Now all regs are restored except RSP and RDI. From 236b959da9d145c311f9daa65e3fbbe1a3c9c9af Mon Sep 17 00:00:00 2001 From: Mikulas Patocka Date: Mon, 16 May 2022 11:06:36 -0400 Subject: [PATCH 1505/1842] objtool: Fix objtool regression on x32 systems commit 22682a07acc308ef78681572e19502ce8893c4d4 upstream. Commit c087c6e7b551 ("objtool: Fix type of reloc::addend") failed to appreciate cross building from ILP32 hosts, where 'int' == 'long' and the issue persists. As such, use s64/int64_t/Elf64_Sxword for this field and suffer the pain that is ISO C99 printf formats for it. 
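The "printf pain" is just the usual inttypes.h dance; a tiny illustrative example (not kernel code) of what the format strings in the hunks below boil down to:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
        int64_t addend = -0x7fffff99;   /* an s64-style reloc addend */

        /* "%lx"/"%ld" are only correct when long is 64-bit; the PRI* macros always are. */
        printf("addend: %" PRId64 " (0x%" PRIx64 ")\n", addend, (uint64_t)addend);
        return 0;
}
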
Fixes: c087c6e7b551 ("objtool: Fix type of reloc::addend") Signed-off-by: Mikulas Patocka [peterz: reword changelog, s/long long/s64/] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Cc: Link: https://lkml.kernel.org/r/alpine.LRH.2.02.2205161041260.11556@file01.intranet.prod.int.rdu2.redhat.com Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 9 +++++---- tools/objtool/elf.c | 2 +- tools/objtool/elf.h | 4 ++-- 3 files changed, 8 insertions(+), 7 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index e8fcbdeefb89..c5419db93140 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -5,6 +5,7 @@ #include #include +#include #include #include "builtin.h" @@ -467,12 +468,12 @@ static int add_dead_ends(struct objtool_file *file) else if (reloc->addend == reloc->sym->sec->len) { insn = find_last_insn(file, reloc->sym->sec); if (!insn) { - WARN("can't find unreachable insn at %s+0x%lx", + WARN("can't find unreachable insn at %s+0x%" PRIx64, reloc->sym->sec->name, reloc->addend); return -1; } } else { - WARN("can't find unreachable insn at %s+0x%lx", + WARN("can't find unreachable insn at %s+0x%" PRIx64, reloc->sym->sec->name, reloc->addend); return -1; } @@ -502,12 +503,12 @@ static int add_dead_ends(struct objtool_file *file) else if (reloc->addend == reloc->sym->sec->len) { insn = find_last_insn(file, reloc->sym->sec); if (!insn) { - WARN("can't find reachable insn at %s+0x%lx", + WARN("can't find reachable insn at %s+0x%" PRIx64, reloc->sym->sec->name, reloc->addend); return -1; } } else { - WARN("can't find reachable insn at %s+0x%lx", + WARN("can't find reachable insn at %s+0x%" PRIx64, reloc->sym->sec->name, reloc->addend); return -1; } diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index 155e84875bd7..0fb5b45aec53 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -510,7 +510,7 @@ static struct section *elf_create_reloc_section(struct elf *elf, int reltype); int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset, - unsigned int type, struct symbol *sym, long addend) + unsigned int type, struct symbol *sym, s64 addend) { struct reloc *reloc; diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index 63bcfee82033..875f9f475851 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -73,7 +73,7 @@ struct reloc { struct symbol *sym; unsigned long offset; unsigned int type; - long addend; + s64 addend; int idx; bool jump_table_start; }; @@ -127,7 +127,7 @@ struct elf *elf_open_read(const char *name, int flags); struct section *elf_create_section(struct elf *elf, const char *name, unsigned int sh_flags, size_t entsize, int nr); int elf_add_reloc(struct elf *elf, struct section *sec, unsigned long offset, - unsigned int type, struct symbol *sym, long addend); + unsigned int type, struct symbol *sym, s64 addend); int elf_add_reloc_to_insn(struct elf *elf, struct section *sec, unsigned long offset, unsigned int type, struct section *insn_sec, unsigned long insn_off); From accb8cfd506da1e218a0cd56a11f37885dc32adf Mon Sep 17 00:00:00 2001 From: Thadeu Lima de Souza Cascardo Date: Fri, 1 Jul 2022 11:21:20 -0300 Subject: [PATCH 1506/1842] x86/realmode: build with -D__DISABLE_EXPORTS Commit 156ff4a544ae ("x86/ibt: Base IBT bits") added this option when building realmode in order to disable IBT there. This is also needed in order to disable return thunks. 
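As background, a simplified model of what the define switches off (assumption: this only mimics the spirit of the kernel's export machinery, it is not the real export.h): with -D__DISABLE_EXPORTS the EXPORT_SYMBOL() annotation compiles away, so the standalone realmode object carries no export metadata; the changelog's point about return thunks is that later patches key off the same define to fall back to a plain ret in such standalone code.

/*
 * Simplified model only; the kernel's real include/linux/export.h is more
 * involved. Objects built with -D__DISABLE_EXPORTS (realmode, boot stubs)
 * get a no-op EXPORT_SYMBOL(), so no export table entries are emitted.
 */
#ifdef __DISABLE_EXPORTS
# define EXPORT_SYMBOL(sym)     extern void __no_export_##sym(void)
#else
# define EXPORT_SYMBOL(sym) \
        static const char __export_name_##sym[] __attribute__((used)) = #sym
#endif

int realmode_helper(int x)
{
        return x + 1;
}
EXPORT_SYMBOL(realmode_helper);
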
Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/Makefile b/arch/x86/Makefile index 05f5d28b75eb..1f796050c6dd 100644 --- a/arch/x86/Makefile +++ b/arch/x86/Makefile @@ -31,7 +31,7 @@ endif CODE16GCC_CFLAGS := -m32 -Wa,$(srctree)/arch/x86/boot/code16gcc.h M16_CFLAGS := $(call cc-option, -m16, $(CODE16GCC_CFLAGS)) -REALMODE_CFLAGS := $(M16_CFLAGS) -g -Os -DDISABLE_BRANCH_PROFILING \ +REALMODE_CFLAGS := $(M16_CFLAGS) -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \ -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \ -fno-strict-aliasing -fomit-frame-pointer -fno-pic \ -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none) From 7070bbb66c5303117e4c7651711ea7daae4c64b5 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:32 +0200 Subject: [PATCH 1507/1842] x86/kvm/vmx: Make noinstr clean commit 742ab6df974ae8384a2dd213db1a3a06cf6d8936 upstream. The recent mmio_stale_data fixes broke the noinstr constraints: vmlinux.o: warning: objtool: vmx_vcpu_enter_exit+0x15b: call to wrmsrl.constprop.0() leaves .noinstr.text section vmlinux.o: warning: objtool: vmx_vcpu_enter_exit+0x1bf: call to kvm_arch_has_assigned_device() leaves .noinstr.text section make it all happy again. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/vmx/vmx.c | 6 +++--- arch/x86/kvm/x86.c | 4 ++-- include/linux/kvm_host.h | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index e71e81b2ea52..a1ff92b572de 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -380,9 +380,9 @@ static __always_inline void vmx_disable_fb_clear(struct vcpu_vmx *vmx) if (!vmx->disable_fb_clear) return; - rdmsrl(MSR_IA32_MCU_OPT_CTRL, msr); + msr = __rdmsr(MSR_IA32_MCU_OPT_CTRL); msr |= FB_CLEAR_DIS; - wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr); + native_wrmsrl(MSR_IA32_MCU_OPT_CTRL, msr); /* Cache the MSR value to avoid reading it later */ vmx->msr_ia32_mcu_opt_ctrl = msr; } @@ -393,7 +393,7 @@ static __always_inline void vmx_enable_fb_clear(struct vcpu_vmx *vmx) return; vmx->msr_ia32_mcu_opt_ctrl &= ~FB_CLEAR_DIS; - wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl); + native_wrmsrl(MSR_IA32_MCU_OPT_CTRL, vmx->msr_ia32_mcu_opt_ctrl); } static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index c71f702c037d..29a8ca95c581 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -11173,9 +11173,9 @@ void kvm_arch_end_assignment(struct kvm *kvm) } EXPORT_SYMBOL_GPL(kvm_arch_end_assignment); -bool kvm_arch_has_assigned_device(struct kvm *kvm) +bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm) { - return atomic_read(&kvm->arch.assigned_device_count); + return arch_atomic_read(&kvm->arch.assigned_device_count); } EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c66c702a4f07..439fbe0ee0c7 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -988,7 +988,7 @@ static inline void kvm_arch_end_assignment(struct kvm *kvm) { } -static inline bool kvm_arch_has_assigned_device(struct kvm *kvm) +static __always_inline bool kvm_arch_has_assigned_device(struct kvm *kvm) { return false; } 
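The fix above follows a general rule worth spelling out: a function placed in .noinstr.text may only reach code that is itself noinstr or forcibly inlined, which is why the instrumented wrappers (rdmsrl(), atomic_read()) get swapped for raw variants. A hedged, userspace-flavoured sketch of the constraint; the noinstr macro below is a stand-in, not the kernel's:

#include <stdio.h>

/* Stand-in for the kernel's noinstr annotation; illustration only. */
#define noinstr __attribute__((no_instrument_function, section(".noinstr.text")))

/* Safe to call from noinstr context: no tracing hooks, forcibly inlined. */
static inline __attribute__((always_inline)) unsigned long raw_read_counter(void)
{
        return 42;
}

/*
 * Everything reachable from here must also live in .noinstr.text (or be
 * inlined); calling an instrumented helper instead is what produces the
 * "leaves .noinstr.text section" objtool warnings quoted in the changelog.
 */
noinstr unsigned long enter_guest_fastpath(void)
{
        return raw_read_counter();
}

int main(void)
{
        printf("%lu\n", enter_guest_fastpath());
        return 0;
}
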
From feec5277d5aa9780d4814084262b98af2b1a2242 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:33 +0200 Subject: [PATCH 1508/1842] x86/cpufeatures: Move RETPOLINE flags to word 11 commit a883d624aed463c84c22596006e5a96f5b44db31 upstream. In order to extend the RETPOLINE features to 4, move them to word 11 where there is still room. This mostly keeps DISABLE_RETPOLINE simple. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: bits 8 and 9 of word 11 are also free here, so comment them accordingly] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/cpufeatures.h | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index f6a6ac0b3bcd..cdf07272f160 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -203,8 +203,8 @@ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ -#define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ -#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */ +/* FREE! ( 7*32+12) */ +/* FREE! ( 7*32+13) */ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */ #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */ @@ -290,6 +290,12 @@ #define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */ #define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */ #define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */ +/* FREE! (11*32+ 8) */ +/* FREE! (11*32+ 9) */ +/* FREE! (11*32+10) */ +/* FREE! (11*32+11) */ +#define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ +#define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ From 6a2b142886c52244a9c1dfb0a36971daa963541a Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:34 +0200 Subject: [PATCH 1509/1842] x86/retpoline: Cleanup some #ifdefery commit 369ae6ffc41a3c1137cab697635a84d0cc7cdcea upstream. On its own not much of a cleanup but it prepares for more/similar code.
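For readers following along, a hedged sketch of why the DISABLE_RETPOLINE mask matters (simplified; not the kernel's cpu_feature_enabled()/DISABLED_MASK_BIT_SET machinery, and it already includes the RETHUNK bit that a later patch in this series folds into the same mask): a feature bit set in the build-time disabled mask can never be reported as enabled, so code depending on it can be dropped.

#include <stdio.h>

/* Simplified model of the disabled-features idea, not the kernel's headers. */
#define FEATURE_RETPOLINE       12      /* bit within feature word 11 */
#define FEATURE_RETHUNK         14

#ifdef CONFIG_RETPOLINE
# define DISABLE_RETPOLINE      0u
#else
# define DISABLE_RETPOLINE      ((1u << FEATURE_RETPOLINE) | (1u << FEATURE_RETHUNK))
#endif

#define DISABLED_MASK11         DISABLE_RETPOLINE

/* A build-time disabled feature can never be reported as enabled. */
static int feature_enabled(unsigned int bit, unsigned int runtime_word11)
{
        if (DISABLED_MASK11 & (1u << bit))
                return 0;       /* the kernel does this test at compile time */
        return !!(runtime_word11 & (1u << bit));
}

int main(void)
{
        /* Pretend the CPU set the retpoline bit at runtime. */
        unsigned int word11 = 1u << FEATURE_RETPOLINE;

        printf("retpoline usable: %d\n", feature_enabled(FEATURE_RETPOLINE, word11));
        return 0;
}
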
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: conflict fixup because of DISABLE_ENQCMD] [cascardo: no changes at nospec-branch.h and bpf_jit_comp.c] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/disabled-features.h | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index 09db5b8f1444..a277cc712a13 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -56,6 +56,13 @@ # define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31)) #endif +#ifdef CONFIG_RETPOLINE +# define DISABLE_RETPOLINE 0 +#else +# define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \ + (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31))) +#endif + /* Force disable because it's broken beyond repair */ #define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31)) @@ -73,7 +80,7 @@ #define DISABLED_MASK8 0 #define DISABLED_MASK9 (DISABLE_SMAP) #define DISABLED_MASK10 0 -#define DISABLED_MASK11 0 +#define DISABLED_MASK11 (DISABLE_RETPOLINE) #define DISABLED_MASK12 0 #define DISABLED_MASK13 0 #define DISABLED_MASK14 0 From 3e519ed8d509f5f2e1c67984f3cdf079b725e724 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:35 +0200 Subject: [PATCH 1510/1842] x86/retpoline: Swizzle retpoline thunk commit 00e1533325fd1fb5459229fe37f235462649f668 upstream. Put the actual retpoline thunk as the original code so that it can become more complicated. Specifically, it allows RET to be a JMP, which can't be .altinstr_replacement since that doesn't do relocations (except for the very first instruction). Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/lib/retpoline.S | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index afbdda539b80..c013241d9188 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -32,9 +32,9 @@ SYM_INNER_LABEL(__x86_indirect_thunk_\reg, SYM_L_GLOBAL) UNWIND_HINT_EMPTY - ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \ - __stringify(RETPOLINE \reg), X86_FEATURE_RETPOLINE, \ - __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg; int3), X86_FEATURE_RETPOLINE_LFENCE + ALTERNATIVE_2 __stringify(RETPOLINE \reg), \ + __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg; int3), X86_FEATURE_RETPOLINE_LFENCE, \ + __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), ALT_NOT(X86_FEATURE_RETPOLINE) .endm From 37b9bb094123a14a986137d693b5aa18a240128b Mon Sep 17 00:00:00 2001 From: Ben Hutchings Date: Mon, 11 Jul 2022 00:31:38 +0200 Subject: [PATCH 1511/1842] Makefile: Set retpoline cflags based on CONFIG_CC_IS_{CLANG,GCC} This was done as part of commit 7d73c3e9c51400d3e0e755488050804e4d44737a "Makefile: remove stale cc-option checks" upstream, and is needed to support backporting further retpoline changes. 
Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- Makefile | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/Makefile b/Makefile index 5bee8f281b06..19dcd74925bd 100644 --- a/Makefile +++ b/Makefile @@ -670,12 +670,14 @@ ifdef CONFIG_FUNCTION_TRACER CC_FLAGS_FTRACE := -pg endif -RETPOLINE_CFLAGS_GCC := -mindirect-branch=thunk-extern -mindirect-branch-register -RETPOLINE_VDSO_CFLAGS_GCC := -mindirect-branch=thunk-inline -mindirect-branch-register -RETPOLINE_CFLAGS_CLANG := -mretpoline-external-thunk -RETPOLINE_VDSO_CFLAGS_CLANG := -mretpoline -RETPOLINE_CFLAGS := $(call cc-option,$(RETPOLINE_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_CFLAGS_CLANG))) -RETPOLINE_VDSO_CFLAGS := $(call cc-option,$(RETPOLINE_VDSO_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_VDSO_CFLAGS_CLANG))) +ifdef CONFIG_CC_IS_GCC +RETPOLINE_CFLAGS := $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register) +RETPOLINE_VDSO_CFLAGS := $(call cc-option,-mindirect-branch=thunk-inline -mindirect-branch-register) +endif +ifdef CONFIG_CC_IS_CLANG +RETPOLINE_CFLAGS := -mretpoline-external-thunk +RETPOLINE_VDSO_CFLAGS := -mretpoline +endif export RETPOLINE_CFLAGS export RETPOLINE_VDSO_CFLAGS From 270de63cf4a380fe9942d3e0da599c0e966fad78 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:36 +0200 Subject: [PATCH 1512/1842] x86/retpoline: Use -mfunction-return commit 0b53c374b9eff2255a386f1f1cfb9a928e52a5ae upstream. Utilize -mfunction-return=thunk-extern when available to have the compiler replace RET instructions with direct JMPs to the symbol __x86_return_thunk. This does not affect assembler (.S) sources, only C sources. -mfunction-return=thunk-extern has been available since gcc 7.3 and clang 15. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Nick Desaulniers Reviewed-by: Josh Poimboeuf Tested-by: Nick Desaulniers Signed-off-by: Borislav Petkov [cascardo: RETPOLINE_CFLAGS is at Makefile] [cascardo: remove ANNOTATE_NOENDBR from __x86_return_thunk] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 ++ arch/x86/include/asm/nospec-branch.h | 2 ++ arch/x86/lib/retpoline.S | 12 ++++++++++++ 3 files changed, 16 insertions(+) diff --git a/Makefile b/Makefile index 19dcd74925bd..4d502a81d24b 100644 --- a/Makefile +++ b/Makefile @@ -672,11 +672,13 @@ endif ifdef CONFIG_CC_IS_GCC RETPOLINE_CFLAGS := $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register) +RETPOLINE_CFLAGS += $(call cc-option,-mfunction-return=thunk-extern) RETPOLINE_VDSO_CFLAGS := $(call cc-option,-mindirect-branch=thunk-inline -mindirect-branch-register) endif ifdef CONFIG_CC_IS_CLANG RETPOLINE_CFLAGS := -mretpoline-external-thunk RETPOLINE_VDSO_CFLAGS := -mretpoline +RETPOLINE_CFLAGS += $(call cc-option,-mfunction-return=thunk-extern) endif export RETPOLINE_CFLAGS export RETPOLINE_VDSO_CFLAGS diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 1bb47df51c26..20dc331243e6 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -120,6 +120,8 @@ _ASM_PTR " 999b\n\t" \ ".popsection\n\t" +extern void __x86_return_thunk(void); + #ifdef CONFIG_RETPOLINE typedef u8 retpoline_thunk_t[RETPOLINE_THUNK_SIZE]; diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index c013241d9188..01667ea9da02 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -66,3 +66,15 @@ 
SYM_CODE_END(__x86_indirect_thunk_array) #define GEN(reg) EXPORT_THUNK(reg) #include #undef GEN + +/* + * This function name is magical and is used by -mfunction-return=thunk-extern + * for the compiler to generate JMPs to it. + */ +SYM_CODE_START(__x86_return_thunk) + UNWIND_HINT_EMPTY + ret + int3 +SYM_CODE_END(__x86_return_thunk) + +__EXPORT_THUNK(__x86_return_thunk) From 716410960ba0a2d2c3f59cb46315467c9faf59b2 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:37 +0200 Subject: [PATCH 1513/1842] x86: Undo return-thunk damage commit 15e67227c49a57837108acfe1c80570e1bd9f962 upstream. Introduce X86_FEATURE_RETHUNK for those afflicted with needing this. [ bp: Do only INT3 padding - simpler. ] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: CONFIG_STACK_VALIDATION vs CONFIG_OBJTOOL] [cascardo: no IBT support] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/alternative.h | 1 + arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/disabled-features.h | 3 +- arch/x86/kernel/alternative.c | 60 ++++++++++++++++++++++++ arch/x86/kernel/module.c | 8 +++- arch/x86/kernel/vmlinux.lds.S | 7 +++ 6 files changed, 78 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h index 3ac7460933af..0e777b27972b 100644 --- a/arch/x86/include/asm/alternative.h +++ b/arch/x86/include/asm/alternative.h @@ -76,6 +76,7 @@ extern int alternatives_patched; extern void alternative_instructions(void); extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end); extern void apply_retpolines(s32 *start, s32 *end); +extern void apply_returns(s32 *start, s32 *end); struct module; diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index cdf07272f160..d0d2bd5eef61 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -296,6 +296,7 @@ /* FREE! 
(11*32+11) */ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ +#define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index a277cc712a13..d678509756ce 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -60,7 +60,8 @@ # define DISABLE_RETPOLINE 0 #else # define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \ - (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31))) + (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)) | \ + (1 << (X86_FEATURE_RETHUNK & 31))) #endif /* Force disable because it's broken beyond repair */ diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 8412048bc037..071862854778 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -270,6 +270,7 @@ static void __init_or_module add_nops(void *insns, unsigned int len) } extern s32 __retpoline_sites[], __retpoline_sites_end[]; +extern s32 __return_sites[], __return_sites_end[]; extern struct alt_instr __alt_instructions[], __alt_instructions_end[]; extern s32 __smp_locks[], __smp_locks_end[]; void text_poke_early(void *addr, const void *opcode, size_t len); @@ -661,9 +662,67 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) } } +/* + * Rewrite the compiler generated return thunk tail-calls. + * + * For example, convert: + * + * JMP __x86_return_thunk + * + * into: + * + * RET + */ +static int patch_return(void *addr, struct insn *insn, u8 *bytes) +{ + int i = 0; + + if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) + return -1; + + bytes[i++] = RET_INSN_OPCODE; + + for (; i < insn->length;) + bytes[i++] = INT3_INSN_OPCODE; + + return i; +} + +void __init_or_module noinline apply_returns(s32 *start, s32 *end) +{ + s32 *s; + + for (s = start; s < end; s++) { + void *addr = (void *)s + *s; + struct insn insn; + int len, ret; + u8 bytes[16]; + u8 op1; + + ret = insn_decode_kernel(&insn, addr); + if (WARN_ON_ONCE(ret < 0)) + continue; + + op1 = insn.opcode.bytes[0]; + if (WARN_ON_ONCE(op1 != JMP32_INSN_OPCODE)) + continue; + + DPRINTK("return thunk at: %pS (%px) len: %d to: %pS", + addr, addr, insn.length, + addr + insn.length + insn.immediate.value); + + len = patch_return(addr, &insn, bytes); + if (len == insn.length) { + DUMP_BYTES(((u8*)addr), len, "%px: orig: ", addr); + DUMP_BYTES(((u8*)bytes), len, "%px: repl: ", addr); + text_poke_early(addr, bytes, len); + } + } +} #else /* !RETPOLINES || !CONFIG_STACK_VALIDATION */ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { } +void __init_or_module noinline apply_returns(s32 *start, s32 *end) { } #endif /* CONFIG_RETPOLINE && CONFIG_STACK_VALIDATION */ @@ -956,6 +1015,7 @@ void __init alternative_instructions(void) * those can rewrite the retpoline thunks. 
*/ apply_retpolines(__retpoline_sites, __retpoline_sites_end); + apply_returns(__return_sites, __return_sites_end); apply_alternatives(__alt_instructions, __alt_instructions_end); diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c index 169fb6f4cd2e..455e195847f9 100644 --- a/arch/x86/kernel/module.c +++ b/arch/x86/kernel/module.c @@ -252,7 +252,7 @@ int module_finalize(const Elf_Ehdr *hdr, { const Elf_Shdr *s, *text = NULL, *alt = NULL, *locks = NULL, *para = NULL, *orc = NULL, *orc_ip = NULL, - *retpolines = NULL; + *retpolines = NULL, *returns = NULL; char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset; for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) { @@ -270,12 +270,18 @@ int module_finalize(const Elf_Ehdr *hdr, orc_ip = s; if (!strcmp(".retpoline_sites", secstrings + s->sh_name)) retpolines = s; + if (!strcmp(".return_sites", secstrings + s->sh_name)) + returns = s; } if (retpolines) { void *rseg = (void *)retpolines->sh_addr; apply_retpolines(rseg, rseg + retpolines->sh_size); } + if (returns) { + void *rseg = (void *)returns->sh_addr; + apply_returns(rseg, rseg + returns->sh_size); + } if (alt) { /* patch .altinstructions */ void *aseg = (void *)alt->sh_addr; diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index d7085a94cfed..b6c4dbaf668d 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -284,6 +284,13 @@ SECTIONS *(.retpoline_sites) __retpoline_sites_end = .; } + + . = ALIGN(8); + .return_sites : AT(ADDR(.return_sites) - LOAD_OFFSET) { + __return_sites = .; + *(.return_sites) + __return_sites_end = .; + } #endif /* From 8bdb25f7aee312450e9c9ac21ae209d9cf0602e5 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:38 +0200 Subject: [PATCH 1514/1842] x86,objtool: Create .return_sites commit d9e9d2300681d68a775c28de6aa6e5290ae17796 upstream. Find all the return-thunk sites and record them in a .return_sites section such that the kernel can undo this. 
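For illustration, the site-table walk can be modelled in plain C (a standalone sketch, not kernel code; names are made up). Each .return_sites entry is a 32-bit offset relative to the entry itself, so adding the entry's own address recovers the absolute site address, exactly the "(void *)s + *s" pattern apply_returns() uses:

  #include <stdint.h>
  #include <stdio.h>

  /* Walk a table of self-relative 32-bit offsets: entry = target - &entry. */
  static void walk_sites(int32_t *start, int32_t *end)
  {
          for (int32_t *s = start; s < end; s++) {
                  void *addr = (void *)s + *s;    /* GNU C void * arithmetic, as in apply_returns() */
                  printf("return site at %p\n", addr);
          }
  }

  int main(void)
  {
          static int target;
          static int32_t table[1];

          /* Record "target" as a self-relative offset, then walk it back. */
          table[0] = (int32_t)((char *)&target - (char *)&table[0]);
          walk_sites(table, table + 1);
          printf("expected       %p\n", (void *)&target);
          return 0;
  }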
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: conflict fixup because of functions added to support IBT] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/arch.h | 1 + tools/objtool/arch/x86/decode.c | 5 +++ tools/objtool/check.c | 75 +++++++++++++++++++++++++++++++++ tools/objtool/elf.h | 1 + tools/objtool/objtool.c | 1 + tools/objtool/objtool.h | 1 + 6 files changed, 84 insertions(+) diff --git a/tools/objtool/arch.h b/tools/objtool/arch.h index e15f6e932b4f..580ce1857585 100644 --- a/tools/objtool/arch.h +++ b/tools/objtool/arch.h @@ -89,6 +89,7 @@ const char *arch_ret_insn(int len); int arch_decode_hint_reg(u8 sp_reg, int *base); bool arch_is_retpoline(struct symbol *sym); +bool arch_is_rethunk(struct symbol *sym); int arch_rewrite_retpolines(struct objtool_file *file); diff --git a/tools/objtool/arch/x86/decode.c b/tools/objtool/arch/x86/decode.c index f7154241a2a5..d8f47704fd85 100644 --- a/tools/objtool/arch/x86/decode.c +++ b/tools/objtool/arch/x86/decode.c @@ -649,3 +649,8 @@ bool arch_is_retpoline(struct symbol *sym) { return !strncmp(sym->name, "__x86_indirect_", 15); } + +bool arch_is_rethunk(struct symbol *sym) +{ + return !strcmp(sym->name, "__x86_return_thunk"); +} diff --git a/tools/objtool/check.c b/tools/objtool/check.c index c5419db93140..32203e458e44 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -653,6 +653,52 @@ static int create_retpoline_sites_sections(struct objtool_file *file) return 0; } +static int create_return_sites_sections(struct objtool_file *file) +{ + struct instruction *insn; + struct section *sec; + int idx; + + sec = find_section_by_name(file->elf, ".return_sites"); + if (sec) { + WARN("file already has .return_sites, skipping"); + return 0; + } + + idx = 0; + list_for_each_entry(insn, &file->return_thunk_list, call_node) + idx++; + + if (!idx) + return 0; + + sec = elf_create_section(file->elf, ".return_sites", 0, + sizeof(int), idx); + if (!sec) { + WARN("elf_create_section: .return_sites"); + return -1; + } + + idx = 0; + list_for_each_entry(insn, &file->return_thunk_list, call_node) { + + int *site = (int *)sec->data->d_buf + idx; + *site = 0; + + if (elf_add_reloc_to_insn(file->elf, sec, + idx * sizeof(int), + R_X86_64_PC32, + insn->sec, insn->offset)) { + WARN("elf_add_reloc_to_insn: .return_sites"); + return -1; + } + + idx++; + } + + return 0; +} + /* * Warnings shouldn't be reported for ignored functions. */ @@ -888,6 +934,11 @@ __weak bool arch_is_retpoline(struct symbol *sym) return false; } +__weak bool arch_is_rethunk(struct symbol *sym) +{ + return false; +} + #define NEGATIVE_RELOC ((void *)-1L) static struct reloc *insn_reloc(struct objtool_file *file, struct instruction *insn) @@ -1029,6 +1080,19 @@ static void add_retpoline_call(struct objtool_file *file, struct instruction *in annotate_call_site(file, insn, false); } + +static void add_return_call(struct objtool_file *file, struct instruction *insn) +{ + /* + * Return thunk tail calls are really just returns in disguise, + * so convert them accordingly. + */ + insn->type = INSN_RETURN; + insn->retpoline_safe = true; + + list_add_tail(&insn->call_node, &file->return_thunk_list); +} + /* * Find the destination instructions for all jumps. 
*/ @@ -1053,6 +1117,9 @@ static int add_jump_destinations(struct objtool_file *file) } else if (reloc->sym->retpoline_thunk) { add_retpoline_call(file, insn); continue; + } else if (reloc->sym->return_thunk) { + add_return_call(file, insn); + continue; } else if (insn->func) { /* internal or external sibling call (with reloc) */ add_call_dest(file, insn, reloc->sym, true); @@ -1842,6 +1909,9 @@ static int classify_symbols(struct objtool_file *file) if (arch_is_retpoline(func)) func->retpoline_thunk = true; + if (arch_is_rethunk(func)) + func->return_thunk = true; + if (!strcmp(func->name, "__fentry__")) func->fentry = true; @@ -3235,6 +3305,11 @@ int check(struct objtool_file *file) if (ret < 0) goto out; warnings += ret; + + ret = create_return_sites_sections(file); + if (ret < 0) + goto out; + warnings += ret; } if (stats) { diff --git a/tools/objtool/elf.h b/tools/objtool/elf.h index 875f9f475851..a1863eb35fbb 100644 --- a/tools/objtool/elf.h +++ b/tools/objtool/elf.h @@ -58,6 +58,7 @@ struct symbol { u8 uaccess_safe : 1; u8 static_call_tramp : 1; u8 retpoline_thunk : 1; + u8 return_thunk : 1; u8 fentry : 1; u8 kcov : 1; }; diff --git a/tools/objtool/objtool.c b/tools/objtool/objtool.c index c0b1f5a99df9..cb2c6acd9667 100644 --- a/tools/objtool/objtool.c +++ b/tools/objtool/objtool.c @@ -62,6 +62,7 @@ struct objtool_file *objtool_open_read(const char *_objname) INIT_LIST_HEAD(&file.insn_list); hash_init(file.insn_hash); INIT_LIST_HEAD(&file.retpoline_call_list); + INIT_LIST_HEAD(&file.return_thunk_list); INIT_LIST_HEAD(&file.static_call_list); file.c_file = !vmlinux && find_section_by_name(file.elf, ".comment"); file.ignore_unreachables = no_unreachable; diff --git a/tools/objtool/objtool.h b/tools/objtool/objtool.h index 0b57150ea0fe..bf64946e749b 100644 --- a/tools/objtool/objtool.h +++ b/tools/objtool/objtool.h @@ -19,6 +19,7 @@ struct objtool_file { struct list_head insn_list; DECLARE_HASHTABLE(insn_hash, 20); struct list_head retpoline_call_list; + struct list_head return_thunk_list; struct list_head static_call_list; bool ignore_unreachables, c_file, hints, rodata; }; From 446eb6f08936e6f87bea9f35be05556a7211df9b Mon Sep 17 00:00:00 2001 From: Thadeu Lima de Souza Cascardo Date: Fri, 1 Jul 2022 09:00:45 -0300 Subject: [PATCH 1515/1842] objtool: skip non-text sections when adding return-thunk sites The .discard.text section is added in order to reserve BRK, with a temporary function just so it can give it a size. This adds a relocation to the return thunk, which objtool will add to the .return_sites section. Linking will then fail as there are references to the .discard.text section. Do not add instructions from non-text sections to the list of return thunk calls, avoiding the reference to .discard.text. 
Signed-off-by: Thadeu Lima de Souza Cascardo Acked-by: Josh Poimboeuf Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 32203e458e44..95c2fd945a1b 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1090,7 +1090,9 @@ static void add_return_call(struct objtool_file *file, struct instruction *insn) insn->type = INSN_RETURN; insn->retpoline_safe = true; - list_add_tail(&insn->call_node, &file->return_thunk_list); + /* Skip the non-text sections, specially .discard ones */ + if (insn->sec->text) + list_add_tail(&insn->call_node, &file->return_thunk_list); } /* From 7723edf5edfdfdabd8234e45142be86598a04cad Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:39 +0200 Subject: [PATCH 1516/1842] x86,static_call: Use alternative RET encoding commit ee88d363d15617ff50ac24fab0ffec11113b2aeb upstream. In addition to teaching static_call about the new way to spell 'RET', there is an added complication in that static_call() is allowed to rewrite text before it is known which particular spelling is required. In order to deal with this; have a static_call specific fixup in the apply_return() 'alternative' patching routine that will rewrite the static_call trampoline to match the definite sequence. This in turn creates the problem of uniquely identifying static call trampolines. Currently trampolines are 8 bytes, the first 5 being the jmp.d32/ret sequence and the final 3 a byte sequence that spells out 'SCT'. This sequence is used in __static_call_validate() to ensure it is patching a trampoline and not a random other jmp.d32. That is, false-positives shouldn't be plenty, but aren't a big concern. OTOH the new __static_call_fixup() must not have false-positives, and 'SCT' decodes to the somewhat weird but semi plausible sequence: push %rbx rex.XB push %r12 Additionally, there are SLS concerns with immediate jumps. Combined it seems like a good moment to change the signature to a single 3 byte trap instruction that is unique to this usage and will not ever get generated by accident. As such, change the signature to: '0x0f, 0xb9, 0xcc', which decodes to: ud1 %esp, %ecx Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: skip validation as introduced by 2105a92748e8 ("static_call,x86: Robustify trampoline patching")] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/static_call.h | 17 +++++++++++++ arch/x86/kernel/alternative.c | 12 ++++++---- arch/x86/kernel/static_call.c | 38 +++++++++++++++++++++++++++++- 3 files changed, 62 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h index 343234569392..7efce22d6355 100644 --- a/arch/x86/include/asm/static_call.h +++ b/arch/x86/include/asm/static_call.h @@ -21,6 +21,16 @@ * relative displacement across sections. */ +/* + * The trampoline is 8 bytes and of the general form: + * + * jmp.d32 \func + * ud1 %esp, %ecx + * + * That trailing #UD provides both a speculation stop and serves as a unique + * 3 byte signature identifying static call trampolines. Also see tramp_ud[] + * and __static_call_fixup(). 
+ */ #define __ARCH_DEFINE_STATIC_CALL_TRAMP(name, insns) \ asm(".pushsection .static_call.text, \"ax\" \n" \ ".align 4 \n" \ @@ -34,8 +44,13 @@ #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. + 4)") +#ifdef CONFIG_RETPOLINE +#define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \ + __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk") +#else #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "ret; int3; nop; nop; nop") +#endif #define ARCH_ADD_TRAMP_KEY(name) \ @@ -44,4 +59,6 @@ ".long " STATIC_CALL_KEY_STR(name) " - . \n" \ ".popsection \n") +extern bool __static_call_fixup(void *tramp, u8 op, void *dest); + #endif /* _ASM_STATIC_CALL_H */ diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 071862854778..1fded2daaa2e 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -693,18 +693,22 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end) s32 *s; for (s = start; s < end; s++) { - void *addr = (void *)s + *s; + void *dest = NULL, *addr = (void *)s + *s; struct insn insn; int len, ret; u8 bytes[16]; - u8 op1; + u8 op; ret = insn_decode_kernel(&insn, addr); if (WARN_ON_ONCE(ret < 0)) continue; - op1 = insn.opcode.bytes[0]; - if (WARN_ON_ONCE(op1 != JMP32_INSN_OPCODE)) + op = insn.opcode.bytes[0]; + if (op == JMP32_INSN_OPCODE) + dest = addr + insn.length + insn.immediate.value; + + if (__static_call_fixup(addr, op, dest) || + WARN_ON_ONCE(dest != &__x86_return_thunk)) continue; DPRINTK("return thunk at: %pS (%px) len: %d to: %pS", diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c index 32d673bbc3ef..02e471c27efa 100644 --- a/arch/x86/kernel/static_call.c +++ b/arch/x86/kernel/static_call.c @@ -11,6 +11,13 @@ enum insn_type { RET = 3, /* tramp / site cond-tail-call */ }; +/* + * ud1 %esp, %ecx - a 3 byte #UD that is unique to trampolines, chosen such + * that there is no false-positive trampoline identification while also being a + * speculation stop. + */ +static const u8 tramp_ud[] = { 0x0f, 0xb9, 0xcc }; + static const u8 retinsn[] = { RET_INSN_OPCODE, 0xcc, 0xcc, 0xcc, 0xcc }; static void __ref __static_call_transform(void *insn, enum insn_type type, void *func) @@ -32,7 +39,10 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, void break; case RET: - code = &retinsn; + if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) + code = text_gen_insn(JMP32_INSN_OPCODE, insn, &__x86_return_thunk); + else + code = &retinsn; break; } @@ -97,3 +107,29 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) mutex_unlock(&text_mutex); } EXPORT_SYMBOL_GPL(arch_static_call_transform); + +#ifdef CONFIG_RETPOLINE +/* + * This is called by apply_returns() to fix up static call trampolines, + * specifically ARCH_DEFINE_STATIC_CALL_NULL_TRAMP which is recorded as + * having a return trampoline. + * + * The problem is that static_call() is available before determining + * X86_FEATURE_RETHUNK and, by implication, running alternatives. + * + * This means that __static_call_transform() above can have overwritten the + * return trampoline and we now need to fix things up to be consistent. + */ +bool __static_call_fixup(void *tramp, u8 op, void *dest) +{ + if (memcmp(tramp+5, tramp_ud, 3)) { + /* Not a trampoline site, not our problem. 
*/ + return false; + } + + if (op == RET_INSN_OPCODE || dest == &__x86_return_thunk) + __static_call_transform(tramp, RET, NULL); + + return true; +} +#endif From 00b136bb6254e0abf6aaafe62c4da5f6c4fea4cb Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:40 +0200 Subject: [PATCH 1517/1842] x86/ftrace: Use alternative RET encoding commit 1f001e9da6bbf482311e45e48f53c2bd2179e59c upstream. Use the return thunk in ftrace trampolines, if needed. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: still copy return from ftrace_stub] [cascardo: use memcpy(text_gen_insn) as there is no __text_gen_insn] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/ftrace.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c index fbcd144260ac..dca5cf82144c 100644 --- a/arch/x86/kernel/ftrace.c +++ b/arch/x86/kernel/ftrace.c @@ -308,7 +308,7 @@ union ftrace_op_code_union { } __attribute__((packed)); }; -#define RET_SIZE 1 + IS_ENABLED(CONFIG_SLS) +#define RET_SIZE (IS_ENABLED(CONFIG_RETPOLINE) ? 5 : 1 + IS_ENABLED(CONFIG_SLS)) static unsigned long create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) @@ -367,7 +367,10 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size) /* The trampoline ends with ret(q) */ retq = (unsigned long)ftrace_stub; - ret = copy_from_kernel_nofault(ip, (void *)retq, RET_SIZE); + if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) + memcpy(ip, text_gen_insn(JMP32_INSN_OPCODE, ip, &__x86_return_thunk), JMP32_INSN_SIZE); + else + ret = copy_from_kernel_nofault(ip, (void *)retq, RET_SIZE); if (WARN_ON(ret < 0)) goto fail; From e0e06a922706204df43d50032c05af75d8e75f8e Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:41 +0200 Subject: [PATCH 1518/1842] x86/bpf: Use alternative RET encoding commit d77cfe594ad50e0bf95d457e02ccd578791b2a15 upstream. Use the return thunk in eBPF generated code, if needed. 
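As a rough, standalone illustration of the two encodings the JIT now chooses between (the helper name and layout here are made up, not the actual emit_return()):

  #include <stdint.h>
  #include <string.h>

  /*
   * Emit either "jmp __x86_return_thunk" (E9 + rel32, little-endian as on
   * x86) or a bare "ret", depending on whether the return thunk is in use.
   */
  static int sketch_emit_ret(uint8_t *buf, const uint8_t *ip,
                             const uint8_t *thunk, int use_rethunk)
  {
          if (use_rethunk) {
                  int32_t rel = (int32_t)(thunk - (ip + 5)); /* rel32 counts from the end of the 5-byte insn */

                  buf[0] = 0xE9;                  /* JMP rel32 */
                  memcpy(&buf[1], &rel, sizeof(rel));
                  return 5;
          }
          buf[0] = 0xC3;                          /* RET */
          return 1;
  }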
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: add the necessary cnt variable to emit_return()] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/net/bpf_jit_comp.c | 20 ++++++++++++++++++-- 1 file changed, 18 insertions(+), 2 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 2563aef35bc0..8e3c3d8916dd 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -399,6 +399,22 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip) *pprog = prog; } +static void emit_return(u8 **pprog, u8 *ip) +{ + u8 *prog = *pprog; + int cnt = 0; + + if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) { + emit_jump(&prog, &__x86_return_thunk, ip); + } else { + EMIT1(0xC3); /* ret */ + if (IS_ENABLED(CONFIG_SLS)) + EMIT1(0xCC); /* int3 */ + } + + *pprog = prog; +} + /* * Generate the following code: * @@ -1443,7 +1459,7 @@ xadd: if (is_imm8(insn->off)) ctx->cleanup_addr = proglen; pop_callee_regs(&prog, callee_regs_used); EMIT1(0xC9); /* leave */ - EMIT1(0xC3); /* ret */ + emit_return(&prog, image + addrs[i - 1] + (prog - temp)); break; default: @@ -1884,7 +1900,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i if (flags & BPF_TRAMP_F_SKIP_FRAME) /* skip our return address and return to parent */ EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */ - EMIT1(0xC3); /* ret */ + emit_return(&prog, prog); /* Make sure the trampoline generation logic doesn't overflow */ if (WARN_ON_ONCE(prog > (u8 *)image_end - BPF_INSN_SAFETY)) { ret = -EFAULT; From ee4996f07d868ee6cc7e76151dfab9a2344cdeb0 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:42 +0200 Subject: [PATCH 1519/1842] x86/kvm: Fix SETcc emulation for return thunks commit af2e140f34208a5dfb6b7a8ad2d56bda88f0524d upstream. Prepare the SETcc fastop stuff for when RET can be larger still. The tricky bit here is that the expressions should not only be constant C expressions, but also absolute GAS expressions. This means no ?: and 'true' is ~0. Also ensure em_setcc() has the same alignment as the actual FOP_SETCC() ops, this ensures there cannot be an alignment hole between em_setcc() and the first op. Additionally, add a .skip directive to the FOP_SETCC() macro to fill any remaining space with INT3 traps; however the primary purpose of this directive is to generate AS warnings when the remaining space goes negative. Which is a very good indication the alignment magic went side-ways. 
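The length/alignment arithmetic can be checked in isolation; a small sketch (illustrative only) that evaluates the same expressions the patch uses for RET_LENGTH, SETCC_LENGTH and SETCC_ALIGN across the possible configs:

  #include <stdio.h>

  int main(void)
  {
          /* RET is 1 byte, or 5 with a return thunk; SLS adds one INT3. */
          for (int retpoline = 0; retpoline <= 1; retpoline++) {
                  for (int sls = 0; sls <= 1; sls++) {
                          int ret_len = 1 + 4 * retpoline + sls;
                          int len = 3 + ret_len;          /* SETcc itself is 3 bytes */
                          /*
                           * Round up to the next power of two (4, 8 or 16)
                           * without ?: so the expression stays a valid
                           * absolute GAS expression.
                           */
                          int align = 4 << ((len > 4) & 1) << ((len > 8) & 1);

                          printf("retpoline=%d sls=%d: length=%d align=%d\n",
                                 retpoline, sls, len, align);
                  }
          }
          return 0;
  }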
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: ignore ENDBR when computing SETCC_LENGTH] [cascardo: conflict fixup] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/emulate.c | 26 ++++++++++++++------------ 1 file changed, 14 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index bdcbf23b8b1e..1f953f230ad5 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -322,13 +322,15 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); #define FOP_RET(name) \ __FOP_RET(#name) -#define FOP_START(op) \ +#define __FOP_START(op, align) \ extern void em_##op(struct fastop *fake); \ asm(".pushsection .text, \"ax\" \n\t" \ ".global em_" #op " \n\t" \ - ".align " __stringify(FASTOP_SIZE) " \n\t" \ + ".align " __stringify(align) " \n\t" \ "em_" #op ":\n\t" +#define FOP_START(op) __FOP_START(op, FASTOP_SIZE) + #define FOP_END \ ".popsection") @@ -432,15 +434,14 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); /* * Depending on .config the SETcc functions look like: * - * SETcc %al [3 bytes] - * RET [1 byte] - * INT3 [1 byte; CONFIG_SLS] - * - * Which gives possible sizes 4 or 5. When rounded up to the - * next power-of-two alignment they become 4 or 8. + * SETcc %al [3 bytes] + * RET | JMP __x86_return_thunk [1,5 bytes; CONFIG_RETPOLINE] + * INT3 [1 byte; CONFIG_SLS] */ -#define SETCC_LENGTH (4 + IS_ENABLED(CONFIG_SLS)) -#define SETCC_ALIGN (4 << IS_ENABLED(CONFIG_SLS)) +#define RET_LENGTH (1 + (4 * IS_ENABLED(CONFIG_RETPOLINE)) + \ + IS_ENABLED(CONFIG_SLS)) +#define SETCC_LENGTH (3 + RET_LENGTH) +#define SETCC_ALIGN (4 << ((SETCC_LENGTH > 4) & 1) << ((SETCC_LENGTH > 8) & 1)) static_assert(SETCC_LENGTH <= SETCC_ALIGN); #define FOP_SETCC(op) \ @@ -448,14 +449,15 @@ static_assert(SETCC_LENGTH <= SETCC_ALIGN); ".type " #op ", @function \n\t" \ #op ": \n\t" \ #op " %al \n\t" \ - __FOP_RET(#op) + __FOP_RET(#op) \ + ".skip " __stringify(SETCC_ALIGN) " - (.-" #op "), 0xcc \n\t" asm(".pushsection .fixup, \"ax\"\n" ".global kvm_fastop_exception \n" "kvm_fastop_exception: xor %esi, %esi; " ASM_RET ".popsection"); -FOP_START(setcc) +__FOP_START(setcc, SETCC_ALIGN) FOP_SETCC(seto) FOP_SETCC(setno) FOP_SETCC(setc) From d6eb50e9b7245a238872a9a969f84993339780a5 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:43 +0200 Subject: [PATCH 1520/1842] x86/vsyscall_emu/64: Don't use RET in vsyscall emulation commit 15583e514eb16744b80be85dea0774ece153177d upstream. This is userspace code and doesn't play by the normal kernel rules. 
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/vsyscall/vsyscall_emu_64.S | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/x86/entry/vsyscall/vsyscall_emu_64.S b/arch/x86/entry/vsyscall/vsyscall_emu_64.S index 15e35159ebb6..ef2dd1827243 100644 --- a/arch/x86/entry/vsyscall/vsyscall_emu_64.S +++ b/arch/x86/entry/vsyscall/vsyscall_emu_64.S @@ -19,17 +19,20 @@ __vsyscall_page: mov $__NR_gettimeofday, %rax syscall - RET + ret + int3 .balign 1024, 0xcc mov $__NR_time, %rax syscall - RET + ret + int3 .balign 1024, 0xcc mov $__NR_getcpu, %rax syscall - RET + ret + int3 .balign 4096, 0xcc From 5b2edaf709b50c81b3c6ddb745c8a76ab6632645 Mon Sep 17 00:00:00 2001 From: Kim Phillips Date: Tue, 14 Jun 2022 23:15:44 +0200 Subject: [PATCH 1521/1842] x86/sev: Avoid using __x86_return_thunk commit 0ee9073000e8791f8b134a8ded31bcc767f7f232 upstream. Specifically, it's because __enc_copy() encrypts the kernel after being relocated outside the kernel in sme_encrypt_execute(), and the RET macro's jmp offset isn't amended prior to execution. Signed-off-by: Kim Phillips Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/mm/mem_encrypt_boot.S | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S index 3c2bc5562fc2..a186007a50d3 100644 --- a/arch/x86/mm/mem_encrypt_boot.S +++ b/arch/x86/mm/mem_encrypt_boot.S @@ -65,7 +65,9 @@ SYM_FUNC_START(sme_encrypt_execute) movq %rbp, %rsp /* Restore original stack pointer */ pop %rbp - RET + /* Offset to __x86_return_thunk would be wrong here */ + ret + int3 SYM_FUNC_END(sme_encrypt_execute) SYM_FUNC_START(__enc_copy) @@ -151,6 +153,8 @@ SYM_FUNC_START(__enc_copy) pop %r12 pop %r15 - RET + /* Offset to __x86_return_thunk would be wrong here */ + ret + int3 .L__enc_copy_end: SYM_FUNC_END(__enc_copy) From c9eb5dcdc8f4a848b45b97725f5a2b8d324bb31a Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:45 +0200 Subject: [PATCH 1522/1842] x86: Use return-thunk in asm code commit aa3d480315ba6c3025a60958e1981072ea37c3df upstream. Use the return thunk in asm code. If the thunk isn't needed, it will get patched into a RET instruction during boot by apply_returns(). Since alternatives can't handle relocations outside of the first instruction, putting a 'jmp __x86_return_thunk' in one is not valid, therefore carve out the memmove ERMS path into a separate label and jump to it. 
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: no RANDSTRUCT_CFLAGS] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/vdso/Makefile | 1 + arch/x86/include/asm/linkage.h | 8 ++++++++ arch/x86/lib/memmove_64.S | 7 ++++++- 3 files changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile index 21243747965d..f181220f1b5d 100644 --- a/arch/x86/entry/vdso/Makefile +++ b/arch/x86/entry/vdso/Makefile @@ -91,6 +91,7 @@ endif endif $(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL) +$(vobjs): KBUILD_AFLAGS += -DBUILD_VDSO # # vDSO code runs in userspace and -pg doesn't help with profiling anyway. diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h index 030907922bd0..d04e61c2f863 100644 --- a/arch/x86/include/asm/linkage.h +++ b/arch/x86/include/asm/linkage.h @@ -18,19 +18,27 @@ #define __ALIGN_STR __stringify(__ALIGN) #endif +#if defined(CONFIG_RETPOLINE) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO) +#define RET jmp __x86_return_thunk +#else /* CONFIG_RETPOLINE */ #ifdef CONFIG_SLS #define RET ret; int3 #else #define RET ret #endif +#endif /* CONFIG_RETPOLINE */ #else /* __ASSEMBLY__ */ +#if defined(CONFIG_RETPOLINE) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO) +#define ASM_RET "jmp __x86_return_thunk\n\t" +#else /* CONFIG_RETPOLINE */ #ifdef CONFIG_SLS #define ASM_RET "ret; int3\n\t" #else #define ASM_RET "ret\n\t" #endif +#endif /* CONFIG_RETPOLINE */ #endif /* __ASSEMBLY__ */ diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S index 50ea390df712..4b8ee3a2fcc3 100644 --- a/arch/x86/lib/memmove_64.S +++ b/arch/x86/lib/memmove_64.S @@ -40,7 +40,7 @@ SYM_FUNC_START(__memmove) /* FSRM implies ERMS => no length checks, do the copy directly */ .Lmemmove_begin_forward: ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM - ALTERNATIVE "", __stringify(movq %rdx, %rcx; rep movsb; RET), X86_FEATURE_ERMS + ALTERNATIVE "", "jmp .Lmemmove_erms", X86_FEATURE_ERMS /* * movsq instruction have many startup latency @@ -206,6 +206,11 @@ SYM_FUNC_START(__memmove) movb %r11b, (%rdi) 13: RET + +.Lmemmove_erms: + movq %rdx, %rcx + rep movsb + RET SYM_FUNC_END(__memmove) SYM_FUNC_END_ALIAS(memmove) EXPORT_SYMBOL(__memmove) From c70d6f82141b89db6c076b0cbf9a7a2edc29e46d Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:47 +0200 Subject: [PATCH 1523/1842] objtool: Treat .text.__x86.* as noinstr commit 951ddecf435659553ed15a9214e153a3af43a9a1 upstream. Needed because zen_untrain_ret() will be called from noinstr code. Also makes sense since the thunks MUST NOT contain instrumentation nor be poked with dynamic instrumentation. 
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- tools/objtool/check.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 95c2fd945a1b..f3cc59714ed4 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -366,7 +366,8 @@ static int decode_instructions(struct objtool_file *file) sec->text = true; if (!strcmp(sec->name, ".noinstr.text") || - !strcmp(sec->name, ".entry.text")) + !strcmp(sec->name, ".entry.text") || + !strncmp(sec->name, ".text.__x86.", 12)) sec->noinstr = true; for (offset = 0; offset < sec->len; offset += insn->len) { From df748593c55389892902aecb8691080ad5e8cff5 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:48 +0200 Subject: [PATCH 1524/1842] x86: Add magic AMD return-thunk commit a149180fbcf336e97ce4eb2cdc13672727feb94d upstream. Note: needs to be in a section distinct from Retpolines such that the Retpoline RET substitution cannot possibly use immediate jumps. ORC unwinding for zen_untrain_ret() and __x86_return_thunk() is a little tricky but works due to the fact that zen_untrain_ret() doesn't have any stack ops and as such will emit a single ORC entry at the start (+0x3f). Meanwhile, unwinding an IP, including the __x86_return_thunk() one (+0x40) will search for the largest ORC entry smaller or equal to the IP, these will find the one ORC entry (+0x3f) and all works. [ Alexandre: SVM part. ] [ bp: Build fix, massages. ] Suggested-by: Andrew Cooper Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: conflicts at arch/x86/entry/entry_64_compat.S] [cascardo: there is no ANNOTATE_NOENDBR] [cascardo: objtool commit 34c861e806478ac2ea4032721defbf1d6967df08 missing] [cascardo: conflict fixup] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: SEV-ES is not supported, so drop the change in arch/x86/kvm/svm/vmenter.S] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/entry_64.S | 6 +++ arch/x86/entry/entry_64_compat.S | 4 ++ arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/disabled-features.h | 3 +- arch/x86/include/asm/nospec-branch.h | 17 +++++++ arch/x86/kernel/vmlinux.lds.S | 2 +- arch/x86/kvm/svm/vmenter.S | 9 ++++ arch/x86/lib/retpoline.S | 63 ++++++++++++++++++++++-- tools/objtool/check.c | 20 ++++++-- 9 files changed, 117 insertions(+), 8 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index afdc4f5370c3..2f6e0a7a7e85 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -102,6 +102,7 @@ SYM_CODE_START(entry_SYSCALL_64) movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp SYM_INNER_LABEL(entry_SYSCALL_64_safe_stack, SYM_L_GLOBAL) + UNTRAIN_RET /* Construct struct pt_regs on stack */ pushq $__USER_DS /* pt_regs->ss */ @@ -675,6 +676,7 @@ native_irq_return_ldt: pushq %rdi /* Stash user RDI */ swapgs /* to kernel GS */ SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi /* to kernel CR3 */ + UNTRAIN_RET movq PER_CPU_VAR(espfix_waddr), %rdi movq %rax, (0*8)(%rdi) /* user RAX */ @@ -910,6 +912,7 @@ SYM_CODE_START_LOCAL(paranoid_entry) * be retrieved from a kernel internal table. 
*/ SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14 + UNTRAIN_RET /* * Handling GSBASE depends on the availability of FSGSBASE. @@ -1022,6 +1025,7 @@ SYM_CODE_START_LOCAL(error_entry) FENCE_SWAPGS_USER_ENTRY /* We have user CR3. Change to kernel CR3. */ SWITCH_TO_KERNEL_CR3 scratch_reg=%rax + UNTRAIN_RET .Lerror_entry_from_usermode_after_swapgs: /* Put us onto the real thread stack. */ @@ -1077,6 +1081,7 @@ SYM_CODE_START_LOCAL(error_entry) SWAPGS FENCE_SWAPGS_USER_ENTRY SWITCH_TO_KERNEL_CR3 scratch_reg=%rax + UNTRAIN_RET /* * Pretend that the exception came from user mode: set up pt_regs @@ -1171,6 +1176,7 @@ SYM_CODE_START(asm_exc_nmi) movq %rsp, %rdx movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp UNWIND_HINT_IRET_REGS base=%rdx offset=8 + UNTRAIN_RET pushq 5*8(%rdx) /* pt_regs->ss */ pushq 4*8(%rdx) /* pt_regs->rsp */ pushq 3*8(%rdx) /* pt_regs->flags */ diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S index 0051cf5c792d..007f3a1fbf1a 100644 --- a/arch/x86/entry/entry_64_compat.S +++ b/arch/x86/entry/entry_64_compat.S @@ -14,6 +14,7 @@ #include #include #include +#include #include #include @@ -71,6 +72,7 @@ SYM_CODE_START(entry_SYSENTER_compat) pushq $__USER32_CS /* pt_regs->cs */ pushq $0 /* pt_regs->ip = 0 (placeholder) */ SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL) + UNTRAIN_RET /* * User tracing code (ptrace or signal handlers) might assume that @@ -211,6 +213,7 @@ SYM_CODE_START(entry_SYSCALL_compat) movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp SYM_INNER_LABEL(entry_SYSCALL_compat_safe_stack, SYM_L_GLOBAL) + UNTRAIN_RET /* Construct struct pt_regs on stack */ pushq $__USER32_DS /* pt_regs->ss */ @@ -377,6 +380,7 @@ SYM_CODE_START(entry_INT80_compat) pushq (%rdi) /* pt_regs->di */ .Lint80_keep_stack: + UNTRAIN_RET pushq %rsi /* pt_regs->si */ xorl %esi, %esi /* nospec si */ pushq %rdx /* pt_regs->dx */ diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index d0d2bd5eef61..bf4f7da45e5d 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -297,6 +297,7 @@ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */ +#define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index d678509756ce..e5b4e3d460c1 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -61,7 +61,8 @@ #else # define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \ (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)) | \ - (1 << (X86_FEATURE_RETHUNK & 31))) + (1 << (X86_FEATURE_RETHUNK & 31)) | \ + (1 << (X86_FEATURE_UNRET & 31))) #endif /* Force disable because it's broken beyond repair */ diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 20dc331243e6..fa18454f2c01 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -112,6 +112,22 @@ #endif .endm +/* + * Mitigate RETBleed for AMD/Hygon Zen uarch. 
Requires KERNEL CR3 because the + * return thunk isn't mapped into the userspace tables (then again, AMD + * typically has NO_MELTDOWN). + * + * Doesn't clobber any registers but does require a stable stack. + * + * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point + * where we have a stack but before any RET instruction. + */ +.macro UNTRAIN_RET +#ifdef CONFIG_RETPOLINE + ALTERNATIVE "", "call zen_untrain_ret", X86_FEATURE_UNRET +#endif +.endm + #else /* __ASSEMBLY__ */ #define ANNOTATE_RETPOLINE_SAFE \ @@ -121,6 +137,7 @@ ".popsection\n\t" extern void __x86_return_thunk(void); +extern void zen_untrain_ret(void); #ifdef CONFIG_RETPOLINE diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index b6c4dbaf668d..a21cd2381fa8 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -142,7 +142,7 @@ SECTIONS #ifdef CONFIG_RETPOLINE __indirect_thunk_start = .; - *(.text.__x86.indirect_thunk) + *(.text.__x86.*) __indirect_thunk_end = .; #endif } :text =0xcccc diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S index 6df5950d6967..c18d812d00cd 100644 --- a/arch/x86/kvm/svm/vmenter.S +++ b/arch/x86/kvm/svm/vmenter.S @@ -128,6 +128,15 @@ SYM_FUNC_START(__svm_vcpu_run) mov %r15, VCPU_R15(%_ASM_AX) #endif + /* + * Mitigate RETBleed for AMD/Hygon Zen uarch. RET should be + * untrained as soon as we exit the VM and are back to the + * kernel. This should be done before re-enabling interrupts + * because interrupt handlers won't sanitize 'ret' if the return is + * from the kernel. + */ + UNTRAIN_RET + /* * Clear all general purpose registers except RSP and RAX to prevent * speculative use of the guest's values, even those that are reloaded diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 01667ea9da02..807f674fd59e 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -71,10 +71,67 @@ SYM_CODE_END(__x86_indirect_thunk_array) * This function name is magical and is used by -mfunction-return=thunk-extern * for the compiler to generate JMPs to it. */ -SYM_CODE_START(__x86_return_thunk) - UNWIND_HINT_EMPTY + .section .text.__x86.return_thunk + +/* + * Safety details here pertain to the AMD Zen{1,2} microarchitecture: + * 1) The RET at __x86_return_thunk must be on a 64 byte boundary, for + * alignment within the BTB. + * 2) The instruction at zen_untrain_ret must contain, and not + * end with, the 0xc3 byte of the RET. + * 3) STIBP must be enabled, or SMT disabled, to prevent the sibling thread + * from re-poisioning the BTB prediction. + */ + .align 64 + .skip 63, 0xcc +SYM_FUNC_START_NOALIGN(zen_untrain_ret); + + /* + * As executed from zen_untrain_ret, this is: + * + * TEST $0xcc, %bl + * LFENCE + * JMP __x86_return_thunk + * + * Executing the TEST instruction has a side effect of evicting any BTB + * prediction (potentially attacker controlled) attached to the RET, as + * __x86_return_thunk + 1 isn't an instruction boundary at the moment. + */ + .byte 0xf6 + + /* + * As executed from __x86_return_thunk, this is a plain RET. + * + * As part of the TEST above, RET is the ModRM byte, and INT3 the imm8. + * + * We subsequently jump backwards and architecturally execute the RET. + * This creates a correct BTB prediction (type=ret), but in the + * meantime we suffer Straight Line Speculation (because the type was + * no branch) which is halted by the INT3. 
+ * + * With SMT enabled and STIBP active, a sibling thread cannot poison + * RET's prediction to a type of its choice, but can evict the + * prediction due to competitive sharing. If the prediction is + * evicted, __x86_return_thunk will suffer Straight Line Speculation + * which will be contained safely by the INT3. + */ +SYM_INNER_LABEL(__x86_return_thunk, SYM_L_GLOBAL) ret int3 SYM_CODE_END(__x86_return_thunk) -__EXPORT_THUNK(__x86_return_thunk) + /* + * Ensure the TEST decoding / BTB invalidation is complete. + */ + lfence + + /* + * Jump back and execute the RET in the middle of the TEST instruction. + * INT3 is for SLS protection. + */ + jmp __x86_return_thunk + int3 +SYM_FUNC_END(zen_untrain_ret) +__EXPORT_THUNK(zen_untrain_ret) + +EXPORT_SYMBOL(__x86_return_thunk) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index f3cc59714ed4..1eebfa422153 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1082,7 +1082,7 @@ static void add_retpoline_call(struct objtool_file *file, struct instruction *in annotate_call_site(file, insn, false); } -static void add_return_call(struct objtool_file *file, struct instruction *insn) +static void add_return_call(struct objtool_file *file, struct instruction *insn, bool add) { /* * Return thunk tail calls are really just returns in disguise, @@ -1092,7 +1092,7 @@ static void add_return_call(struct objtool_file *file, struct instruction *insn) insn->retpoline_safe = true; /* Skip the non-text sections, specially .discard ones */ - if (insn->sec->text) + if (add && insn->sec->text) list_add_tail(&insn->call_node, &file->return_thunk_list); } @@ -1121,7 +1121,7 @@ static int add_jump_destinations(struct objtool_file *file) add_retpoline_call(file, insn); continue; } else if (reloc->sym->return_thunk) { - add_return_call(file, insn); + add_return_call(file, insn, true); continue; } else if (insn->func) { /* internal or external sibling call (with reloc) */ @@ -1138,6 +1138,7 @@ static int add_jump_destinations(struct objtool_file *file) insn->jump_dest = find_insn(file, dest_sec, dest_off); if (!insn->jump_dest) { + struct symbol *sym = find_symbol_by_offset(dest_sec, dest_off); /* * This is a special case where an alt instruction @@ -1147,6 +1148,19 @@ static int add_jump_destinations(struct objtool_file *file) if (!strcmp(insn->sec->name, ".altinstr_replacement")) continue; + /* + * This is a special case for zen_untrain_ret(). + * It jumps to __x86_return_thunk(), but objtool + * can't find the thunk's starting RET + * instruction, because the RET is also in the + * middle of another instruction. Objtool only + * knows about the outer instruction. + */ + if (sym && sym->return_thunk) { + add_return_call(file, insn, false); + continue; + } + WARN_FUNC("can't find jump dest instruction at %s+0x%lx", insn->sec, insn->offset, dest_sec->name, dest_off); From 876750cca4f043bd626a3ac760ce887dda3b6ec7 Mon Sep 17 00:00:00 2001 From: Alexandre Chartre Date: Tue, 14 Jun 2022 23:15:49 +0200 Subject: [PATCH 1525/1842] x86/bugs: Report AMD retbleed vulnerability commit 6b80b59b3555706508008f1f127b5412c89c7fd8 upstream. Report that AMD x86 CPUs are vulnerable to the RETBleed (Arbitrary Speculative Code Execution with Return Instructions) attack. 
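Once this lands, the reported state is visible from user space through the new sysfs attribute; a quick sketch of reading it (assumes only the file added by this series):

  #include <stdio.h>

  int main(void)
  {
          FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/retbleed", "r");
          char line[128];

          if (!f) {
                  perror("retbleed sysfs file");   /* kernel without this series */
                  return 1;
          }
          if (fgets(line, sizeof(line), f))
                  printf("retbleed: %s", line);
          fclose(f);
          return 0;
  }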
[peterz: add hygon] [kim: invert parity; fam15h] Co-developed-by: Kim Phillips Signed-off-by: Kim Phillips Signed-off-by: Alexandre Chartre Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/kernel/cpu/bugs.c | 13 +++++++++++++ arch/x86/kernel/cpu/common.c | 19 +++++++++++++++++++ drivers/base/cpu.c | 8 ++++++++ include/linux/cpu.h | 2 ++ 5 files changed, 43 insertions(+) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index bf4f7da45e5d..66f044c4427d 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -426,5 +426,6 @@ #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */ #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */ +#define X86_BUG_RETBLEED X86_BUG(26) /* CPU is affected by RETBleed */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 2a21046846b6..56757e1b7301 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1917,6 +1917,11 @@ static ssize_t srbds_show_state(char *buf) return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]); } +static ssize_t retbleed_show_state(char *buf) +{ + return sprintf(buf, "Vulnerable\n"); +} + static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, char *buf, unsigned int bug) { @@ -1962,6 +1967,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr case X86_BUG_MMIO_STALE_DATA: return mmio_stale_data_show_state(buf); + case X86_BUG_RETBLEED: + return retbleed_show_state(buf); + default: break; } @@ -2018,4 +2026,9 @@ ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *at { return cpu_show_common(dev, attr, buf, X86_BUG_MMIO_STALE_DATA); } + +ssize_t cpu_show_retbleed(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpu_show_common(dev, attr, buf, X86_BUG_RETBLEED); +} #endif diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 4917c2698ac1..5214c4bba33b 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1092,16 +1092,27 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { {} }; +#define VULNBL(vendor, family, model, blacklist) \ + X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, blacklist) + #define VULNBL_INTEL_STEPPINGS(model, steppings, issues) \ X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6, \ INTEL_FAM6_##model, steppings, \ X86_FEATURE_ANY, issues) +#define VULNBL_AMD(family, blacklist) \ + VULNBL(AMD, family, X86_MODEL_ANY, blacklist) + +#define VULNBL_HYGON(family, blacklist) \ + VULNBL(HYGON, family, X86_MODEL_ANY, blacklist) + #define SRBDS BIT(0) /* CPU is affected by X86_BUG_MMIO_STALE_DATA */ #define MMIO BIT(1) /* CPU is affected by Shared Buffers Data Sampling (SBDS), a variant of X86_BUG_MMIO_STALE_DATA */ #define MMIO_SBDS BIT(2) +/* CPU is affected by RETbleed, speculating where you would not expect it */ +#define RETBLEED BIT(3) static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS), @@ 
-1134,6 +1145,11 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO | MMIO_SBDS), + + VULNBL_AMD(0x15, RETBLEED), + VULNBL_AMD(0x16, RETBLEED), + VULNBL_AMD(0x17, RETBLEED), + VULNBL_HYGON(0x18, RETBLEED), {} }; @@ -1235,6 +1251,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) !arch_cap_mmio_immune(ia32_cap)) setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA); + if (cpu_matches(cpu_vuln_blacklist, RETBLEED)) + setup_force_cpu_bug(X86_BUG_RETBLEED); + if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return; diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c index 8ecb9f90f467..24dd532c8e5e 100644 --- a/drivers/base/cpu.c +++ b/drivers/base/cpu.c @@ -572,6 +572,12 @@ ssize_t __weak cpu_show_mmio_stale_data(struct device *dev, return sysfs_emit(buf, "Not affected\n"); } +ssize_t __weak cpu_show_retbleed(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return sysfs_emit(buf, "Not affected\n"); +} + static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL); @@ -582,6 +588,7 @@ static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL); static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL); static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL); static DEVICE_ATTR(mmio_stale_data, 0444, cpu_show_mmio_stale_data, NULL); +static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL); static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_meltdown.attr, @@ -594,6 +601,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_itlb_multihit.attr, &dev_attr_srbds.attr, &dev_attr_mmio_stale_data.attr, + &dev_attr_retbleed.attr, NULL }; diff --git a/include/linux/cpu.h b/include/linux/cpu.h index d63b8f70d123..24e5f237244d 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -68,6 +68,8 @@ extern ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, extern ssize_t cpu_show_mmio_stale_data(struct device *dev, struct device_attribute *attr, char *buf); +extern ssize_t cpu_show_retbleed(struct device *dev, + struct device_attribute *attr, char *buf); extern __printf(4, 5) struct device *cpu_device_create(struct device *parent, void *drvdata, From 3f29791d56d32a610a2b57a9b700b1bc1912e41f Mon Sep 17 00:00:00 2001 From: Alexandre Chartre Date: Tue, 14 Jun 2022 23:15:50 +0200 Subject: [PATCH 1526/1842] x86/bugs: Add AMD retbleed= boot parameter commit 7fbf47c7ce50b38a64576b150e7011ae73d54669 upstream. Add the "retbleed=" boot parameter to select a mitigation for RETBleed. Possible values are "off", "auto" and "unret" (JMP2RET mitigation). The default value is "auto". Currently, "retbleed=auto" will select the unret mitigation on AMD and Hygon and no mitigation on Intel (JMP2RET is not effective on Intel). 
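A simplified model of the selection logic described above (illustrative sketch only; the vendor check is reduced to a flag and the names are made up): "off" disables, "unret" forces the thunk, and "auto" enables it only on AMD/Hygon since JMP2RET is not effective on Intel:

  #include <string.h>

  enum sketch_mitigation { SKETCH_NONE, SKETCH_UNRET };

  static enum sketch_mitigation sketch_select(const char *cmd, int amd_or_hygon)
  {
          if (!strcmp(cmd, "off"))
                  return SKETCH_NONE;
          if (!strcmp(cmd, "unret"))
                  return SKETCH_UNRET;
          /* "auto" (and anything unrecognized) falls back to the default */
          return amd_or_hygon ? SKETCH_UNRET : SKETCH_NONE;
  }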
[peterz: rebase; add hygon] [jpoimboe: cleanups] Signed-off-by: Alexandre Chartre Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- .../admin-guide/kernel-parameters.txt | 15 +++ arch/x86/Kconfig | 3 + arch/x86/kernel/cpu/bugs.c | 108 +++++++++++++++++- 3 files changed, 125 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index ea8b704be705..bee8d551f310 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4656,6 +4656,21 @@ retain_initrd [RAM] Keep initrd memory after extraction + retbleed= [X86] Control mitigation of RETBleed (Arbitrary + Speculative Code Execution with Return Instructions) + vulnerability. + + off - unconditionally disable + auto - automatically select a migitation + unret - force enable untrained return thunks, + only effective on AMD Zen {1,2} + based systems. + + Selecting 'auto' will choose a mitigation method at run + time according to the CPU. + + Not specifying this option is equivalent to retbleed=auto. + rfkill.default_state= 0 "airplane mode". All wifi, bluetooth, wimax, gps, fm, etc. communication is blocked by default. diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 68d46a648f6e..6eebb8927378 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -465,6 +465,9 @@ config RETPOLINE config CC_HAS_SLS def_bool $(cc-option,-mharden-sls=all) +config CC_HAS_RETURN_THUNK + def_bool $(cc-option,-mfunction-return=thunk-extern) + config SLS bool "Mitigate Straight-Line-Speculation" depends on CC_HAS_SLS && X86_64 diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 56757e1b7301..7cf0f1d2bcb0 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -37,6 +37,7 @@ #include "cpu.h" static void __init spectre_v1_select_mitigation(void); +static void __init retbleed_select_mitigation(void); static void __init spectre_v2_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); @@ -112,6 +113,12 @@ void __init check_bugs(void) /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); + retbleed_select_mitigation(); + /* + * spectre_v2_select_mitigation() relies on the state set by + * retbleed_select_mitigation(); specifically the STIBP selection is + * forced for UNRET. 
+ */ spectre_v2_select_mitigation(); ssb_select_mitigation(); l1tf_select_mitigation(); @@ -708,6 +715,100 @@ static int __init nospectre_v1_cmdline(char *str) } early_param("nospectre_v1", nospectre_v1_cmdline); +#undef pr_fmt +#define pr_fmt(fmt) "RETBleed: " fmt + +enum retbleed_mitigation { + RETBLEED_MITIGATION_NONE, + RETBLEED_MITIGATION_UNRET, +}; + +enum retbleed_mitigation_cmd { + RETBLEED_CMD_OFF, + RETBLEED_CMD_AUTO, + RETBLEED_CMD_UNRET, +}; + +const char * const retbleed_strings[] = { + [RETBLEED_MITIGATION_NONE] = "Vulnerable", + [RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk", +}; + +static enum retbleed_mitigation retbleed_mitigation __ro_after_init = + RETBLEED_MITIGATION_NONE; +static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init = + RETBLEED_CMD_AUTO; + +static int __init retbleed_parse_cmdline(char *str) +{ + if (!str) + return -EINVAL; + + if (!strcmp(str, "off")) + retbleed_cmd = RETBLEED_CMD_OFF; + else if (!strcmp(str, "auto")) + retbleed_cmd = RETBLEED_CMD_AUTO; + else if (!strcmp(str, "unret")) + retbleed_cmd = RETBLEED_CMD_UNRET; + else + pr_err("Unknown retbleed option (%s). Defaulting to 'auto'\n", str); + + return 0; +} +early_param("retbleed", retbleed_parse_cmdline); + +#define RETBLEED_UNTRAIN_MSG "WARNING: BTB untrained return thunk mitigation is only effective on AMD/Hygon!\n" +#define RETBLEED_COMPILER_MSG "WARNING: kernel not compiled with RETPOLINE or -mfunction-return capable compiler!\n" + +static void __init retbleed_select_mitigation(void) +{ + if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) + return; + + switch (retbleed_cmd) { + case RETBLEED_CMD_OFF: + return; + + case RETBLEED_CMD_UNRET: + retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + break; + + case RETBLEED_CMD_AUTO: + default: + if (!boot_cpu_has_bug(X86_BUG_RETBLEED)) + break; + + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) + retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + break; + } + + switch (retbleed_mitigation) { + case RETBLEED_MITIGATION_UNRET: + + if (!IS_ENABLED(CONFIG_RETPOLINE) || + !IS_ENABLED(CONFIG_CC_HAS_RETURN_THUNK)) { + pr_err(RETBLEED_COMPILER_MSG); + retbleed_mitigation = RETBLEED_MITIGATION_NONE; + break; + } + + setup_force_cpu_cap(X86_FEATURE_RETHUNK); + setup_force_cpu_cap(X86_FEATURE_UNRET); + + if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && + boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) + pr_err(RETBLEED_UNTRAIN_MSG); + break; + + default: + break; + } + + pr_info("%s\n", retbleed_strings[retbleed_mitigation]); +} + #undef pr_fmt #define pr_fmt(fmt) "Spectre V2 : " fmt @@ -1919,7 +2020,12 @@ static ssize_t srbds_show_state(char *buf) static ssize_t retbleed_show_state(char *buf) { - return sprintf(buf, "Vulnerable\n"); + if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET && + (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && + boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)) + return sprintf(buf, "Vulnerable: untrained return thunk on non-Zen uarch\n"); + + return sprintf(buf, "%s\n", retbleed_strings[retbleed_mitigation]); } static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, From a989e75136192036d47e4dc4fe87ff9c961d6b46 Mon Sep 17 00:00:00 2001 From: Kim Phillips Date: Tue, 14 Jun 2022 23:15:51 +0200 Subject: [PATCH 1527/1842] x86/bugs: Enable STIBP for JMP2RET commit e8ec1b6e08a2102d8755ccb06fa26d540f26a2fa upstream. For untrained return thunks to be fully effective, STIBP must be enabled or SMT disabled. 
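Because of the new "nosmt" suffix, the parameter is now split on commas. The splitting pattern used by the updated retbleed_parse_cmdline(), shown in isolation as a standalone sketch:

  #include <stdio.h>
  #include <string.h>

  /* Terminate at each comma and continue from the character after it. */
  static void parse_tokens(char *str)
  {
          while (str) {
                  char *next = strchr(str, ',');

                  if (next) {
                          *next = '\0';
                          next++;
                  }
                  printf("option: %s\n", str);
                  str = next;
          }
  }

  int main(void)
  {
          char arg[] = "unret,nosmt";

          parse_tokens(arg);      /* prints "unret" then "nosmt" */
          return 0;
  }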
Co-developed-by: Josh Poimboeuf Signed-off-by: Josh Poimboeuf Signed-off-by: Kim Phillips Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- .../admin-guide/kernel-parameters.txt | 16 +++-- arch/x86/kernel/cpu/bugs.c | 58 +++++++++++++++---- 2 files changed, 57 insertions(+), 17 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index bee8d551f310..15ca7a41f585 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4660,11 +4660,17 @@ Speculative Code Execution with Return Instructions) vulnerability. - off - unconditionally disable - auto - automatically select a migitation - unret - force enable untrained return thunks, - only effective on AMD Zen {1,2} - based systems. + off - no mitigation + auto - automatically select a migitation + auto,nosmt - automatically select a mitigation, + disabling SMT if necessary for + the full mitigation (only on Zen1 + and older without STIBP). + unret - force enable untrained return thunks, + only effective on AMD f15h-f17h + based systems. + unret,nosmt - like unret, will disable SMT when STIBP + is not available. Selecting 'auto' will choose a mitigation method at run time according to the CPU. diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 7cf0f1d2bcb0..619565986048 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -739,19 +739,34 @@ static enum retbleed_mitigation retbleed_mitigation __ro_after_init = static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init = RETBLEED_CMD_AUTO; +static int __ro_after_init retbleed_nosmt = false; + static int __init retbleed_parse_cmdline(char *str) { if (!str) return -EINVAL; - if (!strcmp(str, "off")) - retbleed_cmd = RETBLEED_CMD_OFF; - else if (!strcmp(str, "auto")) - retbleed_cmd = RETBLEED_CMD_AUTO; - else if (!strcmp(str, "unret")) - retbleed_cmd = RETBLEED_CMD_UNRET; - else - pr_err("Unknown retbleed option (%s). 
Defaulting to 'auto'\n", str); + while (str) { + char *next = strchr(str, ','); + if (next) { + *next = 0; + next++; + } + + if (!strcmp(str, "off")) { + retbleed_cmd = RETBLEED_CMD_OFF; + } else if (!strcmp(str, "auto")) { + retbleed_cmd = RETBLEED_CMD_AUTO; + } else if (!strcmp(str, "unret")) { + retbleed_cmd = RETBLEED_CMD_UNRET; + } else if (!strcmp(str, "nosmt")) { + retbleed_nosmt = true; + } else { + pr_err("Ignoring unknown retbleed option (%s).", str); + } + + str = next; + } return 0; } @@ -797,6 +812,10 @@ static void __init retbleed_select_mitigation(void) setup_force_cpu_cap(X86_FEATURE_RETHUNK); setup_force_cpu_cap(X86_FEATURE_UNRET); + if (!boot_cpu_has(X86_FEATURE_STIBP) && + (retbleed_nosmt || cpu_mitigations_auto_nosmt())) + cpu_smt_disable(false); + if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) pr_err(RETBLEED_UNTRAIN_MSG); @@ -1043,6 +1062,13 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON)) mode = SPECTRE_V2_USER_STRICT_PREFERRED; + if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) { + if (mode != SPECTRE_V2_USER_STRICT && + mode != SPECTRE_V2_USER_STRICT_PREFERRED) + pr_info("Selecting STIBP always-on mode to complement retbleed mitigation'\n"); + mode = SPECTRE_V2_USER_STRICT_PREFERRED; + } + spectre_v2_user_stibp = mode; set_mode: @@ -2020,10 +2046,18 @@ static ssize_t srbds_show_state(char *buf) static ssize_t retbleed_show_state(char *buf) { - if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET && - (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && - boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)) - return sprintf(buf, "Vulnerable: untrained return thunk on non-Zen uarch\n"); + if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) { + if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && + boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) + return sprintf(buf, "Vulnerable: untrained return thunk on non-Zen uarch\n"); + + return sprintf(buf, "%s; SMT %s\n", + retbleed_strings[retbleed_mitigation], + !sched_smt_active() ? "disabled" : + spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT || + spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ? + "enabled with STIBP protection" : "vulnerable"); + } return sprintf(buf, "%s\n", retbleed_strings[retbleed_mitigation]); } From 9e727e0d9486121de5c21cbb65fcc0c907834b17 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:52 +0200 Subject: [PATCH 1528/1842] x86/bugs: Keep a per-CPU IA32_SPEC_CTRL value commit caa0ff24d5d0e02abce5e65c3d2b7f20a6617be5 upstream. Due to TIF_SSBD and TIF_SPEC_IB the actual IA32_SPEC_CTRL value can differ from x86_spec_ctrl_base. As such, keep a per-CPU value reflecting the current task's MSR content. 
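As a minimal user-space sketch of the caching scheme (my own names; a plain global stands in for the per-CPU slot and a counter models the cost of the MSR write), the point is simply that the write is skipped when the cached value already matches:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t spec_ctrl_shadow;   /* stand-in for per-CPU x86_spec_ctrl_current */
    static unsigned long msr_writes;    /* models how often "wrmsrl()" would run */

    static void write_spec_ctrl_cached(uint64_t val)
    {
            if (spec_ctrl_shadow == val)
                    return;             /* current task already has this value */
            spec_ctrl_shadow = val;
            msr_writes++;               /* real code: wrmsrl(MSR_IA32_SPEC_CTRL, val) */
    }

    int main(void)
    {
            write_spec_ctrl_cached(0x2);   /* e.g. a STIBP-style bit set */
            write_spec_ctrl_cached(0x2);   /* no-op: value unchanged     */
            write_spec_ctrl_cached(0x0);
            printf("MSR writes: %lu\n", msr_writes);   /* prints 2 */
            return 0;
    }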
[jpoimboe: rename] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 1 + arch/x86/kernel/cpu/bugs.c | 28 +++++++++++++++++++++++----- arch/x86/kernel/process.c | 2 +- 3 files changed, 25 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index fa18454f2c01..b4403c1f840c 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -254,6 +254,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; +extern void write_spec_ctrl_current(u64 val); /* * With retpoline, we must use IBRS to restrict branch prediction diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 619565986048..35642e47e005 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -48,11 +48,29 @@ static void __init taa_select_mitigation(void); static void __init mmio_select_mitigation(void); static void __init srbds_select_mitigation(void); -/* The base value of the SPEC_CTRL MSR that always has to be preserved. */ +/* The base value of the SPEC_CTRL MSR without task-specific bits set */ u64 x86_spec_ctrl_base; EXPORT_SYMBOL_GPL(x86_spec_ctrl_base); + +/* The current value of the SPEC_CTRL MSR with task-specific bits set */ +DEFINE_PER_CPU(u64, x86_spec_ctrl_current); +EXPORT_SYMBOL_GPL(x86_spec_ctrl_current); + static DEFINE_MUTEX(spec_ctrl_mutex); +/* + * Keep track of the SPEC_CTRL MSR value for the current task, which may differ + * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update(). + */ +void write_spec_ctrl_current(u64 val) +{ + if (this_cpu_read(x86_spec_ctrl_current) == val) + return; + + this_cpu_write(x86_spec_ctrl_current, val); + wrmsrl(MSR_IA32_SPEC_CTRL, val); +} + /* * The vendor and possibly platform specific bits which can be modified in * x86_spec_ctrl_base. @@ -1235,7 +1253,7 @@ static void __init spectre_v2_select_mitigation(void) if (spectre_v2_in_eibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } switch (mode) { @@ -1290,7 +1308,7 @@ static void __init spectre_v2_select_mitigation(void) static void update_stibp_msr(void * __unused) { - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } /* Update x86_spec_ctrl_base in case SMT state changed. 
*/ @@ -1533,7 +1551,7 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void) x86_amd_ssb_disable(); } else { x86_spec_ctrl_base |= SPEC_CTRL_SSBD; - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); } } @@ -1751,7 +1769,7 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which) void x86_spec_ctrl_setup_ap(void) { if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base); if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) x86_amd_ssb_disable(); diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 0aa1baf9a3af..510ef22b8ef7 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -556,7 +556,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp, } if (updmsr) - wrmsrl(MSR_IA32_SPEC_CTRL, msr); + write_spec_ctrl_current(msr); } static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk) From 3dddacf8c3cc29b9b37d8c4353f746e510ad1371 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:53 +0200 Subject: [PATCH 1529/1842] x86/entry: Add kernel IBRS implementation commit 2dbb887e875b1de3ca8f40ddf26bcfe55798c609 upstream. Implement Kernel IBRS - currently the only known option to mitigate RSB underflow speculation issues on Skylake hardware. Note: since IBRS_ENTER requires fuller context established than UNTRAIN_RET, it must be placed after it. However, since UNTRAIN_RET itself implies a RET, it must come after IBRS_ENTER. This means IBRS_ENTER needs to also move UNTRAIN_RET. Note 2: KERNEL_IBRS is sub-optimal for XenPV. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: conflict at arch/x86/entry/entry_64.S, skip_r11rcx] [cascardo: conflict at arch/x86/entry/entry_64_compat.S] [cascardo: conflict fixups, no ANNOTATE_NOENDBR] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/calling.h | 58 ++++++++++++++++++++++++++++++ arch/x86/entry/entry_64.S | 44 ++++++++++++++++++++--- arch/x86/entry/entry_64_compat.S | 17 ++++++--- arch/x86/include/asm/cpufeatures.h | 2 +- 4 files changed, 111 insertions(+), 10 deletions(-) diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h index 2be2595e4458..c3c68e97a404 100644 --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -6,6 +6,8 @@ #include #include #include +#include +#include /* @@ -308,6 +310,62 @@ For 32-bit we have the following conventions - kernel is built with #endif +/* + * IBRS kernel mitigation for Spectre_v2. + * + * Assumes full context is established (PUSH_REGS, CR3 and GS) and it clobbers + * the regs it uses (AX, CX, DX). Must be called before the first RET + * instruction (NOTE! UNTRAIN_RET includes a RET instruction) + * + * The optional argument is used to save/restore the current value, + * which is used on the paranoid paths. + * + * Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set. 
+ */ +.macro IBRS_ENTER save_reg + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS + movl $MSR_IA32_SPEC_CTRL, %ecx + +.ifnb \save_reg + rdmsr + shl $32, %rdx + or %rdx, %rax + mov %rax, \save_reg + test $SPEC_CTRL_IBRS, %eax + jz .Ldo_wrmsr_\@ + lfence + jmp .Lend_\@ +.Ldo_wrmsr_\@: +.endif + + movq PER_CPU_VAR(x86_spec_ctrl_current), %rdx + movl %edx, %eax + shr $32, %rdx + wrmsr +.Lend_\@: +.endm + +/* + * Similar to IBRS_ENTER, requires KERNEL GS,CR3 and clobbers (AX, CX, DX) + * regs. Must be called after the last RET. + */ +.macro IBRS_EXIT save_reg + ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS + movl $MSR_IA32_SPEC_CTRL, %ecx + +.ifnb \save_reg + mov \save_reg, %rdx +.else + movq PER_CPU_VAR(x86_spec_ctrl_current), %rdx + andl $(~SPEC_CTRL_IBRS), %edx +.endif + + movl %edx, %eax + shr $32, %rdx + wrmsr +.Lend_\@: +.endm + /* * Mitigate Spectre v1 for conditional swapgs code paths. * diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 2f6e0a7a7e85..f32bb2f943d6 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -102,7 +102,6 @@ SYM_CODE_START(entry_SYSCALL_64) movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp SYM_INNER_LABEL(entry_SYSCALL_64_safe_stack, SYM_L_GLOBAL) - UNTRAIN_RET /* Construct struct pt_regs on stack */ pushq $__USER_DS /* pt_regs->ss */ @@ -118,6 +117,11 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL) /* IRQs are off. */ movq %rax, %rdi movq %rsp, %rsi + + /* clobbers %rax, make sure it is after saving the syscall nr */ + IBRS_ENTER + UNTRAIN_RET + call do_syscall_64 /* returns with IRQs disabled */ /* @@ -192,6 +196,7 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL) * perf profiles. Nothing jumps here. */ syscall_return_via_sysret: + IBRS_EXIT POP_REGS pop_rdi=0 /* @@ -569,6 +574,7 @@ __irqentry_text_end: SYM_CODE_START_LOCAL(common_interrupt_return) SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL) + IBRS_EXIT #ifdef CONFIG_DEBUG_ENTRY /* Assert that pt_regs indicates user mode. */ testb $3, CS(%rsp) @@ -889,6 +895,9 @@ SYM_CODE_END(xen_failsafe_callback) * 1 -> no SWAPGS on exit * * Y GSBASE value at entry, must be restored in paranoid_exit + * + * R14 - old CR3 + * R15 - old SPEC_CTRL */ SYM_CODE_START_LOCAL(paranoid_entry) UNWIND_HINT_FUNC @@ -912,7 +921,6 @@ SYM_CODE_START_LOCAL(paranoid_entry) * be retrieved from a kernel internal table. */ SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14 - UNTRAIN_RET /* * Handling GSBASE depends on the availability of FSGSBASE. @@ -934,7 +942,7 @@ SYM_CODE_START_LOCAL(paranoid_entry) * is needed here. */ SAVE_AND_SET_GSBASE scratch_reg=%rax save_reg=%rbx - RET + jmp .Lparanoid_gsbase_done .Lparanoid_entry_checkgs: /* EBX = 1 -> kernel GSBASE active, no restore required */ @@ -953,8 +961,16 @@ SYM_CODE_START_LOCAL(paranoid_entry) xorl %ebx, %ebx swapgs .Lparanoid_kernel_gsbase: - FENCE_SWAPGS_KERNEL_ENTRY +.Lparanoid_gsbase_done: + + /* + * Once we have CR3 and %GS setup save and set SPEC_CTRL. Just like + * CR3 above, keep the old value in a callee saved register. 
+ */ + IBRS_ENTER save_reg=%r15 + UNTRAIN_RET + RET SYM_CODE_END(paranoid_entry) @@ -976,9 +992,19 @@ SYM_CODE_END(paranoid_entry) * 1 -> no SWAPGS on exit * * Y User space GSBASE, must be restored unconditionally + * + * R14 - old CR3 + * R15 - old SPEC_CTRL */ SYM_CODE_START_LOCAL(paranoid_exit) UNWIND_HINT_REGS + + /* + * Must restore IBRS state before both CR3 and %GS since we need access + * to the per-CPU x86_spec_ctrl_shadow variable. + */ + IBRS_EXIT save_reg=%r15 + /* * The order of operations is important. RESTORE_CR3 requires * kernel GSBASE. @@ -1025,9 +1051,11 @@ SYM_CODE_START_LOCAL(error_entry) FENCE_SWAPGS_USER_ENTRY /* We have user CR3. Change to kernel CR3. */ SWITCH_TO_KERNEL_CR3 scratch_reg=%rax + IBRS_ENTER UNTRAIN_RET .Lerror_entry_from_usermode_after_swapgs: + /* Put us onto the real thread stack. */ popq %r12 /* save return addr in %12 */ movq %rsp, %rdi /* arg0 = pt_regs pointer */ @@ -1081,6 +1109,7 @@ SYM_CODE_START_LOCAL(error_entry) SWAPGS FENCE_SWAPGS_USER_ENTRY SWITCH_TO_KERNEL_CR3 scratch_reg=%rax + IBRS_ENTER UNTRAIN_RET /* @@ -1176,7 +1205,6 @@ SYM_CODE_START(asm_exc_nmi) movq %rsp, %rdx movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp UNWIND_HINT_IRET_REGS base=%rdx offset=8 - UNTRAIN_RET pushq 5*8(%rdx) /* pt_regs->ss */ pushq 4*8(%rdx) /* pt_regs->rsp */ pushq 3*8(%rdx) /* pt_regs->flags */ @@ -1187,6 +1215,9 @@ SYM_CODE_START(asm_exc_nmi) PUSH_AND_CLEAR_REGS rdx=(%rdx) ENCODE_FRAME_POINTER + IBRS_ENTER + UNTRAIN_RET + /* * At this point we no longer need to worry about stack damage * due to nesting -- we're on the normal thread stack and we're @@ -1409,6 +1440,9 @@ end_repeat_nmi: movq $-1, %rsi call exc_nmi + /* Always restore stashed SPEC_CTRL value (see paranoid_entry) */ + IBRS_EXIT save_reg=%r15 + /* Always restore stashed CR3 value (see paranoid_entry) */ RESTORE_CR3 scratch_reg=%r15 save_reg=%r14 diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S index 007f3a1fbf1a..e2f25403ec30 100644 --- a/arch/x86/entry/entry_64_compat.S +++ b/arch/x86/entry/entry_64_compat.S @@ -4,7 +4,6 @@ * * Copyright 2000-2002 Andi Kleen, SuSE Labs. */ -#include "calling.h" #include #include #include @@ -18,6 +17,8 @@ #include #include +#include "calling.h" + .section .entry.text, "ax" /* @@ -72,7 +73,6 @@ SYM_CODE_START(entry_SYSENTER_compat) pushq $__USER32_CS /* pt_regs->cs */ pushq $0 /* pt_regs->ip = 0 (placeholder) */ SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL) - UNTRAIN_RET /* * User tracing code (ptrace or signal handlers) might assume that @@ -114,6 +114,9 @@ SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL) cld + IBRS_ENTER + UNTRAIN_RET + /* * SYSENTER doesn't filter flags, so we need to clear NT and AC * ourselves. 
To save a few cycles, we can check whether @@ -213,7 +216,6 @@ SYM_CODE_START(entry_SYSCALL_compat) movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp SYM_INNER_LABEL(entry_SYSCALL_compat_safe_stack, SYM_L_GLOBAL) - UNTRAIN_RET /* Construct struct pt_regs on stack */ pushq $__USER32_DS /* pt_regs->ss */ @@ -255,6 +257,9 @@ SYM_INNER_LABEL(entry_SYSCALL_compat_after_hwframe, SYM_L_GLOBAL) UNWIND_HINT_REGS + IBRS_ENTER + UNTRAIN_RET + movq %rsp, %rdi call do_fast_syscall_32 /* XEN PV guests always use IRET path */ @@ -269,6 +274,8 @@ sysret32_from_system_call: */ STACKLEAK_ERASE + IBRS_EXIT + movq RBX(%rsp), %rbx /* pt_regs->rbx */ movq RBP(%rsp), %rbp /* pt_regs->rbp */ movq EFLAGS(%rsp), %r11 /* pt_regs->flags (in r11) */ @@ -380,7 +387,6 @@ SYM_CODE_START(entry_INT80_compat) pushq (%rdi) /* pt_regs->di */ .Lint80_keep_stack: - UNTRAIN_RET pushq %rsi /* pt_regs->si */ xorl %esi, %esi /* nospec si */ pushq %rdx /* pt_regs->dx */ @@ -413,6 +419,9 @@ SYM_CODE_START(entry_INT80_compat) cld + IBRS_ENTER + UNTRAIN_RET + movq %rsp, %rdi call do_int80_syscall_32 jmp swapgs_restore_regs_and_return_to_usermode diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 66f044c4427d..72a7a58f0097 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -203,7 +203,7 @@ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ -/* FREE! ( 7*32+12) */ +#define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */ /* FREE! ( 7*32+13) */ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */ From 6d7e13ccc4d73e5c88cc015bc0154b7d08f65038 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:54 +0200 Subject: [PATCH 1530/1842] x86/bugs: Optimize SPEC_CTRL MSR writes commit c779bc1a9002fa474175b80e72b85c9bf628abb0 upstream. When changing SPEC_CTRL for user control, the WRMSR can be delayed until return-to-user when KERNEL_IBRS has been enabled. This avoids an MSR write during context switch. 
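A rough sketch of what the new boolean buys (simplified stand-ins of mine, not the kernel source): on a context switch the cached value is refreshed but the MSR write itself is deferred, because with KERNEL_IBRS the IBRS_EXIT path rewrites SPEC_CTRL on return to user space anyway; callers that need the hardware updated immediately pass force=true.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t spec_ctrl_current;   /* per-CPU cache stand-in */
    static bool kernel_ibrs;             /* X86_FEATURE_KERNEL_IBRS stand-in */
    static unsigned long msr_writes;

    static void write_spec_ctrl_sketch(uint64_t val, bool force)
    {
            if (spec_ctrl_current == val)
                    return;
            spec_ctrl_current = val;
            /* Non-forced updates can wait for the return-to-user WRMSR. */
            if (force || !kernel_ibrs)
                    msr_writes++;        /* real code: wrmsrl(MSR_IA32_SPEC_CTRL, val) */
    }

    int main(void)
    {
            kernel_ibrs = true;
            write_spec_ctrl_sketch(0x2, false);  /* context switch: deferred */
            write_spec_ctrl_sketch(0x3, true);   /* boot/forced: written now */
            printf("immediate MSR writes: %lu\n", msr_writes);   /* prints 1 */
            return 0;
    }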
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 2 +- arch/x86/kernel/cpu/bugs.c | 18 ++++++++++++------ arch/x86/kernel/process.c | 2 +- 3 files changed, 14 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index b4403c1f840c..dc465d290837 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -254,7 +254,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; -extern void write_spec_ctrl_current(u64 val); +extern void write_spec_ctrl_current(u64 val, bool force); /* * With retpoline, we must use IBRS to restrict branch prediction diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 35642e47e005..8572cac3ca98 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -62,13 +62,19 @@ static DEFINE_MUTEX(spec_ctrl_mutex); * Keep track of the SPEC_CTRL MSR value for the current task, which may differ * from x86_spec_ctrl_base due to STIBP/SSB in __speculation_ctrl_update(). */ -void write_spec_ctrl_current(u64 val) +void write_spec_ctrl_current(u64 val, bool force) { if (this_cpu_read(x86_spec_ctrl_current) == val) return; this_cpu_write(x86_spec_ctrl_current, val); - wrmsrl(MSR_IA32_SPEC_CTRL, val); + + /* + * When KERNEL_IBRS this MSR is written on return-to-user, unless + * forced the update can be delayed until that time. + */ + if (force || !cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS)) + wrmsrl(MSR_IA32_SPEC_CTRL, val); } /* @@ -1253,7 +1259,7 @@ static void __init spectre_v2_select_mitigation(void) if (spectre_v2_in_eibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } switch (mode) { @@ -1308,7 +1314,7 @@ static void __init spectre_v2_select_mitigation(void) static void update_stibp_msr(void * __unused) { - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } /* Update x86_spec_ctrl_base in case SMT state changed. 
*/ @@ -1551,7 +1557,7 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void) x86_amd_ssb_disable(); } else { x86_spec_ctrl_base |= SPEC_CTRL_SSBD; - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); } } @@ -1769,7 +1775,7 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which) void x86_spec_ctrl_setup_ap(void) { if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) - write_spec_ctrl_current(x86_spec_ctrl_base); + write_spec_ctrl_current(x86_spec_ctrl_base, true); if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) x86_amd_ssb_disable(); diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index 510ef22b8ef7..a2823682d64e 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -556,7 +556,7 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp, } if (updmsr) - write_spec_ctrl_current(msr); + write_spec_ctrl_current(msr, false); } static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk) From dabc2a1b406ae0ff5286c91f7519b3e20ec2aa63 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Tue, 14 Jun 2022 23:15:55 +0200 Subject: [PATCH 1531/1842] x86/speculation: Add spectre_v2=ibrs option to support Kernel IBRS commit 7c693f54c873691a4b7da05c7e0f74e67745d144 upstream. Extend spectre_v2= boot option with Kernel IBRS. [jpoimboe: no STIBP with IBRS] Signed-off-by: Pawan Gupta Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- .../admin-guide/kernel-parameters.txt | 1 + arch/x86/include/asm/nospec-branch.h | 1 + arch/x86/kernel/cpu/bugs.c | 66 +++++++++++++++---- 3 files changed, 54 insertions(+), 14 deletions(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 15ca7a41f585..2e3b495c13ce 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -5026,6 +5026,7 @@ eibrs - enhanced IBRS eibrs,retpoline - enhanced IBRS + Retpolines eibrs,lfence - enhanced IBRS + LFENCE + ibrs - use IBRS to protect kernel Not specifying this option is equivalent to spectre_v2=auto. 
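Before the code hunks that follow, a compact sketch of the command-line sanity checks the new option gets (a hypothetical helper of mine; the real checks live in spectre_v2_parse_cmdline()): spectre_v2=ibrs is only honoured on Intel CPUs that have the IBRS feature and are not running as a Xen PV guest, otherwise the selection falls back to auto with an error message.

    #include <stdbool.h>

    enum v2_cmd { V2_CMD_AUTO, V2_CMD_IBRS };

    /* Hypothetical condensation of the three pr_err()-and-fallback checks. */
    static enum v2_cmd validate_ibrs_cmd(enum v2_cmd cmd, bool is_intel,
                                         bool has_ibrs, bool xen_pv)
    {
            if (cmd != V2_CMD_IBRS)
                    return cmd;
            if (!is_intel || !has_ibrs || xen_pv)
                    return V2_CMD_AUTO;    /* real code also logs the reason */
            return V2_CMD_IBRS;
    }

    int main(void)
    {
            return validate_ibrs_cmd(V2_CMD_IBRS, true, true, false) == V2_CMD_IBRS ? 0 : 1;
    }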
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index dc465d290837..bed273363c35 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -212,6 +212,7 @@ enum spectre_v2_mitigation { SPECTRE_V2_EIBRS, SPECTRE_V2_EIBRS_RETPOLINE, SPECTRE_V2_EIBRS_LFENCE, + SPECTRE_V2_IBRS, }; /* The indirect branch speculation control variants */ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 8572cac3ca98..483c5d5afc9f 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -928,6 +928,7 @@ enum spectre_v2_mitigation_cmd { SPECTRE_V2_CMD_EIBRS, SPECTRE_V2_CMD_EIBRS_RETPOLINE, SPECTRE_V2_CMD_EIBRS_LFENCE, + SPECTRE_V2_CMD_IBRS, }; enum spectre_v2_user_cmd { @@ -1000,11 +1001,12 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd) return SPECTRE_V2_USER_CMD_AUTO; } -static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode) +static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode) { - return (mode == SPECTRE_V2_EIBRS || - mode == SPECTRE_V2_EIBRS_RETPOLINE || - mode == SPECTRE_V2_EIBRS_LFENCE); + return mode == SPECTRE_V2_IBRS || + mode == SPECTRE_V2_EIBRS || + mode == SPECTRE_V2_EIBRS_RETPOLINE || + mode == SPECTRE_V2_EIBRS_LFENCE; } static void __init @@ -1069,12 +1071,12 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) } /* - * If no STIBP, enhanced IBRS is enabled or SMT impossible, STIBP is not - * required. + * If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible, + * STIBP is not required. */ if (!boot_cpu_has(X86_FEATURE_STIBP) || !smt_possible || - spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + spectre_v2_in_ibrs_mode(spectre_v2_enabled)) return; /* @@ -1106,6 +1108,7 @@ static const char * const spectre_v2_strings[] = { [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS", [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE", [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines", + [SPECTRE_V2_IBRS] = "Mitigation: IBRS", }; static const struct { @@ -1123,6 +1126,7 @@ static const struct { { "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false }, { "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false }, { "auto", SPECTRE_V2_CMD_AUTO, false }, + { "ibrs", SPECTRE_V2_CMD_IBRS, false }, }; static void __init spec_v2_print_cond(const char *reason, bool secure) @@ -1185,6 +1189,24 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void) return SPECTRE_V2_CMD_AUTO; } + if (cmd == SPECTRE_V2_CMD_IBRS && boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) { + pr_err("%s selected but not Intel CPU. Switching to AUTO select\n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + + if (cmd == SPECTRE_V2_CMD_IBRS && !boot_cpu_has(X86_FEATURE_IBRS)) { + pr_err("%s selected but CPU doesn't have IBRS. Switching to AUTO select\n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + + if (cmd == SPECTRE_V2_CMD_IBRS && boot_cpu_has(X86_FEATURE_XENPV)) { + pr_err("%s selected but running as XenPV guest. 
Switching to AUTO select\n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + spec_v2_print_cond(mitigation_options[i].option, mitigation_options[i].secure); return cmd; @@ -1224,6 +1246,14 @@ static void __init spectre_v2_select_mitigation(void) break; } + if (boot_cpu_has_bug(X86_BUG_RETBLEED) && + retbleed_cmd != RETBLEED_CMD_OFF && + boot_cpu_has(X86_FEATURE_IBRS) && + boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) { + mode = SPECTRE_V2_IBRS; + break; + } + mode = spectre_v2_select_retpoline(); break; @@ -1240,6 +1270,10 @@ static void __init spectre_v2_select_mitigation(void) mode = spectre_v2_select_retpoline(); break; + case SPECTRE_V2_CMD_IBRS: + mode = SPECTRE_V2_IBRS; + break; + case SPECTRE_V2_CMD_EIBRS: mode = SPECTRE_V2_EIBRS; break; @@ -1256,7 +1290,7 @@ static void __init spectre_v2_select_mitigation(void) if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); - if (spectre_v2_in_eibrs_mode(mode)) { + if (spectre_v2_in_ibrs_mode(mode)) { /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; write_spec_ctrl_current(x86_spec_ctrl_base, true); @@ -1267,6 +1301,10 @@ static void __init spectre_v2_select_mitigation(void) case SPECTRE_V2_EIBRS: break; + case SPECTRE_V2_IBRS: + setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS); + break; + case SPECTRE_V2_LFENCE: case SPECTRE_V2_EIBRS_LFENCE: setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE); @@ -1293,17 +1331,17 @@ static void __init spectre_v2_select_mitigation(void) pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n"); /* - * Retpoline means the kernel is safe because it has no indirect - * branches. Enhanced IBRS protects firmware too, so, enable restricted - * speculation around firmware calls only when Enhanced IBRS isn't - * supported. + * Retpoline protects the kernel, but doesn't protect firmware. IBRS + * and Enhanced IBRS protect firmware too, so enable IBRS around + * firmware calls only when IBRS / Enhanced IBRS aren't otherwise + * enabled. * * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because * the user might select retpoline on the kernel command line and if * the CPU supports Enhanced IBRS, kernel might un-intentionally not * enable IBRS around firmware calls. */ - if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) { + if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) { setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW); pr_info("Enabling Restricted Speculation for firmware calls\n"); } @@ -2012,7 +2050,7 @@ static ssize_t mmio_stale_data_show_state(char *buf) static char *stibp_state(void) { - if (spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + if (spectre_v2_in_ibrs_mode(spectre_v2_enabled)) return ""; switch (spectre_v2_user_stibp) { From a0f8ef71d762501769df69e35c4c4e7496866d90 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:56 +0200 Subject: [PATCH 1532/1842] x86/bugs: Split spectre_v2_select_mitigation() and spectre_v2_user_select_mitigation() commit 166115c08a9b0b846b783088808a27d739be6e8d upstream. retbleed will depend on spectre_v2, while spectre_v2_user depends on retbleed. Break this cycle. 
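The resulting call order can be pictured with a small stand-alone sketch (function names shortened by me; the bodies are only comments): spectre_v2 is decided first, retbleed may then look at it, and the user/STIBP decision runs last, reading both, with the parsed spectre_v2 command stashed in a file-scope variable instead of being passed as an argument.

    /* Ordering sketch only -- not the kernel source. */
    static int spectre_v2_cmd_sketch;                    /* stands in for spectre_v2_cmd */

    static void select_spectre_v2(void)      { spectre_v2_cmd_sketch = 1; }
    static void select_retbleed(void)        { /* may consult spectre_v2 state */ }
    static void select_spectre_v2_user(void) { /* reads retbleed state and the stashed cmd */ }

    static void check_bugs_order(void)
    {
            select_spectre_v2();
            select_retbleed();
            select_spectre_v2_user();
    }

    int main(void)
    {
            check_bugs_order();
            return spectre_v2_cmd_sketch ? 0 : 1;
    }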
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 25 +++++++++++++++++-------- 1 file changed, 17 insertions(+), 8 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 483c5d5afc9f..58f48f93e8cc 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -37,8 +37,9 @@ #include "cpu.h" static void __init spectre_v1_select_mitigation(void); -static void __init retbleed_select_mitigation(void); static void __init spectre_v2_select_mitigation(void); +static void __init retbleed_select_mitigation(void); +static void __init spectre_v2_user_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); static void __init mds_select_mitigation(void); @@ -137,13 +138,19 @@ void __init check_bugs(void) /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); + spectre_v2_select_mitigation(); + /* + * retbleed_select_mitigation() relies on the state set by + * spectre_v2_select_mitigation(); specifically it wants to know about + * spectre_v2=ibrs. + */ retbleed_select_mitigation(); /* - * spectre_v2_select_mitigation() relies on the state set by + * spectre_v2_user_select_mitigation() relies on the state set by * retbleed_select_mitigation(); specifically the STIBP selection is * forced for UNRET. */ - spectre_v2_select_mitigation(); + spectre_v2_user_select_mitigation(); ssb_select_mitigation(); l1tf_select_mitigation(); md_clear_select_mitigation(); @@ -969,13 +976,15 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure) pr_info("spectre_v2_user=%s forced on command line.\n", reason); } +static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd; + static enum spectre_v2_user_cmd __init -spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd) +spectre_v2_parse_user_cmdline(void) { char arg[20]; int ret, i; - switch (v2_cmd) { + switch (spectre_v2_cmd) { case SPECTRE_V2_CMD_NONE: return SPECTRE_V2_USER_CMD_NONE; case SPECTRE_V2_CMD_FORCE: @@ -1010,7 +1019,7 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode) } static void __init -spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) +spectre_v2_user_select_mitigation(void) { enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE; bool smt_possible = IS_ENABLED(CONFIG_SMP); @@ -1023,7 +1032,7 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) cpu_smt_control == CPU_SMT_NOT_SUPPORTED) smt_possible = false; - cmd = spectre_v2_parse_user_cmdline(v2_cmd); + cmd = spectre_v2_parse_user_cmdline(); switch (cmd) { case SPECTRE_V2_USER_CMD_NONE: goto set_mode; @@ -1347,7 +1356,7 @@ static void __init spectre_v2_select_mitigation(void) } /* Set up IBPB and STIBP depending on the general spectre V2 command */ - spectre_v2_user_select_mitigation(cmd); + spectre_v2_cmd = cmd; } static void update_stibp_msr(void * __unused) From e8142e2d6cb6b39fdd78bc17199429f79bcd051c Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 24 Jun 2022 13:48:58 +0200 Subject: [PATCH 1533/1842] x86/bugs: Report Intel retbleed vulnerability commit 6ad0ad2bf8a67e27d1f9d006a1dabb0e1c360cc3 upstream. 
Skylake suffers from RSB underflow speculation issues; report this vulnerability and it's mitigation (spectre_v2=ibrs). [jpoimboe: cleanups, eibrs] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/msr-index.h | 1 + arch/x86/kernel/cpu/bugs.c | 39 +++++++++++++++++++++++++++----- arch/x86/kernel/cpu/common.c | 24 ++++++++++---------- 3 files changed, 46 insertions(+), 18 deletions(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 96973d197972..77a55777e002 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -91,6 +91,7 @@ #define MSR_IA32_ARCH_CAPABILITIES 0x0000010a #define ARCH_CAP_RDCL_NO BIT(0) /* Not susceptible to Meltdown */ #define ARCH_CAP_IBRS_ALL BIT(1) /* Enhanced IBRS support */ +#define ARCH_CAP_RSBA BIT(2) /* RET may use alternative branch predictors */ #define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH BIT(3) /* Skip L1D flush on vmentry */ #define ARCH_CAP_SSB_NO BIT(4) /* * Not susceptible to Speculative Store Bypass diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 58f48f93e8cc..a2423d4ed293 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -746,12 +746,17 @@ static int __init nospectre_v1_cmdline(char *str) } early_param("nospectre_v1", nospectre_v1_cmdline); +static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = + SPECTRE_V2_NONE; + #undef pr_fmt #define pr_fmt(fmt) "RETBleed: " fmt enum retbleed_mitigation { RETBLEED_MITIGATION_NONE, RETBLEED_MITIGATION_UNRET, + RETBLEED_MITIGATION_IBRS, + RETBLEED_MITIGATION_EIBRS, }; enum retbleed_mitigation_cmd { @@ -763,6 +768,8 @@ enum retbleed_mitigation_cmd { const char * const retbleed_strings[] = { [RETBLEED_MITIGATION_NONE] = "Vulnerable", [RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk", + [RETBLEED_MITIGATION_IBRS] = "Mitigation: IBRS", + [RETBLEED_MITIGATION_EIBRS] = "Mitigation: Enhanced IBRS", }; static enum retbleed_mitigation retbleed_mitigation __ro_after_init = @@ -805,6 +812,7 @@ early_param("retbleed", retbleed_parse_cmdline); #define RETBLEED_UNTRAIN_MSG "WARNING: BTB untrained return thunk mitigation is only effective on AMD/Hygon!\n" #define RETBLEED_COMPILER_MSG "WARNING: kernel not compiled with RETPOLINE or -mfunction-return capable compiler!\n" +#define RETBLEED_INTEL_MSG "WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!\n" static void __init retbleed_select_mitigation(void) { @@ -821,12 +829,15 @@ static void __init retbleed_select_mitigation(void) case RETBLEED_CMD_AUTO: default: - if (!boot_cpu_has_bug(X86_BUG_RETBLEED)) - break; - if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + + /* + * The Intel mitigation (IBRS) was already selected in + * spectre_v2_select_mitigation(). + */ + break; } @@ -856,15 +867,31 @@ static void __init retbleed_select_mitigation(void) break; } + /* + * Let IBRS trump all on Intel without affecting the effects of the + * retbleed= cmdline option. 
+ */ + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) { + switch (spectre_v2_enabled) { + case SPECTRE_V2_IBRS: + retbleed_mitigation = RETBLEED_MITIGATION_IBRS; + break; + case SPECTRE_V2_EIBRS: + case SPECTRE_V2_EIBRS_RETPOLINE: + case SPECTRE_V2_EIBRS_LFENCE: + retbleed_mitigation = RETBLEED_MITIGATION_EIBRS; + break; + default: + pr_err(RETBLEED_INTEL_MSG); + } + } + pr_info("%s\n", retbleed_strings[retbleed_mitigation]); } #undef pr_fmt #define pr_fmt(fmt) "Spectre V2 : " fmt -static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = - SPECTRE_V2_NONE; - static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init = SPECTRE_V2_USER_NONE; static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_init = diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 5214c4bba33b..ab68595e217e 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1124,24 +1124,24 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(SKYLAKE_X, BIT(3) | BIT(4) | BIT(6) | - BIT(7) | BIT(0xB), MMIO), - VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO), + BIT(7) | BIT(0xB), MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO), + VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SBDS), + VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SBDS | RETBLEED), VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPINGS(0x1, 0x1), MMIO), VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO), - VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_SBDS), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO), - VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), - VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, 
X86_STEPPINGS(0x0, 0x0), MMIO | MMIO_SBDS), @@ -1251,7 +1251,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) !arch_cap_mmio_immune(ia32_cap)) setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA); - if (cpu_matches(cpu_vuln_blacklist, RETBLEED)) + if ((cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA))) setup_force_cpu_bug(X86_BUG_RETBLEED); if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) From 55bba093fd91a76971134e3a4e3576e536c08f5c Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:58 +0200 Subject: [PATCH 1534/1842] intel_idle: Disable IBRS during long idle commit bf5835bcdb9635c97f85120dba9bfa21e111130f upstream. Having IBRS enabled while the SMT sibling is idle unnecessarily slows down the running sibling. OTOH, disabling IBRS around idle takes two MSR writes, which will increase the idle latency. Therefore, only disable IBRS around deeper idle states. Shallow idle states are bounded by the tick in duration, since NOHZ is not allowed for them by virtue of their short target residency. Only do this for mwait-driven idle, since that keeps interrupts disabled across idle, which makes disabling IBRS vs IRQ-entry a non-issue. Note: C6 is a random threshold, most importantly C1 probably shouldn't disable IBRS, benchmarking needed. Suggested-by: Tim Chen Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: no CPUIDLE_FLAG_IRQ_ENABLE] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 1 + arch/x86/kernel/cpu/bugs.c | 6 ++++ drivers/idle/intel_idle.c | 43 ++++++++++++++++++++++++---- 3 files changed, 44 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index bed273363c35..dafaf73746eb 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -256,6 +256,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; extern void write_spec_ctrl_current(u64 val, bool force); +extern u64 spec_ctrl_current(void); /* * With retpoline, we must use IBRS to restrict branch prediction diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index a2423d4ed293..1d16d194f6ab 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -78,6 +78,12 @@ void write_spec_ctrl_current(u64 val, bool force) wrmsrl(MSR_IA32_SPEC_CTRL, val); } +u64 spec_ctrl_current(void) +{ + return this_cpu_read(x86_spec_ctrl_current); +} +EXPORT_SYMBOL_GPL(spec_ctrl_current); + /* * The vendor and possibly platform specific bits which can be modified in * x86_spec_ctrl_base. diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c index d79335506ecd..b92b032fb6d1 100644 --- a/drivers/idle/intel_idle.c +++ b/drivers/idle/intel_idle.c @@ -47,11 +47,13 @@ #include #include #include +#include #include #include #include #include #include +#include #include #include @@ -93,6 +95,12 @@ static unsigned int mwait_substates __initdata; */ #define CPUIDLE_FLAG_ALWAYS_ENABLE BIT(15) +/* + * Disable IBRS across idle (when KERNEL_IBRS), is exclusive vs IRQ_ENABLE + * above. 
+ */ +#define CPUIDLE_FLAG_IBRS BIT(16) + /* * MWAIT takes an 8-bit "hint" in EAX "suggesting" * the C-state (top nibble) and sub-state (bottom nibble) @@ -132,6 +140,24 @@ static __cpuidle int intel_idle(struct cpuidle_device *dev, return index; } +static __cpuidle int intel_idle_ibrs(struct cpuidle_device *dev, + struct cpuidle_driver *drv, int index) +{ + bool smt_active = sched_smt_active(); + u64 spec_ctrl = spec_ctrl_current(); + int ret; + + if (smt_active) + wrmsrl(MSR_IA32_SPEC_CTRL, 0); + + ret = intel_idle(dev, drv, index); + + if (smt_active) + wrmsrl(MSR_IA32_SPEC_CTRL, spec_ctrl); + + return ret; +} + /** * intel_idle_s2idle - Ask the processor to enter the given idle state. * @dev: cpuidle device of the target CPU. @@ -653,7 +679,7 @@ static struct cpuidle_state skl_cstates[] __initdata = { { .name = "C6", .desc = "MWAIT 0x20", - .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 85, .target_residency = 200, .enter = &intel_idle, @@ -661,7 +687,7 @@ static struct cpuidle_state skl_cstates[] __initdata = { { .name = "C7s", .desc = "MWAIT 0x33", - .flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x33) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 124, .target_residency = 800, .enter = &intel_idle, @@ -669,7 +695,7 @@ static struct cpuidle_state skl_cstates[] __initdata = { { .name = "C8", .desc = "MWAIT 0x40", - .flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 200, .target_residency = 800, .enter = &intel_idle, @@ -677,7 +703,7 @@ static struct cpuidle_state skl_cstates[] __initdata = { { .name = "C9", .desc = "MWAIT 0x50", - .flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 480, .target_residency = 5000, .enter = &intel_idle, @@ -685,7 +711,7 @@ static struct cpuidle_state skl_cstates[] __initdata = { { .name = "C10", .desc = "MWAIT 0x60", - .flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 890, .target_residency = 5000, .enter = &intel_idle, @@ -714,7 +740,7 @@ static struct cpuidle_state skx_cstates[] __initdata = { { .name = "C6", .desc = "MWAIT 0x20", - .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, + .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED | CPUIDLE_FLAG_IBRS, .exit_latency = 133, .target_residency = 600, .enter = &intel_idle, @@ -1501,6 +1527,11 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv) /* Structure copy. */ drv->states[drv->state_count] = cpuidle_state_table[cstate]; + if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) && + cpuidle_state_table[cstate].flags & CPUIDLE_FLAG_IBRS) { + drv->states[drv->state_count].enter = intel_idle_ibrs; + } + if ((disabled_states_mask & BIT(drv->state_count)) || ((icpu->use_acpi || force_use_acpi) && intel_idle_off_by_default(mwait_hint) && From 28aa3fa0b2c9d0cd7bdac42d9eb7fe3d5f6c79e8 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:15:59 +0200 Subject: [PATCH 1535/1842] objtool: Update Retpoline validation commit 9bb2ec608a209018080ca262f771e6a9ff203b6f upstream. Update retpoline validation with the new CONFIG_RETPOLINE requirement of not having bare naked RET instructions. 
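To see the shape of the widened check, here is a toy model (my own minimal types; objtool's real instruction and section structures are richer): returns are now treated like indirect jumps and calls, so any RET that is not annotated retpoline-safe (via ANNOTATE_UNRET_SAFE) gets reported in a RETPOLINE build.

    #include <stdbool.h>
    #include <stdio.h>

    enum insn_type { INSN_OTHER, INSN_JUMP_DYNAMIC, INSN_CALL_DYNAMIC, INSN_RETURN };

    struct insn { enum insn_type type; bool retpoline_safe; };

    static int validate_retpoline_sketch(const struct insn *insns, int n)
    {
            int warnings = 0;

            for (int i = 0; i < n; i++) {
                    if (insns[i].type != INSN_JUMP_DYNAMIC &&
                        insns[i].type != INSN_CALL_DYNAMIC &&
                        insns[i].type != INSN_RETURN)
                            continue;
                    if (insns[i].retpoline_safe)
                            continue;
                    if (insns[i].type == INSN_RETURN)
                            printf("'naked' return found in RETPOLINE build\n");
                    else
                            printf("indirect jump/call found in RETPOLINE build\n");
                    warnings++;
            }
            return warnings;
    }

    int main(void)
    {
            struct insn code[] = { { INSN_RETURN, false }, { INSN_RETURN, true } };
            return validate_retpoline_sketch(code, 2);  /* warns exactly once */
    }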
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: conflict fixup at arch/x86/xen/xen-head.S] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 6 ++++++ arch/x86/mm/mem_encrypt_boot.S | 2 ++ arch/x86/xen/xen-head.S | 1 + tools/objtool/check.c | 19 +++++++++++++------ 4 files changed, 22 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index dafaf73746eb..981e6147ca38 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -75,6 +75,12 @@ .popsection .endm +/* + * (ab)use RETPOLINE_SAFE on RET to annotate away 'bare' RET instructions + * vs RETBleed validation. + */ +#define ANNOTATE_UNRET_SAFE ANNOTATE_RETPOLINE_SAFE + /* * JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple * indirect jmp/call which may be susceptible to the Spectre variant 2 diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S index a186007a50d3..145b67299ab6 100644 --- a/arch/x86/mm/mem_encrypt_boot.S +++ b/arch/x86/mm/mem_encrypt_boot.S @@ -66,6 +66,7 @@ SYM_FUNC_START(sme_encrypt_execute) pop %rbp /* Offset to __x86_return_thunk would be wrong here */ + ANNOTATE_UNRET_SAFE ret int3 SYM_FUNC_END(sme_encrypt_execute) @@ -154,6 +155,7 @@ SYM_FUNC_START(__enc_copy) pop %r15 /* Offset to __x86_return_thunk would be wrong here */ + ANNOTATE_UNRET_SAFE ret int3 .L__enc_copy_end: diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S index 565062932ef1..38b73e7e54ba 100644 --- a/arch/x86/xen/xen-head.S +++ b/arch/x86/xen/xen-head.S @@ -70,6 +70,7 @@ SYM_CODE_START(hypercall_page) .rept (PAGE_SIZE / 32) UNWIND_HINT_FUNC .skip 31, 0x90 + ANNOTATE_UNRET_SAFE RET .endr diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 1eebfa422153..eac6b89660b1 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1799,8 +1799,9 @@ static int read_retpoline_hints(struct objtool_file *file) } if (insn->type != INSN_JUMP_DYNAMIC && - insn->type != INSN_CALL_DYNAMIC) { - WARN_FUNC("retpoline_safe hint not an indirect jump/call", + insn->type != INSN_CALL_DYNAMIC && + insn->type != INSN_RETURN) { + WARN_FUNC("retpoline_safe hint not an indirect jump/call/ret", insn->sec, insn->offset); return -1; } @@ -3051,7 +3052,8 @@ static int validate_retpoline(struct objtool_file *file) for_each_insn(file, insn) { if (insn->type != INSN_JUMP_DYNAMIC && - insn->type != INSN_CALL_DYNAMIC) + insn->type != INSN_CALL_DYNAMIC && + insn->type != INSN_RETURN) continue; if (insn->retpoline_safe) @@ -3066,9 +3068,14 @@ static int validate_retpoline(struct objtool_file *file) if (!strcmp(insn->sec->name, ".init.text") && !module) continue; - WARN_FUNC("indirect %s found in RETPOLINE build", - insn->sec, insn->offset, - insn->type == INSN_JUMP_DYNAMIC ? "jump" : "call"); + if (insn->type == INSN_RETURN) { + WARN_FUNC("'naked' return found in RETPOLINE build", + insn->sec, insn->offset); + } else { + WARN_FUNC("indirect %s found in RETPOLINE build", + insn->sec, insn->offset, + insn->type == INSN_JUMP_DYNAMIC ? "jump" : "call"); + } warnings++; } From f728eff26339d85825e588d461f0e55267bc6c3f Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:16:00 +0200 Subject: [PATCH 1536/1842] x86/xen: Rename SYS* entry points commit b75b7f8ef1148be1b9321ffc2f6c19238904b438 upstream. 
Native SYS{CALL,ENTER} entry points are called entry_SYS{CALL,ENTER}_{64,compat}, make sure the Xen versions are named consistently. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/xen/setup.c | 6 +++--- arch/x86/xen/xen-asm.S | 20 ++++++++++---------- arch/x86/xen/xen-ops.h | 6 +++--- 3 files changed, 16 insertions(+), 16 deletions(-) diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c index 8bfc10330107..1f80dd3a2dd4 100644 --- a/arch/x86/xen/setup.c +++ b/arch/x86/xen/setup.c @@ -922,7 +922,7 @@ void xen_enable_sysenter(void) if (!boot_cpu_has(sysenter_feature)) return; - ret = register_callback(CALLBACKTYPE_sysenter, xen_sysenter_target); + ret = register_callback(CALLBACKTYPE_sysenter, xen_entry_SYSENTER_compat); if(ret != 0) setup_clear_cpu_cap(sysenter_feature); } @@ -931,7 +931,7 @@ void xen_enable_syscall(void) { int ret; - ret = register_callback(CALLBACKTYPE_syscall, xen_syscall_target); + ret = register_callback(CALLBACKTYPE_syscall, xen_entry_SYSCALL_64); if (ret != 0) { printk(KERN_ERR "Failed to set syscall callback: %d\n", ret); /* Pretty fatal; 64-bit userspace has no other @@ -940,7 +940,7 @@ void xen_enable_syscall(void) if (boot_cpu_has(X86_FEATURE_SYSCALL32)) { ret = register_callback(CALLBACKTYPE_syscall32, - xen_syscall32_target); + xen_entry_SYSCALL_compat); if (ret != 0) setup_clear_cpu_cap(X86_FEATURE_SYSCALL32); } diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S index e8854f3b9a9f..70d3b21008e5 100644 --- a/arch/x86/xen/xen-asm.S +++ b/arch/x86/xen/xen-asm.S @@ -276,7 +276,7 @@ SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode) */ /* Normal 64-bit system call target */ -SYM_CODE_START(xen_syscall_target) +SYM_CODE_START(xen_entry_SYSCALL_64) UNWIND_HINT_EMPTY popq %rcx popq %r11 @@ -290,12 +290,12 @@ SYM_CODE_START(xen_syscall_target) movq $__USER_CS, 1*8(%rsp) jmp entry_SYSCALL_64_after_hwframe -SYM_CODE_END(xen_syscall_target) +SYM_CODE_END(xen_entry_SYSCALL_64) #ifdef CONFIG_IA32_EMULATION /* 32-bit compat syscall target */ -SYM_CODE_START(xen_syscall32_target) +SYM_CODE_START(xen_entry_SYSCALL_compat) UNWIND_HINT_EMPTY popq %rcx popq %r11 @@ -309,10 +309,10 @@ SYM_CODE_START(xen_syscall32_target) movq $__USER32_CS, 1*8(%rsp) jmp entry_SYSCALL_compat_after_hwframe -SYM_CODE_END(xen_syscall32_target) +SYM_CODE_END(xen_entry_SYSCALL_compat) /* 32-bit compat sysenter target */ -SYM_CODE_START(xen_sysenter_target) +SYM_CODE_START(xen_entry_SYSENTER_compat) UNWIND_HINT_EMPTY /* * NB: Xen is polite and clears TF from EFLAGS for us. 
This means @@ -330,18 +330,18 @@ SYM_CODE_START(xen_sysenter_target) movq $__USER32_CS, 1*8(%rsp) jmp entry_SYSENTER_compat_after_hwframe -SYM_CODE_END(xen_sysenter_target) +SYM_CODE_END(xen_entry_SYSENTER_compat) #else /* !CONFIG_IA32_EMULATION */ -SYM_CODE_START(xen_syscall32_target) -SYM_CODE_START(xen_sysenter_target) +SYM_CODE_START(xen_entry_SYSCALL_compat) +SYM_CODE_START(xen_entry_SYSENTER_compat) UNWIND_HINT_EMPTY lea 16(%rsp), %rsp /* strip %rcx, %r11 */ mov $-ENOSYS, %rax pushq $0 jmp hypercall_iret -SYM_CODE_END(xen_sysenter_target) -SYM_CODE_END(xen_syscall32_target) +SYM_CODE_END(xen_entry_SYSENTER_compat) +SYM_CODE_END(xen_entry_SYSCALL_compat) #endif /* CONFIG_IA32_EMULATION */ diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h index 9546c3384c75..8695809b88f0 100644 --- a/arch/x86/xen/xen-ops.h +++ b/arch/x86/xen/xen-ops.h @@ -10,10 +10,10 @@ /* These are code, but not functions. Defined in entry.S */ extern const char xen_failsafe_callback[]; -void xen_sysenter_target(void); +void xen_entry_SYSENTER_compat(void); #ifdef CONFIG_X86_64 -void xen_syscall_target(void); -void xen_syscall32_target(void); +void xen_entry_SYSCALL_64(void); +void xen_entry_SYSCALL_compat(void); #endif extern void *xen_initial_gdt; From c8845b875437b8ea9cd023f15b44c436c9c5b62d Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:16:02 +0200 Subject: [PATCH 1537/1842] x86/bugs: Add retbleed=ibpb commit 3ebc170068885b6fc7bedda6c667bb2c4d533159 upstream. jmp2ret mitigates the easy-to-attack case at relatively low overhead. It mitigates the long speculation windows after a mispredicted RET, but it does not mitigate the short speculation window from arbitrary instruction boundaries. On Zen2, there is a chicken bit which needs setting, which mitigates "arbitrary instruction boundaries" down to just "basic block boundaries". But there is no fix for the short speculation window on basic block boundaries, other than to flush the entire BTB to evict all attacker predictions. On the spectrum of "fast & blurry" -> "safe", there is (on top of STIBP or no-SMT): 1) Nothing System wide open 2) jmp2ret May stop a script kiddy 3) jmp2ret+chickenbit Raises the bar rather further 4) IBPB Only thing which can count as "safe". Tentative numbers put IBPB-on-entry at a 2.5x hit on Zen2, and a 10x hit on Zen1 according to lmbench. [ bp: Fixup feature bit comments, document option, 32-bit build fix. ] Suggested-by: Andrew Cooper Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- .../admin-guide/kernel-parameters.txt | 3 ++ arch/x86/entry/Makefile | 2 +- arch/x86/entry/entry.S | 22 ++++++++++ arch/x86/include/asm/cpufeatures.h | 2 +- arch/x86/include/asm/nospec-branch.h | 8 +++- arch/x86/kernel/cpu/bugs.c | 43 +++++++++++++++---- 6 files changed, 67 insertions(+), 13 deletions(-) create mode 100644 arch/x86/entry/entry.S diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 2e3b495c13ce..1a58c580b236 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -4666,6 +4666,9 @@ disabling SMT if necessary for the full mitigation (only on Zen1 and older without STIBP). + ibpb - mitigate short speculation windows on + basic block boundaries too. 
Safe, highest + perf impact. unret - force enable untrained return thunks, only effective on AMD f15h-f17h based systems. diff --git a/arch/x86/entry/Makefile b/arch/x86/entry/Makefile index 08bf95dbc911..58533752efab 100644 --- a/arch/x86/entry/Makefile +++ b/arch/x86/entry/Makefile @@ -21,7 +21,7 @@ CFLAGS_syscall_64.o += $(call cc-option,-Wno-override-init,) CFLAGS_syscall_32.o += $(call cc-option,-Wno-override-init,) CFLAGS_syscall_x32.o += $(call cc-option,-Wno-override-init,) -obj-y := entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o +obj-y := entry.o entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o obj-y += common.o obj-y += vdso/ diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S new file mode 100644 index 000000000000..bfb7bcb362bc --- /dev/null +++ b/arch/x86/entry/entry.S @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Common place for both 32- and 64-bit entry routines. + */ + +#include +#include +#include + +.pushsection .noinstr.text, "ax" + +SYM_FUNC_START(entry_ibpb) + movl $MSR_IA32_PRED_CMD, %ecx + movl $PRED_CMD_IBPB, %eax + xorl %edx, %edx + wrmsr + RET +SYM_FUNC_END(entry_ibpb) +/* For KVM */ +EXPORT_SYMBOL_GPL(entry_ibpb); + +.popsection diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 72a7a58f0097..eafc8c7cf358 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -292,7 +292,7 @@ #define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */ /* FREE! (11*32+ 8) */ /* FREE! (11*32+ 9) */ -/* FREE! (11*32+10) */ +#define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */ /* FREE! (11*32+11) */ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 981e6147ca38..19c02a2d632e 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -123,14 +123,17 @@ * return thunk isn't mapped into the userspace tables (then again, AMD * typically has NO_MELTDOWN). * - * Doesn't clobber any registers but does require a stable stack. + * While zen_untrain_ret() doesn't clobber anything but requires stack, + * entry_ibpb() will clobber AX, CX, DX. * * As such, this must be placed after every *SWITCH_TO_KERNEL_CR3 at a point * where we have a stack but before any RET instruction. 
*/ .macro UNTRAIN_RET #ifdef CONFIG_RETPOLINE - ALTERNATIVE "", "call zen_untrain_ret", X86_FEATURE_UNRET + ALTERNATIVE_2 "", \ + "call zen_untrain_ret", X86_FEATURE_UNRET, \ + "call entry_ibpb", X86_FEATURE_ENTRY_IBPB #endif .endm @@ -144,6 +147,7 @@ extern void __x86_return_thunk(void); extern void zen_untrain_ret(void); +extern void entry_ibpb(void); #ifdef CONFIG_RETPOLINE diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 1d16d194f6ab..0e25b0a66621 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -761,6 +761,7 @@ static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = enum retbleed_mitigation { RETBLEED_MITIGATION_NONE, RETBLEED_MITIGATION_UNRET, + RETBLEED_MITIGATION_IBPB, RETBLEED_MITIGATION_IBRS, RETBLEED_MITIGATION_EIBRS, }; @@ -769,11 +770,13 @@ enum retbleed_mitigation_cmd { RETBLEED_CMD_OFF, RETBLEED_CMD_AUTO, RETBLEED_CMD_UNRET, + RETBLEED_CMD_IBPB, }; const char * const retbleed_strings[] = { [RETBLEED_MITIGATION_NONE] = "Vulnerable", [RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk", + [RETBLEED_MITIGATION_IBPB] = "Mitigation: IBPB", [RETBLEED_MITIGATION_IBRS] = "Mitigation: IBRS", [RETBLEED_MITIGATION_EIBRS] = "Mitigation: Enhanced IBRS", }; @@ -803,6 +806,8 @@ static int __init retbleed_parse_cmdline(char *str) retbleed_cmd = RETBLEED_CMD_AUTO; } else if (!strcmp(str, "unret")) { retbleed_cmd = RETBLEED_CMD_UNRET; + } else if (!strcmp(str, "ibpb")) { + retbleed_cmd = RETBLEED_CMD_IBPB; } else if (!strcmp(str, "nosmt")) { retbleed_nosmt = true; } else { @@ -817,11 +822,13 @@ static int __init retbleed_parse_cmdline(char *str) early_param("retbleed", retbleed_parse_cmdline); #define RETBLEED_UNTRAIN_MSG "WARNING: BTB untrained return thunk mitigation is only effective on AMD/Hygon!\n" -#define RETBLEED_COMPILER_MSG "WARNING: kernel not compiled with RETPOLINE or -mfunction-return capable compiler!\n" +#define RETBLEED_COMPILER_MSG "WARNING: kernel not compiled with RETPOLINE or -mfunction-return capable compiler; falling back to IBPB!\n" #define RETBLEED_INTEL_MSG "WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!\n" static void __init retbleed_select_mitigation(void) { + bool mitigate_smt = false; + if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) return; @@ -833,11 +840,21 @@ static void __init retbleed_select_mitigation(void) retbleed_mitigation = RETBLEED_MITIGATION_UNRET; break; + case RETBLEED_CMD_IBPB: + retbleed_mitigation = RETBLEED_MITIGATION_IBPB; + break; + case RETBLEED_CMD_AUTO: default: if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || - boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) - retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) { + + if (IS_ENABLED(CONFIG_RETPOLINE) && + IS_ENABLED(CONFIG_CC_HAS_RETURN_THUNK)) + retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + else + retbleed_mitigation = RETBLEED_MITIGATION_IBPB; + } /* * The Intel mitigation (IBRS) was already selected in @@ -853,26 +870,34 @@ static void __init retbleed_select_mitigation(void) if (!IS_ENABLED(CONFIG_RETPOLINE) || !IS_ENABLED(CONFIG_CC_HAS_RETURN_THUNK)) { pr_err(RETBLEED_COMPILER_MSG); - retbleed_mitigation = RETBLEED_MITIGATION_NONE; - break; + retbleed_mitigation = RETBLEED_MITIGATION_IBPB; + goto retbleed_force_ibpb; } setup_force_cpu_cap(X86_FEATURE_RETHUNK); setup_force_cpu_cap(X86_FEATURE_UNRET); - if (!boot_cpu_has(X86_FEATURE_STIBP) && - (retbleed_nosmt || cpu_mitigations_auto_nosmt())) 
- cpu_smt_disable(false); - if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD && boot_cpu_data.x86_vendor != X86_VENDOR_HYGON) pr_err(RETBLEED_UNTRAIN_MSG); + + mitigate_smt = true; + break; + + case RETBLEED_MITIGATION_IBPB: +retbleed_force_ibpb: + setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB); + mitigate_smt = true; break; default: break; } + if (mitigate_smt && !boot_cpu_has(X86_FEATURE_STIBP) && + (retbleed_nosmt || cpu_mitigations_auto_nosmt())) + cpu_smt_disable(false); + /* * Let IBRS trump all on Intel without affecting the effects of the * retbleed= cmdline option. From fbab1c94eb1a3139d7ac0620dc6d7d6a33f3b255 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 15:07:19 -0700 Subject: [PATCH 1538/1842] x86/bugs: Do IBPB fallback check only once commit 0fe4aeea9c01baabecc8c3afc7889c809d939bc2 upstream. When booting with retbleed=auto, if the kernel wasn't built with CONFIG_CC_HAS_RETURN_THUNK, the mitigation falls back to IBPB. Make sure a warning is printed in that case. The IBPB fallback check is done twice, but it really only needs to be done once. Signed-off-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 15 +++++---------- 1 file changed, 5 insertions(+), 10 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 0e25b0a66621..8205834dd11e 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -847,18 +847,13 @@ static void __init retbleed_select_mitigation(void) case RETBLEED_CMD_AUTO: default: if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || - boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) { - - if (IS_ENABLED(CONFIG_RETPOLINE) && - IS_ENABLED(CONFIG_CC_HAS_RETURN_THUNK)) - retbleed_mitigation = RETBLEED_MITIGATION_UNRET; - else - retbleed_mitigation = RETBLEED_MITIGATION_IBPB; - } + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) + retbleed_mitigation = RETBLEED_MITIGATION_UNRET; /* - * The Intel mitigation (IBRS) was already selected in - * spectre_v2_select_mitigation(). + * The Intel mitigation (IBRS or eIBRS) was already selected in + * spectre_v2_select_mitigation(). 'retbleed_mitigation' will + * be set accordingly below. */ break; From 0d1a8a16e62c8048f2ff7f9c6f448bf595d2a2a8 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:16:03 +0200 Subject: [PATCH 1539/1842] objtool: Add entry UNRET validation commit a09a6e2399ba0595c3042b3164f3ca68a3cff33e upstream. Since entry asm is tricky, add a validation pass that ensures the retbleed mitigation has been done before the first actual RET instruction. Entry points are those that either have UNWIND_HINT_ENTRY, which acts as UNWIND_HINT_EMPTY but marks the instruction as an entry point, or those that have UWIND_HINT_IRET_REGS at +0. This is basically a variant of validate_branch() that is intra-function and it will simply follow all branches from marked entry points and ensures that all paths lead to ANNOTATE_UNRET_END. If a path hits RET or an indirection the path is a fail and will be reported. 
There are 3 ANNOTATE_UNRET_END instances: - UNTRAIN_RET itself - exception from-kernel; this path doesn't need UNTRAIN_RET - all early exceptions; these also don't need UNTRAIN_RET Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov [cascardo: arch/x86/entry/entry_64.S no pt_regs return at .Lerror_entry_done_lfence] [cascardo: tools/objtool/builtin-check.c no link option validation] [cascardo: tools/objtool/check.c opts.ibt is ibt] [cascardo: tools/objtool/include/objtool/builtin.h leave unret option as bool, no struct opts] [cascardo: objtool is still called from scripts/link-vmlinux.sh] [cascardo: no IBT support] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: - In scripts/link-vmlinux.sh, use "test -n" instead of is_enabled - Adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/entry_64.S | 3 +- arch/x86/entry/entry_64_compat.S | 6 +- arch/x86/include/asm/nospec-branch.h | 12 ++ arch/x86/include/asm/unwind_hints.h | 4 + arch/x86/kernel/head_64.S | 5 + arch/x86/xen/xen-asm.S | 10 +- include/linux/objtool.h | 3 + scripts/link-vmlinux.sh | 3 + tools/include/linux/objtool.h | 3 + tools/objtool/builtin-check.c | 3 +- tools/objtool/builtin.h | 2 +- tools/objtool/check.c | 172 ++++++++++++++++++++++++++- tools/objtool/check.h | 6 + 13 files changed, 217 insertions(+), 15 deletions(-) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index f32bb2f943d6..61e65e3ebd53 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -93,7 +93,7 @@ SYM_CODE_END(native_usergs_sysret64) */ SYM_CODE_START(entry_SYSCALL_64) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY swapgs /* tss.sp2 is scratch space. */ @@ -1094,6 +1094,7 @@ SYM_CODE_START_LOCAL(error_entry) */ .Lerror_entry_done_lfence: FENCE_SWAPGS_KERNEL_ENTRY + ANNOTATE_UNRET_END RET .Lbstep_iret: diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S index e2f25403ec30..4d637a965efb 100644 --- a/arch/x86/entry/entry_64_compat.S +++ b/arch/x86/entry/entry_64_compat.S @@ -49,7 +49,7 @@ * 0(%ebp) arg6 */ SYM_CODE_START(entry_SYSENTER_compat) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY /* Interrupts are off on entry. */ SWAPGS @@ -202,7 +202,7 @@ SYM_CODE_END(entry_SYSENTER_compat) * 0(%esp) arg6 */ SYM_CODE_START(entry_SYSCALL_compat) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY /* Interrupts are off on entry. */ swapgs @@ -349,7 +349,7 @@ SYM_CODE_END(entry_SYSCALL_compat) * ebp arg6 */ SYM_CODE_START(entry_INT80_compat) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY /* * Interrupts are off on entry. */ diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 19c02a2d632e..07c5642ab6d7 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -81,6 +81,17 @@ */ #define ANNOTATE_UNRET_SAFE ANNOTATE_RETPOLINE_SAFE +/* + * Abuse ANNOTATE_RETPOLINE_SAFE on a NOP to indicate UNRET_END, should + * eventually turn into it's own annotation. 
+ */ +.macro ANNOTATE_UNRET_END +#ifdef CONFIG_DEBUG_ENTRY + ANNOTATE_RETPOLINE_SAFE + nop +#endif +.endm + /* * JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple * indirect jmp/call which may be susceptible to the Spectre variant 2 @@ -131,6 +142,7 @@ */ .macro UNTRAIN_RET #ifdef CONFIG_RETPOLINE + ANNOTATE_UNRET_END ALTERNATIVE_2 "", \ "call zen_untrain_ret", X86_FEATURE_UNRET, \ "call entry_ibpb", X86_FEATURE_ENTRY_IBPB diff --git a/arch/x86/include/asm/unwind_hints.h b/arch/x86/include/asm/unwind_hints.h index 8e574c0afef8..9cd8214175b5 100644 --- a/arch/x86/include/asm/unwind_hints.h +++ b/arch/x86/include/asm/unwind_hints.h @@ -11,6 +11,10 @@ UNWIND_HINT sp_reg=ORC_REG_UNDEFINED type=UNWIND_HINT_TYPE_CALL end=1 .endm +.macro UNWIND_HINT_ENTRY + UNWIND_HINT sp_reg=ORC_REG_UNDEFINED type=UNWIND_HINT_TYPE_ENTRY end=1 +.endm + .macro UNWIND_HINT_REGS base=%rsp offset=0 indirect=0 extra=1 partial=0 .if \base == %rsp .if \indirect diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S index 3c417734790f..0424c2a6c15b 100644 --- a/arch/x86/kernel/head_64.S +++ b/arch/x86/kernel/head_64.S @@ -321,6 +321,8 @@ SYM_CODE_END(start_cpu0) SYM_CODE_START_NOALIGN(vc_boot_ghcb) UNWIND_HINT_IRET_REGS offset=8 + ANNOTATE_UNRET_END + /* Build pt_regs */ PUSH_AND_CLEAR_REGS @@ -378,6 +380,7 @@ SYM_CODE_START(early_idt_handler_array) SYM_CODE_END(early_idt_handler_array) SYM_CODE_START_LOCAL(early_idt_handler_common) + ANNOTATE_UNRET_END /* * The stack is the hardware frame, an error code or zero, and the * vector number. @@ -424,6 +427,8 @@ SYM_CODE_END(early_idt_handler_common) SYM_CODE_START_NOALIGN(vc_no_ghcb) UNWIND_HINT_IRET_REGS offset=8 + ANNOTATE_UNRET_END + /* Build pt_regs */ PUSH_AND_CLEAR_REGS diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S index 70d3b21008e5..e3031afcb103 100644 --- a/arch/x86/xen/xen-asm.S +++ b/arch/x86/xen/xen-asm.S @@ -148,7 +148,7 @@ SYM_FUNC_END(xen_read_cr2_direct); .macro xen_pv_trap name SYM_CODE_START(xen_\name) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY pop %rcx pop %r11 jmp \name @@ -277,7 +277,7 @@ SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode) /* Normal 64-bit system call target */ SYM_CODE_START(xen_entry_SYSCALL_64) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY popq %rcx popq %r11 @@ -296,7 +296,7 @@ SYM_CODE_END(xen_entry_SYSCALL_64) /* 32-bit compat syscall target */ SYM_CODE_START(xen_entry_SYSCALL_compat) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY popq %rcx popq %r11 @@ -313,7 +313,7 @@ SYM_CODE_END(xen_entry_SYSCALL_compat) /* 32-bit compat sysenter target */ SYM_CODE_START(xen_entry_SYSENTER_compat) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY /* * NB: Xen is polite and clears TF from EFLAGS for us. This means * that we don't need to guard against single step exceptions here. @@ -336,7 +336,7 @@ SYM_CODE_END(xen_entry_SYSENTER_compat) SYM_CODE_START(xen_entry_SYSCALL_compat) SYM_CODE_START(xen_entry_SYSENTER_compat) - UNWIND_HINT_EMPTY + UNWIND_HINT_ENTRY lea 16(%rsp), %rsp /* strip %rcx, %r11 */ mov $-ENOSYS, %rax pushq $0 diff --git a/include/linux/objtool.h b/include/linux/objtool.h index 8ed079c52c47..b2855f7fce53 100644 --- a/include/linux/objtool.h +++ b/include/linux/objtool.h @@ -32,11 +32,14 @@ struct unwind_hint { * * UNWIND_HINT_FUNC: Generate the unwind metadata of a callable function. * Useful for code which doesn't have an ELF function annotation. + * + * UNWIND_HINT_ENTRY: machine entry without stack, SYSCALL/SYSENTER etc. 
*/ #define UNWIND_HINT_TYPE_CALL 0 #define UNWIND_HINT_TYPE_REGS 1 #define UNWIND_HINT_TYPE_REGS_PARTIAL 2 #define UNWIND_HINT_TYPE_FUNC 3 +#define UNWIND_HINT_TYPE_ENTRY 4 #ifdef CONFIG_STACK_VALIDATION diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh index b184d94b9052..a7c72eb5a624 100755 --- a/scripts/link-vmlinux.sh +++ b/scripts/link-vmlinux.sh @@ -65,6 +65,9 @@ objtool_link() if [ -n "${CONFIG_VMLINUX_VALIDATION}" ]; then objtoolopt="check" + if [ -n "${CONFIG_RETPOLINE}" ]; then + objtoolopt="${objtoolopt} --unret" + fi if [ -z "${CONFIG_FRAME_POINTER}" ]; then objtoolopt="${objtoolopt} --no-fp" fi diff --git a/tools/include/linux/objtool.h b/tools/include/linux/objtool.h index 8ed079c52c47..b2855f7fce53 100644 --- a/tools/include/linux/objtool.h +++ b/tools/include/linux/objtool.h @@ -32,11 +32,14 @@ struct unwind_hint { * * UNWIND_HINT_FUNC: Generate the unwind metadata of a callable function. * Useful for code which doesn't have an ELF function annotation. + * + * UNWIND_HINT_ENTRY: machine entry without stack, SYSCALL/SYSENTER etc. */ #define UNWIND_HINT_TYPE_CALL 0 #define UNWIND_HINT_TYPE_REGS 1 #define UNWIND_HINT_TYPE_REGS_PARTIAL 2 #define UNWIND_HINT_TYPE_FUNC 3 +#define UNWIND_HINT_TYPE_ENTRY 4 #ifdef CONFIG_STACK_VALIDATION diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c index 758baf918d83..d04eab7e77ae 100644 --- a/tools/objtool/builtin-check.c +++ b/tools/objtool/builtin-check.c @@ -19,7 +19,7 @@ #include "objtool.h" bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, - validate_dup, vmlinux, sls; + validate_dup, vmlinux, sls, unret; static const char * const check_usage[] = { "objtool check [] file.o", @@ -30,6 +30,7 @@ const struct option check_options[] = { OPT_BOOLEAN('f', "no-fp", &no_fp, "Skip frame pointer validation"), OPT_BOOLEAN('u', "no-unreachable", &no_unreachable, "Skip 'unreachable instruction' warnings"), OPT_BOOLEAN('r', "retpoline", &retpoline, "Validate retpoline assumptions"), + OPT_BOOLEAN(0, "unret", &unret, "validate entry unret placement"), OPT_BOOLEAN('m', "module", &module, "Indicates the object will be part of a kernel module"), OPT_BOOLEAN('b', "backtrace", &backtrace, "unwind on error"), OPT_BOOLEAN('a', "uaccess", &uaccess, "enable uaccess checking"), diff --git a/tools/objtool/builtin.h b/tools/objtool/builtin.h index 33043fcb16db..b6b486e28d54 100644 --- a/tools/objtool/builtin.h +++ b/tools/objtool/builtin.h @@ -9,7 +9,7 @@ extern const struct option check_options[]; extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, - validate_dup, vmlinux, sls; + validate_dup, vmlinux, sls, unret; extern int cmd_check(int argc, const char **argv); extern int cmd_orc(int argc, const char **argv); diff --git a/tools/objtool/check.c b/tools/objtool/check.c index eac6b89660b1..ffc29b2603a1 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1752,6 +1752,19 @@ static int read_unwind_hints(struct objtool_file *file) insn->hint = true; + if (hint->type == UNWIND_HINT_TYPE_REGS_PARTIAL) { + struct symbol *sym = find_symbol_by_offset(insn->sec, insn->offset); + + if (sym && sym->bind == STB_GLOBAL) { + insn->entry = 1; + } + } + + if (hint->type == UNWIND_HINT_TYPE_ENTRY) { + hint->type = UNWIND_HINT_TYPE_CALL; + insn->entry = 1; + } + if (hint->type == UNWIND_HINT_TYPE_FUNC) { insn->cfi = &func_cfi; continue; @@ -1800,8 +1813,9 @@ static int read_retpoline_hints(struct objtool_file *file) if (insn->type != INSN_JUMP_DYNAMIC && insn->type != 
INSN_CALL_DYNAMIC && - insn->type != INSN_RETURN) { - WARN_FUNC("retpoline_safe hint not an indirect jump/call/ret", + insn->type != INSN_RETURN && + insn->type != INSN_NOP) { + WARN_FUNC("retpoline_safe hint not an indirect jump/call/ret/nop", insn->sec, insn->offset); return -1; } @@ -2818,8 +2832,8 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, return 1; } - visited = 1 << state.uaccess; - if (insn->visited) { + visited = VISITED_BRANCH << state.uaccess; + if (insn->visited & VISITED_BRANCH_MASK) { if (!insn->hint && !insn_cfi_match(insn, &state.cfi)) return 1; @@ -3045,6 +3059,145 @@ static int validate_unwind_hints(struct objtool_file *file, struct section *sec) return warnings; } +/* + * Validate rethunk entry constraint: must untrain RET before the first RET. + * + * Follow every branch (intra-function) and ensure ANNOTATE_UNRET_END comes + * before an actual RET instruction. + */ +static int validate_entry(struct objtool_file *file, struct instruction *insn) +{ + struct instruction *next, *dest; + int ret, warnings = 0; + + for (;;) { + next = next_insn_to_validate(file, insn); + + if (insn->visited & VISITED_ENTRY) + return 0; + + insn->visited |= VISITED_ENTRY; + + if (!insn->ignore_alts && !list_empty(&insn->alts)) { + struct alternative *alt; + bool skip_orig = false; + + list_for_each_entry(alt, &insn->alts, list) { + if (alt->skip_orig) + skip_orig = true; + + ret = validate_entry(file, alt->insn); + if (ret) { + if (backtrace) + BT_FUNC("(alt)", insn); + return ret; + } + } + + if (skip_orig) + return 0; + } + + switch (insn->type) { + + case INSN_CALL_DYNAMIC: + case INSN_JUMP_DYNAMIC: + case INSN_JUMP_DYNAMIC_CONDITIONAL: + WARN_FUNC("early indirect call", insn->sec, insn->offset); + return 1; + + case INSN_JUMP_UNCONDITIONAL: + case INSN_JUMP_CONDITIONAL: + if (!is_sibling_call(insn)) { + if (!insn->jump_dest) { + WARN_FUNC("unresolved jump target after linking?!?", + insn->sec, insn->offset); + return -1; + } + ret = validate_entry(file, insn->jump_dest); + if (ret) { + if (backtrace) { + BT_FUNC("(branch%s)", insn, + insn->type == INSN_JUMP_CONDITIONAL ? "-cond" : ""); + } + return ret; + } + + if (insn->type == INSN_JUMP_UNCONDITIONAL) + return 0; + + break; + } + + /* fallthrough */ + case INSN_CALL: + dest = find_insn(file, insn->call_dest->sec, + insn->call_dest->offset); + if (!dest) { + WARN("Unresolved function after linking!?: %s", + insn->call_dest->name); + return -1; + } + + ret = validate_entry(file, dest); + if (ret) { + if (backtrace) + BT_FUNC("(call)", insn); + return ret; + } + /* + * If a call returns without error, it must have seen UNTRAIN_RET. + * Therefore any non-error return is a success. + */ + return 0; + + case INSN_RETURN: + WARN_FUNC("RET before UNTRAIN", insn->sec, insn->offset); + return 1; + + case INSN_NOP: + if (insn->retpoline_safe) + return 0; + break; + + default: + break; + } + + if (!next) { + WARN_FUNC("teh end!", insn->sec, insn->offset); + return -1; + } + insn = next; + } + + return warnings; +} + +/* + * Validate that all branches starting at 'insn->entry' encounter UNRET_END + * before RET. 
+ */ +static int validate_unret(struct objtool_file *file) +{ + struct instruction *insn; + int ret, warnings = 0; + + for_each_insn(file, insn) { + if (!insn->entry) + continue; + + ret = validate_entry(file, insn); + if (ret < 0) { + WARN_FUNC("Failed UNRET validation", insn->sec, insn->offset); + return ret; + } + warnings += ret; + } + + return warnings; +} + static int validate_retpoline(struct objtool_file *file) { struct instruction *insn; @@ -3312,6 +3465,17 @@ int check(struct objtool_file *file) goto out; warnings += ret; + if (unret) { + /* + * Must be after validate_branch() and friends, it plays + * further games with insn->visited. + */ + ret = validate_unret(file); + if (ret < 0) + return ret; + warnings += ret; + } + if (!warnings) { ret = validate_reachable_instructions(file); if (ret < 0) diff --git a/tools/objtool/check.h b/tools/objtool/check.h index 508f712deba1..e8fa948c053f 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -48,6 +48,7 @@ struct instruction { bool dead_end, ignore, ignore_alts; bool hint; bool retpoline_safe; + bool entry; s8 instr; u8 visited; struct alt_group *alt_group; @@ -62,6 +63,11 @@ struct instruction { struct cfi_state *cfi; }; +#define VISITED_BRANCH 0x01 +#define VISITED_BRANCH_UACCESS 0x02 +#define VISITED_BRANCH_MASK 0x03 +#define VISITED_ENTRY 0x04 + static inline bool is_static_jump(struct instruction *insn) { return insn->type == INSN_JUMP_CONDITIONAL || From ea1aa926f423a8cf1b2416bb909bfbea37d12b11 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 14 Jun 2022 23:16:04 +0200 Subject: [PATCH 1540/1842] x86/cpu/amd: Add Spectral Chicken commit d7caac991feeef1b871ee6988fd2c9725df09039 upstream. Zen2 uarchs have an undocumented, unnamed, MSR that contains a chicken bit for some speculation behaviour. It needs setting. Note: very belatedly AMD released naming; it's now officially called MSR_AMD64_DE_CFG2 and MSR_AMD64_DE_CFG2_SUPPRESS_NOBR_PRED_BIT but shall remain the SPECTRAL CHICKEN. Suggested-by: Andrew Cooper Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/msr-index.h | 3 +++ arch/x86/kernel/cpu/amd.c | 23 ++++++++++++++++++++++- arch/x86/kernel/cpu/cpu.h | 2 ++ arch/x86/kernel/cpu/hygon.c | 6 ++++++ 4 files changed, 33 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 77a55777e002..19cc68aea58a 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -508,6 +508,9 @@ /* Fam 17h MSRs */ #define MSR_F17H_IRPERF 0xc00000e9 +#define MSR_ZEN2_SPECTRAL_CHICKEN 0xc00110e3 +#define MSR_ZEN2_SPECTRAL_CHICKEN_BIT BIT_ULL(1) + /* Fam 16h MSRs */ #define MSR_F16H_L2I_PERF_CTL 0xc0010230 #define MSR_F16H_L2I_PERF_CTR 0xc0010231 diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index acea05eed27d..f1f52e67e361 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -914,6 +914,26 @@ static void init_amd_bd(struct cpuinfo_x86 *c) clear_rdrand_cpuid_bit(c); } +void init_spectral_chicken(struct cpuinfo_x86 *c) +{ + u64 value; + + /* + * On Zen2 we offer this chicken (bit) on the altar of Speculation. + * + * This suppresses speculation from the middle of a basic block, i.e. it + * suppresses non-branch predictions. 
+ * + * We use STIBP as a heuristic to filter out Zen2 from the rest of F17H + */ + if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && cpu_has(c, X86_FEATURE_AMD_STIBP)) { + if (!rdmsrl_safe(MSR_ZEN2_SPECTRAL_CHICKEN, &value)) { + value |= MSR_ZEN2_SPECTRAL_CHICKEN_BIT; + wrmsrl_safe(MSR_ZEN2_SPECTRAL_CHICKEN, value); + } + } +} + static void init_amd_zn(struct cpuinfo_x86 *c) { set_cpu_cap(c, X86_FEATURE_ZEN); @@ -959,7 +979,8 @@ static void init_amd(struct cpuinfo_x86 *c) case 0x12: init_amd_ln(c); break; case 0x15: init_amd_bd(c); break; case 0x16: init_amd_jg(c); break; - case 0x17: fallthrough; + case 0x17: init_spectral_chicken(c); + fallthrough; case 0x19: init_amd_zn(c); break; } diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h index 093f5fc860e3..91df90abc1d9 100644 --- a/arch/x86/kernel/cpu/cpu.h +++ b/arch/x86/kernel/cpu/cpu.h @@ -60,6 +60,8 @@ extern void tsx_disable(void); static inline void tsx_init(void) { } #endif /* CONFIG_CPU_SUP_INTEL */ +extern void init_spectral_chicken(struct cpuinfo_x86 *c); + extern void get_cpu_cap(struct cpuinfo_x86 *c); extern void get_cpu_address_sizes(struct cpuinfo_x86 *c); extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c); diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c index b78c471ec344..774ca6bfda9f 100644 --- a/arch/x86/kernel/cpu/hygon.c +++ b/arch/x86/kernel/cpu/hygon.c @@ -318,6 +318,12 @@ static void init_hygon(struct cpuinfo_x86 *c) /* get apicid instead of initial apic id from cpuid */ c->apicid = hard_smp_processor_id(); + /* + * XXX someone from Hygon needs to confirm this DTRT + * + init_spectral_chicken(c); + */ + set_cpu_cap(c, X86_FEATURE_ZEN); set_cpu_cap(c, X86_FEATURE_CPB); From f1b01ace814b0a8318041e3aea5fd36cc74f09b0 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:05 +0200 Subject: [PATCH 1541/1842] x86/speculation: Fix RSB filling with CONFIG_RETPOLINE=n commit b2620facef4889fefcbf2e87284f34dcd4189bce upstream. If a kernel is built with CONFIG_RETPOLINE=n, but the user still wants to mitigate Spectre v2 using IBRS or eIBRS, the RSB filling will be silently disabled. There's nothing retpoline-specific about RSB buffer filling. Remove the CONFIG_RETPOLINE guards around it. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/entry/entry_32.S | 2 -- arch/x86/entry/entry_64.S | 2 -- arch/x86/include/asm/nospec-branch.h | 2 -- 3 files changed, 6 deletions(-) diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index 4a4896ba4db3..8fcd6a42b3a1 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -782,7 +782,6 @@ SYM_CODE_START(__switch_to_asm) movl %ebx, PER_CPU_VAR(stack_canary)+stack_canary_offset #endif -#ifdef CONFIG_RETPOLINE /* * When switching from a shallower to a deeper call stack * the RSB may either underflow or use entries populated @@ -791,7 +790,6 @@ SYM_CODE_START(__switch_to_asm) * speculative execution to prevent attack. */ FILL_RETURN_BUFFER %ebx, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW -#endif /* Restore flags or the incoming task to restore AC state. 
*/ popfl diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 61e65e3ebd53..559c82b83475 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -249,7 +249,6 @@ SYM_FUNC_START(__switch_to_asm) movq %rbx, PER_CPU_VAR(fixed_percpu_data) + stack_canary_offset #endif -#ifdef CONFIG_RETPOLINE /* * When switching from a shallower to a deeper call stack * the RSB may either underflow or use entries populated @@ -258,7 +257,6 @@ SYM_FUNC_START(__switch_to_asm) * speculative execution to prevent attack. */ FILL_RETURN_BUFFER %r12, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW -#endif /* restore callee-saved registers */ popq %r15 diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 07c5642ab6d7..b44e6d1bd820 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -122,11 +122,9 @@ * monstrosity above, manually. */ .macro FILL_RETURN_BUFFER reg:req nr:req ftr:req -#ifdef CONFIG_RETPOLINE ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr __FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP) .Lskip_rsb_\@: -#endif .endm /* From d29c07912a49fce965228f73a293e2c899bc7e35 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:06 +0200 Subject: [PATCH 1542/1842] x86/speculation: Fix firmware entry SPEC_CTRL handling commit e6aa13622ea8283cc699cac5d018cc40a2ba2010 upstream. The firmware entry code may accidentally clear STIBP or SSBD. Fix that. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index b44e6d1bd820..3382f59a1e03 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -286,18 +286,16 @@ extern u64 spec_ctrl_current(void); */ #define firmware_restrict_branch_speculation_start() \ do { \ - u64 val = x86_spec_ctrl_base | SPEC_CTRL_IBRS; \ - \ preempt_disable(); \ - alternative_msr_write(MSR_IA32_SPEC_CTRL, val, \ + alternative_msr_write(MSR_IA32_SPEC_CTRL, \ + spec_ctrl_current() | SPEC_CTRL_IBRS, \ X86_FEATURE_USE_IBRS_FW); \ } while (0) #define firmware_restrict_branch_speculation_end() \ do { \ - u64 val = x86_spec_ctrl_base; \ - \ - alternative_msr_write(MSR_IA32_SPEC_CTRL, val, \ + alternative_msr_write(MSR_IA32_SPEC_CTRL, \ + spec_ctrl_current(), \ X86_FEATURE_USE_IBRS_FW); \ preempt_enable(); \ } while (0) From aad83db22e9950577b5b827f57ed7108b3ca5553 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:07 +0200 Subject: [PATCH 1543/1842] x86/speculation: Fix SPEC_CTRL write on SMT state change commit 56aa4d221f1ee2c3a49b45b800778ec6e0ab73c5 upstream. If the SMT state changes, SSBD might get accidentally disabled. Fix that. 
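For illustration, a minimal user-space C sketch of the bug and of the fix below: it assumes the usual MSR_IA32_SPEC_CTRL bit layout (IBRS bit 0, STIBP bit 1, SSBD bit 2), and the two variables stand in for x86_spec_ctrl_base and the per-CPU current value; this is a toy model, not kernel code.

        #include <stdio.h>
        #include <stdint.h>

        #define SPEC_CTRL_IBRS  (1ULL << 0)
        #define SPEC_CTRL_STIBP (1ULL << 1)
        #define SPEC_CTRL_SSBD  (1ULL << 2)

        int main(void)
        {
                /* stand-ins: boot-time base policy vs. the live per-CPU value
                 * that also carries the running task's SSBD/STIBP state */
                uint64_t spec_ctrl_base    = SPEC_CTRL_STIBP;
                uint64_t spec_ctrl_current = SPEC_CTRL_SSBD;

                /* old behaviour on an SMT state change: write the base value,
                 * silently dropping the task's SSBD bit */
                uint64_t old_write = spec_ctrl_base;

                /* fixed behaviour: keep the live bits, refresh only STIBP */
                uint64_t new_write = spec_ctrl_current |
                                     (spec_ctrl_base & SPEC_CTRL_STIBP);

                printf("old write %#llx, new write %#llx\n",
                       (unsigned long long)old_write,
                       (unsigned long long)new_write);
                return 0;
        }

The one-line change below is exactly this composition: start from the live value and OR in only the STIBP policy bit taken from the base.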
Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 8205834dd11e..33af6f9ab241 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1414,7 +1414,8 @@ static void __init spectre_v2_select_mitigation(void) static void update_stibp_msr(void * __unused) { - write_spec_ctrl_current(x86_spec_ctrl_base, true); + u64 val = spec_ctrl_current() | (x86_spec_ctrl_base & SPEC_CTRL_STIBP); + write_spec_ctrl_current(val, true); } /* Update x86_spec_ctrl_base in case SMT state changed. */ From ce11f91b21c25dda8b06988817115bef1c636434 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:08 +0200 Subject: [PATCH 1544/1842] x86/speculation: Use cached host SPEC_CTRL value for guest entry/exit commit bbb69e8bee1bd882784947095ffb2bfe0f7c9470 upstream. There's no need to recalculate the host value for every entry/exit. Just use the cached value in spec_ctrl_current(). Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 33af6f9ab241..6af379b69b82 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -199,7 +199,7 @@ void __init check_bugs(void) void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) { - u64 msrval, guestval, hostval = x86_spec_ctrl_base; + u64 msrval, guestval, hostval = spec_ctrl_current(); struct thread_info *ti = current_thread_info(); /* Is MSR_SPEC_CTRL implemented ? */ @@ -212,15 +212,6 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) guestval = hostval & ~x86_spec_ctrl_mask; guestval |= guest_spec_ctrl & x86_spec_ctrl_mask; - /* SSBD controlled in MSR_SPEC_CTRL */ - if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || - static_cpu_has(X86_FEATURE_AMD_SSBD)) - hostval |= ssbd_tif_to_spec_ctrl(ti->flags); - - /* Conditional STIBP enabled? */ - if (static_branch_unlikely(&switch_to_cond_stibp)) - hostval |= stibp_tif_to_spec_ctrl(ti->flags); - if (hostval != guestval) { msrval = setguest ? guestval : hostval; wrmsrl(MSR_IA32_SPEC_CTRL, msrval); @@ -1353,7 +1344,6 @@ static void __init spectre_v2_select_mitigation(void) pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); if (spectre_v2_in_ibrs_mode(mode)) { - /* Force it so VMEXIT will restore correctly */ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; write_spec_ctrl_current(x86_spec_ctrl_base, true); } From 1dbefa57725204be0348351ea4756c52b10b3504 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Fri, 17 Jun 2022 12:12:48 -0700 Subject: [PATCH 1545/1842] x86/speculation: Remove x86_spec_ctrl_mask commit acac5e98ef8d638a411cfa2ee676c87e1973f126 upstream. This mask has been made redundant by kvm_spec_ctrl_test_value(). And it doesn't even work when MSR interception is disabled, as the guest can just write to SPEC_CTRL directly. 
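To make the deleted arithmetic concrete, here is a stand-alone sketch (toy values, not kernel code) of the clamp that x86_spec_ctrl_mask used to apply to the guest value, next to the pass-through that remains; as noted above, the clamp is moot once the guest can write the MSR without interception.

        #include <stdio.h>
        #include <stdint.h>

        #define SPEC_CTRL_IBRS  (1ULL << 0)
        #define SPEC_CTRL_STIBP (1ULL << 1)
        #define SPEC_CTRL_SSBD  (1ULL << 2)

        int main(void)
        {
                uint64_t hostval  = SPEC_CTRL_IBRS;
                uint64_t guestval = SPEC_CTRL_STIBP | SPEC_CTRL_SSBD;
                uint64_t mask     = SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD;

                /* removed: host bits outside the mask, guest bits inside it */
                uint64_t clamped = (hostval & ~mask) | (guestval & mask);

                /* kept: the guest-supplied value is used as-is */
                uint64_t passthrough = guestval;

                printf("clamped %#llx, passthrough %#llx\n",
                       (unsigned long long)clamped,
                       (unsigned long long)passthrough);
                return 0;
        }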
Signed-off-by: Josh Poimboeuf Signed-off-by: Borislav Petkov Reviewed-by: Paolo Bonzini Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 31 +------------------------------ 1 file changed, 1 insertion(+), 30 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 6af379b69b82..8fe701ffee31 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -84,12 +84,6 @@ u64 spec_ctrl_current(void) } EXPORT_SYMBOL_GPL(spec_ctrl_current); -/* - * The vendor and possibly platform specific bits which can be modified in - * x86_spec_ctrl_base. - */ -static u64 __ro_after_init x86_spec_ctrl_mask = SPEC_CTRL_IBRS; - /* * AMD specific MSR info for Speculative Store Bypass control. * x86_amd_ls_cfg_ssbd_mask is initialized in identify_boot_cpu(). @@ -138,10 +132,6 @@ void __init check_bugs(void) if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); - /* Allow STIBP in MSR_SPEC_CTRL if supported */ - if (boot_cpu_has(X86_FEATURE_STIBP)) - x86_spec_ctrl_mask |= SPEC_CTRL_STIBP; - /* Select the proper CPU mitigations before patching alternatives: */ spectre_v1_select_mitigation(); spectre_v2_select_mitigation(); @@ -199,19 +189,10 @@ void __init check_bugs(void) void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) { - u64 msrval, guestval, hostval = spec_ctrl_current(); + u64 msrval, guestval = guest_spec_ctrl, hostval = spec_ctrl_current(); struct thread_info *ti = current_thread_info(); - /* Is MSR_SPEC_CTRL implemented ? */ if (static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) { - /* - * Restrict guest_spec_ctrl to supported values. Clear the - * modifiable bits in the host base value and or the - * modifiable bits from the guest value. - */ - guestval = hostval & ~x86_spec_ctrl_mask; - guestval |= guest_spec_ctrl & x86_spec_ctrl_mask; - if (hostval != guestval) { msrval = setguest ? guestval : hostval; wrmsrl(MSR_IA32_SPEC_CTRL, msrval); @@ -1621,16 +1602,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void) break; } - /* - * If SSBD is controlled by the SPEC_CTRL MSR, then set the proper - * bit in the mask to allow guests to use the mitigation even in the - * case where the host does not enable it. - */ - if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || - static_cpu_has(X86_FEATURE_AMD_SSBD)) { - x86_spec_ctrl_mask |= SPEC_CTRL_SSBD; - } - /* * We have three CPU feature flags that are in play here: * - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible. From df93717a32f57e1b033dbfa2a78809d7d4000648 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Fri, 24 Jun 2022 12:52:40 +0200 Subject: [PATCH 1546/1842] objtool: Re-add UNWIND_HINT_{SAVE_RESTORE} commit 8faea26e611189e933ea2281975ff4dc7c1106b6 upstream. Commit c536ed2fffd5 ("objtool: Remove SAVE/RESTORE hints") removed the save/restore unwind hints because they were no longer needed. Now they're going to be needed again so re-add them. 
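As a rough model of what the re-added hints mean to objtool (toy code, not the real implementation): a SAVE hint snapshots the unwind state at that instruction, and a later RESTORE hint makes objtool walk backwards to that snapshot and adopt it, which is what lets the VMX entry code later in this series resume checking below VMLAUNCH with the pre-VMLAUNCH stack state.

        #include <stdio.h>

        struct cfi  { int sp_offset; };
        struct insn { int save, restore; struct cfi cfi; };

        int main(void)
        {
                /* toy stream: regs pushed, SAVE, opaque region, RESTORE */
                struct insn trace[4] = {
                        { 0, 0, { 16 } },
                        { 1, 0, { 48 } },       /* UNWIND_HINT_SAVE    */
                        { 0, 0, {  0 } },       /* state unknown here  */
                        { 0, 1, {  0 } },       /* UNWIND_HINT_RESTORE */
                };

                for (int i = 0; i < 4; i++) {
                        if (!trace[i].restore)
                                continue;
                        /* search backwards for the matching save */
                        for (int j = i - 1; j >= 0; j--) {
                                if (trace[j].save) {
                                        trace[i].cfi = trace[j].cfi;
                                        break;
                                }
                        }
                        printf("insn %d restored sp_offset %d\n",
                               i, trace[i].cfi.sp_offset);
                }
                return 0;
        }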
Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/unwind_hints.h | 12 +++++++-- include/linux/objtool.h | 6 +++-- tools/include/linux/objtool.h | 6 +++-- tools/objtool/check.c | 40 +++++++++++++++++++++++++++++ tools/objtool/check.h | 1 + 5 files changed, 59 insertions(+), 6 deletions(-) diff --git a/arch/x86/include/asm/unwind_hints.h b/arch/x86/include/asm/unwind_hints.h index 9cd8214175b5..56664b31b6da 100644 --- a/arch/x86/include/asm/unwind_hints.h +++ b/arch/x86/include/asm/unwind_hints.h @@ -8,11 +8,11 @@ #ifdef __ASSEMBLY__ .macro UNWIND_HINT_EMPTY - UNWIND_HINT sp_reg=ORC_REG_UNDEFINED type=UNWIND_HINT_TYPE_CALL end=1 + UNWIND_HINT type=UNWIND_HINT_TYPE_CALL end=1 .endm .macro UNWIND_HINT_ENTRY - UNWIND_HINT sp_reg=ORC_REG_UNDEFINED type=UNWIND_HINT_TYPE_ENTRY end=1 + UNWIND_HINT type=UNWIND_HINT_TYPE_ENTRY end=1 .endm .macro UNWIND_HINT_REGS base=%rsp offset=0 indirect=0 extra=1 partial=0 @@ -56,6 +56,14 @@ UNWIND_HINT sp_reg=ORC_REG_SP sp_offset=8 type=UNWIND_HINT_TYPE_FUNC .endm +.macro UNWIND_HINT_SAVE + UNWIND_HINT type=UNWIND_HINT_TYPE_SAVE +.endm + +.macro UNWIND_HINT_RESTORE + UNWIND_HINT type=UNWIND_HINT_TYPE_RESTORE +.endm + #endif /* __ASSEMBLY__ */ #endif /* _ASM_X86_UNWIND_HINTS_H */ diff --git a/include/linux/objtool.h b/include/linux/objtool.h index b2855f7fce53..662f19374bd9 100644 --- a/include/linux/objtool.h +++ b/include/linux/objtool.h @@ -40,6 +40,8 @@ struct unwind_hint { #define UNWIND_HINT_TYPE_REGS_PARTIAL 2 #define UNWIND_HINT_TYPE_FUNC 3 #define UNWIND_HINT_TYPE_ENTRY 4 +#define UNWIND_HINT_TYPE_SAVE 5 +#define UNWIND_HINT_TYPE_RESTORE 6 #ifdef CONFIG_STACK_VALIDATION @@ -102,7 +104,7 @@ struct unwind_hint { * the debuginfo as necessary. It will also warn if it sees any * inconsistencies. */ -.macro UNWIND_HINT sp_reg:req sp_offset=0 type:req end=0 +.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0 .Lunwind_hint_ip_\@: .pushsection .discard.unwind_hints /* struct unwind_hint */ @@ -126,7 +128,7 @@ struct unwind_hint { #define STACK_FRAME_NON_STANDARD(func) #else #define ANNOTATE_INTRA_FUNCTION_CALL -.macro UNWIND_HINT sp_reg:req sp_offset=0 type:req end=0 +.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0 .endm #endif diff --git a/tools/include/linux/objtool.h b/tools/include/linux/objtool.h index b2855f7fce53..662f19374bd9 100644 --- a/tools/include/linux/objtool.h +++ b/tools/include/linux/objtool.h @@ -40,6 +40,8 @@ struct unwind_hint { #define UNWIND_HINT_TYPE_REGS_PARTIAL 2 #define UNWIND_HINT_TYPE_FUNC 3 #define UNWIND_HINT_TYPE_ENTRY 4 +#define UNWIND_HINT_TYPE_SAVE 5 +#define UNWIND_HINT_TYPE_RESTORE 6 #ifdef CONFIG_STACK_VALIDATION @@ -102,7 +104,7 @@ struct unwind_hint { * the debuginfo as necessary. It will also warn if it sees any * inconsistencies. 
*/ -.macro UNWIND_HINT sp_reg:req sp_offset=0 type:req end=0 +.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0 .Lunwind_hint_ip_\@: .pushsection .discard.unwind_hints /* struct unwind_hint */ @@ -126,7 +128,7 @@ struct unwind_hint { #define STACK_FRAME_NON_STANDARD(func) #else #define ANNOTATE_INTRA_FUNCTION_CALL -.macro UNWIND_HINT sp_reg:req sp_offset=0 type:req end=0 +.macro UNWIND_HINT type:req sp_reg=0 sp_offset=0 end=0 .endm #endif diff --git a/tools/objtool/check.c b/tools/objtool/check.c index ffc29b2603a1..864d407009c9 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1752,6 +1752,17 @@ static int read_unwind_hints(struct objtool_file *file) insn->hint = true; + if (hint->type == UNWIND_HINT_TYPE_SAVE) { + insn->hint = false; + insn->save = true; + continue; + } + + if (hint->type == UNWIND_HINT_TYPE_RESTORE) { + insn->restore = true; + continue; + } + if (hint->type == UNWIND_HINT_TYPE_REGS_PARTIAL) { struct symbol *sym = find_symbol_by_offset(insn->sec, insn->offset); @@ -2847,6 +2858,35 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, state.instr += insn->instr; if (insn->hint) { + if (insn->restore) { + struct instruction *save_insn, *i; + + i = insn; + save_insn = NULL; + + sym_for_each_insn_continue_reverse(file, func, i) { + if (i->save) { + save_insn = i; + break; + } + } + + if (!save_insn) { + WARN_FUNC("no corresponding CFI save for CFI restore", + sec, insn->offset); + return 1; + } + + if (!save_insn->visited) { + WARN_FUNC("objtool isn't smart enough to handle this CFI save/restore combo", + sec, insn->offset); + return 1; + } + + insn->cfi = save_insn->cfi; + nr_cfi_reused++; + } + state.cfi = *insn->cfi; } else { /* XXX track if we actually changed state.cfi */ diff --git a/tools/objtool/check.h b/tools/objtool/check.h index e8fa948c053f..7f34a7f9ca52 100644 --- a/tools/objtool/check.h +++ b/tools/objtool/check.h @@ -47,6 +47,7 @@ struct instruction { unsigned long immediate; bool dead_end, ignore, ignore_alts; bool hint; + bool save, restore; bool retpoline_safe; bool entry; s8 instr; From 07401c2311f6fddd3c49a392eafc2c28a899f768 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:11 +0200 Subject: [PATCH 1547/1842] KVM: VMX: Flatten __vmx_vcpu_run() commit 8bd200d23ec42d66ccd517a72dd0b9cc6132d2fd upstream. Move the vmx_vm{enter,exit}() functionality into __vmx_vcpu_run(). This will make it easier to do the spec_ctrl handling before the first RET. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov [cascardo: remove ENDBR] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/vmx/vmenter.S | 118 ++++++++++++++----------------------- 1 file changed, 45 insertions(+), 73 deletions(-) diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 435c187927c4..cd1cd516ec3d 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -30,68 +30,6 @@ .section .noinstr.text, "ax" -/** - * vmx_vmenter - VM-Enter the current loaded VMCS - * - * %RFLAGS.ZF: !VMCS.LAUNCHED, i.e. controls VMLAUNCH vs. VMRESUME - * - * Returns: - * %RFLAGS.CF is set on VM-Fail Invalid - * %RFLAGS.ZF is set on VM-Fail Valid - * %RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit - * - * Note that VMRESUME/VMLAUNCH fall-through and return directly if - * they VM-Fail, whereas a successful VM-Enter + VM-Exit will jump - * to vmx_vmexit. 
- */ -SYM_FUNC_START_LOCAL(vmx_vmenter) - /* EFLAGS.ZF is set if VMCS.LAUNCHED == 0 */ - je 2f - -1: vmresume - RET - -2: vmlaunch - RET - -3: cmpb $0, kvm_rebooting - je 4f - RET -4: ud2 - - _ASM_EXTABLE(1b, 3b) - _ASM_EXTABLE(2b, 3b) - -SYM_FUNC_END(vmx_vmenter) - -/** - * vmx_vmexit - Handle a VMX VM-Exit - * - * Returns: - * %RFLAGS.{CF,ZF} are cleared on VM-Success, i.e. VM-Exit - * - * This is vmx_vmenter's partner in crime. On a VM-Exit, control will jump - * here after hardware loads the host's state, i.e. this is the destination - * referred to by VMCS.HOST_RIP. - */ -SYM_FUNC_START(vmx_vmexit) -#ifdef CONFIG_RETPOLINE - ALTERNATIVE "jmp .Lvmexit_skip_rsb", "", X86_FEATURE_RETPOLINE - /* Preserve guest's RAX, it's used to stuff the RSB. */ - push %_ASM_AX - - /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ - FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE - - /* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */ - or $1, %_ASM_AX - - pop %_ASM_AX -.Lvmexit_skip_rsb: -#endif - RET -SYM_FUNC_END(vmx_vmexit) - /** * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode * @vmx: struct vcpu_vmx * (forwarded to vmx_update_host_rsp) @@ -124,8 +62,7 @@ SYM_FUNC_START(__vmx_vcpu_run) /* Copy @launched to BL, _ASM_ARG3 is volatile. */ mov %_ASM_ARG3B, %bl - /* Adjust RSP to account for the CALL to vmx_vmenter(). */ - lea -WORD_SIZE(%_ASM_SP), %_ASM_ARG2 + lea (%_ASM_SP), %_ASM_ARG2 call vmx_update_host_rsp /* Load @regs to RAX. */ @@ -154,11 +91,36 @@ SYM_FUNC_START(__vmx_vcpu_run) /* Load guest RAX. This kills the @regs pointer! */ mov VCPU_RAX(%_ASM_AX), %_ASM_AX - /* Enter guest mode */ - call vmx_vmenter + /* Check EFLAGS.ZF from 'testb' above */ + je .Lvmlaunch - /* Jump on VM-Fail. */ - jbe 2f + /* + * After a successful VMRESUME/VMLAUNCH, control flow "magically" + * resumes below at 'vmx_vmexit' due to the VMCS HOST_RIP setting. + * So this isn't a typical function and objtool needs to be told to + * save the unwind state here and restore it below. + */ + UNWIND_HINT_SAVE + +/* + * If VMRESUME/VMLAUNCH and corresponding vmexit succeed, execution resumes at + * the 'vmx_vmexit' label below. + */ +.Lvmresume: + vmresume + jmp .Lvmfail + +.Lvmlaunch: + vmlaunch + jmp .Lvmfail + + _ASM_EXTABLE(.Lvmresume, .Lfixup) + _ASM_EXTABLE(.Lvmlaunch, .Lfixup) + +SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL) + + /* Restore unwind state from before the VMRESUME/VMLAUNCH. */ + UNWIND_HINT_RESTORE /* Temporarily save guest's RAX. */ push %_ASM_AX @@ -185,9 +147,13 @@ SYM_FUNC_START(__vmx_vcpu_run) mov %r15, VCPU_R15(%_ASM_AX) #endif + /* IMPORTANT: RSB must be stuffed before the first return. */ + FILL_RETURN_BUFFER %_ASM_BX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE + /* Clear RAX to indicate VM-Exit (as opposed to VM-Fail). */ xor %eax, %eax +.Lclear_regs: /* * Clear all general purpose registers except RSP and RAX to prevent * speculative use of the guest's values, even those that are reloaded @@ -197,7 +163,7 @@ SYM_FUNC_START(__vmx_vcpu_run) * free. RSP and RAX are exempt as RSP is restored by hardware during * VM-Exit and RAX is explicitly loaded with 0 or 1 to return VM-Fail. */ -1: xor %ecx, %ecx + xor %ecx, %ecx xor %edx, %edx xor %ebx, %ebx xor %ebp, %ebp @@ -216,8 +182,8 @@ SYM_FUNC_START(__vmx_vcpu_run) /* "POP" @regs. */ add $WORD_SIZE, %_ASM_SP - pop %_ASM_BX + pop %_ASM_BX #ifdef CONFIG_X86_64 pop %r12 pop %r13 @@ -230,9 +196,15 @@ SYM_FUNC_START(__vmx_vcpu_run) pop %_ASM_BP RET - /* VM-Fail. 
Out-of-line to avoid a taken Jcc after VM-Exit. */ -2: mov $1, %eax - jmp 1b +.Lfixup: + cmpb $0, kvm_rebooting + jne .Lvmfail + ud2 +.Lvmfail: + /* VM-Fail: set return value to 1 */ + mov $1, %eax + jmp .Lclear_regs + SYM_FUNC_END(__vmx_vcpu_run) From 84061fff2ad98a7809f00e88a54f584f84830388 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:12 +0200 Subject: [PATCH 1548/1842] KVM: VMX: Convert launched argument to flags commit bb06650634d3552c0f8557e9d16aa1a408040e28 upstream. Convert __vmx_vcpu_run()'s 'launched' argument to 'flags', in preparation for doing SPEC_CTRL handling immediately after vmexit, which will need another flag. This is much easier than adding a fourth argument, because this code supports both 32-bit and 64-bit, and the fourth argument on 32-bit would have to be pushed on the stack. Note that __vmx_vcpu_run_flags() is called outside of the noinstr critical section because it will soon start calling potentially traceable functions. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/vmx/nested.c | 2 +- arch/x86/kvm/vmx/run_flags.h | 7 +++++++ arch/x86/kvm/vmx/vmenter.S | 9 +++++---- arch/x86/kvm/vmx/vmx.c | 17 ++++++++++++++--- arch/x86/kvm/vmx/vmx.h | 5 ++++- 5 files changed, 31 insertions(+), 9 deletions(-) create mode 100644 arch/x86/kvm/vmx/run_flags.h diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c index 67fe1cf79caf..09804cad6e2d 100644 --- a/arch/x86/kvm/vmx/nested.c +++ b/arch/x86/kvm/vmx/nested.c @@ -3077,7 +3077,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) } vm_fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs, - vmx->loaded_vmcs->launched); + __vmx_vcpu_run_flags(vmx)); if (vmx->msr_autoload.host.nr) vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h new file mode 100644 index 000000000000..57f4c664ea9c --- /dev/null +++ b/arch/x86/kvm/vmx/run_flags.h @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __KVM_X86_VMX_RUN_FLAGS_H +#define __KVM_X86_VMX_RUN_FLAGS_H + +#define VMX_RUN_VMRESUME (1 << 0) + +#endif /* __KVM_X86_VMX_RUN_FLAGS_H */ diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index cd1cd516ec3d..95d3a47b157d 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -5,6 +5,7 @@ #include #include #include +#include "run_flags.h" #define WORD_SIZE (BITS_PER_LONG / 8) @@ -34,7 +35,7 @@ * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode * @vmx: struct vcpu_vmx * (forwarded to vmx_update_host_rsp) * @regs: unsigned long * (to guest registers) - * @launched: %true if the VMCS has been launched + * @flags: VMX_RUN_VMRESUME: use VMRESUME instead of VMLAUNCH * * Returns: * 0 on VM-Exit, 1 on VM-Fail @@ -59,7 +60,7 @@ SYM_FUNC_START(__vmx_vcpu_run) */ push %_ASM_ARG2 - /* Copy @launched to BL, _ASM_ARG3 is volatile. */ + /* Copy @flags to BL, _ASM_ARG3 is volatile. */ mov %_ASM_ARG3B, %bl lea (%_ASM_SP), %_ASM_ARG2 @@ -69,7 +70,7 @@ SYM_FUNC_START(__vmx_vcpu_run) mov (%_ASM_SP), %_ASM_AX /* Check if vmlaunch or vmresume is needed */ - testb %bl, %bl + testb $VMX_RUN_VMRESUME, %bl /* Load guest registers. Don't clobber flags. 
*/ mov VCPU_RCX(%_ASM_AX), %_ASM_CX @@ -92,7 +93,7 @@ SYM_FUNC_START(__vmx_vcpu_run) mov VCPU_RAX(%_ASM_AX), %_ASM_AX /* Check EFLAGS.ZF from 'testb' above */ - je .Lvmlaunch + jz .Lvmlaunch /* * After a successful VMRESUME/VMLAUNCH, control flow "magically" diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index a1ff92b572de..63f0f2304797 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -936,6 +936,16 @@ static bool msr_write_intercepted(struct vcpu_vmx *vmx, u32 msr) return true; } +unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx) +{ + unsigned int flags = 0; + + if (vmx->loaded_vmcs->launched) + flags |= VMX_RUN_VMRESUME; + + return flags; +} + static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx, unsigned long entry, unsigned long exit) { @@ -6688,7 +6698,8 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu) } static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu, - struct vcpu_vmx *vmx) + struct vcpu_vmx *vmx, + unsigned long flags) { /* * VMENTER enables interrupts (host state), but the kernel state is @@ -6725,7 +6736,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu, native_write_cr2(vcpu->arch.cr2); vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs, - vmx->loaded_vmcs->launched); + flags); vcpu->arch.cr2 = native_read_cr2(); @@ -6824,7 +6835,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu) x86_spec_ctrl_set_guest(vmx->spec_ctrl, 0); /* The actual VMENTER/EXIT is in the .noinstr.text section. */ - vmx_vcpu_enter_exit(vcpu, vmx); + vmx_vcpu_enter_exit(vcpu, vmx, __vmx_vcpu_run_flags(vmx)); /* * We do not use IBRS in the kernel. If this vCPU has used the diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index 08835468314f..d015ed2684a9 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -13,6 +13,7 @@ #include "vmcs.h" #include "vmx_ops.h" #include "cpuid.h" +#include "run_flags.h" extern const u32 vmx_msr_index[]; @@ -365,7 +366,9 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu); struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr); void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu); void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp); -bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched); +unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx); +bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, + unsigned int flags); int vmx_find_loadstore_msr_slot(struct vmx_msrs *m, u32 msr); void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu); From 5269be9111e2b66572e78647f2e8948f7fc96466 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:13 +0200 Subject: [PATCH 1549/1842] KVM: VMX: Prevent guest RSB poisoning attacks with eIBRS commit fc02735b14fff8c6678b521d324ade27b1a3d4cf upstream. On eIBRS systems, the returns in the vmexit return path from __vmx_vcpu_run() to vmx_vcpu_run() are exposed to RSB poisoning attacks. Fix that by moving the post-vmexit spec_ctrl handling to immediately after the vmexit. 
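The ordering constraint being enforced can be sketched in C as follows (illustrative stubs only, with made-up names; in the patch itself this sequence is open-coded in vmenter.S so that it all happens before the first unbalanced RET):

        #include <stdio.h>

        /* stand-ins for the asm steps after VM-exit */
        static void save_guest_regs(void)        { puts("save guest GPRs");      }
        static void fill_return_buffer(void)     { puts("stuff the RSB");        }
        static void restore_host_spec_ctrl(void) { puts("write host SPEC_CTRL"); }

        static int post_vmexit(void)
        {
                save_guest_regs();
                fill_return_buffer();           /* retpoline: flush guest RSB poison  */
                restore_host_spec_ctrl();       /* IBRS/eIBRS protection back on      */
                return 0;                       /* only now are RETs safe to execute  */
        }

        int main(void)
        {
                return post_vmexit();
        }

Doing the SPEC_CTRL restore from C in vmx_vcpu_run(), as before, means returning through __vmx_vcpu_run() while guest-controlled RSB entries and possibly a guest SPEC_CTRL value are still live, which is the exposure this patch closes.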
Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 1 + arch/x86/kernel/cpu/bugs.c | 4 +++ arch/x86/kvm/vmx/run_flags.h | 1 + arch/x86/kvm/vmx/vmenter.S | 49 +++++++++++++++++++++------- arch/x86/kvm/vmx/vmx.c | 48 +++++++++++++++------------ arch/x86/kvm/vmx/vmx.h | 1 + 6 files changed, 73 insertions(+), 31 deletions(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 3382f59a1e03..f50a18d62ad7 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -275,6 +275,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; +extern u64 x86_spec_ctrl_current; extern void write_spec_ctrl_current(u64 val, bool force); extern u64 spec_ctrl_current(void); diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 8fe701ffee31..4157991053d1 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -186,6 +186,10 @@ void __init check_bugs(void) #endif } +/* + * NOTE: For VMX, this function is not called in the vmexit path. + * It uses vmx_spec_ctrl_restore_host() instead. + */ void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) { diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h index 57f4c664ea9c..edc3f16cc189 100644 --- a/arch/x86/kvm/vmx/run_flags.h +++ b/arch/x86/kvm/vmx/run_flags.h @@ -3,5 +3,6 @@ #define __KVM_X86_VMX_RUN_FLAGS_H #define VMX_RUN_VMRESUME (1 << 0) +#define VMX_RUN_SAVE_SPEC_CTRL (1 << 1) #endif /* __KVM_X86_VMX_RUN_FLAGS_H */ diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 95d3a47b157d..1c1a3d6c2af2 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -33,9 +33,10 @@ /** * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode - * @vmx: struct vcpu_vmx * (forwarded to vmx_update_host_rsp) + * @vmx: struct vcpu_vmx * * @regs: unsigned long * (to guest registers) - * @flags: VMX_RUN_VMRESUME: use VMRESUME instead of VMLAUNCH + * @flags: VMX_RUN_VMRESUME: use VMRESUME instead of VMLAUNCH + * VMX_RUN_SAVE_SPEC_CTRL: save guest SPEC_CTRL into vmx->spec_ctrl * * Returns: * 0 on VM-Exit, 1 on VM-Fail @@ -54,6 +55,12 @@ SYM_FUNC_START(__vmx_vcpu_run) #endif push %_ASM_BX + /* Save @vmx for SPEC_CTRL handling */ + push %_ASM_ARG1 + + /* Save @flags for SPEC_CTRL handling */ + push %_ASM_ARG3 + /* * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and * @regs is needed after VM-Exit to save the guest's register values. @@ -148,25 +155,23 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL) mov %r15, VCPU_R15(%_ASM_AX) #endif - /* IMPORTANT: RSB must be stuffed before the first return. */ - FILL_RETURN_BUFFER %_ASM_BX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE - - /* Clear RAX to indicate VM-Exit (as opposed to VM-Fail). */ - xor %eax, %eax + /* Clear return value to indicate VM-Exit (as opposed to VM-Fail). */ + xor %ebx, %ebx .Lclear_regs: /* - * Clear all general purpose registers except RSP and RAX to prevent + * Clear all general purpose registers except RSP and RBX to prevent * speculative use of the guest's values, even those that are reloaded * via the stack. 
In theory, an L1 cache miss when restoring registers * could lead to speculative execution with the guest's values. * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially * free. RSP and RAX are exempt as RSP is restored by hardware during - * VM-Exit and RAX is explicitly loaded with 0 or 1 to return VM-Fail. + * VM-Exit and RBX is explicitly loaded with 0 or 1 to hold the return + * value. */ + xor %eax, %eax xor %ecx, %ecx xor %edx, %edx - xor %ebx, %ebx xor %ebp, %ebp xor %esi, %esi xor %edi, %edi @@ -184,6 +189,28 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL) /* "POP" @regs. */ add $WORD_SIZE, %_ASM_SP + /* + * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before + * the first unbalanced RET after vmexit! + * + * For retpoline, RSB filling is needed to prevent poisoned RSB entries + * and (in some cases) RSB underflow. + * + * eIBRS has its own protection against poisoned RSB, so it doesn't + * need the RSB filling sequence. But it does need to be enabled + * before the first unbalanced RET. + */ + + FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE + + pop %_ASM_ARG2 /* @flags */ + pop %_ASM_ARG1 /* @vmx */ + + call vmx_spec_ctrl_restore_host + + /* Put return value in AX */ + mov %_ASM_BX, %_ASM_AX + pop %_ASM_BX #ifdef CONFIG_X86_64 pop %r12 @@ -203,7 +230,7 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL) ud2 .Lvmfail: /* VM-Fail: set return value to 1 */ - mov $1, %eax + mov $1, %_ASM_BX jmp .Lclear_regs SYM_FUNC_END(__vmx_vcpu_run) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 63f0f2304797..296e230f6003 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -943,6 +943,14 @@ unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx) if (vmx->loaded_vmcs->launched) flags |= VMX_RUN_VMRESUME; + /* + * If writes to the SPEC_CTRL MSR aren't intercepted, the guest is free + * to change it directly without causing a vmexit. In that case read + * it after vmexit and store it in vmx->spec_ctrl. + */ + if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))) + flags |= VMX_RUN_SAVE_SPEC_CTRL; + return flags; } @@ -6685,6 +6693,26 @@ void noinstr vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp) } } +void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx, + unsigned int flags) +{ + u64 hostval = this_cpu_read(x86_spec_ctrl_current); + + if (!cpu_feature_enabled(X86_FEATURE_MSR_SPEC_CTRL)) + return; + + if (flags & VMX_RUN_SAVE_SPEC_CTRL) + vmx->spec_ctrl = __rdmsr(MSR_IA32_SPEC_CTRL); + + /* + * If the guest/host SPEC_CTRL values differ, restore the host value. + */ + if (vmx->spec_ctrl != hostval) + native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval); + + barrier_nospec(); +} + static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu) { switch (to_vmx(vcpu)->exit_reason.basic) { @@ -6837,26 +6865,6 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu) /* The actual VMENTER/EXIT is in the .noinstr.text section. */ vmx_vcpu_enter_exit(vcpu, vmx, __vmx_vcpu_run_flags(vmx)); - /* - * We do not use IBRS in the kernel. If this vCPU has used the - * SPEC_CTRL MSR it may have left it on; save the value and - * turn it off. This is much more efficient than blindly adding - * it to the atomic save/restore list. Especially as the former - * (Saving guest MSRs on vmexit) doesn't even exist in KVM. - * - * For non-nested case: - * If the L01 MSR bitmap does not intercept the MSR, then we need to - * save it. 
- * - * For nested case: - * If the L02 MSR bitmap does not intercept the MSR, then we need to - * save it. - */ - if (unlikely(!msr_write_intercepted(vmx, MSR_IA32_SPEC_CTRL))) - vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL); - - x86_spec_ctrl_restore_host(vmx->spec_ctrl, 0); - /* All fields are clean at this point */ if (static_branch_unlikely(&enable_evmcs)) current_evmcs->hv_clean_fields |= diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h index d015ed2684a9..a6b52d3a39c9 100644 --- a/arch/x86/kvm/vmx/vmx.h +++ b/arch/x86/kvm/vmx/vmx.h @@ -366,6 +366,7 @@ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu); struct vmx_uret_msr *vmx_find_uret_msr(struct vcpu_vmx *vmx, u32 msr); void pt_update_intercept_for_msr(struct kvm_vcpu *vcpu); void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp); +void vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx, unsigned int flags); unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx); bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, unsigned int flags); From 47ae76fb27398e867980d63789058ff7c4f12a35 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:14 +0200 Subject: [PATCH 1550/1842] KVM: VMX: Fix IBRS handling after vmexit commit bea7e31a5caccb6fe8ed989c065072354f0ecb52 upstream. For legacy IBRS to work, the IBRS bit needs to be always re-written after vmexit, even if it's already on. Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/vmx/vmx.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c index 296e230f6003..9b520da3f748 100644 --- a/arch/x86/kvm/vmx/vmx.c +++ b/arch/x86/kvm/vmx/vmx.c @@ -6706,8 +6706,13 @@ void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx, /* * If the guest/host SPEC_CTRL values differ, restore the host value. + * + * For legacy IBRS, the IBRS bit always needs to be written after + * transitioning from a less privileged predictor mode, regardless of + * whether the guest/host values differ. */ - if (vmx->spec_ctrl != hostval) + if (cpu_feature_enabled(X86_FEATURE_KERNEL_IBRS) || + vmx->spec_ctrl != hostval) native_wrmsrl(MSR_IA32_SPEC_CTRL, hostval); barrier_nospec(); From 4d7f72b6e1bc630bec7e4cd51814bc2b092bf153 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Tue, 14 Jun 2022 23:16:15 +0200 Subject: [PATCH 1551/1842] x86/speculation: Fill RSB on vmexit for IBRS commit 9756bba28470722dacb79ffce554336dd1f6a6cd upstream. Prevent RSB underflow/poisoning attacks with RSB. While at it, add a bunch of comments to attempt to document the current state of tribal knowledge about RSB attacks and what exactly is being mitigated. 
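As a minimal sketch of the selection rule this adds (simplified, with a made-up helper; the real decision is the X86_FEATURE_RSB_VMEXIT setup in spectre_v2_select_mitigation() below): retpoline and legacy IBRS both need the vmexit RSB fill, eIBRS does not.

  /* Toy model of when the vmexit RSB fill is required; not kernel code. */
  #include <stdbool.h>
  #include <stdio.h>

  enum mitigation { MIT_RETPOLINE, MIT_IBRS, MIT_EIBRS };

  static bool need_rsb_fill_on_vmexit(enum mitigation m)
  {
      switch (m) {
      case MIT_RETPOLINE: /* filling handles poisoned entries and underflow */
      case MIT_IBRS:      /* legacy IBRS still needs it for poisoned entries */
          return true;
      case MIT_EIBRS:     /* eIBRS has its own RSB-poisoning protection */
          return false;
      }
      return true;        /* be conservative for anything unexpected */
  }

  int main(void)
  {
      printf("retpoline=%d ibrs=%d eibrs=%d\n",
             need_rsb_fill_on_vmexit(MIT_RETPOLINE),
             need_rsb_fill_on_vmexit(MIT_IBRS),
             need_rsb_fill_on_vmexit(MIT_EIBRS));
      return 0;
  }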
Signed-off-by: Josh Poimboeuf Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/cpufeatures.h | 2 +- arch/x86/kernel/cpu/bugs.c | 63 +++++++++++++++++++++++++++--- arch/x86/kvm/vmx/vmenter.S | 6 +-- 3 files changed, 62 insertions(+), 9 deletions(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index eafc8c7cf358..48ca004c2b67 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -204,7 +204,7 @@ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ #define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */ -/* FREE! ( 7*32+13) */ +#define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* "" Fill RSB on VM-Exit */ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */ #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 4157991053d1..5e88663591bb 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1357,16 +1357,69 @@ static void __init spectre_v2_select_mitigation(void) pr_info("%s\n", spectre_v2_strings[mode]); /* - * If spectre v2 protection has been enabled, unconditionally fill - * RSB during a context switch; this protects against two independent - * issues: + * If Spectre v2 protection has been enabled, fill the RSB during a + * context switch. In general there are two types of RSB attacks + * across context switches, for which the CALLs/RETs may be unbalanced. * - * - RSB underflow (and switch to BTB) on Skylake+ - * - SpectreRSB variant of spectre v2 on X86_BUG_SPECTRE_V2 CPUs + * 1) RSB underflow + * + * Some Intel parts have "bottomless RSB". When the RSB is empty, + * speculated return targets may come from the branch predictor, + * which could have a user-poisoned BTB or BHB entry. + * + * AMD has it even worse: *all* returns are speculated from the BTB, + * regardless of the state of the RSB. + * + * When IBRS or eIBRS is enabled, the "user -> kernel" attack + * scenario is mitigated by the IBRS branch prediction isolation + * properties, so the RSB buffer filling wouldn't be necessary to + * protect against this type of attack. + * + * The "user -> user" attack scenario is mitigated by RSB filling. + * + * 2) Poisoned RSB entry + * + * If the 'next' in-kernel return stack is shorter than 'prev', + * 'next' could be tricked into speculating with a user-poisoned RSB + * entry. + * + * The "user -> kernel" attack scenario is mitigated by SMEP and + * eIBRS. + * + * The "user -> user" scenario, also known as SpectreBHB, requires + * RSB clearing. + * + * So to mitigate all cases, unconditionally fill RSB on context + * switches. + * + * FIXME: Is this pointless for retbleed-affected AMD? */ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW); pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n"); + /* + * Similar to context switches, there are two types of RSB attacks + * after vmexit: + * + * 1) RSB underflow + * + * 2) Poisoned RSB entry + * + * When retpoline is enabled, both are mitigated by filling/clearing + * the RSB. 
+ * + * When IBRS is enabled, while #1 would be mitigated by the IBRS branch + * prediction isolation protections, RSB still needs to be cleared + * because of #2. Note that SMEP provides no protection here, unlike + * user-space-poisoned RSB entries. + * + * eIBRS, on the other hand, has RSB-poisoning protections, so it + * doesn't need RSB clearing after vmexit. + */ + if (boot_cpu_has(X86_FEATURE_RETPOLINE) || + boot_cpu_has(X86_FEATURE_KERNEL_IBRS)) + setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT); + /* * Retpoline protects the kernel, but doesn't protect firmware. IBRS * and Enhanced IBRS protect firmware too, so enable IBRS around diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 1c1a3d6c2af2..857fa0fc49fa 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -193,15 +193,15 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL) * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before * the first unbalanced RET after vmexit! * - * For retpoline, RSB filling is needed to prevent poisoned RSB entries - * and (in some cases) RSB underflow. + * For retpoline or IBRS, RSB filling is needed to prevent poisoned RSB + * entries and (in some cases) RSB underflow. * * eIBRS has its own protection against poisoned RSB, so it doesn't * need the RSB filling sequence. But it does need to be enabled * before the first unbalanced RET. */ - FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE + FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT pop %_ASM_ARG2 /* @flags */ pop %_ASM_ARG1 /* @vmx */ From a74f5d23e68d9687ed06bd462d344867824707d8 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 24 Jun 2022 14:03:25 +0200 Subject: [PATCH 1552/1842] x86/common: Stamp out the stepping madness commit 7a05bc95ed1c5a59e47aaade9fb4083c27de9e62 upstream. The whole MMIO/RETBLEED enumeration went overboard on steppings. Get rid of all that and simply use ANY. If a future stepping of these models would not be affected, it had better set the relevant ARCH_CAP_$FOO_NO bit in IA32_ARCH_CAPABILITIES. 
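For illustration, a toy model of the matching being simplified (made-up types; the kernel's x86_match_cpu() machinery is more involved): once an entry's stepping mask is "any", one line covers every stepping of a model, so the per-stepping BIT() lists can go away.

  /* Toy stepping-mask match; purely illustrative. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define STEPPING_ANY 0xffffu          /* stands in for X86_STEPPING_ANY */

  struct vuln_entry {
      uint8_t  model;
      uint16_t stepping_mask;           /* bit N set => stepping N listed */
  };

  static bool entry_matches(const struct vuln_entry *e,
                            uint8_t model, uint8_t stepping)
  {
      return e->model == model && (e->stepping_mask & (1u << stepping));
  }

  int main(void)
  {
      struct vuln_entry skylake_x = { 0x55, STEPPING_ANY };

      /* A stepping that used to need its own BIT(x) entry now just matches. */
      printf("stepping 7 listed: %d\n", entry_matches(&skylake_x, 0x55, 7));
      return 0;
  }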
Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Acked-by: Dave Hansen Signed-off-by: Borislav Petkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/common.c | 37 ++++++++++++++++-------------------- 1 file changed, 16 insertions(+), 21 deletions(-) diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index ab68595e217e..a1051a036833 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1119,32 +1119,27 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(HASWELL, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL_L, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(HASWELL_G, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(HASWELL_X, BIT(2) | BIT(4), MMIO), - VULNBL_INTEL_STEPPINGS(BROADWELL_D, X86_STEPPINGS(0x3, 0x5), MMIO), + VULNBL_INTEL_STEPPINGS(HASWELL_X, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(BROADWELL_D, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS), VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(SKYLAKE_X, BIT(3) | BIT(4) | BIT(6) | - BIT(7) | BIT(0xB), MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPINGS(0x3, 0x3), SRBDS | MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x9, 0xC), SRBDS | MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x9, 0xD), SRBDS | MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0x8), SRBDS), - VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPINGS(0x5, 0x5), MMIO | MMIO_SBDS | RETBLEED), - VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPINGS(0x1, 0x1), MMIO), - VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPINGS(0x4, 0x6), MMIO), - VULNBL_INTEL_STEPPINGS(COMETLAKE, BIT(2) | BIT(3) | BIT(5), MMIO | MMIO_SBDS | RETBLEED), - VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE_X, X86_STEPPING_ANY, MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPING_ANY, MMIO), + VULNBL_INTEL_STEPPINGS(COMETLAKE, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED), VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS | RETBLEED), - VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPINGS(0x1, 0x1), MMIO | RETBLEED), - VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPINGS(0x1, 0x1), MMIO | MMIO_SBDS), + VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED), + VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED), + 
VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPING_ANY, MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT, X86_STEPPING_ANY, MMIO | MMIO_SBDS), VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_D, X86_STEPPING_ANY, MMIO), - VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPINGS(0x0, 0x0), MMIO | MMIO_SBDS), + VULNBL_INTEL_STEPPINGS(ATOM_TREMONT_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS), VULNBL_AMD(0x15, RETBLEED), VULNBL_AMD(0x16, RETBLEED), From f7851ed697be2ce86bd8baf29111762b7b3ff6cc Mon Sep 17 00:00:00 2001 From: Andrew Cooper Date: Fri, 24 Jun 2022 14:41:21 +0100 Subject: [PATCH 1553/1842] x86/cpu/amd: Enumerate BTC_NO commit 26aae8ccbc1972233afd08fb3f368947c0314265 upstream. BTC_NO indicates that hardware is not susceptible to Branch Type Confusion. Zen3 CPUs don't suffer BTC. Hypervisors are expected to synthesise BTC_NO when it is appropriate given the migration pool, to prevent kernels using heuristics. [ bp: Massage. ] Signed-off-by: Andrew Cooper Signed-off-by: Borislav Petkov [cascardo: no X86_FEATURE_BRS] [cascardo: no X86_FEATURE_CPPC] Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/kernel/cpu/amd.c | 21 +++++++++++++++------ arch/x86/kernel/cpu/common.c | 6 ++++-- 3 files changed, 20 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 48ca004c2b67..1d2feea0805a 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -316,6 +316,7 @@ #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */ #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */ #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */ +#define X86_FEATURE_BTC_NO (13*32+29) /* "" Not vulnerable to Branch Type Confusion */ /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */ #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */ diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index f1f52e67e361..a2a7f0275e36 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -942,12 +942,21 @@ static void init_amd_zn(struct cpuinfo_x86 *c) node_reclaim_distance = 32; #endif - /* - * Fix erratum 1076: CPB feature bit not being set in CPUID. - * Always set it, except when running under a hypervisor. - */ - if (!cpu_has(c, X86_FEATURE_HYPERVISOR) && !cpu_has(c, X86_FEATURE_CPB)) - set_cpu_cap(c, X86_FEATURE_CPB); + /* Fix up CPUID bits, but only if not virtualised. */ + if (!cpu_has(c, X86_FEATURE_HYPERVISOR)) { + + /* Erratum 1076: CPB feature bit not being set in CPUID. */ + if (!cpu_has(c, X86_FEATURE_CPB)) + set_cpu_cap(c, X86_FEATURE_CPB); + + /* + * Zen3 (Fam19 model < 0x10) parts are not susceptible to + * Branch Type Confusion, but predate the allocation of the + * BTC_NO bit. 
+ */ + if (c->x86 == 0x19 && !cpu_has(c, X86_FEATURE_BTC_NO)) + set_cpu_cap(c, X86_FEATURE_BTC_NO); + } } static void init_amd(struct cpuinfo_x86 *c) diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index a1051a036833..3364498f5537 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1246,8 +1246,10 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) !arch_cap_mmio_immune(ia32_cap)) setup_force_cpu_bug(X86_BUG_MMIO_STALE_DATA); - if ((cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA))) - setup_force_cpu_bug(X86_BUG_RETBLEED); + if (!cpu_has(c, X86_FEATURE_BTC_NO)) { + if (cpu_matches(cpu_vuln_blacklist, RETBLEED) || (ia32_cap & ARCH_CAP_RSBA)) + setup_force_cpu_bug(X86_BUG_RETBLEED); + } if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return; From b24fdd0f1c3328cf8ee0c518b93a7187f8cee097 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 27 Jun 2022 22:21:17 +0000 Subject: [PATCH 1554/1842] x86/retbleed: Add fine grained Kconfig knobs commit f43b9876e857c739d407bc56df288b0ebe1a9164 upstream. Do fine-grained Kconfig for all the various retbleed parts. NOTE: if your compiler doesn't support return thunks this will silently 'upgrade' your mitigation to IBPB, you might not like this. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov [cascardo: there is no CONFIG_OBJTOOL] [cascardo: objtool calling and option parsing has changed] Signed-off-by: Thadeu Lima de Souza Cascardo [bwh: Backported to 5.10: - In scripts/Makefile.build, add the objtool option with an ifdef block, same as for other options - Adjust filename, context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- Makefile | 8 +- arch/x86/Kconfig | 106 ++++++++++++++++++----- arch/x86/entry/calling.h | 4 + arch/x86/include/asm/disabled-features.h | 18 +++- arch/x86/include/asm/linkage.h | 4 +- arch/x86/include/asm/nospec-branch.h | 10 ++- arch/x86/include/asm/static_call.h | 2 +- arch/x86/kernel/alternative.c | 5 ++ arch/x86/kernel/cpu/amd.c | 2 + arch/x86/kernel/cpu/bugs.c | 42 +++++---- arch/x86/kernel/static_call.c | 2 +- arch/x86/kvm/emulate.c | 4 +- arch/x86/lib/retpoline.S | 4 + scripts/Makefile.build | 3 + scripts/link-vmlinux.sh | 2 +- security/Kconfig | 11 --- tools/objtool/builtin-check.c | 3 +- tools/objtool/builtin.h | 2 +- tools/objtool/check.c | 9 +- 19 files changed, 172 insertions(+), 69 deletions(-) diff --git a/Makefile b/Makefile index 4d502a81d24b..96794db9f18a 100644 --- a/Makefile +++ b/Makefile @@ -672,14 +672,18 @@ endif ifdef CONFIG_CC_IS_GCC RETPOLINE_CFLAGS := $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register) -RETPOLINE_CFLAGS += $(call cc-option,-mfunction-return=thunk-extern) RETPOLINE_VDSO_CFLAGS := $(call cc-option,-mindirect-branch=thunk-inline -mindirect-branch-register) endif ifdef CONFIG_CC_IS_CLANG RETPOLINE_CFLAGS := -mretpoline-external-thunk RETPOLINE_VDSO_CFLAGS := -mretpoline -RETPOLINE_CFLAGS += $(call cc-option,-mfunction-return=thunk-extern) endif + +ifdef CONFIG_RETHUNK +RETHUNK_CFLAGS := -mfunction-return=thunk-extern +RETPOLINE_CFLAGS += $(RETHUNK_CFLAGS) +endif + export RETPOLINE_CFLAGS export RETPOLINE_VDSO_CFLAGS diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 6eebb8927378..16c045906b2a 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -453,30 +453,6 @@ config GOLDFISH def_bool y depends on X86_GOLDFISH -config RETPOLINE - bool "Avoid speculative indirect branches in kernel" - default y - help - Compile kernel with the 
retpoline compiler options to guard against - kernel-to-user data leaks by avoiding speculative indirect - branches. Requires a compiler with -mindirect-branch=thunk-extern - support for full protection. The kernel may run slower. - -config CC_HAS_SLS - def_bool $(cc-option,-mharden-sls=all) - -config CC_HAS_RETURN_THUNK - def_bool $(cc-option,-mfunction-return=thunk-extern) - -config SLS - bool "Mitigate Straight-Line-Speculation" - depends on CC_HAS_SLS && X86_64 - default n - help - Compile the kernel with straight-line-speculation options to guard - against straight line speculation. The kernel image might be slightly - larger. - config X86_CPU_RESCTRL bool "x86 CPU resource control support" depends on X86 && (CPU_SUP_INTEL || CPU_SUP_AMD) @@ -2430,6 +2406,88 @@ source "kernel/livepatch/Kconfig" endmenu +config CC_HAS_SLS + def_bool $(cc-option,-mharden-sls=all) + +config CC_HAS_RETURN_THUNK + def_bool $(cc-option,-mfunction-return=thunk-extern) + +menuconfig SPECULATION_MITIGATIONS + bool "Mitigations for speculative execution vulnerabilities" + default y + help + Say Y here to enable options which enable mitigations for + speculative execution hardware vulnerabilities. + + If you say N, all mitigations will be disabled. You really + should know what you are doing to say so. + +if SPECULATION_MITIGATIONS + +config PAGE_TABLE_ISOLATION + bool "Remove the kernel mapping in user mode" + default y + depends on (X86_64 || X86_PAE) + help + This feature reduces the number of hardware side channels by + ensuring that the majority of kernel addresses are not mapped + into userspace. + + See Documentation/x86/pti.rst for more details. + +config RETPOLINE + bool "Avoid speculative indirect branches in kernel" + default y + help + Compile kernel with the retpoline compiler options to guard against + kernel-to-user data leaks by avoiding speculative indirect + branches. Requires a compiler with -mindirect-branch=thunk-extern + support for full protection. The kernel may run slower. + +config RETHUNK + bool "Enable return-thunks" + depends on RETPOLINE && CC_HAS_RETURN_THUNK + default y + help + Compile the kernel with the return-thunks compiler option to guard + against kernel-to-user data leaks by avoiding return speculation. + Requires a compiler with -mfunction-return=thunk-extern + support for full protection. The kernel may run slower. + +config CPU_UNRET_ENTRY + bool "Enable UNRET on kernel entry" + depends on CPU_SUP_AMD && RETHUNK + default y + help + Compile the kernel with support for the retbleed=unret mitigation. + +config CPU_IBPB_ENTRY + bool "Enable IBPB on kernel entry" + depends on CPU_SUP_AMD + default y + help + Compile the kernel with support for the retbleed=ibpb mitigation. + +config CPU_IBRS_ENTRY + bool "Enable IBRS on kernel entry" + depends on CPU_SUP_INTEL + default y + help + Compile the kernel with support for the spectre_v2=ibrs mitigation. + This mitigates both spectre_v2 and retbleed at great cost to + performance. + +config SLS + bool "Mitigate Straight-Line-Speculation" + depends on CC_HAS_SLS && X86_64 + default n + help + Compile the kernel with straight-line-speculation options to guard + against straight line speculation. The kernel image might be slightly + larger. 
+ +endif + config ARCH_HAS_ADD_PAGES def_bool y depends on X86_64 && ARCH_ENABLE_MEMORY_HOTPLUG diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h index c3c68e97a404..a4b357e5bbfe 100644 --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -323,6 +323,7 @@ For 32-bit we have the following conventions - kernel is built with * Assumes x86_spec_ctrl_{base,current} to have SPEC_CTRL_IBRS set. */ .macro IBRS_ENTER save_reg +#ifdef CONFIG_CPU_IBRS_ENTRY ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS movl $MSR_IA32_SPEC_CTRL, %ecx @@ -343,6 +344,7 @@ For 32-bit we have the following conventions - kernel is built with shr $32, %rdx wrmsr .Lend_\@: +#endif .endm /* @@ -350,6 +352,7 @@ For 32-bit we have the following conventions - kernel is built with * regs. Must be called after the last RET. */ .macro IBRS_EXIT save_reg +#ifdef CONFIG_CPU_IBRS_ENTRY ALTERNATIVE "jmp .Lend_\@", "", X86_FEATURE_KERNEL_IBRS movl $MSR_IA32_SPEC_CTRL, %ecx @@ -364,6 +367,7 @@ For 32-bit we have the following conventions - kernel is built with shr $32, %rdx wrmsr .Lend_\@: +#endif .endm /* diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h index e5b4e3d460c1..16c24915210d 100644 --- a/arch/x86/include/asm/disabled-features.h +++ b/arch/x86/include/asm/disabled-features.h @@ -60,9 +60,19 @@ # define DISABLE_RETPOLINE 0 #else # define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \ - (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31)) | \ - (1 << (X86_FEATURE_RETHUNK & 31)) | \ - (1 << (X86_FEATURE_UNRET & 31))) + (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31))) +#endif + +#ifdef CONFIG_RETHUNK +# define DISABLE_RETHUNK 0 +#else +# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31)) +#endif + +#ifdef CONFIG_CPU_UNRET_ENTRY +# define DISABLE_UNRET 0 +#else +# define DISABLE_UNRET (1 << (X86_FEATURE_UNRET & 31)) #endif /* Force disable because it's broken beyond repair */ @@ -82,7 +92,7 @@ #define DISABLED_MASK8 0 #define DISABLED_MASK9 (DISABLE_SMAP) #define DISABLED_MASK10 0 -#define DISABLED_MASK11 (DISABLE_RETPOLINE) +#define DISABLED_MASK11 (DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET) #define DISABLED_MASK12 0 #define DISABLED_MASK13 0 #define DISABLED_MASK14 0 diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h index d04e61c2f863..5000cf59bdf5 100644 --- a/arch/x86/include/asm/linkage.h +++ b/arch/x86/include/asm/linkage.h @@ -18,7 +18,7 @@ #define __ALIGN_STR __stringify(__ALIGN) #endif -#if defined(CONFIG_RETPOLINE) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO) +#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO) #define RET jmp __x86_return_thunk #else /* CONFIG_RETPOLINE */ #ifdef CONFIG_SLS @@ -30,7 +30,7 @@ #else /* __ASSEMBLY__ */ -#if defined(CONFIG_RETPOLINE) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO) +#if defined(CONFIG_RETHUNK) && !defined(__DISABLE_EXPORTS) && !defined(BUILD_VDSO) #define ASM_RET "jmp __x86_return_thunk\n\t" #else /* CONFIG_RETPOLINE */ #ifdef CONFIG_SLS diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index f50a18d62ad7..2989b42fcf4e 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -127,6 +127,12 @@ .Lskip_rsb_\@: .endm +#ifdef CONFIG_CPU_UNRET_ENTRY +#define CALL_ZEN_UNTRAIN_RET "call zen_untrain_ret" +#else +#define CALL_ZEN_UNTRAIN_RET "" +#endif + /* * Mitigate RETBleed for AMD/Hygon Zen uarch. 
Requires KERNEL CR3 because the * return thunk isn't mapped into the userspace tables (then again, AMD @@ -139,10 +145,10 @@ * where we have a stack but before any RET instruction. */ .macro UNTRAIN_RET -#ifdef CONFIG_RETPOLINE +#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) ANNOTATE_UNRET_END ALTERNATIVE_2 "", \ - "call zen_untrain_ret", X86_FEATURE_UNRET, \ + CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \ "call entry_ibpb", X86_FEATURE_ENTRY_IBPB #endif .endm diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h index 7efce22d6355..491aadfac611 100644 --- a/arch/x86/include/asm/static_call.h +++ b/arch/x86/include/asm/static_call.h @@ -44,7 +44,7 @@ #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, ".byte 0xe9; .long " #func " - (. + 4)") -#ifdef CONFIG_RETPOLINE +#ifdef CONFIG_RETHUNK #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) \ __ARCH_DEFINE_STATIC_CALL_TRAMP(name, "jmp __x86_return_thunk") #else diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 1fded2daaa2e..77eefa0b0f32 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -662,6 +662,7 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) } } +#ifdef CONFIG_RETHUNK /* * Rewrite the compiler generated return thunk tail-calls. * @@ -723,6 +724,10 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end) } } } +#else +void __init_or_module noinline apply_returns(s32 *start, s32 *end) { } +#endif /* CONFIG_RETHUNK */ + #else /* !RETPOLINES || !CONFIG_STACK_VALIDATION */ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { } diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c index a2a7f0275e36..8b9e3277a6ce 100644 --- a/arch/x86/kernel/cpu/amd.c +++ b/arch/x86/kernel/cpu/amd.c @@ -916,6 +916,7 @@ static void init_amd_bd(struct cpuinfo_x86 *c) void init_spectral_chicken(struct cpuinfo_x86 *c) { +#ifdef CONFIG_CPU_UNRET_ENTRY u64 value; /* @@ -932,6 +933,7 @@ void init_spectral_chicken(struct cpuinfo_x86 *c) wrmsrl_safe(MSR_ZEN2_SPECTRAL_CHICKEN, value); } } +#endif } static void init_amd_zn(struct cpuinfo_x86 *c) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 5e88663591bb..e96d33675229 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -798,7 +798,6 @@ static int __init retbleed_parse_cmdline(char *str) early_param("retbleed", retbleed_parse_cmdline); #define RETBLEED_UNTRAIN_MSG "WARNING: BTB untrained return thunk mitigation is only effective on AMD/Hygon!\n" -#define RETBLEED_COMPILER_MSG "WARNING: kernel not compiled with RETPOLINE or -mfunction-return capable compiler; falling back to IBPB!\n" #define RETBLEED_INTEL_MSG "WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!\n" static void __init retbleed_select_mitigation(void) @@ -813,18 +812,33 @@ static void __init retbleed_select_mitigation(void) return; case RETBLEED_CMD_UNRET: - retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) { + retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + } else { + pr_err("WARNING: kernel not compiled with CPU_UNRET_ENTRY.\n"); + goto do_cmd_auto; + } break; case RETBLEED_CMD_IBPB: - retbleed_mitigation = RETBLEED_MITIGATION_IBPB; + if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) { + retbleed_mitigation = RETBLEED_MITIGATION_IBPB; + } else { + pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n"); + 
goto do_cmd_auto; + } break; +do_cmd_auto: case RETBLEED_CMD_AUTO: default: if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || - boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) - retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) { + if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) + retbleed_mitigation = RETBLEED_MITIGATION_UNRET; + else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) + retbleed_mitigation = RETBLEED_MITIGATION_IBPB; + } /* * The Intel mitigation (IBRS or eIBRS) was already selected in @@ -837,14 +851,6 @@ static void __init retbleed_select_mitigation(void) switch (retbleed_mitigation) { case RETBLEED_MITIGATION_UNRET: - - if (!IS_ENABLED(CONFIG_RETPOLINE) || - !IS_ENABLED(CONFIG_CC_HAS_RETURN_THUNK)) { - pr_err(RETBLEED_COMPILER_MSG); - retbleed_mitigation = RETBLEED_MITIGATION_IBPB; - goto retbleed_force_ibpb; - } - setup_force_cpu_cap(X86_FEATURE_RETHUNK); setup_force_cpu_cap(X86_FEATURE_UNRET); @@ -856,7 +862,6 @@ static void __init retbleed_select_mitigation(void) break; case RETBLEED_MITIGATION_IBPB: -retbleed_force_ibpb: setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB); mitigate_smt = true; break; @@ -1227,6 +1232,12 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void) return SPECTRE_V2_CMD_AUTO; } + if (cmd == SPECTRE_V2_CMD_IBRS && !IS_ENABLED(CONFIG_CPU_IBRS_ENTRY)) { + pr_err("%s selected but not compiled in. Switching to AUTO select\n", + mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + if (cmd == SPECTRE_V2_CMD_IBRS && boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) { pr_err("%s selected but not Intel CPU. Switching to AUTO select\n", mitigation_options[i].option); @@ -1284,7 +1295,8 @@ static void __init spectre_v2_select_mitigation(void) break; } - if (boot_cpu_has_bug(X86_BUG_RETBLEED) && + if (IS_ENABLED(CONFIG_CPU_IBRS_ENTRY) && + boot_cpu_has_bug(X86_BUG_RETBLEED) && retbleed_cmd != RETBLEED_CMD_OFF && boot_cpu_has(X86_FEATURE_IBRS) && boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) { diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c index 02e471c27efa..8f7ba3ae1c3c 100644 --- a/arch/x86/kernel/static_call.c +++ b/arch/x86/kernel/static_call.c @@ -108,7 +108,7 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) } EXPORT_SYMBOL_GPL(arch_static_call_transform); -#ifdef CONFIG_RETPOLINE +#ifdef CONFIG_RETHUNK /* * This is called by apply_returns() to fix up static call trampolines, * specifically ARCH_DEFINE_STATIC_CALL_NULL_TRAMP which is recorded as diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 1f953f230ad5..59e5d79f5c34 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -435,10 +435,10 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); * Depending on .config the SETcc functions look like: * * SETcc %al [3 bytes] - * RET | JMP __x86_return_thunk [1,5 bytes; CONFIG_RETPOLINE] + * RET | JMP __x86_return_thunk [1,5 bytes; CONFIG_RETHUNK] * INT3 [1 byte; CONFIG_SLS] */ -#define RET_LENGTH (1 + (4 * IS_ENABLED(CONFIG_RETPOLINE)) + \ +#define RET_LENGTH (1 + (4 * IS_ENABLED(CONFIG_RETHUNK)) + \ IS_ENABLED(CONFIG_SLS)) #define SETCC_LENGTH (3 + RET_LENGTH) #define SETCC_ALIGN (4 << ((SETCC_LENGTH > 4) & 1) << ((SETCC_LENGTH > 8) & 1)) diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S index 807f674fd59e..1221bb099afb 100644 --- a/arch/x86/lib/retpoline.S +++ b/arch/x86/lib/retpoline.S @@ -71,6 +71,8 @@ SYM_CODE_END(__x86_indirect_thunk_array) * This function name is magical and is 
used by -mfunction-return=thunk-extern * for the compiler to generate JMPs to it. */ +#ifdef CONFIG_RETHUNK + .section .text.__x86.return_thunk /* @@ -135,3 +137,5 @@ SYM_FUNC_END(zen_untrain_ret) __EXPORT_THUNK(zen_untrain_ret) EXPORT_SYMBOL(__x86_return_thunk) + +#endif /* CONFIG_RETHUNK */ diff --git a/scripts/Makefile.build b/scripts/Makefile.build index bea7e54b2ab9..17e8b2065900 100644 --- a/scripts/Makefile.build +++ b/scripts/Makefile.build @@ -227,6 +227,9 @@ endif ifdef CONFIG_RETPOLINE objtool_args += --retpoline endif +ifdef CONFIG_RETHUNK + objtool_args += --rethunk +endif ifdef CONFIG_X86_SMAP objtool_args += --uaccess endif diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh index a7c72eb5a624..d0b44bee9286 100755 --- a/scripts/link-vmlinux.sh +++ b/scripts/link-vmlinux.sh @@ -65,7 +65,7 @@ objtool_link() if [ -n "${CONFIG_VMLINUX_VALIDATION}" ]; then objtoolopt="check" - if [ -n "${CONFIG_RETPOLINE}" ]; then + if [ -n "${CONFIG_CPU_UNRET_ENTRY}" ]; then objtoolopt="${objtoolopt} --unret" fi if [ -z "${CONFIG_FRAME_POINTER}" ]; then diff --git a/security/Kconfig b/security/Kconfig index 0548db16c49d..9893c316da89 100644 --- a/security/Kconfig +++ b/security/Kconfig @@ -54,17 +54,6 @@ config SECURITY_NETWORK implement socket and networking access controls. If you are unsure how to answer this question, answer N. -config PAGE_TABLE_ISOLATION - bool "Remove the kernel mapping in user mode" - default y - depends on (X86_64 || X86_PAE) && !UML - help - This feature reduces the number of hardware side channels by - ensuring that the majority of kernel addresses are not mapped - into userspace. - - See Documentation/x86/pti.rst for more details. - config SECURITY_INFINIBAND bool "Infiniband Security Hooks" depends on SECURITY && INFINIBAND diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c index d04eab7e77ae..447a49c03abb 100644 --- a/tools/objtool/builtin-check.c +++ b/tools/objtool/builtin-check.c @@ -19,7 +19,7 @@ #include "objtool.h" bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, - validate_dup, vmlinux, sls, unret; + validate_dup, vmlinux, sls, unret, rethunk; static const char * const check_usage[] = { "objtool check [] file.o", @@ -30,6 +30,7 @@ const struct option check_options[] = { OPT_BOOLEAN('f', "no-fp", &no_fp, "Skip frame pointer validation"), OPT_BOOLEAN('u', "no-unreachable", &no_unreachable, "Skip 'unreachable instruction' warnings"), OPT_BOOLEAN('r', "retpoline", &retpoline, "Validate retpoline assumptions"), + OPT_BOOLEAN(0, "rethunk", &rethunk, "validate and annotate rethunk usage"), OPT_BOOLEAN(0, "unret", &unret, "validate entry unret placement"), OPT_BOOLEAN('m', "module", &module, "Indicates the object will be part of a kernel module"), OPT_BOOLEAN('b', "backtrace", &backtrace, "unwind on error"), diff --git a/tools/objtool/builtin.h b/tools/objtool/builtin.h index b6b486e28d54..61d8d49dbc65 100644 --- a/tools/objtool/builtin.h +++ b/tools/objtool/builtin.h @@ -9,7 +9,7 @@ extern const struct option check_options[]; extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, - validate_dup, vmlinux, sls, unret; + validate_dup, vmlinux, sls, unret, rethunk; extern int cmd_check(int argc, const char **argv); extern int cmd_orc(int argc, const char **argv); diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 864d407009c9..ea80b29b9913 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -3262,8 +3262,11 @@ static int validate_retpoline(struct 
objtool_file *file) continue; if (insn->type == INSN_RETURN) { - WARN_FUNC("'naked' return found in RETPOLINE build", - insn->sec, insn->offset); + if (rethunk) { + WARN_FUNC("'naked' return found in RETHUNK build", + insn->sec, insn->offset); + } else + continue; } else { WARN_FUNC("indirect %s found in RETPOLINE build", insn->sec, insn->offset, @@ -3533,7 +3536,9 @@ int check(struct objtool_file *file) if (ret < 0) goto out; warnings += ret; + } + if (rethunk) { ret = create_return_sites_sections(file); if (ret < 0) goto out; From 609336351d08699395be24860902e6e0b7860e2b Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Wed, 6 Jul 2022 15:01:15 -0700 Subject: [PATCH 1555/1842] x86/bugs: Add Cannon lake to RETBleed affected CPU list commit f54d45372c6ac9c993451de5e51312485f7d10bc upstream. Cannon lake is also affected by RETBleed, add it to the list. Fixes: 6ad0ad2bf8a6 ("x86/bugs: Report Intel retbleed vulnerability") Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/common.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 3364498f5537..901352bd3b42 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1129,6 +1129,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPING_ANY, SRBDS | MMIO | RETBLEED), + VULNBL_INTEL_STEPPINGS(CANNONLAKE_L, X86_STEPPING_ANY, RETBLEED), VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED), VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPING_ANY, MMIO), VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPING_ANY, MMIO), From 51552b6b52fc865f37ef3ddacd27d807a36695ac Mon Sep 17 00:00:00 2001 From: Thadeu Lima de Souza Cascardo Date: Thu, 7 Jul 2022 13:41:52 -0300 Subject: [PATCH 1556/1842] x86/bugs: Do not enable IBPB-on-entry when IBPB is not supported commit 2259da159fbe5dba8ac00b560cf00b6a6537fa18 upstream. There are some VM configurations which have Skylake model but do not support IBPB. In those cases, when using retbleed=ibpb, userspace is going to be killed and kernel is going to panic. If the CPU does not support IBPB, warn and proceed with the auto option. Also, do not fallback to IBPB on AMD/Hygon systems if it is not supported. 
Fixes: 3ebc17006888 ("x86/bugs: Add retbleed=ibpb") Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Borislav Petkov Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index e96d33675229..bb5a5069fad9 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -821,7 +821,10 @@ static void __init retbleed_select_mitigation(void) break; case RETBLEED_CMD_IBPB: - if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) { + if (!boot_cpu_has(X86_FEATURE_IBPB)) { + pr_err("WARNING: CPU does not support IBPB.\n"); + goto do_cmd_auto; + } else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) { retbleed_mitigation = RETBLEED_MITIGATION_IBPB; } else { pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n"); @@ -836,7 +839,7 @@ static void __init retbleed_select_mitigation(void) boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) { if (IS_ENABLED(CONFIG_CPU_UNRET_ENTRY)) retbleed_mitigation = RETBLEED_MITIGATION_UNRET; - else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) + else if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY) && boot_cpu_has(X86_FEATURE_IBPB)) retbleed_mitigation = RETBLEED_MITIGATION_IBPB; } From c2ca992144281917cfae19d231b1195c02906a4e Mon Sep 17 00:00:00 2001 From: Konrad Rzeszutek Wilk Date: Fri, 8 Jul 2022 19:10:11 +0200 Subject: [PATCH 1557/1842] x86/kexec: Disable RET on kexec commit 697977d8415d61f3acbc4ee6d564c9dcf0309507 upstream. All the invocations unroll to __x86_return_thunk and this file must be PIC independent. This fixes kexec on 64-bit AMD boxes. [ bp: Fix 32-bit build. ] Reported-by: Edward Tran Reported-by: Awais Tanveer Suggested-by: Ankur Arora Signed-off-by: Konrad Rzeszutek Wilk Signed-off-by: Alexandre Chartre Signed-off-by: Borislav Petkov Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/relocate_kernel_32.S | 25 +++++++++++++++++++------ arch/x86/kernel/relocate_kernel_64.S | 23 +++++++++++++++++------ 2 files changed, 36 insertions(+), 12 deletions(-) diff --git a/arch/x86/kernel/relocate_kernel_32.S b/arch/x86/kernel/relocate_kernel_32.S index 8190136cc63a..82a839885a3c 100644 --- a/arch/x86/kernel/relocate_kernel_32.S +++ b/arch/x86/kernel/relocate_kernel_32.S @@ -7,10 +7,12 @@ #include #include #include +#include #include /* - * Must be relocatable PIC code callable as a C function + * Must be relocatable PIC code callable as a C function, in particular + * there must be a plain RET and not jump to return thunk. 
*/ #define PTR(x) (x << 2) @@ -91,7 +93,9 @@ SYM_CODE_START_NOALIGN(relocate_kernel) movl %edi, %eax addl $(identity_mapped - relocate_kernel), %eax pushl %eax - RET + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(relocate_kernel) SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) @@ -159,12 +163,15 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) xorl %edx, %edx xorl %esi, %esi xorl %ebp, %ebp - RET + ANNOTATE_UNRET_SAFE + ret + int3 1: popl %edx movl CP_PA_SWAP_PAGE(%edi), %esp addl $PAGE_SIZE, %esp 2: + ANNOTATE_RETPOLINE_SAFE call *%edx /* get the re-entry point of the peer system */ @@ -190,7 +197,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) movl %edi, %eax addl $(virtual_mapped - relocate_kernel), %eax pushl %eax - RET + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(identity_mapped) SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped) @@ -208,7 +217,9 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped) popl %edi popl %esi popl %ebx - RET + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(virtual_mapped) /* Do the copies */ @@ -271,7 +282,9 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages) popl %edi popl %ebx popl %ebp - RET + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(swap_pages) .globl kexec_control_code_size diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S index a0ae167b1aeb..d2b7d212caa4 100644 --- a/arch/x86/kernel/relocate_kernel_64.S +++ b/arch/x86/kernel/relocate_kernel_64.S @@ -13,7 +13,8 @@ #include /* - * Must be relocatable PIC code callable as a C function + * Must be relocatable PIC code callable as a C function, in particular + * there must be a plain RET and not jump to return thunk. */ #define PTR(x) (x << 3) @@ -104,7 +105,9 @@ SYM_CODE_START_NOALIGN(relocate_kernel) /* jump to identity mapped page */ addq $(identity_mapped - relocate_kernel), %r8 pushq %r8 - RET + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(relocate_kernel) SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) @@ -191,7 +194,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) xorl %r14d, %r14d xorl %r15d, %r15d - RET + ANNOTATE_UNRET_SAFE + ret + int3 1: popq %rdx @@ -210,7 +215,9 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped) call swap_pages movq $virtual_mapped, %rax pushq %rax - RET + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(identity_mapped) SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped) @@ -231,7 +238,9 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped) popq %r12 popq %rbp popq %rbx - RET + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(virtual_mapped) /* Do the copies */ @@ -288,7 +297,9 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages) lea PAGE_SIZE(%rax), %rsi jmp 0b 3: - RET + ANNOTATE_UNRET_SAFE + ret + int3 SYM_CODE_END(swap_pages) .globl kexec_control_code_size From eb38964b6ff864b8bdf87c9cf6221d0b0611a990 Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Fri, 8 Jul 2022 13:36:09 -0700 Subject: [PATCH 1558/1842] x86/speculation: Disable RRSBA behavior commit 4ad3278df6fe2b0852b00d5757fc2ccd8e92c26e upstream. Some Intel processors may use alternate predictors for RETs on RSB-underflow. This condition may be vulnerable to Branch History Injection (BHI) and intramode-BTI. Kernel earlier added spectre_v2 mitigation modes (eIBRS+Retpolines, eIBRS+LFENCE, Retpolines) which protect indirect CALLs and JMPs against such attacks. However, on RSB-underflow, RET target prediction may fallback to alternate predictors. As a result, RET's predicted target may get influenced by branch history. A new MSR_IA32_SPEC_CTRL bit (RRSBA_DIS_S) controls this fallback behavior when in kernel mode. 
When set, RETs will not take predictions from alternate predictors, hence mitigating RETs as well. Support for this is enumerated by CPUID.7.2.EDX[RRSBA_CTRL] (bit2). For spectre v2 mitigation, when a user selects a mitigation that protects indirect CALLs and JMPs against BHI and intramode-BTI, set RRSBA_DIS_S also to protect RETs for RSB-underflow case. Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov [bwh: Backported to 5.15: adjust context in scattered.c] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/cpufeatures.h | 2 +- arch/x86/include/asm/msr-index.h | 9 +++++++++ arch/x86/kernel/cpu/bugs.c | 26 ++++++++++++++++++++++++++ arch/x86/kernel/cpu/scattered.c | 1 + tools/arch/x86/include/asm/msr-index.h | 9 +++++++++ 5 files changed, 46 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 1d2feea0805a..e077f4fe7c6d 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -293,7 +293,7 @@ /* FREE! (11*32+ 8) */ /* FREE! (11*32+ 9) */ #define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */ -/* FREE! (11*32+11) */ +#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */ diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 19cc68aea58a..407de670cd60 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -51,6 +51,8 @@ #define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */ #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */ #define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */ +#define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */ +#define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT) #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */ #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */ @@ -139,6 +141,13 @@ * bit available to control VERW * behavior. */ +#define ARCH_CAP_RRSBA BIT(19) /* + * Indicates RET may use predictors + * other than the RSB. With eIBRS + * enabled predictions in kernel mode + * are restricted to targets in + * kernel. 
+ */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index bb5a5069fad9..62a07bac906e 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1274,6 +1274,22 @@ static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void) return SPECTRE_V2_RETPOLINE; } +/* Disable in-kernel use of non-RSB RET predictors */ +static void __init spec_ctrl_disable_kernel_rrsba(void) +{ + u64 ia32_cap; + + if (!boot_cpu_has(X86_FEATURE_RRSBA_CTRL)) + return; + + ia32_cap = x86_read_arch_cap_msr(); + + if (ia32_cap & ARCH_CAP_RRSBA) { + x86_spec_ctrl_base |= SPEC_CTRL_RRSBA_DIS_S; + write_spec_ctrl_current(x86_spec_ctrl_base, true); + } +} + static void __init spectre_v2_select_mitigation(void) { enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline(); @@ -1368,6 +1384,16 @@ static void __init spectre_v2_select_mitigation(void) break; } + /* + * Disable alternate RSB predictions in kernel when indirect CALLs and + * JMPs gets protection against BHI and Intramode-BTI, but RET + * prediction from a non-RSB predictor is still a risk. + */ + if (mode == SPECTRE_V2_EIBRS_LFENCE || + mode == SPECTRE_V2_EIBRS_RETPOLINE || + mode == SPECTRE_V2_RETPOLINE) + spec_ctrl_disable_kernel_rrsba(); + spectre_v2_enabled = mode; pr_info("%s\n", spectre_v2_strings[mode]); diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c index 866c9a9bcdee..82fe492121bb 100644 --- a/arch/x86/kernel/cpu/scattered.c +++ b/arch/x86/kernel/cpu/scattered.c @@ -26,6 +26,7 @@ struct cpuid_bit { static const struct cpuid_bit cpuid_bits[] = { { X86_FEATURE_APERFMPERF, CPUID_ECX, 0, 0x00000006, 0 }, { X86_FEATURE_EPB, CPUID_ECX, 3, 0x00000006, 0 }, + { X86_FEATURE_RRSBA_CTRL, CPUID_EDX, 2, 0x00000007, 2 }, { X86_FEATURE_CQM_LLC, CPUID_EDX, 1, 0x0000000f, 0 }, { X86_FEATURE_CQM_OCCUP_LLC, CPUID_EDX, 0, 0x0000000f, 1 }, { X86_FEATURE_CQM_MBM_TOTAL, CPUID_EDX, 1, 0x0000000f, 1 }, diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h index 96973d197972..7acc3fbb87fb 100644 --- a/tools/arch/x86/include/asm/msr-index.h +++ b/tools/arch/x86/include/asm/msr-index.h @@ -51,6 +51,8 @@ #define SPEC_CTRL_STIBP BIT(SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */ #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */ #define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */ +#define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */ +#define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT) #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */ #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */ @@ -138,6 +140,13 @@ * bit available to control VERW * behavior. */ +#define ARCH_CAP_RRSBA BIT(19) /* + * Indicates RET may use predictors + * other than the RSB. With eIBRS + * enabled predictions in kernel mode + * are restricted to targets in + * kernel. + */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* From c035ca88b0742952150b1671bb5d26b96f921245 Mon Sep 17 00:00:00 2001 From: Thomas Gleixner Date: Tue, 12 Jul 2022 14:01:06 +0200 Subject: [PATCH 1559/1842] x86/static_call: Serialize __static_call_fixup() properly commit c27c753ea6fd1237f4f96abf8b623d7bab505513 upstream. __static_call_fixup() invokes __static_call_transform() without holding text_mutex, which causes lockdep to complain in text_poke_bp(). 
Adding the proper locking cures that, but as this is either used during early boot or during module finalizing, it's not required to use text_poke_bp(). Add an argument to __static_call_transform() which tells it to use text_poke_early() for it. Fixes: ee88d363d156 ("x86,static_call: Use alternative RET encoding") Signed-off-by: Thomas Gleixner Signed-off-by: Borislav Petkov Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/static_call.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c index 8f7ba3ae1c3c..2973b3fb0ec1 100644 --- a/arch/x86/kernel/static_call.c +++ b/arch/x86/kernel/static_call.c @@ -20,7 +20,8 @@ static const u8 tramp_ud[] = { 0x0f, 0xb9, 0xcc }; static const u8 retinsn[] = { RET_INSN_OPCODE, 0xcc, 0xcc, 0xcc, 0xcc }; -static void __ref __static_call_transform(void *insn, enum insn_type type, void *func) +static void __ref __static_call_transform(void *insn, enum insn_type type, + void *func, bool modinit) { int size = CALL_INSN_SIZE; const void *code; @@ -49,7 +50,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type, void if (memcmp(insn, code, size) == 0) return; - if (unlikely(system_state == SYSTEM_BOOTING)) + if (system_state == SYSTEM_BOOTING || modinit) return text_poke_early(insn, code, size); text_poke_bp(insn, code, size, NULL); @@ -96,12 +97,12 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail) if (tramp) { __static_call_validate(tramp, true); - __static_call_transform(tramp, __sc_insn(!func, true), func); + __static_call_transform(tramp, __sc_insn(!func, true), func, false); } if (IS_ENABLED(CONFIG_HAVE_STATIC_CALL_INLINE) && site) { __static_call_validate(site, tail); - __static_call_transform(site, __sc_insn(!func, tail), func); + __static_call_transform(site, __sc_insn(!func, tail), func, false); } mutex_unlock(&text_mutex); @@ -127,8 +128,10 @@ bool __static_call_fixup(void *tramp, u8 op, void *dest) return false; } + mutex_lock(&text_mutex); if (op == RET_INSN_OPCODE || dest == &__x86_return_thunk) - __static_call_transform(tramp, RET, NULL); + __static_call_transform(tramp, RET, NULL, true); + mutex_unlock(&text_mutex); return true; } From 844947eee36c8ab21005883d6626e713a678c868 Mon Sep 17 00:00:00 2001 From: Borislav Petkov Date: Wed, 17 Mar 2021 11:33:04 +0100 Subject: [PATCH 1560/1842] tools/insn: Restore the relative include paths for cross building commit 0705ef64d1ff52b817e278ca6e28095585ff31e1 upstream. Building perf on ppc causes: In file included from util/intel-pt-decoder/intel-pt-insn-decoder.c:15: util/intel-pt-decoder/../../../arch/x86/lib/insn.c:14:10: fatal error: asm/inat.h: No such file or directory 14 | #include /*__ignore_sync_check__ */ | ^~~~~~~~~~~~ Restore the relative include paths so that the compiler can find the headers. 
Fixes: 93281c4a9657 ("x86/insn: Add an insn_decode() API") Reported-by: Ian Rogers Reported-by: Stephen Rothwell Signed-off-by: Borislav Petkov Tested-by: Ian Rogers Tested-by: Stephen Rothwell Link: https://lkml.kernel.org/r/20210317150858.02b1bbc8@canb.auug.org.au Cc: Florian Fainelli Signed-off-by: Greg Kroah-Hartman --- tools/arch/x86/lib/insn.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c index 819c82529fe8..f24cc0f618e4 100644 --- a/tools/arch/x86/lib/insn.c +++ b/tools/arch/x86/lib/insn.c @@ -10,13 +10,13 @@ #else #include #endif -#include /* __ignore_sync_check__ */ -#include /* __ignore_sync_check__ */ +#include "../include/asm/inat.h" /* __ignore_sync_check__ */ +#include "../include/asm/insn.h" /* __ignore_sync_check__ */ #include #include -#include /* __ignore_sync_check__ */ +#include "../include/asm/emulate_prefix.h" /* __ignore_sync_check__ */ /* Verify next sizeof(t) bytes can be on the same instruction */ #define validate_next(t, insn, n) \ From 81f20e5000eca278a8bab4959c4fa1beda1fbed5 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 30 Jun 2022 12:19:47 +0200 Subject: [PATCH 1561/1842] x86, kvm: use proper ASM macros for kvm_vcpu_is_preempted commit edbaf6e5e93acda96aae23ba134ef3c1466da3b5 upstream. The build rightfully complains about: arch/x86/kernel/kvm.o: warning: objtool: __raw_callee_save___kvm_vcpu_is_preempted()+0x12: missing int3 after ret because the ASM_RET call is not being used correctly in kvm_vcpu_is_preempted(). This was hand-fixed-up in the kvm merge commit a4cfff3f0f8c ("Merge branch 'kvm-older-features' into HEAD") which of course can not be backported to stable kernels, so just fix this up directly instead. Cc: Paolo Bonzini Cc: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/kvm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 971609fb15c5..fe9babe94861 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -953,7 +953,7 @@ asm( "movq __per_cpu_offset(,%rdi,8), %rax;" "cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);" "setne %al;" -"ret;" +ASM_RET ".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;" ".popsection"); From 668cb1ddf0ae7fcffcfc2ac1cfec9f770c8191fc Mon Sep 17 00:00:00 2001 From: Ben Hutchings Date: Thu, 14 Jul 2022 00:39:33 +0200 Subject: [PATCH 1562/1842] x86/xen: Fix initialisation in hypercall_page after rethunk The hypercall_page is special and the RETs there should not be changed into rethunk calls (but can have SLS mitigation). Change the initial instructions to ret + int3 padding, as was done in upstream commit 5b2fc51576ef "x86/ibt,xen: Sprinkle the ENDBR". 
Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/xen/xen-head.S | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S index 38b73e7e54ba..2a3ef5fcba34 100644 --- a/arch/x86/xen/xen-head.S +++ b/arch/x86/xen/xen-head.S @@ -69,9 +69,9 @@ SYM_CODE_END(asm_cpu_bringup_and_idle) SYM_CODE_START(hypercall_page) .rept (PAGE_SIZE / 32) UNWIND_HINT_FUNC - .skip 31, 0x90 ANNOTATE_UNRET_SAFE - RET + ret + .skip 31, 0xcc .endr #define HYPERCALL(n) \ From 95d89ec7dba56f7974aeda2beace2112df9f1418 Mon Sep 17 00:00:00 2001 From: Josh Poimboeuf Date: Thu, 21 Jan 2021 15:29:21 -0600 Subject: [PATCH 1563/1842] x86/ftrace: Add UNWIND_HINT_FUNC annotation for ftrace_stub commit 18660698a3d30868524cefb60dcd4e0e297f71bb upstream. Prevent an unreachable objtool warning after the sibling call detection gets improved. ftrace_stub() is basically a function, annotate it as such. Acked-by: Steven Rostedt (VMware) Signed-off-by: Josh Poimboeuf Link: https://lore.kernel.org/r/6845e1b2fb0723a95740c6674e548ba38c5ea489.1611263461.git.jpoimboe@redhat.com Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/ftrace_64.S | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S index b656c11521a1..e3a375185a1b 100644 --- a/arch/x86/kernel/ftrace_64.S +++ b/arch/x86/kernel/ftrace_64.S @@ -173,6 +173,7 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL) * It is also used to copy the RET for trampolines. */ SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK) + UNWIND_HINT_FUNC RET SYM_FUNC_END(ftrace_epilogue) From ecc0d92a9f6cc3f74b67d2c9887d0c800018e661 Mon Sep 17 00:00:00 2001 From: Jiri Slaby Date: Wed, 13 Jul 2022 11:50:46 +0200 Subject: [PATCH 1564/1842] x86/asm/32: Fix ANNOTATE_UNRET_SAFE use on 32-bit commit 3131ef39fb03bbde237d0b8260445898f3dfda5b upstream. The build on x86_32 currently fails after commit 9bb2ec608a20 (objtool: Update Retpoline validation) with: arch/x86/kernel/../../x86/xen/xen-head.S:35: Error: no such instruction: `annotate_unret_safe' ANNOTATE_UNRET_SAFE is defined in nospec-branch.h. And head_32.S is missing this include. Fix this. Fixes: 9bb2ec608a20 ("objtool: Update Retpoline validation") Signed-off-by: Jiri Slaby Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/63e23f80-033f-f64e-7522-2816debbc367@kernel.org Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/head_32.S | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S index 9b2b1ac7e8c9..3f1691b89231 100644 --- a/arch/x86/kernel/head_32.S +++ b/arch/x86/kernel/head_32.S @@ -23,6 +23,7 @@ #include #include #include +#include #include #include #include From abf88ff13414f3991b35c295c4b564c31e3cb2b0 Mon Sep 17 00:00:00 2001 From: Nathan Chancellor Date: Wed, 13 Jul 2022 08:24:37 -0700 Subject: [PATCH 1565/1842] x86/speculation: Use DECLARE_PER_CPU for x86_spec_ctrl_current commit db886979683a8360ced9b24ab1125ad0c4d2cf76 upstream. Clang warns: arch/x86/kernel/cpu/bugs.c:58:21: error: section attribute is specified on redeclared variable [-Werror,-Wsection] DEFINE_PER_CPU(u64, x86_spec_ctrl_current); ^ arch/x86/include/asm/nospec-branch.h:283:12: note: previous declaration is here extern u64 x86_spec_ctrl_current; ^ 1 error generated. The declaration should be using DECLARE_PER_CPU instead so all attributes stay in sync. 
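The underlying pattern, for readers not steeped in percpu conventions: a per-CPU variable is defined exactly once with DEFINE_PER_CPU() in a .c file and referenced from headers with DECLARE_PER_CPU(), so both sides carry the same section attribute; re-declaring it as a plain extern (as the pre-fix header did) drops that attribute, which is exactly what Clang's -Wsection diagnoses. A self-contained sketch with hypothetical names, not the kernel's actual variable:

/* foo.h -- hypothetical header shared by several .c files */
#include <linux/percpu-defs.h>
#include <linux/types.h>

DECLARE_PER_CPU(u64, foo_ctrl);		/* declaration only: no storage */

/* foo.c -- the one translation unit that owns the storage */
DEFINE_PER_CPU(u64, foo_ctrl);		/* definition, placed in the per-CPU section */

static u64 foo_ctrl_on_this_cpu(void)
{
	return this_cpu_read(foo_ctrl);
}

Keeping declaration and definition on the same macro pair is what "all attributes stay in sync" refers to above.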
Cc: stable@vger.kernel.org Fixes: fc02735b14ff ("KVM: VMX: Prevent guest RSB poisoning attacks with eIBRS") Reported-by: kernel test robot Signed-off-by: Nathan Chancellor Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 2989b42fcf4e..6f2adaf53f46 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -11,6 +11,7 @@ #include #include #include +#include #define RETPOLINE_THUNK_SIZE 32 @@ -281,7 +282,7 @@ static inline void indirect_branch_prediction_barrier(void) /* The Intel SPEC CTRL MSR base value cache */ extern u64 x86_spec_ctrl_base; -extern u64 x86_spec_ctrl_current; +DECLARE_PER_CPU(u64, x86_spec_ctrl_current); extern void write_spec_ctrl_current(u64 val, bool force); extern u64 spec_ctrl_current(void); From 5779e2f0cc241d76ae309e2f6f52168204e93fdb Mon Sep 17 00:00:00 2001 From: Thadeu Lima de Souza Cascardo Date: Fri, 15 Jul 2022 16:45:50 -0300 Subject: [PATCH 1566/1842] efi/x86: use naked RET on mixed mode call wrapper commit 51a6fa0732d6be6a44e0032752ad2ac10d67c796 upstream. When running with return thunks enabled under 32-bit EFI, the system crashes with: kernel tried to execute NX-protected page - exploit attempt? (uid: 0) BUG: unable to handle page fault for address: 000000005bc02900 #PF: supervisor instruction fetch in kernel mode #PF: error_code(0x0011) - permissions violation PGD 18f7063 P4D 18f7063 PUD 18ff063 PMD 190e063 PTE 800000005bc02063 Oops: 0011 [#1] PREEMPT SMP PTI CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.19.0-rc6+ #166 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 RIP: 0010:0x5bc02900 Code: Unable to access opcode bytes at RIP 0x5bc028d6. RSP: 0018:ffffffffb3203e10 EFLAGS: 00010046 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000048 RDX: 000000000190dfac RSI: 0000000000001710 RDI: 000000007eae823b RBP: ffffffffb3203e70 R08: 0000000001970000 R09: ffffffffb3203e28 R10: 747563657865206c R11: 6c6977203a696665 R12: 0000000000001710 R13: 0000000000000030 R14: 0000000001970000 R15: 0000000000000001 FS: 0000000000000000(0000) GS:ffff8e013ca00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0018 ES: 0018 CR0: 0000000080050033 CR2: 000000005bc02900 CR3: 0000000001930000 CR4: 00000000000006f0 Call Trace: ? efi_set_virtual_address_map+0x9c/0x175 efi_enter_virtual_mode+0x4a6/0x53e start_kernel+0x67c/0x71e x86_64_start_reservations+0x24/0x2a x86_64_start_kernel+0xe9/0xf4 secondary_startup_64_no_verify+0xe5/0xeb That's because it cannot jump to the return thunk from the 32-bit code. Using a naked RET and marking it as safe allows the system to proceed booting. 
Fixes: aa3d480315ba ("x86: Use return-thunk in asm code") Reported-by: Guenter Roeck Signed-off-by: Thadeu Lima de Souza Cascardo Cc: Peter Zijlstra (Intel) Cc: Borislav Petkov Cc: Josh Poimboeuf Cc: Tested-by: Guenter Roeck Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- arch/x86/platform/efi/efi_thunk_64.S | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/arch/x86/platform/efi/efi_thunk_64.S b/arch/x86/platform/efi/efi_thunk_64.S index 063eb27c0a12..9eac088ef4de 100644 --- a/arch/x86/platform/efi/efi_thunk_64.S +++ b/arch/x86/platform/efi/efi_thunk_64.S @@ -22,6 +22,7 @@ #include #include #include +#include .text .code64 @@ -63,7 +64,9 @@ SYM_CODE_START(__efi64_thunk) 1: movq 24(%rsp), %rsp pop %rbx pop %rbp - RET + ANNOTATE_UNRET_SAFE + ret + int3 .code32 2: pushl $__KERNEL_CS From 8e31dfd6306e7c6bb3758a459c2a6067be807886 Mon Sep 17 00:00:00 2001 From: Thadeu Lima de Souza Cascardo Date: Wed, 13 Jul 2022 14:12:41 -0300 Subject: [PATCH 1567/1842] x86/kvm: fix FASTOP_SIZE when return thunks are enabled commit 84e7051c0bc1f2a13101553959b3a9d9a8e24939 upstream. The return thunk call makes the fastop functions larger, just like IBT does. Consider a 16-byte FASTOP_SIZE when CONFIG_RETHUNK is enabled. Otherwise, functions will be incorrectly aligned and when computing their position for differently sized operators, they will executed in the middle or end of a function, which may as well be an int3, leading to a crash like: [ 36.091116] int3: 0000 [#1] SMP NOPTI [ 36.091119] CPU: 3 PID: 1371 Comm: qemu-system-x86 Not tainted 5.15.0-41-generic #44 [ 36.091120] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014 [ 36.091121] RIP: 0010:xaddw_ax_dx+0x9/0x10 [kvm] [ 36.091185] Code: 00 0f bb d0 c3 cc cc cc cc 48 0f bb d0 c3 cc cc cc cc 0f 1f 80 00 00 00 00 0f c0 d0 c3 cc cc cc cc 66 0f c1 d0 c3 cc cc cc cc <0f> 1f 80 00 00 00 00 0f c1 d0 c3 cc cc cc cc 48 0f c1 d0 c3 cc cc [ 36.091186] RSP: 0018:ffffb1f541143c98 EFLAGS: 00000202 [ 36.091188] RAX: 0000000089abcdef RBX: 0000000000000001 RCX: 0000000000000000 [ 36.091188] RDX: 0000000076543210 RSI: ffffffffc073c6d0 RDI: 0000000000000200 [ 36.091189] RBP: ffffb1f541143ca0 R08: ffff9f1803350a70 R09: 0000000000000002 [ 36.091190] R10: ffff9f1803350a70 R11: 0000000000000000 R12: ffff9f1803350a70 [ 36.091190] R13: ffffffffc077fee0 R14: 0000000000000000 R15: 0000000000000000 [ 36.091191] FS: 00007efdfce8d640(0000) GS:ffff9f187dd80000(0000) knlGS:0000000000000000 [ 36.091192] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 36.091192] CR2: 0000000000000000 CR3: 0000000009b62002 CR4: 0000000000772ee0 [ 36.091195] PKRU: 55555554 [ 36.091195] Call Trace: [ 36.091197] [ 36.091198] ? fastop+0x5a/0xa0 [kvm] [ 36.091222] x86_emulate_insn+0x7b8/0xe90 [kvm] [ 36.091244] x86_emulate_instruction+0x2f4/0x630 [kvm] [ 36.091263] ? kvm_arch_vcpu_load+0x7c/0x230 [kvm] [ 36.091283] ? vmx_prepare_switch_to_host+0xf7/0x190 [kvm_intel] [ 36.091290] complete_emulated_mmio+0x297/0x320 [kvm] [ 36.091310] kvm_arch_vcpu_ioctl_run+0x32f/0x550 [kvm] [ 36.091330] kvm_vcpu_ioctl+0x29e/0x6d0 [kvm] [ 36.091344] ? kvm_vcpu_ioctl+0x120/0x6d0 [kvm] [ 36.091357] ? __fget_files+0x86/0xc0 [ 36.091362] ? __fget_files+0x86/0xc0 [ 36.091363] __x64_sys_ioctl+0x92/0xd0 [ 36.091366] do_syscall_64+0x59/0xc0 [ 36.091369] ? syscall_exit_to_user_mode+0x27/0x50 [ 36.091370] ? do_syscall_64+0x69/0xc0 [ 36.091371] ? syscall_exit_to_user_mode+0x27/0x50 [ 36.091372] ? __x64_sys_writev+0x1c/0x30 [ 36.091374] ? 
do_syscall_64+0x69/0xc0 [ 36.091374] ? exit_to_user_mode_prepare+0x37/0xb0 [ 36.091378] ? syscall_exit_to_user_mode+0x27/0x50 [ 36.091379] ? do_syscall_64+0x69/0xc0 [ 36.091379] ? do_syscall_64+0x69/0xc0 [ 36.091380] ? do_syscall_64+0x69/0xc0 [ 36.091381] ? do_syscall_64+0x69/0xc0 [ 36.091381] entry_SYSCALL_64_after_hwframe+0x61/0xcb [ 36.091384] RIP: 0033:0x7efdfe6d1aff [ 36.091390] Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <41> 89 c0 3d 00 f0 ff ff 77 1f 48 8b 44 24 18 64 48 2b 04 25 28 00 [ 36.091391] RSP: 002b:00007efdfce8c460 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 [ 36.091393] RAX: ffffffffffffffda RBX: 000000000000ae80 RCX: 00007efdfe6d1aff [ 36.091393] RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000000c [ 36.091394] RBP: 0000558f1609e220 R08: 0000558f13fb8190 R09: 00000000ffffffff [ 36.091394] R10: 0000558f16b5e950 R11: 0000000000000246 R12: 0000000000000000 [ 36.091394] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000 [ 36.091396] [ 36.091397] Modules linked in: isofs nls_iso8859_1 kvm_intel joydev kvm input_leds serio_raw sch_fq_codel dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua ipmi_devintf ipmi_msghandler drm msr ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel virtio_net net_failover crypto_simd ahci xhci_pci cryptd psmouse virtio_blk libahci xhci_pci_renesas failover [ 36.123271] ---[ end trace db3c0ab5a48fabcc ]--- [ 36.123272] RIP: 0010:xaddw_ax_dx+0x9/0x10 [kvm] [ 36.123319] Code: 00 0f bb d0 c3 cc cc cc cc 48 0f bb d0 c3 cc cc cc cc 0f 1f 80 00 00 00 00 0f c0 d0 c3 cc cc cc cc 66 0f c1 d0 c3 cc cc cc cc <0f> 1f 80 00 00 00 00 0f c1 d0 c3 cc cc cc cc 48 0f c1 d0 c3 cc cc [ 36.123320] RSP: 0018:ffffb1f541143c98 EFLAGS: 00000202 [ 36.123321] RAX: 0000000089abcdef RBX: 0000000000000001 RCX: 0000000000000000 [ 36.123321] RDX: 0000000076543210 RSI: ffffffffc073c6d0 RDI: 0000000000000200 [ 36.123322] RBP: ffffb1f541143ca0 R08: ffff9f1803350a70 R09: 0000000000000002 [ 36.123322] R10: ffff9f1803350a70 R11: 0000000000000000 R12: ffff9f1803350a70 [ 36.123323] R13: ffffffffc077fee0 R14: 0000000000000000 R15: 0000000000000000 [ 36.123323] FS: 00007efdfce8d640(0000) GS:ffff9f187dd80000(0000) knlGS:0000000000000000 [ 36.123324] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 36.123325] CR2: 0000000000000000 CR3: 0000000009b62002 CR4: 0000000000772ee0 [ 36.123327] PKRU: 55555554 [ 36.123328] Kernel panic - not syncing: Fatal exception in interrupt [ 36.123410] Kernel Offset: 0x1400000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff) [ 36.135305] ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]--- Fixes: aa3d480315ba ("x86: Use return-thunk in asm code") Signed-off-by: Thadeu Lima de Souza Cascardo Co-developed-by: Peter Zijlstra (Intel) Cc: Borislav Petkov Cc: Josh Poimboeuf Cc: Paolo Bonzini Reported-by: Linux Kernel Functional Testing Message-Id: <20220713171241.184026-1-cascardo@canonical.com> Tested-by: Jack Wang Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/emulate.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 59e5d79f5c34..b4589fa6bbae 100644 --- a/arch/x86/kvm/emulate.c +++ 
b/arch/x86/kvm/emulate.c @@ -188,8 +188,12 @@ #define X8(x...) X4(x), X4(x) #define X16(x...) X8(x), X8(x) -#define NR_FASTOP (ilog2(sizeof(ulong)) + 1) -#define FASTOP_SIZE 8 +#define NR_FASTOP (ilog2(sizeof(ulong)) + 1) +#define RET_LENGTH (1 + (4 * IS_ENABLED(CONFIG_RETHUNK)) + \ + IS_ENABLED(CONFIG_SLS)) +#define FASTOP_LENGTH (ENDBR_INSN_SIZE + 7 + RET_LENGTH) +#define FASTOP_SIZE (8 << ((FASTOP_LENGTH > 8) & 1) << ((FASTOP_LENGTH > 16) & 1)) +static_assert(FASTOP_LENGTH <= FASTOP_SIZE); struct opcode { u64 flags : 56; @@ -438,8 +442,6 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); * RET | JMP __x86_return_thunk [1,5 bytes; CONFIG_RETHUNK] * INT3 [1 byte; CONFIG_SLS] */ -#define RET_LENGTH (1 + (4 * IS_ENABLED(CONFIG_RETHUNK)) + \ - IS_ENABLED(CONFIG_SLS)) #define SETCC_LENGTH (3 + RET_LENGTH) #define SETCC_ALIGN (4 << ((SETCC_LENGTH > 4) & 1) << ((SETCC_LENGTH > 8) & 1)) static_assert(SETCC_LENGTH <= SETCC_ALIGN); From 2ef1b06ceacfc6f96f6792126e107e75c0d7bd5d Mon Sep 17 00:00:00 2001 From: Paolo Bonzini Date: Fri, 15 Jul 2022 07:34:55 -0400 Subject: [PATCH 1568/1842] KVM: emulate: do not adjust size of fastop and setcc subroutines commit 79629181607e801c0b41b8790ac4ee2eb5d7bc3e upstream. Instead of doing complicated calculations to find the size of the subroutines (which are even more complicated because they need to be stringified into an asm statement), just hardcode to 16. It is less dense for a few combinations of IBT/SLS/retbleed, but it has the advantage of being really simple. Cc: stable@vger.kernel.org # 5.15.x: 84e7051c0bc1: x86/kvm: fix FASTOP_SIZE when return thunks are enabled Cc: stable@vger.kernel.org Suggested-by: Linus Torvalds Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/emulate.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index b4589fa6bbae..a1765658f002 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -188,13 +188,6 @@ #define X8(x...) X4(x), X4(x) #define X16(x...) X8(x), X8(x) -#define NR_FASTOP (ilog2(sizeof(ulong)) + 1) -#define RET_LENGTH (1 + (4 * IS_ENABLED(CONFIG_RETHUNK)) + \ - IS_ENABLED(CONFIG_SLS)) -#define FASTOP_LENGTH (ENDBR_INSN_SIZE + 7 + RET_LENGTH) -#define FASTOP_SIZE (8 << ((FASTOP_LENGTH > 8) & 1) << ((FASTOP_LENGTH > 16) & 1)) -static_assert(FASTOP_LENGTH <= FASTOP_SIZE); - struct opcode { u64 flags : 56; u64 intercept : 8; @@ -308,9 +301,15 @@ static void invalidate_registers(struct x86_emulate_ctxt *ctxt) * Moreover, they are all exactly FASTOP_SIZE bytes long, so functions for * different operand sizes can be reached by calculation, rather than a jump * table (which would be bigger than the code). + * + * The 16 byte alignment, considering 5 bytes for the RET thunk, 3 for ENDBR + * and 1 for the straight line speculation INT3, leaves 7 bytes for the + * body of the function. Currently none is larger than 4. 
*/ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); +#define FASTOP_SIZE 16 + #define __FOP_FUNC(name) \ ".align " __stringify(FASTOP_SIZE) " \n\t" \ ".type " name ", @function \n\t" \ @@ -442,9 +441,7 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); * RET | JMP __x86_return_thunk [1,5 bytes; CONFIG_RETHUNK] * INT3 [1 byte; CONFIG_SLS] */ -#define SETCC_LENGTH (3 + RET_LENGTH) -#define SETCC_ALIGN (4 << ((SETCC_LENGTH > 4) & 1) << ((SETCC_LENGTH > 8) & 1)) -static_assert(SETCC_LENGTH <= SETCC_ALIGN); +#define SETCC_ALIGN 16 #define FOP_SETCC(op) \ ".align " __stringify(SETCC_ALIGN) " \n\t" \ From 3f93b8630a91e9195607312b7f16a25417f61f7b Mon Sep 17 00:00:00 2001 From: Arnaldo Carvalho de Melo Date: Thu, 1 Jul 2021 13:32:18 -0300 Subject: [PATCH 1569/1842] tools arch x86: Sync the msr-index.h copy with the kernel sources commit 91d248c3b903b46a58cbc7e8d38d684d3e4007c2 upstream. To pick up the changes from these csets: 4ad3278df6fe2b08 ("x86/speculation: Disable RRSBA behavior") d7caac991feeef1b ("x86/cpu/amd: Add Spectral Chicken") That cause no changes to tooling: $ tools/perf/trace/beauty/tracepoints/x86_msr.sh > before $ cp arch/x86/include/asm/msr-index.h tools/arch/x86/include/asm/msr-index.h $ tools/perf/trace/beauty/tracepoints/x86_msr.sh > after $ diff -u before after $ Just silences this perf build warning: Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h' diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h Cc: Adrian Hunter Cc: Borislav Petkov Cc: Ian Rogers Cc: Jiri Olsa Cc: Namhyung Kim Cc: Pawan Gupta Cc: Peter Zijlstra Link: https://lore.kernel.org/lkml/YtQTm9wsB3hxQWvy@kernel.org Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Greg Kroah-Hartman --- tools/arch/x86/include/asm/msr-index.h | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h index 7acc3fbb87fb..407de670cd60 100644 --- a/tools/arch/x86/include/asm/msr-index.h +++ b/tools/arch/x86/include/asm/msr-index.h @@ -93,6 +93,7 @@ #define MSR_IA32_ARCH_CAPABILITIES 0x0000010a #define ARCH_CAP_RDCL_NO BIT(0) /* Not susceptible to Meltdown */ #define ARCH_CAP_IBRS_ALL BIT(1) /* Enhanced IBRS support */ +#define ARCH_CAP_RSBA BIT(2) /* RET may use alternative branch predictors */ #define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH BIT(3) /* Skip L1D flush on vmentry */ #define ARCH_CAP_SSB_NO BIT(4) /* * Not susceptible to Speculative Store Bypass @@ -516,6 +517,9 @@ /* Fam 17h MSRs */ #define MSR_F17H_IRPERF 0xc00000e9 +#define MSR_ZEN2_SPECTRAL_CHICKEN 0xc00110e3 +#define MSR_ZEN2_SPECTRAL_CHICKEN_BIT BIT_ULL(1) + /* Fam 16h MSRs */ #define MSR_F16H_L2I_PERF_CTL 0xc0010230 #define MSR_F16H_L2I_PERF_CTR 0xc0010231 From 81604506c26aef661bc460609981e01b706cf025 Mon Sep 17 00:00:00 2001 From: Arnaldo Carvalho de Melo Date: Thu, 1 Jul 2021 13:39:15 -0300 Subject: [PATCH 1570/1842] tools headers cpufeatures: Sync with the kernel sources commit f098addbdb44c8a565367f5162f3ab170ed9404a upstream. 
To pick the changes from: f43b9876e857c739 ("x86/retbleed: Add fine grained Kconfig knobs") a149180fbcf336e9 ("x86: Add magic AMD return-thunk") 15e67227c49a5783 ("x86: Undo return-thunk damage") 369ae6ffc41a3c11 ("x86/retpoline: Cleanup some #ifdefery") 4ad3278df6fe2b08 x86/speculation: Disable RRSBA behavior 26aae8ccbc197223 x86/cpu/amd: Enumerate BTC_NO 9756bba28470722d x86/speculation: Fill RSB on vmexit for IBRS 3ebc170068885b6f x86/bugs: Add retbleed=ibpb 2dbb887e875b1de3 x86/entry: Add kernel IBRS implementation 6b80b59b35557065 x86/bugs: Report AMD retbleed vulnerability a149180fbcf336e9 x86: Add magic AMD return-thunk 15e67227c49a5783 x86: Undo return-thunk damage a883d624aed463c8 x86/cpufeatures: Move RETPOLINE flags to word 11 51802186158c74a0 x86/speculation/mmio: Enumerate Processor MMIO Stale Data bug This only causes these perf files to be rebuilt: CC /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o CC /tmp/build/perf/bench/mem-memset-x86-64-asm.o And addresses this perf build warning: Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h' diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h Warning: Kernel ABI header at 'tools/arch/x86/include/asm/disabled-features.h' differs from latest version at 'arch/x86/include/asm/disabled-features.h' diff -u tools/arch/x86/include/asm/disabled-features.h arch/x86/include/asm/disabled-features.h Cc: Adrian Hunter Cc: Borislav Petkov Cc: Ian Rogers Cc: Jiri Olsa Cc: Namhyung Kim Cc: Peter Zijlstra Signed-off-by: Greg Kroah-Hartman --- tools/arch/x86/include/asm/cpufeatures.h | 12 +++++++++-- .../arch/x86/include/asm/disabled-features.h | 21 ++++++++++++++++++- 2 files changed, 30 insertions(+), 3 deletions(-) diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h index a7b5c5efcf3b..54ba20492ad1 100644 --- a/tools/arch/x86/include/asm/cpufeatures.h +++ b/tools/arch/x86/include/asm/cpufeatures.h @@ -203,8 +203,8 @@ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ -#define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ -#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */ +#define X86_FEATURE_KERNEL_IBRS ( 7*32+12) /* "" Set/clear IBRS on kernel entry/exit */ +#define X86_FEATURE_RSB_VMEXIT ( 7*32+13) /* "" Fill RSB on VM-Exit */ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */ #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */ @@ -290,6 +290,12 @@ #define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */ #define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */ #define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */ +#define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */ +#define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */ +#define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ +#define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ +#define X86_FEATURE_RETHUNK (11*32+14) /* 
"" Use REturn THUNK */ +#define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ @@ -308,6 +314,7 @@ #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */ #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */ #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */ +#define X86_FEATURE_BTC_NO (13*32+29) /* "" Not vulnerable to Branch Type Confusion */ /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */ #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */ @@ -418,5 +425,6 @@ #define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */ #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */ +#define X86_BUG_RETBLEED X86_BUG(26) /* CPU is affected by RETBleed */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/tools/arch/x86/include/asm/disabled-features.h b/tools/arch/x86/include/asm/disabled-features.h index 5861d34f9771..d109c5ec967c 100644 --- a/tools/arch/x86/include/asm/disabled-features.h +++ b/tools/arch/x86/include/asm/disabled-features.h @@ -56,6 +56,25 @@ # define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31)) #endif +#ifdef CONFIG_RETPOLINE +# define DISABLE_RETPOLINE 0 +#else +# define DISABLE_RETPOLINE ((1 << (X86_FEATURE_RETPOLINE & 31)) | \ + (1 << (X86_FEATURE_RETPOLINE_LFENCE & 31))) +#endif + +#ifdef CONFIG_RETHUNK +# define DISABLE_RETHUNK 0 +#else +# define DISABLE_RETHUNK (1 << (X86_FEATURE_RETHUNK & 31)) +#endif + +#ifdef CONFIG_CPU_UNRET_ENTRY +# define DISABLE_UNRET 0 +#else +# define DISABLE_UNRET (1 << (X86_FEATURE_UNRET & 31)) +#endif + #ifdef CONFIG_IOMMU_SUPPORT # define DISABLE_ENQCMD 0 #else @@ -76,7 +95,7 @@ #define DISABLED_MASK8 0 #define DISABLED_MASK9 (DISABLE_SMAP) #define DISABLED_MASK10 0 -#define DISABLED_MASK11 0 +#define DISABLED_MASK11 (DISABLE_RETPOLINE|DISABLE_RETHUNK|DISABLE_UNRET) #define DISABLED_MASK12 0 #define DISABLED_MASK13 0 #define DISABLED_MASK14 0 From 725da3e67cec529788e284dac2c1249b2a531839 Mon Sep 17 00:00:00 2001 From: Kim Phillips Date: Fri, 8 Jul 2022 16:21:28 -0500 Subject: [PATCH 1571/1842] x86/bugs: Remove apostrophe typo commit bcf163150cd37348a0cb59e95c916a83a9344b0e upstream. Remove a superfluous ' in the mitigation string. 
Fixes: e8ec1b6e08a2 ("x86/bugs: Enable STIBP for JMP2RET") Signed-off-by: Kim Phillips Signed-off-by: Borislav Petkov Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 62a07bac906e..bc6382f5ec27 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1137,7 +1137,7 @@ spectre_v2_user_select_mitigation(void) if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) { if (mode != SPECTRE_V2_USER_STRICT && mode != SPECTRE_V2_USER_STRICT_PREFERRED) - pr_info("Selecting STIBP always-on mode to complement retbleed mitigation'\n"); + pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n"); mode = SPECTRE_V2_USER_STRICT_PREFERRED; } From 8e2774270aa319f552fab969bc4500e412d2566b Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Thu, 14 Jul 2022 12:20:19 +0200 Subject: [PATCH 1572/1842] um: Add missing apply_returns() commit 564d998106397394b6aad260f219b882b3347e62 upstream. Implement apply_returns() stub for UM, just like all the other patching routines. Fixes: 15e67227c49a ("x86: Undo return-thunk damage") Reported-by: Randy Dunlap Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/Ys%2Ft45l%2FgarIrD0u@worktop.programming.kicks-ass.net Signed-off-by: Greg Kroah-Hartman --- arch/um/kernel/um_arch.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c index 2676ac322146..26af24b5d900 100644 --- a/arch/um/kernel/um_arch.c +++ b/arch/um/kernel/um_arch.c @@ -362,6 +362,10 @@ void apply_retpolines(s32 *start, s32 *end) { } +void apply_returns(s32 *start, s32 *end) +{ +} + void apply_alternatives(struct alt_instr *start, struct alt_instr *end) { } From 6849ed81a33ae616bfadf40e5c68af265d106dff Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Fri, 19 Nov 2021 17:50:25 +0100 Subject: [PATCH 1573/1842] x86: Use -mindirect-branch-cs-prefix for RETPOLINE builds commit 68cf4f2a72ef8786e6b7af6fd9a89f27ac0f520d upstream. In order to further enable commit: bbe2df3f6b6d ("x86/alternative: Try inline spectre_v2=retpoline,amd") add the new GCC flag -mindirect-branch-cs-prefix: https://gcc.gnu.org/g:2196a681d7810ad8b227bf983f38ba716620545e https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102952 https://bugs.llvm.org/show_bug.cgi?id=52323 to RETPOLINE=y builds. This should allow fully inlining retpoline,amd for GCC builds. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Reviewed-by: Kees Cook Acked-by: Nick Desaulniers Link: https://lkml.kernel.org/r/20211119165630.276205624@infradead.org Signed-off-by: Greg Kroah-Hartman --- Makefile | 1 + 1 file changed, 1 insertion(+) diff --git a/Makefile b/Makefile index 96794db9f18a..df150538571f 100644 --- a/Makefile +++ b/Makefile @@ -672,6 +672,7 @@ endif ifdef CONFIG_CC_IS_GCC RETPOLINE_CFLAGS := $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register) +RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch-cs-prefix) RETPOLINE_VDSO_CFLAGS := $(call cc-option,-mindirect-branch=thunk-inline -mindirect-branch-register) endif ifdef CONFIG_CC_IS_CLANG From 39065d54347fe1395371ad5397df353ef77c8989 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Sun, 3 Oct 2021 13:34:19 -0700 Subject: [PATCH 1574/1842] kvm: fix objtool relocation warning commit 291073a566b2094c7192872cc0f17ce73d83cb76 upstream. 
The recent change to make objtool aware of more symbol relocation types (commit 24ff65257375: "objtool: Teach get_alt_entry() about more relocation types") also added another check, and resulted in this objtool warning when building kvm on x86: arch/x86/kvm/emulate.o: warning: objtool: __ex_table+0x4: don't know how to handle reloc symbol type: kvm_fastop_exception The reason seems to be that kvm_fastop_exception() is marked as a global symbol, which causes the relocation to ke kept around for objtool. And at the same time, the kvm_fastop_exception definition (which is done as an inline asm statement) doesn't actually set the type of the global, which then makes objtool unhappy. The minimal fix is to just not mark kvm_fastop_exception as being a global symbol. It's only used in that one compilation unit anyway, so it was always pointless. That's how all the other local exception table labels are done. I'm not entirely happy about the kinds of games that the kvm code plays with doing its own exception handling, and the fact that it confused objtool is most definitely a symptom of the code being a bit too subtle and ad-hoc. But at least this trivial one-liner makes objtool no longer upset about what is going on. Fixes: 24ff65257375 ("objtool: Teach get_alt_entry() about more relocation types") Link: https://lore.kernel.org/lkml/CAHk-=wiZwq-0LknKhXN4M+T8jbxn_2i9mcKpO+OaBSSq_Eh7tg@mail.gmail.com/ Cc: Borislav Petkov Cc: Paolo Bonzini Cc: Sean Christopherson Cc: Vitaly Kuznetsov Cc: Wanpeng Li Cc: Jim Mattson Cc: Joerg Roedel Cc: Peter Zijlstra Cc: Josh Poimboeuf Cc: Nathan Chancellor Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- arch/x86/kvm/emulate.c | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index a1765658f002..737035f16a9e 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -452,7 +452,6 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop); ".skip " __stringify(SETCC_ALIGN) " - (.-" #op "), 0xcc \n\t" asm(".pushsection .fixup, \"ax\"\n" - ".global kvm_fastop_exception \n" "kvm_fastop_exception: xor %esi, %esi; " ASM_RET ".popsection"); From fbf60f83e241f0ef967644ca06455f37fcea064b Mon Sep 17 00:00:00 2001 From: Vasily Gorbik Date: Wed, 12 May 2021 19:42:10 +0200 Subject: [PATCH 1575/1842] objtool: Fix elf_create_undef_symbol() endianness commit 46c7405df7de8deb97229eacebcee96d61415f3f upstream. Currently x86 cross-compilation fails on big endian system with: x86_64-cross-ld: init/main.o: invalid string offset 488112128 >= 6229 for section `.strtab' Mark new ELF data in elf_create_undef_symbol() as symbol, so that libelf does endianness handling correctly. 
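For the libelf detail being relied on here: every Elf_Data buffer carries a d_type (an Elf_Type such as ELF_T_BYTE or ELF_T_SYM), and elf_update() only converts the contents between host and target byte order when that type says what the buffer holds; data left at the raw ELF_T_BYTE default is written out untouched. The user-space sketch below shows the same idea on a freshly added symbol record -- it is not objtool's code, and the helper name add_symbol_data() is made up for illustration.

#include <gelf.h>
#include <libelf.h>

/*
 * Append one Elf64_Sym to a symbol-table section and mark the buffer as
 * symbol data so libelf byte-swaps the fields for the target encoding
 * on write-out.  Error handling trimmed for brevity.
 */
static Elf_Data *add_symbol_data(Elf_Scn *symtab_scn, Elf64_Sym *sym)
{
	Elf_Data *data = elf_newdata(symtab_scn);

	if (!data)
		return NULL;

	data->d_buf   = sym;
	data->d_size  = sizeof(*sym);
	data->d_align = 8;
	data->d_type  = ELF_T_SYM;	/* the ELF_T_BYTE default would skip the conversion */

	return data;
}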
Fixes: 2f2f7e47f052 ("objtool: Add elf_create_undef_symbol()") Signed-off-by: Vasily Gorbik Signed-off-by: Ingo Molnar Acked-by: Peter Zijlstra Link: https://lore.kernel.org/r/patch-1.thread-6c9df9.git-d39264656387.your-ad-here.call-01620841104-ext-2554@work.hours Signed-off-by: Greg Kroah-Hartman --- tools/objtool/elf.c | 1 + 1 file changed, 1 insertion(+) diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c index 0fb5b45aec53..5aa3b4e76479 100644 --- a/tools/objtool/elf.c +++ b/tools/objtool/elf.c @@ -961,6 +961,7 @@ static int elf_add_string(struct elf *elf, struct section *strtab, char *str) data->d_buf = str; data->d_size = strlen(str) + 1; data->d_align = 1; + data->d_type = ELF_T_SYM; len = strtab->len; strtab->len += data->d_size; From 060e39b8c21ceeca3eeaaca5f97dcd8530d85b67 Mon Sep 17 00:00:00 2001 From: Arnaldo Carvalho de Melo Date: Sun, 9 May 2021 10:19:37 -0300 Subject: [PATCH 1576/1842] tools arch: Update arch/x86/lib/mem{cpy,set}_64.S copies used in 'perf bench mem memcpy' - again commit fb24e308b6310541e70d11a3f19dc40742974b95 upstream. To bring in the change made in this cset: 5e21a3ecad1500e3 ("x86/alternative: Merge include files") This just silences these perf tools build warnings, no change in the tools: Warning: Kernel ABI header at 'tools/arch/x86/lib/memcpy_64.S' differs from latest version at 'arch/x86/lib/memcpy_64.S' diff -u tools/arch/x86/lib/memcpy_64.S arch/x86/lib/memcpy_64.S Warning: Kernel ABI header at 'tools/arch/x86/lib/memset_64.S' differs from latest version at 'arch/x86/lib/memset_64.S' diff -u tools/arch/x86/lib/memset_64.S arch/x86/lib/memset_64.S Cc: Borislav Petkov Cc: Juergen Gross Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Greg Kroah-Hartman --- tools/arch/x86/lib/memcpy_64.S | 2 +- tools/arch/x86/lib/memset_64.S | 2 +- tools/include/asm/{alternative-asm.h => alternative.h} | 0 3 files changed, 2 insertions(+), 2 deletions(-) rename tools/include/asm/{alternative-asm.h => alternative.h} (100%) diff --git a/tools/arch/x86/lib/memcpy_64.S b/tools/arch/x86/lib/memcpy_64.S index 644da05e5e71..59cf2343f3d9 100644 --- a/tools/arch/x86/lib/memcpy_64.S +++ b/tools/arch/x86/lib/memcpy_64.S @@ -4,7 +4,7 @@ #include #include #include -#include +#include #include .pushsection .noinstr.text, "ax" diff --git a/tools/arch/x86/lib/memset_64.S b/tools/arch/x86/lib/memset_64.S index 4d793ff820e2..d624f2bc42f1 100644 --- a/tools/arch/x86/lib/memset_64.S +++ b/tools/arch/x86/lib/memset_64.S @@ -3,7 +3,7 @@ #include #include -#include +#include #include /* diff --git a/tools/include/asm/alternative-asm.h b/tools/include/asm/alternative.h similarity index 100% rename from tools/include/asm/alternative-asm.h rename to tools/include/asm/alternative.h From 2fc7f18ba2f98d15f174ce8e25a5afa46926eb55 Mon Sep 17 00:00:00 2001 From: Arnaldo Carvalho de Melo Date: Wed, 14 Jul 2021 14:28:02 -0300 Subject: [PATCH 1577/1842] tools headers: Remove broken definition of __LITTLE_ENDIAN commit fa2c02e5798c17c89cbb3135940086ebe07e5c9f upstream. The linux/kconfig.h file was copied from the kernel but the line where with the generated/autoconf.h include from where the CONFIG_ entries would come from was deleted, as tools/ build system don't create that file, so we ended up always defining just __LITTLE_ENDIAN as CONFIG_CPU_BIG_ENDIAN was nowhere to be found. This in turn ended up breaking the build in some systems where __LITTLE_ENDIAN was already defined, such as the androind NDK. So just ditch that block that depends on the CONFIG_CPU_BIG_ENDIAN define. 
The kconfig.h file was copied just to get IS_ENABLED() and a 'make -C tools/all' doesn't breaks with this removal. Fixes: 93281c4a96572a34 ("x86/insn: Add an insn_decode() API") Cc: Adrian Hunter Cc: Borislav Petkov Cc: Jiri Olsa Cc: Namhyung Kim Link: http://lore.kernel.org/lkml/YO8hK7lqJcIWuBzx@kernel.org Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Greg Kroah-Hartman --- tools/include/linux/kconfig.h | 6 ------ 1 file changed, 6 deletions(-) diff --git a/tools/include/linux/kconfig.h b/tools/include/linux/kconfig.h index 1555a0c4f345..13b86bd3b746 100644 --- a/tools/include/linux/kconfig.h +++ b/tools/include/linux/kconfig.h @@ -4,12 +4,6 @@ /* CONFIG_CC_VERSION_TEXT (Do not delete this comment. See help in Kconfig) */ -#ifdef CONFIG_CPU_BIG_ENDIAN -#define __BIG_ENDIAN 4321 -#else -#define __LITTLE_ENDIAN 1234 -#endif - #define __ARG_PLACEHOLDER_1 0, #define __take_second_arg(__ignored, val, ...) val From 5034934536433b2831c80134f1531bbdbc2de160 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Mon, 25 Jul 2022 11:26:57 +0200 Subject: [PATCH 1578/1842] Linux 5.10.133 Link: https://lore.kernel.org/r/20220723095224.302504400@linuxfoundation.org Tested-by: Guenter Roeck Tested-by: Linux Kernel Functional Testing Tested-by: Rudi Heitbaum Tested-by: Sudip Mukherjee Tested-by: Jon Hunter Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index df150538571f..fbd330e58c3b 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 132 +SUBLEVEL = 133 EXTRAVERSION = NAME = Dare mighty things From f004760d695d9be9992fdfedf9321b951e6945d0 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Wed, 27 Jul 2022 13:58:06 +0200 Subject: [PATCH 1579/1842] Revert "ext4: verify dir block before splitting it" This reverts commit da2f05919238c7bdc6e28c79539f55c8355408bb which is commit 46c116b920ebec58031f0a78c5ea9599b0d2a371 upstream, as it breaks the build in Android kernel builds due to out-of-tree changes that were never merged upstream. Bug: 236690716 Fixes: 0e8e989142a0 ("Merge 5.10.121 into android12-5.10-lts") Cc: Daniel Rosenberg Signed-off-by: Greg Kroah-Hartman Change-Id: I08929715fb488a7d1977300e84d0940a9bf4dc98 --- fs/ext4/namei.c | 32 +++++++++++--------------------- 1 file changed, 11 insertions(+), 21 deletions(-) diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index 25431c4a77ad..9a5ba0daaea2 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -280,9 +280,9 @@ static struct dx_frame *dx_probe(struct ext4_filename *fname, struct dx_hash_info *hinfo, struct dx_frame *frame); static void dx_release(struct dx_frame *frames); -static int dx_make_map(struct inode *dir, struct buffer_head *bh, - struct dx_hash_info *hinfo, - struct dx_map_entry *map_tail); +static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de, + unsigned blocksize, struct dx_hash_info *hinfo, + struct dx_map_entry map[]); static void dx_sort_map(struct dx_map_entry *map, unsigned count); static struct ext4_dir_entry_2 *dx_move_dirents(struct inode *dir, char *from, char *to, struct dx_map_entry *offsets, @@ -1259,23 +1259,15 @@ static inline int search_dirblock(struct buffer_head *bh, * Create map of hash values, offsets, and sizes, stored at end of block. * Returns number of entries mapped. 
*/ -static int dx_make_map(struct inode *dir, struct buffer_head *bh, - struct dx_hash_info *hinfo, +static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de, + unsigned blocksize, struct dx_hash_info *hinfo, struct dx_map_entry *map_tail) { int count = 0; - struct ext4_dir_entry_2 *de = (struct ext4_dir_entry_2 *)bh->b_data; - unsigned int buflen = bh->b_size; - char *base = bh->b_data; + char *base = (char *) de; struct dx_hash_info h = *hinfo; - if (ext4_has_metadata_csum(dir->i_sb)) - buflen -= sizeof(struct ext4_dir_entry_tail); - - while ((char *) de < base + buflen) { - if (ext4_check_dir_entry(dir, NULL, de, bh, base, buflen, - ((char *)de) - base)) - return -EFSCORRUPTED; + while ((char *) de < base + blocksize) { if (de->name_len && de->inode) { if (ext4_hash_in_dirent(dir)) h.hash = EXT4_DIRENT_HASH(de); @@ -1288,7 +1280,8 @@ static int dx_make_map(struct inode *dir, struct buffer_head *bh, count++; cond_resched(); } - de = ext4_next_entry(de, dir->i_sb->s_blocksize); + /* XXX: do we need to check rec_len == 0 case? -Chris */ + de = ext4_next_entry(de, blocksize); } return count; } @@ -1958,11 +1951,8 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir, /* create map in the end of data2 block */ map = (struct dx_map_entry *) (data2 + blocksize); - count = dx_make_map(dir, *bh, hinfo, map); - if (count < 0) { - err = count; - goto journal_error; - } + count = dx_make_map(dir, (struct ext4_dir_entry_2 *) data1, + blocksize, hinfo, map); map -= count; dx_sort_map(map, count); /* Ensure that neither split block is over half full */ From eaa4878a26e1dc5a5aa6afa1d29c5462bc0d61c6 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Wed, 27 Jul 2022 13:58:52 +0200 Subject: [PATCH 1580/1842] Revert "ext4: fix use-after-free in ext4_rename_dir_prepare" This reverts commit dd887f83ea54aea5b780a84527e23ab95f777fed which is commit 0be698ecbe4471fcad80e81ec6a05001421041b3 upstream as it breaks the build in Android kernel builds due to out-of-tree changes that were never merged upstream. 
Bug: 236690716 Fixes: 0e8e989142a0 ("Merge 5.10.121 into android12-5.10-lts") Cc: Daniel Rosenberg Signed-off-by: Greg Kroah-Hartman Change-Id: I511f362fecb21bdc53fb3a93bb9772be96e7f985 --- fs/ext4/namei.c | 30 +++--------------------------- 1 file changed, 3 insertions(+), 27 deletions(-) diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c index 9a5ba0daaea2..b745665955f8 100644 --- a/fs/ext4/namei.c +++ b/fs/ext4/namei.c @@ -3626,9 +3626,6 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle, struct buffer_head *bh; if (!ext4_has_inline_data(inode)) { - struct ext4_dir_entry_2 *de; - unsigned int offset; - /* The first directory block must not be a hole, so * treat it as DIRENT_HTREE */ @@ -3637,30 +3634,9 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle, *retval = PTR_ERR(bh); return NULL; } - - de = (struct ext4_dir_entry_2 *) bh->b_data; - if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, - bh->b_size, 0) || - le32_to_cpu(de->inode) != inode->i_ino || - strcmp(".", de->name)) { - EXT4_ERROR_INODE(inode, "directory missing '.'"); - brelse(bh); - *retval = -EFSCORRUPTED; - return NULL; - } - offset = ext4_rec_len_from_disk(de->rec_len, - inode->i_sb->s_blocksize); - de = ext4_next_entry(de, inode->i_sb->s_blocksize); - if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, - bh->b_size, offset) || - le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) { - EXT4_ERROR_INODE(inode, "directory missing '..'"); - brelse(bh); - *retval = -EFSCORRUPTED; - return NULL; - } - *parent_de = de; - + *parent_de = ext4_next_entry( + (struct ext4_dir_entry_2 *)bh->b_data, + inode->i_sb->s_blocksize); return bh; } From 0889c70b1fb16729ef3cdc705c631f772559ac3a Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 14:51:45 +0200 Subject: [PATCH 1581/1842] Revert "pinctrl: bcm2835: implement hook for missing gpio-ranges" This reverts commit ceb61ab22dbd5635b450b002f7dce84ca55e53f3 which is commit d2b67744fd99b06555b7e4d67302ede6c7c6a638 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. 
Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I509752798a361ebe2b3e53df526099349b5aba81 --- drivers/pinctrl/bcm/pinctrl-bcm2835.c | 18 ------------------ 1 file changed, 18 deletions(-) diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c index 39d2024dc2ee..6768b2f03d68 100644 --- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c +++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c @@ -351,22 +351,6 @@ static int bcm2835_gpio_direction_output(struct gpio_chip *chip, return pinctrl_gpio_direction_output(chip->base + offset); } -static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc, - struct device_node *np) -{ - struct pinctrl_dev *pctldev = of_pinctrl_get(np); - - of_node_put(np); - - if (!pctldev) - return 0; - - gpiochip_add_pin_range(gc, pinctrl_dev_get_devname(pctldev), 0, 0, - gc->ngpio); - - return 0; -} - static const struct gpio_chip bcm2835_gpio_chip = { .label = MODULE_NAME, .owner = THIS_MODULE, @@ -381,7 +365,6 @@ static const struct gpio_chip bcm2835_gpio_chip = { .base = -1, .ngpio = BCM2835_NUM_GPIOS, .can_sleep = false, - .of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback, }; static const struct gpio_chip bcm2711_gpio_chip = { @@ -398,7 +381,6 @@ static const struct gpio_chip bcm2711_gpio_chip = { .base = -1, .ngpio = BCM2711_NUM_GPIOS, .can_sleep = false, - .of_gpio_ranges_fallback = bcm2835_of_gpio_ranges_fallback, }; static void bcm2835_gpio_irq_handle_bank(struct bcm2835_pinctrl *pc, From 86366714388ab59542c982a214be1215d66d7d6f Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:33:04 +0200 Subject: [PATCH 1582/1842] Revert "gpiolib: of: Introduce hook for missing gpio-ranges" This reverts commit cda45b715d7052c23c2eb1c270023f758b209b16 which is commit 3550bba25d5587a701e6edf20e20984d2ee72c78 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I4039000cca532561dc7c02f31dfa8274620018b3 --- drivers/gpio/gpiolib-of.c | 5 ----- include/linux/gpio/driver.h | 12 ------------ 2 files changed, 17 deletions(-) diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c index e4a9860eae13..ffb3d6898323 100644 --- a/drivers/gpio/gpiolib-of.c +++ b/drivers/gpio/gpiolib-of.c @@ -933,11 +933,6 @@ static int of_gpiochip_add_pin_range(struct gpio_chip *chip) if (!np) return 0; - if (!of_property_read_bool(np, "gpio-ranges") && - chip->of_gpio_ranges_fallback) { - return chip->of_gpio_ranges_fallback(chip, np); - } - group_names = of_find_property(np, group_names_propname, NULL); for (;; index++) { diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h index e17f8071f4de..c5585bcf53a5 100644 --- a/include/linux/gpio/driver.h +++ b/include/linux/gpio/driver.h @@ -479,18 +479,6 @@ struct gpio_chip { */ int (*of_xlate)(struct gpio_chip *gc, const struct of_phandle_args *gpiospec, u32 *flags); - - /** - * @of_gpio_ranges_fallback: - * - * Optional hook for the case that no gpio-ranges property is defined - * within the device tree node "np" (usually DT before introduction - * of gpio-ranges). So this callback is helpful to provide the - * necessary backward compatibility for the pin ranges. 
- */ - int (*of_gpio_ranges_fallback)(struct gpio_chip *gc, - struct device_node *np); - #endif /* CONFIG_OF_GPIO */ ANDROID_KABI_RESERVE(1); From 28fd8700b4045baf4c80b24f8de768297ca6b7ea Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:33:34 +0200 Subject: [PATCH 1583/1842] Revert "ALSA: jack: Access input_dev under mutex" This reverts commit e2b8681769f6e205382f026b907d28aa5ec9d59a which is commit 1b6a6fc5280e97559287b61eade2d4b363e836f2 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I3b24427044ea6b385a2f686c71c6b239a3d00f18 --- include/sound/jack.h | 1 - sound/core/jack.c | 34 +++++++--------------------------- 2 files changed, 7 insertions(+), 28 deletions(-) diff --git a/include/sound/jack.h b/include/sound/jack.h index 781ac4bb1086..70400b20e952 100644 --- a/include/sound/jack.h +++ b/include/sound/jack.h @@ -63,7 +63,6 @@ struct snd_jack { const char *id; #ifdef CONFIG_SND_JACK_INPUT_DEV struct input_dev *input_dev; - struct mutex input_dev_lock; int registered; int type; char name[100]; diff --git a/sound/core/jack.c b/sound/core/jack.c index 45e28db6ea38..dc2e06ae2414 100644 --- a/sound/core/jack.c +++ b/sound/core/jack.c @@ -34,11 +34,8 @@ static int snd_jack_dev_disconnect(struct snd_device *device) #ifdef CONFIG_SND_JACK_INPUT_DEV struct snd_jack *jack = device->device_data; - mutex_lock(&jack->input_dev_lock); - if (!jack->input_dev) { - mutex_unlock(&jack->input_dev_lock); + if (!jack->input_dev) return 0; - } /* If the input device is registered with the input subsystem * then we need to use a different deallocator. 
*/ @@ -47,7 +44,6 @@ static int snd_jack_dev_disconnect(struct snd_device *device) else input_free_device(jack->input_dev); jack->input_dev = NULL; - mutex_unlock(&jack->input_dev_lock); #endif /* CONFIG_SND_JACK_INPUT_DEV */ return 0; } @@ -86,11 +82,8 @@ static int snd_jack_dev_register(struct snd_device *device) snprintf(jack->name, sizeof(jack->name), "%s %s", card->shortname, jack->id); - mutex_lock(&jack->input_dev_lock); - if (!jack->input_dev) { - mutex_unlock(&jack->input_dev_lock); + if (!jack->input_dev) return 0; - } jack->input_dev->name = jack->name; @@ -115,7 +108,6 @@ static int snd_jack_dev_register(struct snd_device *device) if (err == 0) jack->registered = 1; - mutex_unlock(&jack->input_dev_lock); return err; } #endif /* CONFIG_SND_JACK_INPUT_DEV */ @@ -236,11 +228,9 @@ int snd_jack_new(struct snd_card *card, const char *id, int type, return -ENOMEM; } -#ifdef CONFIG_SND_JACK_INPUT_DEV - mutex_init(&jack->input_dev_lock); - - /* don't create input device for phantom jack */ + /* don't creat input device for phantom jack */ if (!phantom_jack) { +#ifdef CONFIG_SND_JACK_INPUT_DEV int i; jack->input_dev = input_allocate_device(); @@ -258,8 +248,8 @@ int snd_jack_new(struct snd_card *card, const char *id, int type, input_set_capability(jack->input_dev, EV_SW, jack_switch_types[i]); - } #endif /* CONFIG_SND_JACK_INPUT_DEV */ + } err = snd_device_new(card, SNDRV_DEV_JACK, jack, &ops); if (err < 0) @@ -299,14 +289,10 @@ EXPORT_SYMBOL(snd_jack_new); void snd_jack_set_parent(struct snd_jack *jack, struct device *parent) { WARN_ON(jack->registered); - mutex_lock(&jack->input_dev_lock); - if (!jack->input_dev) { - mutex_unlock(&jack->input_dev_lock); + if (!jack->input_dev) return; - } jack->input_dev->dev.parent = parent; - mutex_unlock(&jack->input_dev_lock); } EXPORT_SYMBOL(snd_jack_set_parent); @@ -354,8 +340,6 @@ EXPORT_SYMBOL(snd_jack_set_key); /** * snd_jack_report - Report the current status of a jack - * Note: This function uses mutexes and should be called from a - * context which can sleep (such as a workqueue). * * @jack: The jack to report status for * @status: The current status of the jack @@ -375,11 +359,8 @@ void snd_jack_report(struct snd_jack *jack, int status) status & jack_kctl->mask_bits); #ifdef CONFIG_SND_JACK_INPUT_DEV - mutex_lock(&jack->input_dev_lock); - if (!jack->input_dev) { - mutex_unlock(&jack->input_dev_lock); + if (!jack->input_dev) return; - } for (i = 0; i < ARRAY_SIZE(jack->key); i++) { int testbit = SND_JACK_BTN_0 >> i; @@ -398,7 +379,6 @@ void snd_jack_report(struct snd_jack *jack, int status) } input_sync(jack->input_dev); - mutex_unlock(&jack->input_dev_lock); #endif /* CONFIG_SND_JACK_INPUT_DEV */ } EXPORT_SYMBOL(snd_jack_report); From 2dc56158cb5e0b6c1447d0adf8ee3e85a5d30a70 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:36:36 +0200 Subject: [PATCH 1584/1842] Revert "thermal/core: Fix memory leak in the error path" This reverts commit 54cdc10ac7184f2159a4f5658b497e90244d1516 which is commit d44616c6cc3e35eea03ecfe9040edfa2b486a059 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. 
Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I83010322077db689f57ae26b2bc3775af944384f --- drivers/thermal/thermal_core.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index 02eee8e975b4..8e78a18f9a83 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -1155,7 +1155,6 @@ __thermal_cooling_device_register(struct device_node *np, out_ida_remove: ida_simple_remove(&thermal_cdev_ida, id); out_kfree_cdev: - kfree(cdev); return ERR_PTR(ret); } From 090d920be96f1a02d67ca9b07b798d74b4f631fc Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:36:50 +0200 Subject: [PATCH 1585/1842] Revert "thermal/core: fix a UAF bug in __thermal_cooling_device_register()" This reverts commit b132abaa6515e14e0db292389c25007d666e1925 which is commit 0a5c26712f963f0500161a23e0ffff8d29f742ab upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: Ibaaab9a23fa0cbb5b11204cd6dc59b29215ee8a1 --- drivers/thermal/thermal_core.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index 8e78a18f9a83..a1721cdf37ec 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -1095,7 +1095,7 @@ __thermal_cooling_device_register(struct device_node *np, { struct thermal_cooling_device *cdev; struct thermal_zone_device *pos = NULL; - int id, ret; + int ret; if (!ops || !ops->get_max_state || !ops->get_cur_state || !ops->set_cur_state) @@ -1109,7 +1109,6 @@ __thermal_cooling_device_register(struct device_node *np, if (ret < 0) goto out_kfree_cdev; cdev->id = ret; - id = ret; cdev->type = kstrdup(type ? type : "", GFP_KERNEL); if (!cdev->type) { @@ -1151,9 +1150,8 @@ __thermal_cooling_device_register(struct device_node *np, thermal_cooling_device_destroy_sysfs(cdev); kfree(cdev->type); put_device(&cdev->device); - cdev = NULL; out_ida_remove: - ida_simple_remove(&thermal_cdev_ida, id); + ida_simple_remove(&thermal_cdev_ida, cdev->id); out_kfree_cdev: return ERR_PTR(ret); } From 361d75b4c1fd9a1b57401a6a9a4b808aae52d6aa Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:37:06 +0200 Subject: [PATCH 1586/1842] Revert "thermal/core: Fix memory leak in __thermal_cooling_device_register()" This reverts commit 18530bedd221160823f63ccc20dd55c7a03edbcf which is commit 98a160e898c0f4a979af9de3ab48b4b1d42d1dbb upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. 
Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: Iba9a1c4019b19674fc3f9520b31e4e43be30c847 --- drivers/thermal/thermal_core.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index a1721cdf37ec..e8da2cd61e77 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -1147,7 +1147,6 @@ __thermal_cooling_device_register(struct device_node *np, return cdev; out_kfree_type: - thermal_cooling_device_destroy_sysfs(cdev); kfree(cdev->type); put_device(&cdev->device); out_ida_remove: From fe070690841b7a7a4957366676844f2b985d952a Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:37:18 +0200 Subject: [PATCH 1587/1842] Revert "thermal/drivers/core: Use a char pointer for the cooling device name" This reverts commit dcf5ffc91c91cf58c5f172e83daa34d5ea6b69fc which is commit 58483761810087e5ffdf36e84ac1bf26df909097 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I1554f3e0e4972d5553d62dc7b994303708c617b5 --- .../ethernet/mellanox/mlxsw/core_thermal.c | 2 +- drivers/thermal/thermal_core.c | 38 ++++++++----------- include/linux/thermal.h | 2 +- 3 files changed, 18 insertions(+), 24 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c index ecd1856bef5e..7ec1d0ee9bee 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c @@ -133,7 +133,7 @@ static int mlxsw_get_cooling_device_idx(struct mlxsw_thermal *thermal, /* Allow mlxsw thermal zone binding to an external cooling device */ for (i = 0; i < ARRAY_SIZE(mlxsw_thermal_external_allowed_cdev); i++) { if (strnstr(cdev->type, mlxsw_thermal_external_allowed_cdev[i], - strlen(cdev->type))) + sizeof(cdev->type))) return 0; } diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index e8da2cd61e77..48a993d2236a 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -1095,7 +1095,10 @@ __thermal_cooling_device_register(struct device_node *np, { struct thermal_cooling_device *cdev; struct thermal_zone_device *pos = NULL; - int ret; + int result; + + if (type && strlen(type) >= THERMAL_NAME_LENGTH) + return ERR_PTR(-EINVAL); if (!ops || !ops->get_max_state || !ops->get_cur_state || !ops->set_cur_state) @@ -1105,17 +1108,14 @@ __thermal_cooling_device_register(struct device_node *np, if (!cdev) return ERR_PTR(-ENOMEM); - ret = ida_simple_get(&thermal_cdev_ida, 0, 0, GFP_KERNEL); - if (ret < 0) - goto out_kfree_cdev; - cdev->id = ret; - - cdev->type = kstrdup(type ? type : "", GFP_KERNEL); - if (!cdev->type) { - ret = -ENOMEM; - goto out_ida_remove; + result = ida_simple_get(&thermal_cdev_ida, 0, 0, GFP_KERNEL); + if (result < 0) { + kfree(cdev); + return ERR_PTR(result); } + cdev->id = result; + strlcpy(cdev->type, type ? 
: "", sizeof(cdev->type)); mutex_init(&cdev->lock); INIT_LIST_HEAD(&cdev->thermal_instances); cdev->np = np; @@ -1125,9 +1125,12 @@ __thermal_cooling_device_register(struct device_node *np, cdev->devdata = devdata; thermal_cooling_device_setup_sysfs(cdev); dev_set_name(&cdev->device, "cooling_device%d", cdev->id); - ret = device_register(&cdev->device); - if (ret) - goto out_kfree_type; + result = device_register(&cdev->device); + if (result) { + ida_simple_remove(&thermal_cdev_ida, cdev->id); + put_device(&cdev->device); + return ERR_PTR(result); + } /* Add 'this' new cdev to the global cdev list */ mutex_lock(&thermal_list_lock); @@ -1145,14 +1148,6 @@ __thermal_cooling_device_register(struct device_node *np, mutex_unlock(&thermal_list_lock); return cdev; - -out_kfree_type: - kfree(cdev->type); - put_device(&cdev->device); -out_ida_remove: - ida_simple_remove(&thermal_cdev_ida, cdev->id); -out_kfree_cdev: - return ERR_PTR(ret); } /** @@ -1311,7 +1306,6 @@ void thermal_cooling_device_unregister(struct thermal_cooling_device *cdev) ida_simple_remove(&thermal_cdev_ida, cdev->id); device_del(&cdev->device); thermal_cooling_device_destroy_sysfs(cdev); - kfree(cdev->type); put_device(&cdev->device); } EXPORT_SYMBOL_GPL(thermal_cooling_device_unregister); diff --git a/include/linux/thermal.h b/include/linux/thermal.h index e78c34314ec6..a8b81be94a81 100644 --- a/include/linux/thermal.h +++ b/include/linux/thermal.h @@ -96,7 +96,7 @@ struct thermal_cooling_device_ops { struct thermal_cooling_device { int id; - char *type; + char type[THERMAL_NAME_LENGTH]; struct device device; struct device_node *np; void *devdata; From b41a77c33b26cd57df750195e02347155a10613b Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:39:01 +0200 Subject: [PATCH 1588/1842] Revert "Bluetooth: use hdev lock for accept_list and reject_list in conn req" This reverts commit 8ace1e63550a4488f3eb4ce0fea7007898908f7d which is commit fb048cae51bacdfbbda2954af3c213fdb1d484f4 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. 
Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I6593a06f315b0b5798602895e095e680f8dbd535 --- net/bluetooth/hci_event.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-) diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 954b29605c94..f75869835e3e 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -2709,12 +2709,10 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) return; } - hci_dev_lock(hdev); - if (hci_bdaddr_list_lookup(&hdev->reject_list, &ev->bdaddr, BDADDR_BREDR)) { hci_reject_conn(hdev, &ev->bdaddr); - goto unlock; + return; } /* Require HCI_CONNECTABLE or an accept list entry to accept the @@ -2726,11 +2724,13 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) !hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &ev->bdaddr, BDADDR_BREDR)) { hci_reject_conn(hdev, &ev->bdaddr); - goto unlock; + return; } /* Connection accepted */ + hci_dev_lock(hdev); + ie = hci_inquiry_cache_lookup(hdev, &ev->bdaddr); if (ie) memcpy(ie->data.dev_class, ev->dev_class, 3); @@ -2742,7 +2742,8 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) HCI_ROLE_SLAVE); if (!conn) { bt_dev_err(hdev, "no memory for new connection"); - goto unlock; + hci_dev_unlock(hdev); + return; } } @@ -2782,10 +2783,6 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) conn->state = BT_CONNECT2; hci_connect_cfm(conn, 0); } - - return; -unlock: - hci_dev_unlock(hdev); } static u8 hci_to_mgmt_reason(u8 err) From 8046f2ad50fd512e29673af8dae6ea56c8c5fc5e Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:39:06 +0200 Subject: [PATCH 1589/1842] Revert "Bluetooth: use inclusive language when filtering devices" This reverts commit 792f8b0e748c9ef8f19f27dbf5ce656e843c7b1f which is commit 3d4f9c00492b4e21641e5140a5e78cb50b58d60b upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. 
Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: Ie2af5f73a1726c05497d64b797a97fd4f6564383 --- include/net/bluetooth/hci.h | 16 +++--- include/net/bluetooth/hci_core.h | 8 +-- net/bluetooth/hci_core.c | 24 ++++----- net/bluetooth/hci_debugfs.c | 8 +-- net/bluetooth/hci_event.c | 88 +++++++++++++++---------------- net/bluetooth/hci_request.c | 89 ++++++++++++++++---------------- net/bluetooth/hci_sock.c | 12 ++--- net/bluetooth/l2cap_core.c | 4 +- net/bluetooth/mgmt.c | 14 ++--- 9 files changed, 131 insertions(+), 132 deletions(-) diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index 11be9675d4ba..6da4b3c5dd55 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -1503,7 +1503,7 @@ struct hci_cp_le_set_scan_enable { } __packed; #define HCI_LE_USE_PEER_ADDR 0x00 -#define HCI_LE_USE_ACCEPT_LIST 0x01 +#define HCI_LE_USE_WHITELIST 0x01 #define HCI_OP_LE_CREATE_CONN 0x200d struct hci_cp_le_create_conn { @@ -1523,22 +1523,22 @@ struct hci_cp_le_create_conn { #define HCI_OP_LE_CREATE_CONN_CANCEL 0x200e -#define HCI_OP_LE_READ_ACCEPT_LIST_SIZE 0x200f -struct hci_rp_le_read_accept_list_size { +#define HCI_OP_LE_READ_WHITE_LIST_SIZE 0x200f +struct hci_rp_le_read_white_list_size { __u8 status; __u8 size; } __packed; -#define HCI_OP_LE_CLEAR_ACCEPT_LIST 0x2010 +#define HCI_OP_LE_CLEAR_WHITE_LIST 0x2010 -#define HCI_OP_LE_ADD_TO_ACCEPT_LIST 0x2011 -struct hci_cp_le_add_to_accept_list { +#define HCI_OP_LE_ADD_TO_WHITE_LIST 0x2011 +struct hci_cp_le_add_to_white_list { __u8 bdaddr_type; bdaddr_t bdaddr; } __packed; -#define HCI_OP_LE_DEL_FROM_ACCEPT_LIST 0x2012 -struct hci_cp_le_del_from_accept_list { +#define HCI_OP_LE_DEL_FROM_WHITE_LIST 0x2012 +struct hci_cp_le_del_from_white_list { __u8 bdaddr_type; bdaddr_t bdaddr; } __packed; diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index e099f73e8dd9..009b572aeeb3 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -309,7 +309,7 @@ struct hci_dev { __u8 max_page; __u8 features[HCI_MAX_PAGES][8]; __u8 le_features[8]; - __u8 le_accept_list_size; + __u8 le_white_list_size; __u8 le_resolv_list_size; __u8 le_num_of_adv_sets; __u8 le_states[8]; @@ -500,14 +500,14 @@ struct hci_dev { struct hci_conn_hash conn_hash; struct list_head mgmt_pending; - struct list_head reject_list; - struct list_head accept_list; + struct list_head blacklist; + struct list_head whitelist; struct list_head uuids; struct list_head link_keys; struct list_head long_term_keys; struct list_head identity_resolving_keys; struct list_head remote_oob_data; - struct list_head le_accept_list; + struct list_head le_white_list; struct list_head le_resolv_list; struct list_head le_conn_params; struct list_head pend_le_conns; diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c index 2cb0cf035476..99657c1e66ba 100644 --- a/net/bluetooth/hci_core.c +++ b/net/bluetooth/hci_core.c @@ -742,14 +742,14 @@ static int hci_init3_req(struct hci_request *req, unsigned long opt) } if (hdev->commands[26] & 0x40) { - /* Read LE Accept List Size */ - hci_req_add(req, HCI_OP_LE_READ_ACCEPT_LIST_SIZE, + /* Read LE White List Size */ + hci_req_add(req, HCI_OP_LE_READ_WHITE_LIST_SIZE, 0, NULL); } if (hdev->commands[26] & 0x80) { - /* Clear LE Accept List */ - hci_req_add(req, HCI_OP_LE_CLEAR_ACCEPT_LIST, 0, NULL); + /* Clear LE White List */ + hci_req_add(req, HCI_OP_LE_CLEAR_WHITE_LIST, 0, NULL); } if (hdev->commands[34] & 0x40) { @@ -3548,13 +3548,13 @@ static int 
hci_suspend_notifier(struct notifier_block *nb, unsigned long action, /* Suspend consists of two actions: * - First, disconnect everything and make the controller not * connectable (disabling scanning) - * - Second, program event filter/accept list and enable scan + * - Second, program event filter/whitelist and enable scan */ ret = hci_change_suspend_state(hdev, BT_SUSPEND_DISCONNECT); if (!ret) state = BT_SUSPEND_DISCONNECT; - /* Only configure accept list if disconnect succeeded and wake + /* Only configure whitelist if disconnect succeeded and wake * isn't being prevented. */ if (!ret && !(hdev->prevent_wake && hdev->prevent_wake(hdev))) { @@ -3657,14 +3657,14 @@ struct hci_dev *hci_alloc_dev(void) mutex_init(&hdev->req_lock); INIT_LIST_HEAD(&hdev->mgmt_pending); - INIT_LIST_HEAD(&hdev->reject_list); - INIT_LIST_HEAD(&hdev->accept_list); + INIT_LIST_HEAD(&hdev->blacklist); + INIT_LIST_HEAD(&hdev->whitelist); INIT_LIST_HEAD(&hdev->uuids); INIT_LIST_HEAD(&hdev->link_keys); INIT_LIST_HEAD(&hdev->long_term_keys); INIT_LIST_HEAD(&hdev->identity_resolving_keys); INIT_LIST_HEAD(&hdev->remote_oob_data); - INIT_LIST_HEAD(&hdev->le_accept_list); + INIT_LIST_HEAD(&hdev->le_white_list); INIT_LIST_HEAD(&hdev->le_resolv_list); INIT_LIST_HEAD(&hdev->le_conn_params); INIT_LIST_HEAD(&hdev->pend_le_conns); @@ -3880,8 +3880,8 @@ void hci_cleanup_dev(struct hci_dev *hdev) destroy_workqueue(hdev->req_workqueue); hci_dev_lock(hdev); - hci_bdaddr_list_clear(&hdev->reject_list); - hci_bdaddr_list_clear(&hdev->accept_list); + hci_bdaddr_list_clear(&hdev->blacklist); + hci_bdaddr_list_clear(&hdev->whitelist); hci_uuids_clear(hdev); hci_link_keys_clear(hdev); hci_smp_ltks_clear(hdev); @@ -3889,7 +3889,7 @@ void hci_cleanup_dev(struct hci_dev *hdev) hci_remote_oob_data_clear(hdev); hci_adv_instances_clear(hdev); hci_adv_monitors_clear(hdev); - hci_bdaddr_list_clear(&hdev->le_accept_list); + hci_bdaddr_list_clear(&hdev->le_white_list); hci_bdaddr_list_clear(&hdev->le_resolv_list); hci_conn_params_clear_all(hdev); hci_discovery_filter_clear(hdev); diff --git a/net/bluetooth/hci_debugfs.c b/net/bluetooth/hci_debugfs.c index 338833f12365..5e8af2658e44 100644 --- a/net/bluetooth/hci_debugfs.c +++ b/net/bluetooth/hci_debugfs.c @@ -125,7 +125,7 @@ static int device_list_show(struct seq_file *f, void *ptr) struct bdaddr_list *b; hci_dev_lock(hdev); - list_for_each_entry(b, &hdev->accept_list, list) + list_for_each_entry(b, &hdev->whitelist, list) seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type); list_for_each_entry(p, &hdev->le_conn_params, list) { seq_printf(f, "%pMR (type %u) %u\n", &p->addr, p->addr_type, @@ -144,7 +144,7 @@ static int blacklist_show(struct seq_file *f, void *p) struct bdaddr_list *b; hci_dev_lock(hdev); - list_for_each_entry(b, &hdev->reject_list, list) + list_for_each_entry(b, &hdev->blacklist, list) seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type); hci_dev_unlock(hdev); @@ -734,7 +734,7 @@ static int white_list_show(struct seq_file *f, void *ptr) struct bdaddr_list *b; hci_dev_lock(hdev); - list_for_each_entry(b, &hdev->le_accept_list, list) + list_for_each_entry(b, &hdev->le_white_list, list) seq_printf(f, "%pMR (type %u)\n", &b->bdaddr, b->bdaddr_type); hci_dev_unlock(hdev); @@ -1145,7 +1145,7 @@ void hci_debugfs_create_le(struct hci_dev *hdev) &force_static_address_fops); debugfs_create_u8("white_list_size", 0444, hdev->debugfs, - &hdev->le_accept_list_size); + &hdev->le_white_list_size); debugfs_create_file("white_list", 0444, hdev->debugfs, hdev, &white_list_fops); 
debugfs_create_u8("resolv_list_size", 0444, hdev->debugfs, diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index f75869835e3e..061ef20e135e 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -236,7 +236,7 @@ static void hci_cc_reset(struct hci_dev *hdev, struct sk_buff *skb) hdev->ssp_debug_mode = 0; - hci_bdaddr_list_clear(&hdev->le_accept_list); + hci_bdaddr_list_clear(&hdev->le_white_list); hci_bdaddr_list_clear(&hdev->le_resolv_list); } @@ -1456,22 +1456,36 @@ static void hci_cc_le_read_num_adv_sets(struct hci_dev *hdev, hdev->le_num_of_adv_sets = rp->num_of_sets; } -static void hci_cc_le_read_accept_list_size(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_cc_le_read_white_list_size(struct hci_dev *hdev, + struct sk_buff *skb) { - struct hci_rp_le_read_accept_list_size *rp = (void *)skb->data; + struct hci_rp_le_read_white_list_size *rp = (void *) skb->data; BT_DBG("%s status 0x%2.2x size %u", hdev->name, rp->status, rp->size); if (rp->status) return; - hdev->le_accept_list_size = rp->size; + hdev->le_white_list_size = rp->size; } -static void hci_cc_le_clear_accept_list(struct hci_dev *hdev, +static void hci_cc_le_clear_white_list(struct hci_dev *hdev, + struct sk_buff *skb) +{ + __u8 status = *((__u8 *) skb->data); + + BT_DBG("%s status 0x%2.2x", hdev->name, status); + + if (status) + return; + + hci_bdaddr_list_clear(&hdev->le_white_list); +} + +static void hci_cc_le_add_to_white_list(struct hci_dev *hdev, struct sk_buff *skb) { + struct hci_cp_le_add_to_white_list *sent; __u8 status = *((__u8 *) skb->data); BT_DBG("%s status 0x%2.2x", hdev->name, status); @@ -1479,32 +1493,18 @@ static void hci_cc_le_clear_accept_list(struct hci_dev *hdev, if (status) return; - hci_bdaddr_list_clear(&hdev->le_accept_list); -} - -static void hci_cc_le_add_to_accept_list(struct hci_dev *hdev, - struct sk_buff *skb) -{ - struct hci_cp_le_add_to_accept_list *sent; - __u8 status = *((__u8 *) skb->data); - - BT_DBG("%s status 0x%2.2x", hdev->name, status); - - if (status) - return; - - sent = hci_sent_cmd_data(hdev, HCI_OP_LE_ADD_TO_ACCEPT_LIST); + sent = hci_sent_cmd_data(hdev, HCI_OP_LE_ADD_TO_WHITE_LIST); if (!sent) return; - hci_bdaddr_list_add(&hdev->le_accept_list, &sent->bdaddr, - sent->bdaddr_type); + hci_bdaddr_list_add(&hdev->le_white_list, &sent->bdaddr, + sent->bdaddr_type); } -static void hci_cc_le_del_from_accept_list(struct hci_dev *hdev, - struct sk_buff *skb) +static void hci_cc_le_del_from_white_list(struct hci_dev *hdev, + struct sk_buff *skb) { - struct hci_cp_le_del_from_accept_list *sent; + struct hci_cp_le_del_from_white_list *sent; __u8 status = *((__u8 *) skb->data); BT_DBG("%s status 0x%2.2x", hdev->name, status); @@ -1512,11 +1512,11 @@ static void hci_cc_le_del_from_accept_list(struct hci_dev *hdev, if (status) return; - sent = hci_sent_cmd_data(hdev, HCI_OP_LE_DEL_FROM_ACCEPT_LIST); + sent = hci_sent_cmd_data(hdev, HCI_OP_LE_DEL_FROM_WHITE_LIST); if (!sent) return; - hci_bdaddr_list_del(&hdev->le_accept_list, &sent->bdaddr, + hci_bdaddr_list_del(&hdev->le_white_list, &sent->bdaddr, sent->bdaddr_type); } @@ -2331,7 +2331,7 @@ static void cs_le_create_conn(struct hci_dev *hdev, bdaddr_t *peer_addr, /* We don't want the connection attempt to stick around * indefinitely since LE doesn't have a page timeout concept * like BR/EDR. Set a timer for any connection that doesn't use - * the accept list for connecting. + * the white list for connecting. 
*/ if (filter_policy == HCI_LE_USE_PEER_ADDR) queue_delayed_work(conn->hdev->workqueue, @@ -2587,7 +2587,7 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) * only used during suspend. */ if (ev->link_type == ACL_LINK && - hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, + hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, &ev->bdaddr, BDADDR_BREDR)) { conn = hci_conn_add(hdev, ev->link_type, &ev->bdaddr, @@ -2709,19 +2709,19 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb) return; } - if (hci_bdaddr_list_lookup(&hdev->reject_list, &ev->bdaddr, + if (hci_bdaddr_list_lookup(&hdev->blacklist, &ev->bdaddr, BDADDR_BREDR)) { hci_reject_conn(hdev, &ev->bdaddr); return; } - /* Require HCI_CONNECTABLE or an accept list entry to accept the + /* Require HCI_CONNECTABLE or a whitelist entry to accept the * connection. These features are only touched through mgmt so * only do the checks if HCI_MGMT is set. */ if (hci_dev_test_flag(hdev, HCI_MGMT) && !hci_dev_test_flag(hdev, HCI_CONNECTABLE) && - !hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, &ev->bdaddr, + !hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, &ev->bdaddr, BDADDR_BREDR)) { hci_reject_conn(hdev, &ev->bdaddr); return; @@ -3481,20 +3481,20 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb, hci_cc_le_set_scan_enable(hdev, skb); break; - case HCI_OP_LE_READ_ACCEPT_LIST_SIZE: - hci_cc_le_read_accept_list_size(hdev, skb); + case HCI_OP_LE_READ_WHITE_LIST_SIZE: + hci_cc_le_read_white_list_size(hdev, skb); break; - case HCI_OP_LE_CLEAR_ACCEPT_LIST: - hci_cc_le_clear_accept_list(hdev, skb); + case HCI_OP_LE_CLEAR_WHITE_LIST: + hci_cc_le_clear_white_list(hdev, skb); break; - case HCI_OP_LE_ADD_TO_ACCEPT_LIST: - hci_cc_le_add_to_accept_list(hdev, skb); + case HCI_OP_LE_ADD_TO_WHITE_LIST: + hci_cc_le_add_to_white_list(hdev, skb); break; - case HCI_OP_LE_DEL_FROM_ACCEPT_LIST: - hci_cc_le_del_from_accept_list(hdev, skb); + case HCI_OP_LE_DEL_FROM_WHITE_LIST: + hci_cc_le_del_from_white_list(hdev, skb); break; case HCI_OP_LE_READ_SUPPORTED_STATES: @@ -5154,7 +5154,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status, /* If we didn't have a hci_conn object previously * but we're in central role this must be something - * initiated using an accept list. Since accept list based + * initiated using a white list. Since white list based * connections are not "first class citizens" we don't * have full tracking of them. 
Therefore, we go ahead * with a "best effort" approach of determining the @@ -5204,7 +5204,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status, addr_type = BDADDR_LE_RANDOM; /* Drop the connection if the device is blocked */ - if (hci_bdaddr_list_lookup(&hdev->reject_list, &conn->dst, addr_type)) { + if (hci_bdaddr_list_lookup(&hdev->blacklist, &conn->dst, addr_type)) { hci_conn_drop(conn); goto unlock; } @@ -5372,7 +5372,7 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev, return NULL; /* Ignore if the device is blocked */ - if (hci_bdaddr_list_lookup(&hdev->reject_list, addr, addr_type)) + if (hci_bdaddr_list_lookup(&hdev->blacklist, addr, addr_type)) return NULL; /* Most controller will fail if we try to create new connections diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c index b4cc8ab13c11..d4a2bb4a2c6f 100644 --- a/net/bluetooth/hci_request.c +++ b/net/bluetooth/hci_request.c @@ -736,17 +736,17 @@ void hci_req_add_le_scan_disable(struct hci_request *req, bool rpa_le_conn) } } -static void del_from_accept_list(struct hci_request *req, bdaddr_t *bdaddr, - u8 bdaddr_type) +static void del_from_white_list(struct hci_request *req, bdaddr_t *bdaddr, + u8 bdaddr_type) { - struct hci_cp_le_del_from_accept_list cp; + struct hci_cp_le_del_from_white_list cp; cp.bdaddr_type = bdaddr_type; bacpy(&cp.bdaddr, bdaddr); - bt_dev_dbg(req->hdev, "Remove %pMR (0x%x) from accept list", &cp.bdaddr, + bt_dev_dbg(req->hdev, "Remove %pMR (0x%x) from whitelist", &cp.bdaddr, cp.bdaddr_type); - hci_req_add(req, HCI_OP_LE_DEL_FROM_ACCEPT_LIST, sizeof(cp), &cp); + hci_req_add(req, HCI_OP_LE_DEL_FROM_WHITE_LIST, sizeof(cp), &cp); if (use_ll_privacy(req->hdev) && hci_dev_test_flag(req->hdev, HCI_ENABLE_LL_PRIVACY)) { @@ -765,31 +765,31 @@ static void del_from_accept_list(struct hci_request *req, bdaddr_t *bdaddr, } } -/* Adds connection to accept list if needed. On error, returns -1. */ -static int add_to_accept_list(struct hci_request *req, - struct hci_conn_params *params, u8 *num_entries, - bool allow_rpa) +/* Adds connection to white list if needed. On error, returns -1. 
*/ +static int add_to_white_list(struct hci_request *req, + struct hci_conn_params *params, u8 *num_entries, + bool allow_rpa) { - struct hci_cp_le_add_to_accept_list cp; + struct hci_cp_le_add_to_white_list cp; struct hci_dev *hdev = req->hdev; - /* Already in accept list */ - if (hci_bdaddr_list_lookup(&hdev->le_accept_list, ¶ms->addr, + /* Already in white list */ + if (hci_bdaddr_list_lookup(&hdev->le_white_list, ¶ms->addr, params->addr_type)) return 0; /* Select filter policy to accept all advertising */ - if (*num_entries >= hdev->le_accept_list_size) + if (*num_entries >= hdev->le_white_list_size) return -1; - /* Accept list can not be used with RPAs */ + /* White list can not be used with RPAs */ if (!allow_rpa && !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY) && hci_find_irk_by_addr(hdev, ¶ms->addr, params->addr_type)) { return -1; } - /* During suspend, only wakeable devices can be in accept list */ + /* During suspend, only wakeable devices can be in whitelist */ if (hdev->suspended && !hci_conn_test_flag(HCI_CONN_FLAG_REMOTE_WAKEUP, params->current_flags)) return 0; @@ -798,9 +798,9 @@ static int add_to_accept_list(struct hci_request *req, cp.bdaddr_type = params->addr_type; bacpy(&cp.bdaddr, ¶ms->addr); - bt_dev_dbg(hdev, "Add %pMR (0x%x) to accept list", &cp.bdaddr, + bt_dev_dbg(hdev, "Add %pMR (0x%x) to whitelist", &cp.bdaddr, cp.bdaddr_type); - hci_req_add(req, HCI_OP_LE_ADD_TO_ACCEPT_LIST, sizeof(cp), &cp); + hci_req_add(req, HCI_OP_LE_ADD_TO_WHITE_LIST, sizeof(cp), &cp); if (use_ll_privacy(hdev) && hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY)) { @@ -828,15 +828,15 @@ static int add_to_accept_list(struct hci_request *req, return 0; } -static u8 update_accept_list(struct hci_request *req) +static u8 update_white_list(struct hci_request *req) { struct hci_dev *hdev = req->hdev; struct hci_conn_params *params; struct bdaddr_list *b; u8 num_entries = 0; bool pend_conn, pend_report; - /* We allow usage of accept list even with RPAs in suspend. In the worst - * case, we won't be able to wake from devices that use the privacy1.2 + /* We allow whitelisting even with RPAs in suspend. In the worst case, + * we won't be able to wake from devices that use the privacy1.2 * features. Additionally, once we support privacy1.2 and IRK * offloading, we can update this to also check for those conditions. */ @@ -846,13 +846,13 @@ static u8 update_accept_list(struct hci_request *req) hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY)) allow_rpa = true; - /* Go through the current accept list programmed into the + /* Go through the current white list programmed into the * controller one by one and check if that address is still * in the list of pending connections or list of devices to * report. If not present in either list, then queue the * command to remove it from the controller. */ - list_for_each_entry(b, &hdev->le_accept_list, list) { + list_for_each_entry(b, &hdev->le_white_list, list) { pend_conn = hci_pend_le_action_lookup(&hdev->pend_le_conns, &b->bdaddr, b->bdaddr_type); @@ -861,14 +861,14 @@ static u8 update_accept_list(struct hci_request *req) b->bdaddr_type); /* If the device is not likely to connect or report, - * remove it from the accept list. + * remove it from the whitelist. 
*/ if (!pend_conn && !pend_report) { - del_from_accept_list(req, &b->bdaddr, b->bdaddr_type); + del_from_white_list(req, &b->bdaddr, b->bdaddr_type); continue; } - /* Accept list can not be used with RPAs */ + /* White list can not be used with RPAs */ if (!allow_rpa && !hci_dev_test_flag(hdev, HCI_ENABLE_LL_PRIVACY) && hci_find_irk_by_addr(hdev, &b->bdaddr, b->bdaddr_type)) { @@ -878,27 +878,27 @@ static u8 update_accept_list(struct hci_request *req) num_entries++; } - /* Since all no longer valid accept list entries have been + /* Since all no longer valid white list entries have been * removed, walk through the list of pending connections * and ensure that any new device gets programmed into * the controller. * * If the list of the devices is larger than the list of - * available accept list entries in the controller, then + * available white list entries in the controller, then * just abort and return filer policy value to not use the - * accept list. + * white list. */ list_for_each_entry(params, &hdev->pend_le_conns, action) { - if (add_to_accept_list(req, params, &num_entries, allow_rpa)) + if (add_to_white_list(req, params, &num_entries, allow_rpa)) return 0x00; } /* After adding all new pending connections, walk through * the list of pending reports and also add these to the - * accept list if there is still space. Abort if space runs out. + * white list if there is still space. Abort if space runs out. */ list_for_each_entry(params, &hdev->pend_le_reports, action) { - if (add_to_accept_list(req, params, &num_entries, allow_rpa)) + if (add_to_white_list(req, params, &num_entries, allow_rpa)) return 0x00; } @@ -915,7 +915,7 @@ static u8 update_accept_list(struct hci_request *req) hdev->interleave_scan_state != INTERLEAVE_SCAN_ALLOWLIST) return 0x00; - /* Select filter policy to use accept list */ + /* Select filter policy to use white list */ return 0x01; } @@ -1069,20 +1069,20 @@ void hci_req_add_le_passive_scan(struct hci_request *req) return; bt_dev_dbg(hdev, "interleave state %d", hdev->interleave_scan_state); - /* Adding or removing entries from the accept list must + /* Adding or removing entries from the white list must * happen before enabling scanning. The controller does - * not allow accept list modification while scanning. + * not allow white list modification while scanning. */ - filter_policy = update_accept_list(req); + filter_policy = update_white_list(req); /* When the controller is using random resolvable addresses and * with that having LE privacy enabled, then controllers with * Extended Scanner Filter Policies support can now enable support * for handling directed advertising. * - * So instead of using filter polices 0x00 (no accept list) - * and 0x01 (accept list enabled) use the new filter policies - * 0x02 (no accept list) and 0x03 (accept list enabled). + * So instead of using filter polices 0x00 (no whitelist) + * and 0x01 (whitelist enabled) use the new filter policies + * 0x02 (no whitelist) and 0x03 (whitelist enabled). 
*/ if (hci_dev_test_flag(hdev, HCI_PRIVACY) && (hdev->le_features[0] & HCI_LE_EXT_SCAN_POLICY)) @@ -1102,8 +1102,7 @@ void hci_req_add_le_passive_scan(struct hci_request *req) interval = hdev->le_scan_interval; } - bt_dev_dbg(hdev, "LE passive scan with accept list = %d", - filter_policy); + bt_dev_dbg(hdev, "LE passive scan with whitelist = %d", filter_policy); hci_req_start_scan(req, LE_SCAN_PASSIVE, interval, window, own_addr_type, filter_policy, addr_resolv); } @@ -1151,7 +1150,7 @@ static void hci_req_set_event_filter(struct hci_request *req) /* Always clear event filter when starting */ hci_req_clear_event_filter(req); - list_for_each_entry(b, &hdev->accept_list, list) { + list_for_each_entry(b, &hdev->whitelist, list) { if (!hci_conn_test_flag(HCI_CONN_FLAG_REMOTE_WAKEUP, b->current_flags)) continue; @@ -2556,11 +2555,11 @@ int hci_update_random_address(struct hci_request *req, bool require_privacy, return 0; } -static bool disconnected_accept_list_entries(struct hci_dev *hdev) +static bool disconnected_whitelist_entries(struct hci_dev *hdev) { struct bdaddr_list *b; - list_for_each_entry(b, &hdev->accept_list, list) { + list_for_each_entry(b, &hdev->whitelist, list) { struct hci_conn *conn; conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, &b->bdaddr); @@ -2592,7 +2591,7 @@ void __hci_req_update_scan(struct hci_request *req) return; if (hci_dev_test_flag(hdev, HCI_CONNECTABLE) || - disconnected_accept_list_entries(hdev)) + disconnected_whitelist_entries(hdev)) scan = SCAN_PAGE; else scan = SCAN_DISABLED; @@ -3081,7 +3080,7 @@ static int active_scan(struct hci_request *req, unsigned long opt) uint16_t interval = opt; struct hci_dev *hdev = req->hdev; u8 own_addr_type; - /* Accept list is not used for discovery */ + /* White list is not used for discovery */ u8 filter_policy = 0x00; /* Discovery doesn't require controller address resolution */ bool addr_resolv = false; diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c index 71d18d3295f5..53f85d7c5f9e 100644 --- a/net/bluetooth/hci_sock.c +++ b/net/bluetooth/hci_sock.c @@ -897,7 +897,7 @@ static int hci_sock_release(struct socket *sock) return 0; } -static int hci_sock_reject_list_add(struct hci_dev *hdev, void __user *arg) +static int hci_sock_blacklist_add(struct hci_dev *hdev, void __user *arg) { bdaddr_t bdaddr; int err; @@ -907,14 +907,14 @@ static int hci_sock_reject_list_add(struct hci_dev *hdev, void __user *arg) hci_dev_lock(hdev); - err = hci_bdaddr_list_add(&hdev->reject_list, &bdaddr, BDADDR_BREDR); + err = hci_bdaddr_list_add(&hdev->blacklist, &bdaddr, BDADDR_BREDR); hci_dev_unlock(hdev); return err; } -static int hci_sock_reject_list_del(struct hci_dev *hdev, void __user *arg) +static int hci_sock_blacklist_del(struct hci_dev *hdev, void __user *arg) { bdaddr_t bdaddr; int err; @@ -924,7 +924,7 @@ static int hci_sock_reject_list_del(struct hci_dev *hdev, void __user *arg) hci_dev_lock(hdev); - err = hci_bdaddr_list_del(&hdev->reject_list, &bdaddr, BDADDR_BREDR); + err = hci_bdaddr_list_del(&hdev->blacklist, &bdaddr, BDADDR_BREDR); hci_dev_unlock(hdev); @@ -964,12 +964,12 @@ static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd, case HCIBLOCKADDR: if (!capable(CAP_NET_ADMIN)) return -EPERM; - return hci_sock_reject_list_add(hdev, (void __user *)arg); + return hci_sock_blacklist_add(hdev, (void __user *)arg); case HCIUNBLOCKADDR: if (!capable(CAP_NET_ADMIN)) return -EPERM; - return hci_sock_reject_list_del(hdev, (void __user *)arg); + return hci_sock_blacklist_del(hdev, (void __user *)arg); } return 
-ENOIOCTLCMD; diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 2557cd917f5e..6c80e62cea0c 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -7652,7 +7652,7 @@ static void l2cap_recv_frame(struct l2cap_conn *conn, struct sk_buff *skb) * at least ensure that we ignore incoming data from them. */ if (hcon->type == LE_LINK && - hci_bdaddr_list_lookup(&hcon->hdev->reject_list, &hcon->dst, + hci_bdaddr_list_lookup(&hcon->hdev->blacklist, &hcon->dst, bdaddr_dst_type(hcon))) { kfree_skb(skb); return; @@ -8108,7 +8108,7 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status) dst_type = bdaddr_dst_type(hcon); /* If device is blocked, do not create channels for it */ - if (hci_bdaddr_list_lookup(&hdev->reject_list, &hcon->dst, dst_type)) + if (hci_bdaddr_list_lookup(&hdev->blacklist, &hcon->dst, dst_type)) return; /* Find fixed channels and notify them of the new connection. We diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index 878bf7382244..08f67f91d427 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -4041,7 +4041,7 @@ static int get_device_flags(struct sock *sk, struct hci_dev *hdev, void *data, memset(&rp, 0, sizeof(rp)); if (cp->addr.type == BDADDR_BREDR) { - br_params = hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, + br_params = hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, &cp->addr.bdaddr, cp->addr.type); if (!br_params) @@ -4109,7 +4109,7 @@ static int set_device_flags(struct sock *sk, struct hci_dev *hdev, void *data, hci_dev_lock(hdev); if (cp->addr.type == BDADDR_BREDR) { - br_params = hci_bdaddr_list_lookup_with_flags(&hdev->accept_list, + br_params = hci_bdaddr_list_lookup_with_flags(&hdev->whitelist, &cp->addr.bdaddr, cp->addr.type); @@ -4979,7 +4979,7 @@ static int block_device(struct sock *sk, struct hci_dev *hdev, void *data, hci_dev_lock(hdev); - err = hci_bdaddr_list_add(&hdev->reject_list, &cp->addr.bdaddr, + err = hci_bdaddr_list_add(&hdev->blacklist, &cp->addr.bdaddr, cp->addr.type); if (err < 0) { status = MGMT_STATUS_FAILED; @@ -5015,7 +5015,7 @@ static int unblock_device(struct sock *sk, struct hci_dev *hdev, void *data, hci_dev_lock(hdev); - err = hci_bdaddr_list_del(&hdev->reject_list, &cp->addr.bdaddr, + err = hci_bdaddr_list_del(&hdev->blacklist, &cp->addr.bdaddr, cp->addr.type); if (err < 0) { status = MGMT_STATUS_INVALID_PARAMS; @@ -6506,7 +6506,7 @@ static int add_device(struct sock *sk, struct hci_dev *hdev, goto unlock; } - err = hci_bdaddr_list_add_with_flags(&hdev->accept_list, + err = hci_bdaddr_list_add_with_flags(&hdev->whitelist, &cp->addr.bdaddr, cp->addr.type, 0); if (err) @@ -6604,7 +6604,7 @@ static int remove_device(struct sock *sk, struct hci_dev *hdev, } if (cp->addr.type == BDADDR_BREDR) { - err = hci_bdaddr_list_del(&hdev->accept_list, + err = hci_bdaddr_list_del(&hdev->whitelist, &cp->addr.bdaddr, cp->addr.type); if (err) { @@ -6675,7 +6675,7 @@ static int remove_device(struct sock *sk, struct hci_dev *hdev, goto unlock; } - list_for_each_entry_safe(b, btmp, &hdev->accept_list, list) { + list_for_each_entry_safe(b, btmp, &hdev->whitelist, list) { device_removed(sk, hdev, &b->bdaddr, b->bdaddr_type); list_del(&b->list); kfree(b); From 26e506a63e1b28beb833d3d7fda9b489b013f4ff Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 09:39:19 +0200 Subject: [PATCH 1590/1842] Revert "Bluetooth: Interleave with allowlist scan" This reverts commit 5702c3c6576d753f405aa5863fdc079e7f6fb779 which is commit 
c4f1f408168cd6a83d973e98e1cd1888e4d3d907 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I0931559c61d70d9750eae7c12cdcfe4485163a39 --- include/net/bluetooth/hci_core.h | 10 --- net/bluetooth/hci_core.c | 3 - net/bluetooth/hci_request.c | 128 ++----------------------------- net/bluetooth/mgmt_config.c | 10 --- 4 files changed, 7 insertions(+), 144 deletions(-) diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h index 009b572aeeb3..46eeb73c1289 100644 --- a/include/net/bluetooth/hci_core.h +++ b/include/net/bluetooth/hci_core.h @@ -365,8 +365,6 @@ struct hci_dev { __u8 ssp_debug_mode; __u8 hw_error_code; __u32 clock; - __u16 advmon_allowlist_duration; - __u16 advmon_no_filter_duration; __u16 devid_source; __u16 devid_vendor; @@ -548,14 +546,6 @@ struct hci_dev { struct delayed_work rpa_expired; bdaddr_t rpa; - enum { - INTERLEAVE_SCAN_NONE, - INTERLEAVE_SCAN_NO_FILTER, - INTERLEAVE_SCAN_ALLOWLIST - } interleave_scan_state; - - struct delayed_work interleave_scan; - #if IS_ENABLED(CONFIG_BT_LEDS) struct led_trigger *power_led; #endif diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c index 99657c1e66ba..c331b4176de7 100644 --- a/net/bluetooth/hci_core.c +++ b/net/bluetooth/hci_core.c @@ -3606,9 +3606,6 @@ struct hci_dev *hci_alloc_dev(void) hdev->cur_adv_instance = 0x00; hdev->adv_instance_timeout = 0; - hdev->advmon_allowlist_duration = 300; - hdev->advmon_no_filter_duration = 500; - hdev->sniff_max_interval = 800; hdev->sniff_min_interval = 80; diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c index d4a2bb4a2c6f..a21df117edff 100644 --- a/net/bluetooth/hci_request.c +++ b/net/bluetooth/hci_request.c @@ -382,53 +382,6 @@ void __hci_req_write_fast_connectable(struct hci_request *req, bool enable) hci_req_add(req, HCI_OP_WRITE_PAGE_SCAN_TYPE, 1, &type); } -static void start_interleave_scan(struct hci_dev *hdev) -{ - hdev->interleave_scan_state = INTERLEAVE_SCAN_NO_FILTER; - queue_delayed_work(hdev->req_workqueue, - &hdev->interleave_scan, 0); -} - -static bool is_interleave_scanning(struct hci_dev *hdev) -{ - return hdev->interleave_scan_state != INTERLEAVE_SCAN_NONE; -} - -static void cancel_interleave_scan(struct hci_dev *hdev) -{ - bt_dev_dbg(hdev, "cancelling interleave scan"); - - cancel_delayed_work_sync(&hdev->interleave_scan); - - hdev->interleave_scan_state = INTERLEAVE_SCAN_NONE; -} - -/* Return true if interleave_scan wasn't started until exiting this function, - * otherwise, return false - */ -static bool __hci_update_interleaved_scan(struct hci_dev *hdev) -{ - /* If there is at least one ADV monitors and one pending LE connection - * or one device to be scanned for, we should alternate between - * allowlist scan and one without any filters to save power. - */ - bool use_interleaving = hci_is_adv_monitoring(hdev) && - !(list_empty(&hdev->pend_le_conns) && - list_empty(&hdev->pend_le_reports)); - bool is_interleaving = is_interleave_scanning(hdev); - - if (use_interleaving && !is_interleaving) { - start_interleave_scan(hdev); - bt_dev_dbg(hdev, "starting interleave scan"); - return true; - } - - if (!use_interleaving && is_interleaving) - cancel_interleave_scan(hdev); - - return false; -} - /* This function controls the background scanning based on hdev->pend_le_conns * list. 
If there are pending LE connection we start the background scanning, * otherwise we stop it. @@ -501,7 +454,8 @@ static void __hci_update_background_scan(struct hci_request *req) hci_req_add_le_scan_disable(req, false); hci_req_add_le_passive_scan(req); - bt_dev_dbg(hdev, "starting background scanning"); + + BT_DBG("%s starting background scanning", hdev->name); } } @@ -902,17 +856,12 @@ static u8 update_white_list(struct hci_request *req) return 0x00; } - /* Use the allowlist unless the following conditions are all true: - * - We are not currently suspending - * - There are 1 or more ADV monitors registered - * - Interleaved scanning is not currently using the allowlist - * - * Once the controller offloading of advertisement monitor is in place, - * the above condition should include the support of MSFT extension - * support. + /* Once the controller offloading of advertisement monitor is in place, + * the if condition should include the support of MSFT extension + * support. If suspend is ongoing, whitelist should be the default to + * prevent waking by random advertisements. */ - if (!idr_is_empty(&hdev->adv_monitors_idr) && !hdev->suspended && - hdev->interleave_scan_state != INTERLEAVE_SCAN_ALLOWLIST) + if (!idr_is_empty(&hdev->adv_monitors_idr) && !hdev->suspended) return 0x00; /* Select filter policy to use white list */ @@ -1065,10 +1014,6 @@ void hci_req_add_le_passive_scan(struct hci_request *req) &own_addr_type)) return; - if (__hci_update_interleaved_scan(hdev)) - return; - - bt_dev_dbg(hdev, "interleave state %d", hdev->interleave_scan_state); /* Adding or removing entries from the white list must * happen before enabling scanning. The controller does * not allow white list modification while scanning. @@ -1936,62 +1881,6 @@ static void adv_timeout_expire(struct work_struct *work) hci_dev_unlock(hdev); } -static int hci_req_add_le_interleaved_scan(struct hci_request *req, - unsigned long opt) -{ - struct hci_dev *hdev = req->hdev; - int ret = 0; - - hci_dev_lock(hdev); - - if (hci_dev_test_flag(hdev, HCI_LE_SCAN)) - hci_req_add_le_scan_disable(req, false); - hci_req_add_le_passive_scan(req); - - switch (hdev->interleave_scan_state) { - case INTERLEAVE_SCAN_ALLOWLIST: - bt_dev_dbg(hdev, "next state: allowlist"); - hdev->interleave_scan_state = INTERLEAVE_SCAN_NO_FILTER; - break; - case INTERLEAVE_SCAN_NO_FILTER: - bt_dev_dbg(hdev, "next state: no filter"); - hdev->interleave_scan_state = INTERLEAVE_SCAN_ALLOWLIST; - break; - case INTERLEAVE_SCAN_NONE: - BT_ERR("unexpected error"); - ret = -1; - } - - hci_dev_unlock(hdev); - - return ret; -} - -static void interleave_scan_work(struct work_struct *work) -{ - struct hci_dev *hdev = container_of(work, struct hci_dev, - interleave_scan.work); - u8 status; - unsigned long timeout; - - if (hdev->interleave_scan_state == INTERLEAVE_SCAN_ALLOWLIST) { - timeout = msecs_to_jiffies(hdev->advmon_allowlist_duration); - } else if (hdev->interleave_scan_state == INTERLEAVE_SCAN_NO_FILTER) { - timeout = msecs_to_jiffies(hdev->advmon_no_filter_duration); - } else { - bt_dev_err(hdev, "unexpected error"); - return; - } - - hci_req_sync(hdev, hci_req_add_le_interleaved_scan, 0, - HCI_CMD_TIMEOUT, &status); - - /* Don't continue interleaving if it was canceled */ - if (is_interleave_scanning(hdev)) - queue_delayed_work(hdev->req_workqueue, - &hdev->interleave_scan, timeout); -} - int hci_get_random_address(struct hci_dev *hdev, bool require_privacy, bool use_rpa, struct adv_info *adv_instance, u8 *own_addr_type, bdaddr_t *rand_addr) @@ -3419,7 
+3308,6 @@ void hci_request_setup(struct hci_dev *hdev) INIT_DELAYED_WORK(&hdev->le_scan_disable, le_scan_disable_work); INIT_DELAYED_WORK(&hdev->le_scan_restart, le_scan_restart_work); INIT_DELAYED_WORK(&hdev->adv_instance_expire, adv_timeout_expire); - INIT_DELAYED_WORK(&hdev->interleave_scan, interleave_scan_work); } void hci_request_cancel_all(struct hci_dev *hdev) @@ -3439,6 +3327,4 @@ void hci_request_cancel_all(struct hci_dev *hdev) cancel_delayed_work_sync(&hdev->adv_instance_expire); hdev->adv_instance_timeout = 0; } - - cancel_interleave_scan(hdev); } diff --git a/net/bluetooth/mgmt_config.c b/net/bluetooth/mgmt_config.c index 2d3ad288c78a..b30b571f8caf 100644 --- a/net/bluetooth/mgmt_config.c +++ b/net/bluetooth/mgmt_config.c @@ -67,8 +67,6 @@ int read_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data, HDEV_PARAM_U16(0x001a, le_supv_timeout), HDEV_PARAM_U16_JIFFIES_TO_MSECS(0x001b, def_le_autoconnect_timeout), - HDEV_PARAM_U16(0x001d, advmon_allowlist_duration), - HDEV_PARAM_U16(0x001e, advmon_no_filter_duration), }; struct mgmt_rp_read_def_system_config *rp = (void *)params; @@ -140,8 +138,6 @@ int set_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data, case 0x0019: case 0x001a: case 0x001b: - case 0x001d: - case 0x001e: if (len != sizeof(u16)) { bt_dev_warn(hdev, "invalid length %d, exp %zu for type %d", len, sizeof(u16), type); @@ -255,12 +251,6 @@ int set_def_system_config(struct sock *sk, struct hci_dev *hdev, void *data, hdev->def_le_autoconnect_timeout = msecs_to_jiffies(TLV_GET_LE16(buffer)); break; - case 0x0001d: - hdev->advmon_allowlist_duration = TLV_GET_LE16(buffer); - break; - case 0x0001e: - hdev->advmon_no_filter_duration = TLV_GET_LE16(buffer); - break; default: bt_dev_warn(hdev, "unsupported parameter %u", type); break; From 8324f66c71ab5c09b20156390b36ebe378a9ab13 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 10:32:51 +0200 Subject: [PATCH 1591/1842] Revert "parisc/stifb: Keep track of hardware path of graphics card" This reverts commit 860e44f21f26d037041bf57b907a26b12bcef851 which is commit b046f984814af7985f444150ec28716d42d00d9a upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. 
Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: Iecee4bbc7165e712da3b7683697a6bd29daae1f8 --- drivers/video/console/sticon.c | 5 +---- drivers/video/console/sticore.c | 15 ++++++++------- drivers/video/fbdev/sticore.h | 3 --- 3 files changed, 9 insertions(+), 14 deletions(-) diff --git a/drivers/video/console/sticon.c b/drivers/video/console/sticon.c index f304163e87e9..40496e9e9b43 100644 --- a/drivers/video/console/sticon.c +++ b/drivers/video/console/sticon.c @@ -46,7 +46,6 @@ #include #include #include -#include #include @@ -393,9 +392,7 @@ static int __init sticonsole_init(void) for (i = 0; i < MAX_NR_CONSOLES; i++) font_data[i] = STI_DEF_FONT; - pr_info("sticon: Initializing STI text console on %s at [%s]\n", - sticon_sti->sti_data->inq_outptr.dev_name, - sticon_sti->pa_path); + pr_info("sticon: Initializing STI text console.\n"); console_lock(); err = do_take_over_console(&sti_con, 0, MAX_NR_CONSOLES - 1, PAGE0->mem_cons.cl_class != CL_DUPLEX); diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c index 77622ef401d8..53ffd5898e85 100644 --- a/drivers/video/console/sticore.c +++ b/drivers/video/console/sticore.c @@ -34,7 +34,7 @@ #include "../fbdev/sticore.h" -#define STI_DRIVERVERSION "Version 0.9c" +#define STI_DRIVERVERSION "Version 0.9b" static struct sti_struct *default_sti __read_mostly; @@ -503,7 +503,7 @@ sti_select_fbfont(struct sti_cooked_rom *cooked_rom, const char *fbfont_name) if (!fbfont) return NULL; - pr_info(" using %ux%u framebuffer font %s\n", + pr_info("STI selected %ux%u framebuffer font %s for sticon\n", fbfont->width, fbfont->height, fbfont->name); bpc = ((fbfont->width+7)/8) * fbfont->height; @@ -947,7 +947,6 @@ static struct sti_struct *sti_try_rom_generic(unsigned long address, static void sticore_check_for_default_sti(struct sti_struct *sti, char *path) { - pr_info(" located at [%s]\n", sti->pa_path); if (strcmp (path, default_sti_path) == 0) default_sti = sti; } @@ -959,6 +958,7 @@ static void sticore_check_for_default_sti(struct sti_struct *sti, char *path) */ static int __init sticore_pa_init(struct parisc_device *dev) { + char pa_path[21]; struct sti_struct *sti = NULL; int hpa = dev->hpa.start; @@ -971,8 +971,8 @@ static int __init sticore_pa_init(struct parisc_device *dev) if (!sti) return 1; - print_pa_hwpath(dev, sti->pa_path); - sticore_check_for_default_sti(sti, sti->pa_path); + print_pa_hwpath(dev, pa_path); + sticore_check_for_default_sti(sti, pa_path); return 0; } @@ -1008,8 +1008,9 @@ static int sticore_pci_init(struct pci_dev *pd, const struct pci_device_id *ent) sti = sti_try_rom_generic(rom_base, fb_base, pd); if (sti) { - print_pci_hwpath(pd, sti->pa_path); - sticore_check_for_default_sti(sti, sti->pa_path); + char pa_path[30]; + print_pci_hwpath(pd, pa_path); + sticore_check_for_default_sti(sti, pa_path); } if (!sti) { diff --git a/drivers/video/fbdev/sticore.h b/drivers/video/fbdev/sticore.h index 0ebdd28a0b81..c338f7848ae2 100644 --- a/drivers/video/fbdev/sticore.h +++ b/drivers/video/fbdev/sticore.h @@ -370,9 +370,6 @@ struct sti_struct { /* pointer to all internal data */ struct sti_all_data *sti_data; - - /* pa_path of this device */ - char pa_path[24]; }; From a73f6da5a306e4a73531de284bb3e2d1a7aae279 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 10:32:57 +0200 Subject: [PATCH 1592/1842] Revert "Fonts: Make font size unsigned in font_desc" This reverts commit 78e008dca2255bba0ffa890203049f6eb4bdbc28 which is commit 7cb415003468d41aecd6877ae088c38f6c0fc174 upstream. 
It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I6070f402034c1d22ea70db2d1be160b0d0270b8e --- drivers/video/console/sticore.c | 2 +- include/linux/font.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/video/console/sticore.c b/drivers/video/console/sticore.c index 53ffd5898e85..680a573af00b 100644 --- a/drivers/video/console/sticore.c +++ b/drivers/video/console/sticore.c @@ -503,7 +503,7 @@ sti_select_fbfont(struct sti_cooked_rom *cooked_rom, const char *fbfont_name) if (!fbfont) return NULL; - pr_info("STI selected %ux%u framebuffer font %s for sticon\n", + pr_info("STI selected %dx%d framebuffer font %s for sticon\n", fbfont->width, fbfont->height, fbfont->name); bpc = ((fbfont->width+7)/8) * fbfont->height; diff --git a/include/linux/font.h b/include/linux/font.h index 4f50d736ea72..b5b312c19e46 100644 --- a/include/linux/font.h +++ b/include/linux/font.h @@ -16,7 +16,7 @@ struct font_desc { int idx; const char *name; - unsigned int width, height; + int width, height; const void *data; int pref; }; From dca272b05d6f0f2f73b3205c9328b9ef010a3dc7 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 10:33:28 +0200 Subject: [PATCH 1593/1842] Revert "mailbox: forward the hrtimer if not queued and under a lock" This reverts commit bb2220e0672b7433a9a42618599cd261b2629240 which is commit bca1a1004615efe141fd78f360ecc48c60bc4ad5 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. 
Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I5e9282790af470cf09d353c7e1e54e9f93f01d85 --- drivers/mailbox/mailbox.c | 19 ++++++------------- include/linux/mailbox_controller.h | 1 - 2 files changed, 6 insertions(+), 14 deletions(-) diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c index 4229b9b5da98..3e7d4b20ab34 100644 --- a/drivers/mailbox/mailbox.c +++ b/drivers/mailbox/mailbox.c @@ -82,11 +82,11 @@ static void msg_submit(struct mbox_chan *chan) exit: spin_unlock_irqrestore(&chan->lock, flags); + /* kick start the timer immediately to avoid delays */ if (!err && (chan->txdone_method & TXDONE_BY_POLL)) { - /* kick start the timer immediately to avoid delays */ - spin_lock_irqsave(&chan->mbox->poll_hrt_lock, flags); - hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL); - spin_unlock_irqrestore(&chan->mbox->poll_hrt_lock, flags); + /* but only if not already active */ + if (!hrtimer_active(&chan->mbox->poll_hrt)) + hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL); } } @@ -120,26 +120,20 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer) container_of(hrtimer, struct mbox_controller, poll_hrt); bool txdone, resched = false; int i; - unsigned long flags; for (i = 0; i < mbox->num_chans; i++) { struct mbox_chan *chan = &mbox->chans[i]; if (chan->active_req && chan->cl) { + resched = true; txdone = chan->mbox->ops->last_tx_done(chan); if (txdone) tx_tick(chan, 0); - else - resched = true; } } if (resched) { - spin_lock_irqsave(&mbox->poll_hrt_lock, flags); - if (!hrtimer_is_queued(hrtimer)) - hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period)); - spin_unlock_irqrestore(&mbox->poll_hrt_lock, flags); - + hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period)); return HRTIMER_RESTART; } return HRTIMER_NORESTART; @@ -506,7 +500,6 @@ int mbox_controller_register(struct mbox_controller *mbox) hrtimer_init(&mbox->poll_hrt, CLOCK_MONOTONIC, HRTIMER_MODE_REL); mbox->poll_hrt.function = txdone_hrtimer; - spin_lock_init(&mbox->poll_hrt_lock); } for (i = 0; i < mbox->num_chans; i++) { diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h index 6fee33cb52f5..36d6ce673503 100644 --- a/include/linux/mailbox_controller.h +++ b/include/linux/mailbox_controller.h @@ -83,7 +83,6 @@ struct mbox_controller { const struct of_phandle_args *sp); /* Internal to API */ struct hrtimer poll_hrt; - spinlock_t poll_hrt_lock; struct list_head node; }; From 0ced6946acd12b22ebb20fef66a5c181fcf2823e Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 14:24:19 +0200 Subject: [PATCH 1594/1842] Revert "drm: fix EDID struct for old ARM OABI format" This reverts commit 61bbbde9b6d21f857cc0fce8538f244aa4f449d3 which is commit 47f15561b69e226bfc034e94ff6dbec51a4662af upstream. It breaks the Android kernel ABI CRC signatures and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. 
Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: Ib4225a2aa2b9ad6eac9af856a6230eb34f2a42f4 --- include/drm/drm_edid.h | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h index 4526b6a1e583..e97daf6ffbb1 100644 --- a/include/drm/drm_edid.h +++ b/include/drm/drm_edid.h @@ -121,7 +121,7 @@ struct detailed_data_monitor_range { u8 supported_scalings; u8 preferred_refresh; } __attribute__((packed)) cvt; - } __attribute__((packed)) formula; + } formula; } __attribute__((packed)); struct detailed_data_wpindex { @@ -154,7 +154,7 @@ struct detailed_non_pixel { struct detailed_data_wpindex color; struct std_timing timings[6]; struct cvt_timing cvt[4]; - } __attribute__((packed)) data; + } data; } __attribute__((packed)); #define EDID_DETAIL_EST_TIMINGS 0xf7 @@ -172,7 +172,7 @@ struct detailed_timing { union { struct detailed_pixel_timing pixel_data; struct detailed_non_pixel other_data; - } __attribute__((packed)) data; + } data; } __attribute__((packed)); #define DRM_EDID_INPUT_SERRATION_VSYNC (1 << 0) From bbc03f7ab8620a2488028556733466bbd601aefa Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 28 Jul 2022 17:36:04 +0200 Subject: [PATCH 1595/1842] Revert "cgroup: Use separate src/dst nodes when preloading css_sets for migration" This reverts commit 7657e3958535d101a24ab4400f9b8062b9107cc4 which is commit 07fd5b6cdf3cc30bfde8fe0f644771688be04447 upstream. It breaks the Android kernel ABI and is not needed for Android devices, so it is safe to revert for now. If it is determined that it is needed in the future, it can be brought back in an abi-preserving way. Bug: 161946584 Signed-off-by: Greg Kroah-Hartman Change-Id: I13460fd4409b5bb84a0ae95c790ac604fc1ce7be --- include/linux/cgroup-defs.h | 3 +-- kernel/cgroup/cgroup.c | 37 ++++++++++++++----------------------- 2 files changed, 15 insertions(+), 25 deletions(-) diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h index 4fb56228203f..8c193033604f 100644 --- a/include/linux/cgroup-defs.h +++ b/include/linux/cgroup-defs.h @@ -261,8 +261,7 @@ struct css_set { * List of csets participating in the on-going migration either as * source or destination. Protected by cgroup_mutex. 
*/ - struct list_head mg_src_preload_node; - struct list_head mg_dst_preload_node; + struct list_head mg_preload_node; struct list_head mg_node; /* diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 63f59a246a49..2b39f8ce5ac4 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -755,8 +755,7 @@ struct css_set init_css_set = { .task_iters = LIST_HEAD_INIT(init_css_set.task_iters), .threaded_csets = LIST_HEAD_INIT(init_css_set.threaded_csets), .cgrp_links = LIST_HEAD_INIT(init_css_set.cgrp_links), - .mg_src_preload_node = LIST_HEAD_INIT(init_css_set.mg_src_preload_node), - .mg_dst_preload_node = LIST_HEAD_INIT(init_css_set.mg_dst_preload_node), + .mg_preload_node = LIST_HEAD_INIT(init_css_set.mg_preload_node), .mg_node = LIST_HEAD_INIT(init_css_set.mg_node), /* @@ -1231,8 +1230,7 @@ static struct css_set *find_css_set(struct css_set *old_cset, INIT_LIST_HEAD(&cset->threaded_csets); INIT_HLIST_NODE(&cset->hlist); INIT_LIST_HEAD(&cset->cgrp_links); - INIT_LIST_HEAD(&cset->mg_src_preload_node); - INIT_LIST_HEAD(&cset->mg_dst_preload_node); + INIT_LIST_HEAD(&cset->mg_preload_node); INIT_LIST_HEAD(&cset->mg_node); /* Copy the set of subsystem state objects generated in @@ -2580,27 +2578,21 @@ int cgroup_migrate_vet_dst(struct cgroup *dst_cgrp) */ void cgroup_migrate_finish(struct cgroup_mgctx *mgctx) { + LIST_HEAD(preloaded); struct css_set *cset, *tmp_cset; lockdep_assert_held(&cgroup_mutex); spin_lock_irq(&css_set_lock); - list_for_each_entry_safe(cset, tmp_cset, &mgctx->preloaded_src_csets, - mg_src_preload_node) { - cset->mg_src_cgrp = NULL; - cset->mg_dst_cgrp = NULL; - cset->mg_dst_cset = NULL; - list_del_init(&cset->mg_src_preload_node); - put_css_set_locked(cset); - } + list_splice_tail_init(&mgctx->preloaded_src_csets, &preloaded); + list_splice_tail_init(&mgctx->preloaded_dst_csets, &preloaded); - list_for_each_entry_safe(cset, tmp_cset, &mgctx->preloaded_dst_csets, - mg_dst_preload_node) { + list_for_each_entry_safe(cset, tmp_cset, &preloaded, mg_preload_node) { cset->mg_src_cgrp = NULL; cset->mg_dst_cgrp = NULL; cset->mg_dst_cset = NULL; - list_del_init(&cset->mg_dst_preload_node); + list_del_init(&cset->mg_preload_node); put_css_set_locked(cset); } @@ -2642,7 +2634,7 @@ void cgroup_migrate_add_src(struct css_set *src_cset, src_cgrp = cset_cgroup_from_root(src_cset, dst_cgrp->root); - if (!list_empty(&src_cset->mg_src_preload_node)) + if (!list_empty(&src_cset->mg_preload_node)) return; WARN_ON(src_cset->mg_src_cgrp); @@ -2653,7 +2645,7 @@ void cgroup_migrate_add_src(struct css_set *src_cset, src_cset->mg_src_cgrp = src_cgrp; src_cset->mg_dst_cgrp = dst_cgrp; get_css_set(src_cset); - list_add_tail(&src_cset->mg_src_preload_node, &mgctx->preloaded_src_csets); + list_add_tail(&src_cset->mg_preload_node, &mgctx->preloaded_src_csets); } /** @@ -2678,7 +2670,7 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx) /* look up the dst cset for each src cset and link it to src */ list_for_each_entry_safe(src_cset, tmp_cset, &mgctx->preloaded_src_csets, - mg_src_preload_node) { + mg_preload_node) { struct css_set *dst_cset; struct cgroup_subsys *ss; int ssid; @@ -2697,7 +2689,7 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx) if (src_cset == dst_cset) { src_cset->mg_src_cgrp = NULL; src_cset->mg_dst_cgrp = NULL; - list_del_init(&src_cset->mg_src_preload_node); + list_del_init(&src_cset->mg_preload_node); put_css_set(src_cset); put_css_set(dst_cset); continue; @@ -2705,8 +2697,8 @@ int cgroup_migrate_prepare_dst(struct cgroup_mgctx *mgctx) 
src_cset->mg_dst_cset = dst_cset; - if (list_empty(&dst_cset->mg_dst_preload_node)) - list_add_tail(&dst_cset->mg_dst_preload_node, + if (list_empty(&dst_cset->mg_preload_node)) + list_add_tail(&dst_cset->mg_preload_node, &mgctx->preloaded_dst_csets); else put_css_set(dst_cset); @@ -2957,8 +2949,7 @@ static int cgroup_update_dfl_csses(struct cgroup *cgrp) goto out_finish; spin_lock_irq(&css_set_lock); - list_for_each_entry(src_cset, &mgctx.preloaded_src_csets, - mg_src_preload_node) { + list_for_each_entry(src_cset, &mgctx.preloaded_src_csets, mg_preload_node) { struct task_struct *task, *ntask; /* all tasks in src_csets need to be migrated */ From 31f3bb363a8972891b51747f27665663bce17a5b Mon Sep 17 00:00:00 2001 From: Fabien Dessenne Date: Mon, 27 Jun 2022 16:23:50 +0200 Subject: [PATCH 1596/1842] pinctrl: stm32: fix optional IRQ support to gpios commit a1d4ef1adf8bbd302067534ead671a94759687ed upstream. To act as an interrupt controller, a gpio bank relies on the "interrupt-parent" of the pin controller. When this optional "interrupt-parent" misses, do not create any IRQ domain. This fixes a "NULL pointer in stm32_gpio_domain_alloc()" kernel crash when the interrupt-parent = property is not declared in the Device Tree. Fixes: 0eb9f683336d ("pinctrl: Add IRQ support to STM32 gpios") Signed-off-by: Fabien Dessenne Link: https://lore.kernel.org/r/20220627142350.742973-1-fabien.dessenne@foss.st.com Signed-off-by: Linus Walleij Signed-off-by: Greg Kroah-Hartman --- drivers/pinctrl/stm32/pinctrl-stm32.c | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c index b017dd400c46..60406f1f8337 100644 --- a/drivers/pinctrl/stm32/pinctrl-stm32.c +++ b/drivers/pinctrl/stm32/pinctrl-stm32.c @@ -1303,15 +1303,17 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, bank->bank_ioport_nr = bank_ioport_nr; spin_lock_init(&bank->lock); - /* create irq hierarchical domain */ - bank->fwnode = of_node_to_fwnode(np); + if (pctl->domain) { + /* create irq hierarchical domain */ + bank->fwnode = of_node_to_fwnode(np); - bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, - STM32_GPIO_IRQ_LINE, bank->fwnode, - &stm32_gpio_domain_ops, bank); + bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, STM32_GPIO_IRQ_LINE, + bank->fwnode, &stm32_gpio_domain_ops, + bank); - if (!bank->domain) - return -ENODEV; + if (!bank->domain) + return -ENODEV; + } err = gpiochip_add_data(&bank->gpio_chip, bank); if (err) { @@ -1481,6 +1483,8 @@ int stm32_pctl_probe(struct platform_device *pdev) pctl->domain = stm32_pctrl_get_irq_domain(np); if (IS_ERR(pctl->domain)) return PTR_ERR(pctl->domain); + if (!pctl->domain) + dev_warn(dev, "pinctrl without interrupt support\n"); /* hwspinlock is optional */ hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0); From 15155fa898cb0530c3c44eaf92df18e04bef613f Mon Sep 17 00:00:00 2001 From: Ben Dooks Date: Sun, 29 May 2022 16:22:00 +0100 Subject: [PATCH 1597/1842] riscv: add as-options for modules with assembly compontents commit c1f6eff304e4dfa4558b6a8c6b2d26a91db6c998 upstream. When trying to load modules built for RISC-V which include assembly files the kernel loader errors with "unexpected relocation type 'R_RISCV_ALIGN'" due to R_RISCV_ALIGN relocations being generated by the assembler. The R_RISCV_ALIGN relocations can be removed at the expense of code space by adding -mno-relax to gcc and as. 
In commit 7a8e7da42250138 ("RISC-V: Fixes to module loading") -mno-relax is added to the build variable KBUILD_CFLAGS_MODULE. See [1] for more info. The issue is that when kbuild builds a .S file, it invokes gcc with the -mno-relax flag, but this is not being passed through to the assembler. Adding -Wa,-mno-relax to KBUILD_AFLAGS_MODULE ensures that the assembler is invoked correctly. This may have now been fixed in gcc[2] and this addition should not stop newer gcc and as from working. [1] https://github.com/riscv/riscv-elf-psabi-doc/issues/183 [2] https://github.com/gcc-mirror/gcc/commit/3b0a7d624e64eeb81e4d5e8c62c46d86ef521857 Signed-off-by: Ben Dooks Reviewed-by: Bin Meng Link: https://lore.kernel.org/r/20220529152200.609809-1-ben.dooks@codethink.co.uk Fixes: ab1ef68e5401 ("RISC-V: Add sections of PLT and GOT for kernel module") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt Signed-off-by: Greg Kroah-Hartman --- arch/riscv/Makefile | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile index db9505c658ea..3d3016092b31 100644 --- a/arch/riscv/Makefile +++ b/arch/riscv/Makefile @@ -73,6 +73,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y) endif KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax) +KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax) # GCC versions that support the "-mstrict-align" option default to allowing # unaligned accesses. While unaligned accesses are explicitly allowed in the From 426336de3557b5b7290399425b5ca142e635a777 Mon Sep 17 00:00:00 2001 From: Ido Schimmel Date: Tue, 19 Jul 2022 15:26:26 +0300 Subject: [PATCH 1598/1842] mlxsw: spectrum_router: Fix IPv4 nexthop gateway indication commit e5ec6a2513383fe2ecc2ee3b5f51d97acbbcd4d8 upstream. mlxsw needs to distinguish nexthops with a gateway from connected nexthops in order to write the former to the adjacency table of the device. The check used to rely on the fact that nexthops with a gateway have a 'link' scope whereas connected nexthops have a 'host' scope. This is no longer correct after commit 747c14307214 ("ip: fix dflt addr selection for connected nexthop"). Fix that by instead checking the address family of the gateway IP. This is a more direct way and also consistent with the IPv6 counterpart in mlxsw_sp_rt6_is_gateway(). Cc: stable@vger.kernel.org Fixes: 747c14307214 ("ip: fix dflt addr selection for connected nexthop") Fixes: 597cfe4fc339 ("nexthop: Add support for IPv4 nexthops") Signed-off-by: Ido Schimmel Reviewed-by: Amit Cohen Reviewed-by: Nicolas Dichtel Reviewed-by: David Ahern Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 53128382fc2e..5f143ca16c01 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -4003,7 +4003,7 @@ static bool mlxsw_sp_fi_is_gateway(const struct mlxsw_sp *mlxsw_sp, { const struct fib_nh *nh = fib_info_nh(fi, 0); - return nh->fib_nh_scope == RT_SCOPE_LINK || + return nh->fib_nh_gw_family || mlxsw_sp_nexthop4_ipip_type(mlxsw_sp, nh, NULL); } From ab5050fd7430dde3a9f073129036d3da3facc8ec Mon Sep 17 00:00:00 2001 From: Eric Snowberg Date: Wed, 20 Jul 2022 12:40:27 -0400 Subject: [PATCH 1599/1842] lockdown: Fix kexec lockdown bypass with ima policy commit 543ce63b664e2c2f9533d089a4664b559c3e6b5b upstream. The lockdown LSM is primarily used in conjunction with UEFI Secure Boot. This LSM may also be used on machines without UEFI. It can also be enabled when UEFI Secure Boot is disabled. One of lockdown's features is to prevent kexec from loading untrusted kernels. Lockdown can be enabled through a bootparam or after the kernel has booted through securityfs. If IMA appraisal is used with the "ima_appraise=log" boot param, lockdown can be defeated with kexec on any machine when Secure Boot is disabled or unavailable. IMA prevents setting "ima_appraise=log" from the boot param when Secure Boot is enabled, but this does not cover cases where lockdown is used without Secure Boot. To defeat lockdown, boot without Secure Boot and add ima_appraise=log to the kernel command line; then: $ echo "integrity" > /sys/kernel/security/lockdown $ echo "appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig" > \ /sys/kernel/security/ima/policy $ kexec -ls unsigned-kernel Add a call to verify ima appraisal is set to "enforce" whenever lockdown is enabled. This fixes CVE-2022-21505. 
Cc: stable@vger.kernel.org Fixes: 29d3c1c8dfe7 ("kexec: Allow kexec_file() with appropriate IMA policy when locked down") Signed-off-by: Eric Snowberg Acked-by: Mimi Zohar Reviewed-by: John Haxby Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- security/integrity/ima/ima_policy.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c index e737c216efc4..18569adcb4fe 100644 --- a/security/integrity/ima/ima_policy.c +++ b/security/integrity/ima/ima_policy.c @@ -1805,6 +1805,10 @@ bool ima_appraise_signature(enum kernel_read_file_id id) if (id >= READING_MAX_ID) return false; + if (id == READING_KEXEC_IMAGE && !(ima_appraise & IMA_APPRAISE_ENFORCE) + && security_locked_down(LOCKDOWN_KEXEC)) + return false; + func = read_idmap[id] ?: FILE_CHECK; rcu_read_lock(); From 2ee0cab11f6626071f8a64c7792406dabdd94c8d Mon Sep 17 00:00:00 2001 From: Lee Jones Date: Tue, 19 Jul 2022 12:52:51 +0100 Subject: [PATCH 1600/1842] io_uring: Use original task for req identity in io_identity_cow() This issue is conceptually identical to the one fixed in 29f077d07051 ("io_uring: always use original task when preparing req identity"), so rather than reinvent the wheel, I'm shamelessly quoting the commit message from that patch - thanks Jens: "If the ring is setup with IORING_SETUP_IOPOLL and we have more than one task doing submissions on a ring, we can up in a situation where we assign the context from the current task rather than the request originator. Always use req->task rather than assume it's the same as current. No upstream patch exists for this issue, as only older kernels with the non-native workers have this problem." Cc: Jens Axboe Cc: Pavel Begunkov Cc: Alexander Viro Cc: io-uring@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Fixes: 5c3462cfd123b ("io_uring: store io_identity in io_uring_task") Signed-off-by: Lee Jones Signed-off-by: Greg Kroah-Hartman --- fs/io_uring.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/io_uring.c b/fs/io_uring.c index 023c3eb34dcc..a952288b2ab8 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -1325,7 +1325,7 @@ static void io_req_clean_work(struct io_kiocb *req) */ static bool io_identity_cow(struct io_kiocb *req) { - struct io_uring_task *tctx = current->io_uring; + struct io_uring_task *tctx = req->task->io_uring; const struct cred *creds = NULL; struct io_identity *id; From 7a99c7c32c85cd5239600533b77a34f884741fcc Mon Sep 17 00:00:00 2001 From: Demi Marie Obenour Date: Sun, 10 Jul 2022 19:05:22 -0400 Subject: [PATCH 1601/1842] xen/gntdev: Ignore failure to unmap INVALID_GRANT_HANDLE commit 166d3863231667c4f64dee72b77d1102cdfad11f upstream. The error paths of gntdev_mmap() can call unmap_grant_pages() even though not all of the pages have been successfully mapped. This will trigger the WARN_ON()s in __unmap_grant_pages_done(). The number of warnings can be very large; I have observed thousands of lines of warnings in the systemd journal. Avoid this problem by only warning on unmapping failure if the handle being unmapped is not INVALID_GRANT_HANDLE. The handle field of any page that was not successfully mapped will be INVALID_GRANT_HANDLE, so this catches all cases where unmapping can legitimately fail. 
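Expressed as a simplified sketch (not the driver's exact code; in this 5.10 tree the invalid grant handle is simply the raw value -1, and map/data/offset are the variables visible in the hunk below), the relaxed warning amounts to:

  /*
   * Pages that were never successfully mapped keep an invalid handle, so a
   * failing unmap on them is expected and must not trigger the warning.
   */
  for (i = 0; i < data->count; i++) {
          if (map->unmap_ops[offset + i].handle == -1)
                  continue;       /* never mapped; ignore its unmap status */
          WARN_ON(map->unmap_ops[offset + i].status);
  }
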
Fixes: dbe97cff7dd9 ("xen/gntdev: Avoid blocking in unmap_grant_pages()") Cc: stable@vger.kernel.org Suggested-by: Juergen Gross Signed-off-by: Demi Marie Obenour Reviewed-by: Oleksandr Tyshchenko Reviewed-by: Juergen Gross Link: https://lore.kernel.org/r/20220710230522.1563-1-demi@invisiblethingslab.com Signed-off-by: Juergen Gross Signed-off-by: Greg Kroah-Hartman --- drivers/xen/gntdev.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c index f415c056ff8a..54fee4087bf1 100644 --- a/drivers/xen/gntdev.c +++ b/drivers/xen/gntdev.c @@ -401,7 +401,8 @@ static void __unmap_grant_pages_done(int result, unsigned int offset = data->unmap_ops - map->unmap_ops; for (i = 0; i < data->count; i++) { - WARN_ON(map->unmap_ops[offset+i].status); + WARN_ON(map->unmap_ops[offset+i].status && + map->unmap_ops[offset+i].handle != -1); pr_debug("unmap handle=%d st=%d\n", map->unmap_ops[offset+i].handle, map->unmap_ops[offset+i].status); From 2686f62fa78ce4734e871af80161a602d22be843 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Fri, 15 Jul 2022 19:26:26 +0300 Subject: [PATCH 1602/1842] docs: net: explain struct net_device lifetime commit 2b446e650b418f9a9e75f99852e2f2560cabfa17 upstream. Explain the two basic flows of struct net_device's operation. Signed-off-by: Jakub Kicinski Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- Documentation/networking/netdevices.rst | 171 +++++++++++++++++++++++- net/core/rtnetlink.c | 2 +- 2 files changed, 166 insertions(+), 7 deletions(-) diff --git a/Documentation/networking/netdevices.rst b/Documentation/networking/netdevices.rst index 5a85fcc80c76..557b97483437 100644 --- a/Documentation/networking/netdevices.rst +++ b/Documentation/networking/netdevices.rst @@ -10,18 +10,177 @@ Introduction The following is a random collection of documentation regarding network devices. -struct net_device allocation rules -================================== +struct net_device lifetime rules +================================ Network device structures need to persist even after module is unloaded and must be allocated with alloc_netdev_mqs() and friends. If device has registered successfully, it will be freed on last use -by free_netdev(). This is required to handle the pathologic case cleanly -(example: rmmod mydriver needs_free_netdev = true; + } + + static void my_destructor(struct net_device *dev) + { + some_obj_destroy(priv->obj); + some_uninit(priv); + } + + int create_link() + { + struct my_device_priv *priv; + int err; + + ASSERT_RTNL(); + + dev = alloc_netdev(sizeof(*priv), "net%d", NET_NAME_UNKNOWN, my_setup); + if (!dev) + return -ENOMEM; + priv = netdev_priv(dev); + + /* Implicit constructor */ + err = some_init(priv); + if (err) + goto err_free_dev; + + priv->obj = some_obj_create(); + if (!priv->obj) { + err = -ENOMEM; + goto err_some_uninit; + } + /* End of constructor, set the destructor: */ + dev->priv_destructor = my_destructor; + + err = register_netdevice(dev); + if (err) + /* register_netdevice() calls destructor on failure */ + goto err_free_dev; + + /* If anything fails now unregister_netdevice() (or unregister_netdev()) + * will take care of calling my_destructor and free_netdev(). + */ + + return 0; + + err_some_uninit: + some_uninit(priv); + err_free_dev: + free_netdev(dev); + return err; + } + +If struct net_device.priv_destructor is set it will be called by the core +some time after unregister_netdevice(), it will also be called if +register_netdevice() fails. 
The callback may be invoked with or without +``rtnl_lock`` held. + +There is no explicit constructor callback, driver "constructs" the private +netdev state after allocating it and before registration. + +Setting struct net_device.needs_free_netdev makes core call free_netdevice() +automatically after unregister_netdevice() when all references to the device +are gone. It only takes effect after a successful call to register_netdevice() +so if register_netdevice() fails driver is responsible for calling +free_netdev(). + +free_netdev() is safe to call on error paths right after unregister_netdevice() +or when register_netdevice() fails. Parts of netdev (de)registration process +happen after ``rtnl_lock`` is released, therefore in those cases free_netdev() +will defer some of the processing until ``rtnl_lock`` is released. + +Devices spawned from struct rtnl_link_ops should never free the +struct net_device directly. + +.ndo_init and .ndo_uninit +~~~~~~~~~~~~~~~~~~~~~~~~~ + +``.ndo_init`` and ``.ndo_uninit`` callbacks are called during net_device +registration and de-registration, under ``rtnl_lock``. Drivers can use +those e.g. when parts of their init process need to run under ``rtnl_lock``. + +``.ndo_init`` runs before device is visible in the system, ``.ndo_uninit`` +runs during de-registering after device is closed but other subsystems +may still have outstanding references to the netdevice. MTU === diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index 873081cda950..d8c82292413d 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -3444,7 +3444,7 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh, if (ops->newlink) { err = ops->newlink(link_net ? : net, dev, tb, data, extack); - /* Drivers should call free_netdev() in ->destructor + /* Drivers should set dev->needs_free_netdev * and unregister it on failure after registration * so that device could be finally freed in rtnl_unlock. */ From ed6964ff47149091075b0009a26a05c06a4cdcf5 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Fri, 15 Jul 2022 19:26:27 +0300 Subject: [PATCH 1603/1842] net: make free_netdev() more lenient with unregistering devices commit c269a24ce057abfc31130960e96ab197ef6ab196 upstream. There are two flavors of handling netdev registration: - ones called without holding rtnl_lock: register_netdev() and unregister_netdev(); and - those called with rtnl_lock held: register_netdevice() and unregister_netdevice(). While the semantics of the former are pretty clear, the same can't be said about the latter. The netdev_todo mechanism is utilized to perform some of the device unregistering tasks and it hooks into rtnl_unlock() so the locked variants can't actually finish the work. In general free_netdev() does not mix well with locked calls. Most drivers operating under rtnl_lock set dev->needs_free_netdev to true and expect core to make the free_netdev() call some time later. The part where this becomes most problematic is error paths. There is no way to unwind the state cleanly after a call to register_netdevice(), since unreg can't be performed fully without dropping locks. Make free_netdev() more lenient, and defer the freeing if device is being unregistered. This allows error paths to simply call free_netdev() both after register_netdevice() failed, and after a call to unregister_netdevice() but before dropping rtnl_lock. Simplify the error paths which are currently doing gymnastics around free_netdev() handling. 
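The driver-side pattern this enables can be sketched as follows (a minimal illustration with made-up "mydrv"/"md%d" names, not code from any particular driver):

  #include <linux/etherdevice.h>
  #include <linux/netdevice.h>
  #include <linux/rtnetlink.h>

  /* Hypothetical driver helper: register under rtnl_lock and rely on the
   * now-lenient free_netdev() on the failure path.
   */
  static int mydrv_create_netdev(void)
  {
          struct net_device *dev;
          int err;

          dev = alloc_netdev(0, "md%d", NET_NAME_UNKNOWN, ether_setup);
          if (!dev)
                  return -ENOMEM;

          rtnl_lock();
          err = register_netdevice(dev);
          /* Safe even if the unwind is still in progress: the lenient
           * free_netdev() defers the free while the device is in
           * NETREG_UNREGISTERING state.
           */
          if (err)
                  free_netdev(dev);
          rtnl_unlock();
          return err;
  }
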
Signed-off-by: Jakub Kicinski Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- net/8021q/vlan.c | 4 +--- net/core/dev.c | 11 +++++++++++ net/core/rtnetlink.c | 23 ++++++----------------- 3 files changed, 18 insertions(+), 20 deletions(-) diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c index d12c9a8a9953..64a94c9812da 100644 --- a/net/8021q/vlan.c +++ b/net/8021q/vlan.c @@ -278,9 +278,7 @@ static int register_vlan_device(struct net_device *real_dev, u16 vlan_id) return 0; out_free_newdev: - if (new_dev->reg_state == NETREG_UNINITIALIZED || - new_dev->reg_state == NETREG_UNREGISTERED) - free_netdev(new_dev); + free_netdev(new_dev); return err; } diff --git a/net/core/dev.c b/net/core/dev.c index af52050b0f38..6ff9bd199ba6 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -10683,6 +10683,17 @@ void free_netdev(struct net_device *dev) struct napi_struct *p, *n; might_sleep(); + + /* When called immediately after register_netdevice() failed the unwind + * handling may still be dismantling the device. Handle that case by + * deferring the free. + */ + if (dev->reg_state == NETREG_UNREGISTERING) { + ASSERT_RTNL(); + dev->needs_free_netdev = true; + return; + } + netif_free_tx_queues(dev); netif_free_rx_queues(dev); diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c index d8c82292413d..3c9c2d6e3b92 100644 --- a/net/core/rtnetlink.c +++ b/net/core/rtnetlink.c @@ -3442,26 +3442,15 @@ static int __rtnl_newlink(struct sk_buff *skb, struct nlmsghdr *nlh, dev->ifindex = ifm->ifi_index; - if (ops->newlink) { + if (ops->newlink) err = ops->newlink(link_net ? : net, dev, tb, data, extack); - /* Drivers should set dev->needs_free_netdev - * and unregister it on failure after registration - * so that device could be finally freed in rtnl_unlock. - */ - if (err < 0) { - /* If device is not registered at all, free it now */ - if (dev->reg_state == NETREG_UNINITIALIZED || - dev->reg_state == NETREG_UNREGISTERED) - free_netdev(dev); - goto out; - } - } else { + else err = register_netdevice(dev); - if (err < 0) { - free_netdev(dev); - goto out; - } + if (err < 0) { + free_netdev(dev); + goto out; } + err = rtnl_configure_link(dev, ifm); if (err < 0) goto out_unregister; From 2e11856ec37985a5fa8c7ad1cc406a4dd3962ec9 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Fri, 15 Jul 2022 19:26:28 +0300 Subject: [PATCH 1604/1842] net: make sure devices go through netdev_wait_all_refs commit 766b0515d5bec4b780750773ed3009b148df8c0a upstream. If register_netdevice() fails at the very last stage - the notifier call - some subsystems may have already seen it and grabbed a reference. struct net_device can't be freed right away without calling netdev_wait_all_refs(). Now that we have a clean interface in form of dev->needs_free_netdev and lenient free_netdev() we can undo what commit 93ee31f14f6f ("[NET]: Fix free_netdev on register_netdev failure.") has done and complete the unregistration path by bringing the net_set_todo() call back. After registration fails user is still expected to explicitly free the net_device, so make sure ->needs_free_netdev is cleared, otherwise rolling back the registration will cause the old double free for callers who release rtnl_lock before the free. This also solves the problem of priv_destructor not being called on notifier error. net_set_todo() will be moved back into unregister_netdevice_queue() in a follow up. 
Reported-by: Hulk Robot Reported-by: Yang Yingliang Signed-off-by: Jakub Kicinski Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- net/core/dev.c | 14 ++++---------- 1 file changed, 4 insertions(+), 10 deletions(-) diff --git a/net/core/dev.c b/net/core/dev.c index 6ff9bd199ba6..1112b07eaad9 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -10144,17 +10144,11 @@ int register_netdevice(struct net_device *dev) ret = call_netdevice_notifiers(NETDEV_REGISTER, dev); ret = notifier_to_errno(ret); if (ret) { + /* Expect explicit free_netdev() on failure */ + dev->needs_free_netdev = false; rollback_registered(dev); - rcu_barrier(); - - dev->reg_state = NETREG_UNREGISTERED; - /* We should put the kobject that hold in - * netdev_unregister_kobject(), otherwise - * the net device cannot be freed when - * driver calls free_netdev(), because the - * kobject is being hold. - */ - kobject_put(&dev->dev.kobj); + net_set_todo(dev); + goto out; } /* * Prevent userspace races by waiting until the network From b1158677d46bd67e32d6606d1f8c01d8ba9e6d22 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Fri, 15 Jul 2022 19:26:29 +0300 Subject: [PATCH 1605/1842] net: move net_set_todo inside rollback_registered() commit 2014beea7eb165c745706b13659a0f1d0a9a2a61 upstream. Commit 93ee31f14f6f ("[NET]: Fix free_netdev on register_netdev failure.") moved net_set_todo() outside of rollback_registered() so that rollback_registered() can be used in the failure path of register_netdevice() but without risking a double free. Since commit cf124db566e6 ("net: Fix inconsistent teardown and release of private netdev state."), however, we have a better way of handling that condition, since destructors don't call free_netdev() directly. After the change in commit c269a24ce057 ("net: make free_netdev() more lenient with unregistering devices") we can now move net_set_todo() back. 
Reviewed-by: Edwin Peer Signed-off-by: Jakub Kicinski Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- net/core/dev.c | 11 +++-------- 1 file changed, 3 insertions(+), 8 deletions(-) diff --git a/net/core/dev.c b/net/core/dev.c index 1112b07eaad9..e96583686824 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -9595,8 +9595,10 @@ static void rollback_registered_many(struct list_head *head) synchronize_net(); - list_for_each_entry(dev, head, unreg_list) + list_for_each_entry(dev, head, unreg_list) { dev_put(dev); + net_set_todo(dev); + } } static void rollback_registered(struct net_device *dev) @@ -10147,7 +10149,6 @@ int register_netdevice(struct net_device *dev) /* Expect explicit free_netdev() on failure */ dev->needs_free_netdev = false; rollback_registered(dev); - net_set_todo(dev); goto out; } /* @@ -10755,8 +10756,6 @@ void unregister_netdevice_queue(struct net_device *dev, struct list_head *head) list_move_tail(&dev->unreg_list, head); } else { rollback_registered(dev); - /* Finish processing unregister after unlock */ - net_set_todo(dev); } } EXPORT_SYMBOL(unregister_netdevice_queue); @@ -10770,12 +10769,8 @@ EXPORT_SYMBOL(unregister_netdevice_queue); */ void unregister_netdevice_many(struct list_head *head) { - struct net_device *dev; - if (!list_empty(head)) { rollback_registered_many(head); - list_for_each_entry(dev, head, unreg_list) - net_set_todo(dev); list_del(head); } } From 672fac0a4372b7cc053bc7458c536eaea450a572 Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Fri, 15 Jul 2022 19:26:30 +0300 Subject: [PATCH 1606/1842] net: inline rollback_registered() commit 037e56bd965e1bc72c2fa9684ac25b56839a338e upstream. rollback_registered() is a local helper, it's common for driver code to call unregister_netdevice_queue(dev, NULL) when they want to unregister netdevices under rtnl_lock. Inline rollback_registered() and adjust the only remaining caller. Reviewed-by: Edwin Peer Signed-off-by: Jakub Kicinski Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- net/core/dev.c | 17 ++++++----------- 1 file changed, 6 insertions(+), 11 deletions(-) diff --git a/net/core/dev.c b/net/core/dev.c index e96583686824..021aaf0f6454 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -9601,15 +9601,6 @@ static void rollback_registered_many(struct list_head *head) } } -static void rollback_registered(struct net_device *dev) -{ - LIST_HEAD(single); - - list_add(&dev->unreg_list, &single); - rollback_registered_many(&single); - list_del(&single); -} - static netdev_features_t netdev_sync_upper_features(struct net_device *lower, struct net_device *upper, netdev_features_t features) { @@ -10148,7 +10139,7 @@ int register_netdevice(struct net_device *dev) if (ret) { /* Expect explicit free_netdev() on failure */ dev->needs_free_netdev = false; - rollback_registered(dev); + unregister_netdevice_queue(dev, NULL); goto out; } /* @@ -10755,7 +10746,11 @@ void unregister_netdevice_queue(struct net_device *dev, struct list_head *head) if (head) { list_move_tail(&dev->unreg_list, head); } else { - rollback_registered(dev); + LIST_HEAD(single); + + list_add(&dev->unreg_list, &single); + rollback_registered_many(&single); + list_del(&single); } } EXPORT_SYMBOL(unregister_netdevice_queue); From bf2f3d1970c03f14c84515738de5c0cb28c8ebce Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Fri, 15 Jul 2022 19:26:31 +0300 Subject: [PATCH 1607/1842] net: move rollback_registered_many() commit bcfe2f1a3818d9dca945b6aca4ae741cb1f75329 upstream. 
Move rollback_registered_many() and add a temporary forward declaration to make merging the code into unregister_netdevice_many() easier to review. No functional changes. Reviewed-by: Edwin Peer Signed-off-by: Jakub Kicinski Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- net/core/dev.c | 188 +++++++++++++++++++++++++------------------------ 1 file changed, 95 insertions(+), 93 deletions(-) diff --git a/net/core/dev.c b/net/core/dev.c index 021aaf0f6454..af7289d59519 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -9508,99 +9508,6 @@ static void net_set_todo(struct net_device *dev) dev_net(dev)->dev_unreg_count++; } -static void rollback_registered_many(struct list_head *head) -{ - struct net_device *dev, *tmp; - LIST_HEAD(close_head); - - BUG_ON(dev_boot_phase); - ASSERT_RTNL(); - - list_for_each_entry_safe(dev, tmp, head, unreg_list) { - /* Some devices call without registering - * for initialization unwind. Remove those - * devices and proceed with the remaining. - */ - if (dev->reg_state == NETREG_UNINITIALIZED) { - pr_debug("unregister_netdevice: device %s/%p never was registered\n", - dev->name, dev); - - WARN_ON(1); - list_del(&dev->unreg_list); - continue; - } - dev->dismantle = true; - BUG_ON(dev->reg_state != NETREG_REGISTERED); - } - - /* If device is running, close it first. */ - list_for_each_entry(dev, head, unreg_list) - list_add_tail(&dev->close_list, &close_head); - dev_close_many(&close_head, true); - - list_for_each_entry(dev, head, unreg_list) { - /* And unlink it from device chain. */ - unlist_netdevice(dev); - - dev->reg_state = NETREG_UNREGISTERING; - } - flush_all_backlogs(); - - synchronize_net(); - - list_for_each_entry(dev, head, unreg_list) { - struct sk_buff *skb = NULL; - - /* Shutdown queueing discipline. */ - dev_shutdown(dev); - - dev_xdp_uninstall(dev); - - /* Notify protocols, that we are about to destroy - * this device. They should clean all the things. - */ - call_netdevice_notifiers(NETDEV_UNREGISTER, dev); - - if (!dev->rtnl_link_ops || - dev->rtnl_link_state == RTNL_LINK_INITIALIZED) - skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, 0, - GFP_KERNEL, NULL, 0); - - /* - * Flush the unicast and multicast chains - */ - dev_uc_flush(dev); - dev_mc_flush(dev); - - netdev_name_node_alt_flush(dev); - netdev_name_node_free(dev->name_node); - - if (dev->netdev_ops->ndo_uninit) - dev->netdev_ops->ndo_uninit(dev); - - if (skb) - rtmsg_ifinfo_send(skb, dev, GFP_KERNEL); - - /* Notifier chain MUST detach us all upper devices. 
*/ - WARN_ON(netdev_has_any_upper_dev(dev)); - WARN_ON(netdev_has_any_lower_dev(dev)); - - /* Remove entries from kobject tree */ - netdev_unregister_kobject(dev); -#ifdef CONFIG_XPS - /* Remove XPS queueing entries */ - netif_reset_xps_queues_gt(dev, 0); -#endif - } - - synchronize_net(); - - list_for_each_entry(dev, head, unreg_list) { - dev_put(dev); - net_set_todo(dev); - } -} - static netdev_features_t netdev_sync_upper_features(struct net_device *lower, struct net_device *upper, netdev_features_t features) { @@ -10726,6 +10633,8 @@ void synchronize_net(void) } EXPORT_SYMBOL(synchronize_net); +static void rollback_registered_many(struct list_head *head); + /** * unregister_netdevice_queue - remove device from the kernel * @dev: device @@ -10771,6 +10680,99 @@ void unregister_netdevice_many(struct list_head *head) } EXPORT_SYMBOL(unregister_netdevice_many); +static void rollback_registered_many(struct list_head *head) +{ + struct net_device *dev, *tmp; + LIST_HEAD(close_head); + + BUG_ON(dev_boot_phase); + ASSERT_RTNL(); + + list_for_each_entry_safe(dev, tmp, head, unreg_list) { + /* Some devices call without registering + * for initialization unwind. Remove those + * devices and proceed with the remaining. + */ + if (dev->reg_state == NETREG_UNINITIALIZED) { + pr_debug("unregister_netdevice: device %s/%p never was registered\n", + dev->name, dev); + + WARN_ON(1); + list_del(&dev->unreg_list); + continue; + } + dev->dismantle = true; + BUG_ON(dev->reg_state != NETREG_REGISTERED); + } + + /* If device is running, close it first. */ + list_for_each_entry(dev, head, unreg_list) + list_add_tail(&dev->close_list, &close_head); + dev_close_many(&close_head, true); + + list_for_each_entry(dev, head, unreg_list) { + /* And unlink it from device chain. */ + unlist_netdevice(dev); + + dev->reg_state = NETREG_UNREGISTERING; + } + flush_all_backlogs(); + + synchronize_net(); + + list_for_each_entry(dev, head, unreg_list) { + struct sk_buff *skb = NULL; + + /* Shutdown queueing discipline. */ + dev_shutdown(dev); + + dev_xdp_uninstall(dev); + + /* Notify protocols, that we are about to destroy + * this device. They should clean all the things. + */ + call_netdevice_notifiers(NETDEV_UNREGISTER, dev); + + if (!dev->rtnl_link_ops || + dev->rtnl_link_state == RTNL_LINK_INITIALIZED) + skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, 0, + GFP_KERNEL, NULL, 0); + + /* + * Flush the unicast and multicast chains + */ + dev_uc_flush(dev); + dev_mc_flush(dev); + + netdev_name_node_alt_flush(dev); + netdev_name_node_free(dev->name_node); + + if (dev->netdev_ops->ndo_uninit) + dev->netdev_ops->ndo_uninit(dev); + + if (skb) + rtmsg_ifinfo_send(skb, dev, GFP_KERNEL); + + /* Notifier chain MUST detach us all upper devices. */ + WARN_ON(netdev_has_any_upper_dev(dev)); + WARN_ON(netdev_has_any_lower_dev(dev)); + + /* Remove entries from kobject tree */ + netdev_unregister_kobject(dev); +#ifdef CONFIG_XPS + /* Remove XPS queueing entries */ + netif_reset_xps_queues_gt(dev, 0); +#endif + } + + synchronize_net(); + + list_for_each_entry(dev, head, unreg_list) { + dev_put(dev); + net_set_todo(dev); + } +} + /** * unregister_netdev - remove device from the kernel * @dev: device From 4f900c37f13eef18e56f541b525d1e743948e01c Mon Sep 17 00:00:00 2001 From: Jakub Kicinski Date: Fri, 15 Jul 2022 19:26:32 +0300 Subject: [PATCH 1608/1842] net: inline rollback_registered_many() commit 0cbe1e57a7b93517100b0eb63d8e445cfbeb630c upstream. 
Similar to the change for rollback_registered() - rollback_registered_many() was a part of unregister_netdevice_many() minus the net_set_todo(), which is no longer needed. Functionally this patch moves the list_empty() check back after: BUG_ON(dev_boot_phase); ASSERT_RTNL(); but I can't find any reason why that would be an issue. Reviewed-by: Edwin Peer Signed-off-by: Jakub Kicinski Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- net/core/dev.c | 22 ++++++++-------------- 1 file changed, 8 insertions(+), 14 deletions(-) diff --git a/net/core/dev.c b/net/core/dev.c index af7289d59519..637bc576fbd2 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -5750,7 +5750,7 @@ static void flush_all_backlogs(void) } /* we can have in flight packet[s] on the cpus we are not flushing, - * synchronize_net() in rollback_registered_many() will take care of + * synchronize_net() in unregister_netdevice_many() will take care of * them */ for_each_cpu(cpu, &flush_cpus) @@ -10633,8 +10633,6 @@ void synchronize_net(void) } EXPORT_SYMBOL(synchronize_net); -static void rollback_registered_many(struct list_head *head); - /** * unregister_netdevice_queue - remove device from the kernel * @dev: device @@ -10658,8 +10656,7 @@ void unregister_netdevice_queue(struct net_device *dev, struct list_head *head) LIST_HEAD(single); list_add(&dev->unreg_list, &single); - rollback_registered_many(&single); - list_del(&single); + unregister_netdevice_many(&single); } } EXPORT_SYMBOL(unregister_netdevice_queue); @@ -10672,15 +10669,6 @@ EXPORT_SYMBOL(unregister_netdevice_queue); * we force a list_del() to make sure stack wont be corrupted later. */ void unregister_netdevice_many(struct list_head *head) -{ - if (!list_empty(head)) { - rollback_registered_many(head); - list_del(head); - } -} -EXPORT_SYMBOL(unregister_netdevice_many); - -static void rollback_registered_many(struct list_head *head) { struct net_device *dev, *tmp; LIST_HEAD(close_head); @@ -10688,6 +10676,9 @@ static void rollback_registered_many(struct list_head *head) BUG_ON(dev_boot_phase); ASSERT_RTNL(); + if (list_empty(head)) + return; + list_for_each_entry_safe(dev, tmp, head, unreg_list) { /* Some devices call without registering * for initialization unwind. Remove those @@ -10771,7 +10762,10 @@ static void rollback_registered_many(struct list_head *head) dev_put(dev); net_set_todo(dev); } + + list_del(head); } +EXPORT_SYMBOL(unregister_netdevice_many); /** * unregister_netdev - remove device from the kernel From b07240ce4a09d9771d544e15a3a4ec008cc186ad Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Sat, 23 Jul 2022 17:06:06 +0200 Subject: [PATCH 1609/1842] Revert "m68knommu: only set CONFIG_ISA_DMA_API for ColdFire sub-arch" This reverts commit 87ae522e467e17a13b796e2cb595f9c3943e4d5e which is commit db87db65c1059f3be04506d122f8ec9b2fa3b05e upstream. It is not needed in 5.10.y and causes problems. 
Link: https://lore.kernel.org/r/CAK8P3a0vZrXxNp3YhrxFjFunHgxSZBKD9Y4darSODgeFAukCeQ@mail.gmail.com Reported-by: kernel test robot Reported-by: Arnd Bergmann Cc: Greg Ungerer Cc: Sasha Levin Signed-off-by: Greg Kroah-Hartman --- arch/m68k/Kconfig.bus | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/m68k/Kconfig.bus b/arch/m68k/Kconfig.bus index d1e93a39cd3b..f1be832e2b74 100644 --- a/arch/m68k/Kconfig.bus +++ b/arch/m68k/Kconfig.bus @@ -63,7 +63,7 @@ source "drivers/zorro/Kconfig" endif -if COLDFIRE +if !MMU config ISA_DMA_API def_bool !M5272 From f1d2f1ce05355742f0fdb721e30ddd03de90be94 Mon Sep 17 00:00:00 2001 From: Jeffrey Hugo Date: Fri, 15 Jul 2022 21:37:37 +0000 Subject: [PATCH 1610/1842] PCI: hv: Fix multi-MSI to allow more than one MSI vector commit 08e61e861a0e47e5e1a3fb78406afd6b0cea6b6d upstream. If the allocation of multiple MSI vectors for multi-MSI fails in the core PCI framework, the framework will retry the allocation as a single MSI vector, assuming that meets the min_vecs specified by the requesting driver. Hyper-V advertises that multi-MSI is supported, but reuses the VECTOR domain to implement that for x86. The VECTOR domain does not support multi-MSI, so the alloc will always fail and fallback to a single MSI allocation. In short, Hyper-V advertises a capability it does not implement. Hyper-V can support multi-MSI because it coordinates with the hypervisor to map the MSIs in the IOMMU's interrupt remapper, which is something the VECTOR domain does not have. Therefore the fix is simple - copy what the x86 IOMMU drivers (AMD/Intel-IR) do by removing X86_IRQ_ALLOC_CONTIGUOUS_VECTORS after calling the VECTOR domain's pci_msi_prepare(). 5.10 backport - adds the hv_msi_prepare wrapper function Fixes: 4daace0d8ce8 ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs") Signed-off-by: Jeffrey Hugo Reviewed-by: Dexuan Cui Link: https://lore.kernel.org/r/1649856981-14649-1-git-send-email-quic_jhugo@quicinc.com Signed-off-by: Wei Liu Signed-off-by: Carl Vanderlip Signed-off-by: Greg Kroah-Hartman --- drivers/pci/controller/pci-hyperv.c | 17 ++++++++++++++++- 1 file changed, 16 insertions(+), 1 deletion(-) diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index a070e69bb49c..84085939769c 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -1180,6 +1180,21 @@ static void hv_irq_mask(struct irq_data *data) pci_msi_mask_irq(data); } +static int hv_msi_prepare(struct irq_domain *domain, struct device *dev, + int nvec, msi_alloc_info_t *info) +{ + int ret = pci_msi_prepare(domain, dev, nvec, info); + + /* + * By using the interrupt remapper in the hypervisor IOMMU, contiguous + * CPU vectors is not needed for multi-MSI + */ + if (info->type == X86_IRQ_ALLOC_TYPE_PCI_MSI) + info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS; + + return ret; +} + /** * hv_irq_unmask() - "Unmask" the IRQ by setting its current * affinity. @@ -1545,7 +1560,7 @@ static struct irq_chip hv_msi_irq_chip = { }; static struct msi_domain_ops hv_msi_ops = { - .msi_prepare = pci_msi_prepare, + .msi_prepare = hv_msi_prepare, .msi_free = hv_msi_free, }; From 73bf070408a7f07e813ab26ebde1b09fca159cd6 Mon Sep 17 00:00:00 2001 From: Jeffrey Hugo Date: Fri, 15 Jul 2022 21:37:38 +0000 Subject: [PATCH 1611/1842] PCI: hv: Fix hv_arch_irq_unmask() for multi-MSI commit 455880dfe292a2bdd3b4ad6a107299fce610e64b upstream. In the multi-MSI case, hv_arch_irq_unmask() will only operate on the first MSI of the N allocated. 
This is because only the first msi_desc is cached and it is shared by all the MSIs of the multi-MSI block. This means that hv_arch_irq_unmask() gets the correct address, but the wrong data (always 0). This can break MSIs. Lets assume MSI0 is vector 34 on CPU0, and MSI1 is vector 33 on CPU0. hv_arch_irq_unmask() is called on MSI0. It uses a hypercall to configure the MSI address and data (0) to vector 34 of CPU0. This is correct. Then hv_arch_irq_unmask is called on MSI1. It uses another hypercall to configure the MSI address and data (0) to vector 33 of CPU0. This is wrong, and results in both MSI0 and MSI1 being routed to vector 33. Linux will observe extra instances of MSI1 and no instances of MSI0 despite the endpoint device behaving correctly. For the multi-MSI case, we need unique address and data info for each MSI, but the cached msi_desc does not provide that. However, that information can be gotten from the int_desc cached in the chip_data by compose_msi_msg(). Fix the multi-MSI case to use that cached information instead. Since hv_set_msi_entry_from_desc() is no longer applicable, remove it. 5.10 backport - removed unused hv_set_msi_entry_from_desc function from mshyperv.h instead of pci-hyperv.c. msi_entry.address/data.as_uint32 changed to direct reference (as they are u32's, just sans union). Signed-off-by: Jeffrey Hugo Reviewed-by: Michael Kelley Link: https://lore.kernel.org/r/1651068453-29588-1-git-send-email-quic_jhugo@quicinc.com Signed-off-by: Wei Liu Signed-off-by: Carl Vanderlip Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/mshyperv.h | 7 ------- drivers/pci/controller/pci-hyperv.c | 5 ++++- 2 files changed, 4 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h index 30f76b966857..871a8b06d430 100644 --- a/arch/x86/include/asm/mshyperv.h +++ b/arch/x86/include/asm/mshyperv.h @@ -247,13 +247,6 @@ bool hv_vcpu_is_preempted(int vcpu); static inline void hv_apic_init(void) {} #endif -static inline void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry, - struct msi_desc *msi_desc) -{ - msi_entry->address = msi_desc->msg.address_lo; - msi_entry->data = msi_desc->msg.data; -} - #else /* CONFIG_HYPERV */ static inline void hyperv_init(void) {} static inline void hyperv_setup_mmu_ops(void) {} diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 84085939769c..a1f27c555aec 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -1210,6 +1210,7 @@ static void hv_irq_unmask(struct irq_data *data) struct msi_desc *msi_desc = irq_data_get_msi_desc(data); struct irq_cfg *cfg = irqd_cfg(data); struct hv_retarget_device_interrupt *params; + struct tran_int_desc *int_desc; struct hv_pcibus_device *hbus; struct cpumask *dest; cpumask_var_t tmp; @@ -1224,6 +1225,7 @@ static void hv_irq_unmask(struct irq_data *data) pdev = msi_desc_to_pci_dev(msi_desc); pbus = pdev->bus; hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); + int_desc = data->chip_data; spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags); @@ -1231,7 +1233,8 @@ static void hv_irq_unmask(struct irq_data *data) memset(params, 0, sizeof(*params)); params->partition_id = HV_PARTITION_ID_SELF; params->int_entry.source = 1; /* MSI(-X) */ - hv_set_msi_entry_from_desc(¶ms->int_entry.msi_entry, msi_desc); + params->int_entry.msi_entry.address = int_desc->address & 0xffffffff; + params->int_entry.msi_entry.data = int_desc->data; params->device_id = 
(hbus->hdev->dev_instance.b[5] << 24) | (hbus->hdev->dev_instance.b[4] << 16) | (hbus->hdev->dev_instance.b[7] << 8) | From 522bd31d6b4bb783e4454d8f11d012e77c627648 Mon Sep 17 00:00:00 2001 From: Jeffrey Hugo Date: Fri, 15 Jul 2022 21:37:39 +0000 Subject: [PATCH 1612/1842] PCI: hv: Reuse existing IRTE allocation in compose_msi_msg() commit b4b77778ecc5bfbd4e77de1b2fd5c1dd3c655f1f upstream. Currently if compose_msi_msg() is called multiple times, it will free any previous IRTE allocation, and generate a new allocation. While nothing prevents this from occurring, it is extraneous when Linux could just reuse the existing allocation and avoid a bunch of overhead. However, when future IRTE allocations operate on blocks of MSIs instead of a single line, freeing the allocation will impact all of the lines. This could cause an issue where an allocation of N MSIs occurs, then some of the lines are retargeted, and finally the allocation is freed/reallocated. The freeing of the allocation removes all of the configuration for the entire block, which requires all the lines to be retargeted, which might not happen since some lines might already be unmasked/active. Signed-off-by: Jeffrey Hugo Reviewed-by: Dexuan Cui Tested-by: Dexuan Cui Tested-by: Michael Kelley Link: https://lore.kernel.org/r/1652282582-21595-1-git-send-email-quic_jhugo@quicinc.com Signed-off-by: Wei Liu Signed-off-by: Carl Vanderlip Signed-off-by: Greg Kroah-Hartman --- drivers/pci/controller/pci-hyperv.c | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index a1f27c555aec..6e2035deb52a 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -1409,6 +1409,15 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) u32 size; int ret; + /* Reuse the previous allocation */ + if (data->chip_data) { + int_desc = data->chip_data; + msg->address_hi = int_desc->address >> 32; + msg->address_lo = int_desc->address & 0xffffffff; + msg->data = int_desc->data; + return; + } + pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data)); dest = irq_data_get_effective_affinity_mask(data); pbus = pdev->bus; @@ -1418,13 +1427,6 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) if (!hpdev) goto return_null_message; - /* Free any previous message that might have already been composed. */ - if (data->chip_data) { - int_desc = data->chip_data; - data->chip_data = NULL; - hv_int_desc_free(hpdev, int_desc); - } - int_desc = kzalloc(sizeof(*int_desc), GFP_ATOMIC); if (!int_desc) goto drop_reference; From e744aad0c4421c83cec35d62394e9cd210ccade6 Mon Sep 17 00:00:00 2001 From: Jeffrey Hugo Date: Fri, 15 Jul 2022 21:37:40 +0000 Subject: [PATCH 1613/1842] PCI: hv: Fix interrupt mapping for multi-MSI commit a2bad844a67b1c7740bda63e87453baf63c3a7f7 upstream. According to Dexuan, the hypervisor folks beleive that multi-msi allocations are not correct. compose_msi_msg() will allocate multi-msi one by one. However, multi-msi is a block of related MSIs, with alignment requirements. In order for the hypervisor to allocate properly aligned and consecutive entries in the IOMMU Interrupt Remapping Table, there should be a single mapping request that requests all of the multi-msi vectors in one shot. Dexuan suggests detecting the multi-msi case and composing a single request related to the first MSI. Then for the other MSIs in the same block, use the cached information. 
This appears to be viable, so do it. 5.10 backport - add hv_msi_get_int_vector helper function. Fixed merge conflict due to delivery_mode name change (APIC_DELIVERY_MODE_FIXED is the value given to dest_Fixed). Removed unused variable in hv_compose_msi_msg. Fixed reference to msi_desc->pci to point to the same is_msix variable. Removed changes to compose_msi_req_v3 since it doesn't exist yet. Suggested-by: Dexuan Cui Signed-off-by: Jeffrey Hugo Reviewed-by: Dexuan Cui Tested-by: Michael Kelley Link: https://lore.kernel.org/r/1652282599-21643-1-git-send-email-quic_jhugo@quicinc.com Signed-off-by: Wei Liu Signed-off-by: Carl Vanderlip Signed-off-by: Greg Kroah-Hartman --- drivers/pci/controller/pci-hyperv.c | 61 +++++++++++++++++++++++++---- 1 file changed, 53 insertions(+), 8 deletions(-) diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c index 6e2035deb52a..4353443b89d8 100644 --- a/drivers/pci/controller/pci-hyperv.c +++ b/drivers/pci/controller/pci-hyperv.c @@ -1118,6 +1118,10 @@ static void hv_int_desc_free(struct hv_pci_dev *hpdev, u8 buffer[sizeof(struct pci_delete_interrupt)]; } ctxt; + if (!int_desc->vector_count) { + kfree(int_desc); + return; + } memset(&ctxt, 0, sizeof(ctxt)); int_pkt = (struct pci_delete_interrupt *)&ctxt.pkt.message; int_pkt->message_type.type = @@ -1180,6 +1184,13 @@ static void hv_irq_mask(struct irq_data *data) pci_msi_mask_irq(data); } +static unsigned int hv_msi_get_int_vector(struct irq_data *data) +{ + struct irq_cfg *cfg = irqd_cfg(data); + + return cfg->vector; +} + static int hv_msi_prepare(struct irq_domain *domain, struct device *dev, int nvec, msi_alloc_info_t *info) { @@ -1335,12 +1346,12 @@ static void hv_pci_compose_compl(void *context, struct pci_response *resp, static u32 hv_compose_msi_req_v1( struct pci_create_interrupt *int_pkt, struct cpumask *affinity, - u32 slot, u8 vector) + u32 slot, u8 vector, u8 vector_count) { int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE; int_pkt->wslot.slot = slot; int_pkt->int_desc.vector = vector; - int_pkt->int_desc.vector_count = 1; + int_pkt->int_desc.vector_count = vector_count; int_pkt->int_desc.delivery_mode = dest_Fixed; /* @@ -1354,14 +1365,14 @@ static u32 hv_compose_msi_req_v1( static u32 hv_compose_msi_req_v2( struct pci_create_interrupt2 *int_pkt, struct cpumask *affinity, - u32 slot, u8 vector) + u32 slot, u8 vector, u8 vector_count) { int cpu; int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE2; int_pkt->wslot.slot = slot; int_pkt->int_desc.vector = vector; - int_pkt->int_desc.vector_count = 1; + int_pkt->int_desc.vector_count = vector_count; int_pkt->int_desc.delivery_mode = dest_Fixed; /* @@ -1389,7 +1400,6 @@ static u32 hv_compose_msi_req_v2( */ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) { - struct irq_cfg *cfg = irqd_cfg(data); struct hv_pcibus_device *hbus; struct vmbus_channel *channel; struct hv_pci_dev *hpdev; @@ -1398,6 +1408,8 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) struct cpumask *dest; struct compose_comp_ctxt comp; struct tran_int_desc *int_desc; + struct msi_desc *msi_desc; + u8 vector, vector_count; struct { struct pci_packet pci_pkt; union { @@ -1418,7 +1430,8 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) return; } - pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data)); + msi_desc = irq_data_get_msi_desc(data); + pdev = msi_desc_to_pci_dev(msi_desc); dest = irq_data_get_effective_affinity_mask(data); pbus = pdev->bus; hbus = 
container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); @@ -1431,6 +1444,36 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) if (!int_desc) goto drop_reference; + if (!msi_desc->msi_attrib.is_msix && msi_desc->nvec_used > 1) { + /* + * If this is not the first MSI of Multi MSI, we already have + * a mapping. Can exit early. + */ + if (msi_desc->irq != data->irq) { + data->chip_data = int_desc; + int_desc->address = msi_desc->msg.address_lo | + (u64)msi_desc->msg.address_hi << 32; + int_desc->data = msi_desc->msg.data + + (data->irq - msi_desc->irq); + msg->address_hi = msi_desc->msg.address_hi; + msg->address_lo = msi_desc->msg.address_lo; + msg->data = int_desc->data; + put_pcichild(hpdev); + return; + } + /* + * The vector we select here is a dummy value. The correct + * value gets sent to the hypervisor in unmask(). This needs + * to be aligned with the count, and also not zero. Multi-msi + * is powers of 2 up to 32, so 32 will always work here. + */ + vector = 32; + vector_count = msi_desc->nvec_used; + } else { + vector = hv_msi_get_int_vector(data); + vector_count = 1; + } + memset(&ctxt, 0, sizeof(ctxt)); init_completion(&comp.comp_pkt.host_event); ctxt.pci_pkt.completion_func = hv_pci_compose_compl; @@ -1441,7 +1484,8 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1, dest, hpdev->desc.win_slot.slot, - cfg->vector); + vector, + vector_count); break; case PCI_PROTOCOL_VERSION_1_2: @@ -1449,7 +1493,8 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2, dest, hpdev->desc.win_slot.slot, - cfg->vector); + vector, + vector_count); break; default: From 3777ea39f05aefeadf8b9f5216bf2f7978d2649e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pali=20Roh=C3=A1r?= Date: Tue, 28 Jun 2022 12:09:22 +0200 Subject: [PATCH 1614/1842] serial: mvebu-uart: correctly report configured baudrate value MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 4f532c1e25319e42996ec18a1f473fd50c8e575d upstream. Functions tty_termios_encode_baud_rate() and uart_update_timeout() should be called with the baudrate value which was set to hardware. Linux then report exact values via ioctl(TCGETS2) to userspace. Change mvebu_uart_baud_rate_set() function to return baudrate value which was set to hardware and propagate this value to above mentioned functions. With this change userspace would see precise value in termios c_ospeed field. 
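The reported rate can be read back from user space with the TCGETS2 ioctl; a minimal sketch follows (assuming a glibc/Linux environment; the /dev/ttyMV0 node name is an assumption about the target board):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <asm/termbits.h>       /* struct termios2 */

  int main(void)
  {
          struct termios2 t;
          int fd = open("/dev/ttyMV0", O_RDONLY | O_NOCTTY);

          if (fd < 0 || ioctl(fd, TCGETS2, &t) < 0) {
                  perror("open/TCGETS2");
                  return 1;
          }
          printf("configured baudrate: %u\n", t.c_ospeed);
          close(fd);
          return 0;
  }
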
Fixes: 68a0db1d7da2 ("serial: mvebu-uart: add function to change baudrate") Cc: stable Reviewed-by: Ilpo Järvinen Signed-off-by: Pali Rohár Link: https://lore.kernel.org/r/20220628100922.10717-1-pali@kernel.org Signed-off-by: Greg Kroah-Hartman --- drivers/tty/serial/mvebu-uart.c | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c index 34ff2181afd1..e941f57de953 100644 --- a/drivers/tty/serial/mvebu-uart.c +++ b/drivers/tty/serial/mvebu-uart.c @@ -443,13 +443,13 @@ static void mvebu_uart_shutdown(struct uart_port *port) } } -static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud) +static unsigned int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud) { unsigned int d_divisor, m_divisor; u32 brdv, osamp; if (!port->uartclk) - return -EOPNOTSUPP; + return 0; /* * The baudrate is derived from the UART clock thanks to two divisors: @@ -473,7 +473,7 @@ static int mvebu_uart_baud_rate_set(struct uart_port *port, unsigned int baud) osamp &= ~OSAMP_DIVISORS_MASK; writel(osamp, port->membase + UART_OSAMP); - return 0; + return DIV_ROUND_CLOSEST(port->uartclk, d_divisor * m_divisor); } static void mvebu_uart_set_termios(struct uart_port *port, @@ -510,15 +510,11 @@ static void mvebu_uart_set_termios(struct uart_port *port, max_baud = 230400; baud = uart_get_baud_rate(port, termios, old, min_baud, max_baud); - if (mvebu_uart_baud_rate_set(port, baud)) { - /* No clock available, baudrate cannot be changed */ - if (old) - baud = uart_get_baud_rate(port, old, NULL, - min_baud, max_baud); - } else { - tty_termios_encode_baud_rate(termios, baud, baud); - uart_update_timeout(port, termios->c_cflag, baud); - } + baud = mvebu_uart_baud_rate_set(port, baud); + + /* In case baudrate cannot be changed, report previous old value */ + if (baud == 0 && old) + baud = tty_termios_baud_rate(old); /* Only the following flag changes are supported */ if (old) { @@ -529,6 +525,11 @@ static void mvebu_uart_set_termios(struct uart_port *port, termios->c_cflag |= CS8; } + if (baud != 0) { + tty_termios_encode_baud_rate(termios, baud, baud); + uart_update_timeout(port, termios->c_cflag, baud); + } + spin_unlock_irqrestore(&port->lock, flags); } From 47b696dd654450cdec3103a833e5bf29c4b83bfa Mon Sep 17 00:00:00 2001 From: Hangyu Hua Date: Wed, 1 Jun 2022 14:46:25 +0800 Subject: [PATCH 1615/1842] xfrm: xfrm_policy: fix a possible double xfrm_pols_put() in xfrm_bundle_lookup() [ Upstream commit f85daf0e725358be78dfd208dea5fd665d8cb901 ] xfrm_policy_lookup() will call xfrm_pol_hold_rcu() to get a refcount of pols[0]. This refcount can be dropped in xfrm_expand_policies() when xfrm_expand_policies() return error. pols[0]'s refcount is balanced in here. But xfrm_bundle_lookup() will also call xfrm_pols_put() with num_pols == 1 to drop this refcount when xfrm_expand_policies() return error. This patch also fix an illegal address access. pols[0] will save a error point when xfrm_policy_lookup fails. This lead to xfrm_pols_put to resolve an illegal address in xfrm_bundle_lookup's error path. Fix these by setting num_pols = 0 in xfrm_expand_policies()'s error path. 
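The ownership rule being restored can be shown with a generic sketch (made-up names, not the xfrm code itself): a helper that drops the references it took before failing must also zero the caller-visible count, so the caller's error path does not put them a second time.

  #include <linux/err.h>

  struct policy;                          /* stand-in for struct xfrm_policy */
  struct policy *lookup_hold(void);       /* assumed: returns a held ref or an ERR_PTR */

  static int expand(struct policy *pols[], int *num_pols)
  {
          pols[0] = lookup_hold();
          if (IS_ERR(pols[0])) {
                  *num_pols = 0;          /* nothing left for the caller to drop */
                  return PTR_ERR(pols[0]);
          }
          *num_pols = 1;                  /* caller now owns exactly one reference */
          return 0;
  }
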
Fixes: 80c802f3073e ("xfrm: cache bundles instead of policies for outgoing flows") Signed-off-by: Hangyu Hua Signed-off-by: Steffen Klassert Signed-off-by: Sasha Levin --- net/xfrm/xfrm_policy.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index 93cbcc8f9b39..603b05ed7eb4 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -2680,8 +2680,10 @@ static int xfrm_expand_policies(const struct flowi *fl, u16 family, *num_xfrms = 0; return 0; } - if (IS_ERR(pols[0])) + if (IS_ERR(pols[0])) { + *num_pols = 0; return PTR_ERR(pols[0]); + } *num_xfrms = pols[0]->xfrm_nr; @@ -2696,6 +2698,7 @@ static int xfrm_expand_policies(const struct flowi *fl, u16 family, if (pols[1]) { if (IS_ERR(pols[1])) { xfrm_pols_put(pols, *num_pols); + *num_pols = 0; return PTR_ERR(pols[1]); } (*num_pols)++; From 493ceca3271316e74639c89ff8ac35883de64256 Mon Sep 17 00:00:00 2001 From: Miaoqian Lin Date: Mon, 23 May 2022 18:10:09 +0400 Subject: [PATCH 1616/1842] power/reset: arm-versatile: Fix refcount leak in versatile_reboot_probe [ Upstream commit 80192eff64eee9b3bc0594a47381937b94b9d65a ] of_find_matching_node_and_match() returns a node pointer with refcount incremented, we should use of_node_put() on it when not need anymore. Add missing of_node_put() to avoid refcount leak. Fixes: 0e545f57b708 ("power: reset: driver for the Versatile syscon reboot") Signed-off-by: Miaoqian Lin Reviewed-by: Linus Walleij Signed-off-by: Sebastian Reichel Signed-off-by: Sasha Levin --- drivers/power/reset/arm-versatile-reboot.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/power/reset/arm-versatile-reboot.c b/drivers/power/reset/arm-versatile-reboot.c index 08d0a07b58ef..c7624d7611a7 100644 --- a/drivers/power/reset/arm-versatile-reboot.c +++ b/drivers/power/reset/arm-versatile-reboot.c @@ -146,6 +146,7 @@ static int __init versatile_reboot_probe(void) versatile_reboot_type = (enum versatile_reboot)reboot_id->data; syscon_regmap = syscon_node_to_regmap(np); + of_node_put(np); if (IS_ERR(syscon_regmap)) return PTR_ERR(syscon_regmap); From 5694b162f275fb9a9f89422701b2b963be11e496 Mon Sep 17 00:00:00 2001 From: William Dean Date: Sun, 10 Jul 2022 23:49:22 +0800 Subject: [PATCH 1617/1842] pinctrl: ralink: Check for null return of devm_kcalloc [ Upstream commit c3b821e8e406d5650e587b7ac624ac24e9b780a8 ] Because of the possible failure of the allocation, data->domains might be NULL pointer and will cause the dereference of the NULL pointer later. Therefore, it might be better to check it and directly return -ENOMEM without releasing data manually if fails, because the comment of the devm_kmalloc() says "Memory allocated with this function is automatically freed on driver detach.". 
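The devm_* contract the changelog leans on can be sketched generically (illustrative names, not the rt2880 driver code):

  #include <linux/device.h>
  #include <linux/slab.h>

  /* Memory from devm_kcalloc() is released automatically when the device is
   * detached, so a failed allocation can return -ENOMEM directly without
   * unwinding earlier devm allocations by hand.
   */
  static int example_alloc_pins(struct device *dev, unsigned int count)
  {
          int *pins = devm_kcalloc(dev, count, sizeof(*pins), GFP_KERNEL);

          if (!pins)
                  return -ENOMEM;

          pins[0] = 0;    /* populate and use the table as needed */
          return 0;
  }
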
Fixes: a86854d0c599b ("treewide: devm_kzalloc() -> devm_kcalloc()") Reported-by: Hacash Robot Signed-off-by: William Dean Link: https://lore.kernel.org/r/20220710154922.2610876-1-williamsukatube@163.com Signed-off-by: Linus Walleij Signed-off-by: Sasha Levin --- drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c b/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c index 09b0b8a16e99..2e971cbe2d7a 100644 --- a/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c +++ b/drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c @@ -267,6 +267,8 @@ static int rt2880_pinmux_pins(struct rt2880_priv *p) p->func[i]->pin_count, sizeof(int), GFP_KERNEL); + if (!p->func[i]->pins) + return -ENOMEM; for (j = 0; j < p->func[i]->pin_count; j++) p->func[i]->pins[j] = p->func[i]->pin_first + j; From 43128b3eee337824158f34da6648163d2f2fb937 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Tue, 5 Jul 2022 15:07:26 +0200 Subject: [PATCH 1618/1842] perf/core: Fix data race between perf_event_set_output() and perf_mmap_close() [ Upstream commit 68e3c69803dada336893640110cb87221bb01dcf ] Yang Jihing reported a race between perf_event_set_output() and perf_mmap_close(): CPU1 CPU2 perf_mmap_close(e2) if (atomic_dec_and_test(&e2->rb->mmap_count)) // 1 - > 0 detach_rest = true ioctl(e1, IOC_SET_OUTPUT, e2) perf_event_set_output(e1, e2) ... list_for_each_entry_rcu(e, &e2->rb->event_list, rb_entry) ring_buffer_attach(e, NULL); // e1 isn't yet added and // therefore not detached ring_buffer_attach(e1, e2->rb) list_add_rcu(&e1->rb_entry, &e2->rb->event_list) After this; e1 is attached to an unmapped rb and a subsequent perf_mmap() will loop forever more: again: mutex_lock(&e->mmap_mutex); if (event->rb) { ... if (!atomic_inc_not_zero(&e->rb->mmap_count)) { ... mutex_unlock(&e->mmap_mutex); goto again; } } The loop in perf_mmap_close() holds e2->mmap_mutex, while the attach in perf_event_set_output() holds e1->mmap_mutex. As such there is no serialization to avoid this race. Change perf_event_set_output() to take both e1->mmap_mutex and e2->mmap_mutex to alleviate that problem. Additionally, have the loop in perf_mmap() detach the rb directly, this avoids having to wait for the concurrent perf_mmap_close() to get around to doing it to make progress. Fixes: 9bb5d40cd93c ("perf: Fix mmap() accounting hole") Reported-by: Yang Jihong Signed-off-by: Peter Zijlstra (Intel) Tested-by: Yang Jihong Link: https://lkml.kernel.org/r/YsQ3jm2GR38SW7uD@worktop.programming.kicks-ass.net Signed-off-by: Sasha Levin --- kernel/events/core.c | 45 ++++++++++++++++++++++++++++++-------------- 1 file changed, 31 insertions(+), 14 deletions(-) diff --git a/kernel/events/core.c b/kernel/events/core.c index 8ba155a7b59e..0e01216f4e5a 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -6228,10 +6228,10 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma) if (!atomic_inc_not_zero(&event->rb->mmap_count)) { /* - * Raced against perf_mmap_close() through - * perf_event_set_output(). Try again, hope for better - * luck. + * Raced against perf_mmap_close(); remove the + * event and try again. 
*/ + ring_buffer_attach(event, NULL); mutex_unlock(&event->mmap_mutex); goto again; } @@ -11587,14 +11587,25 @@ static int perf_copy_attr(struct perf_event_attr __user *uattr, goto out; } +static void mutex_lock_double(struct mutex *a, struct mutex *b) +{ + if (b < a) + swap(a, b); + + mutex_lock(a); + mutex_lock_nested(b, SINGLE_DEPTH_NESTING); +} + static int perf_event_set_output(struct perf_event *event, struct perf_event *output_event) { struct perf_buffer *rb = NULL; int ret = -EINVAL; - if (!output_event) + if (!output_event) { + mutex_lock(&event->mmap_mutex); goto set; + } /* don't allow circular references */ if (event == output_event) @@ -11632,8 +11643,15 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event) event->pmu != output_event->pmu) goto out; + /* + * Hold both mmap_mutex to serialize against perf_mmap_close(). Since + * output_event is already on rb->event_list, and the list iteration + * restarts after every removal, it is guaranteed this new event is + * observed *OR* if output_event is already removed, it's guaranteed we + * observe !rb->mmap_count. + */ + mutex_lock_double(&event->mmap_mutex, &output_event->mmap_mutex); set: - mutex_lock(&event->mmap_mutex); /* Can't redirect output if we've got an active mmap() */ if (atomic_read(&event->mmap_count)) goto unlock; @@ -11643,6 +11661,12 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event) rb = ring_buffer_get(output_event); if (!rb) goto unlock; + + /* did we race against perf_mmap_close() */ + if (!atomic_read(&rb->mmap_count)) { + ring_buffer_put(rb); + goto unlock; + } } ring_buffer_attach(event, rb); @@ -11650,20 +11674,13 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event) ret = 0; unlock: mutex_unlock(&event->mmap_mutex); + if (output_event) + mutex_unlock(&output_event->mmap_mutex); out: return ret; } -static void mutex_lock_double(struct mutex *a, struct mutex *b) -{ - if (b < a) - swap(a, b); - - mutex_lock(a); - mutex_lock_nested(b, SINGLE_DEPTH_NESTING); -} - static int perf_event_set_clock(struct perf_event *event, clockid_t clk_id) { bool nmi_safe = false; From fb6031203ebbc17fa36aa3a85f007854d118d266 Mon Sep 17 00:00:00 2001 From: Alex Deucher Date: Wed, 20 Oct 2021 16:45:00 -0400 Subject: [PATCH 1619/1842] drm/amdgpu/display: add quirk handling for stutter mode [ Upstream commit 3ce51649cdf23ab463494df2bd6d1e9529ebdc6a ] Stutter mode is a power saving feature on GPUs, however at least one early raven system exhibits stability issues with it. Add a quirk to disable it for that system. 
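The diff below adds a zero-terminated quirk table plus a matcher that is walked once at init time. As a rough self-contained sketch of that pattern (simplified types; the PCI IDs are the ones from the patch, everything else here is hypothetical and not the amdgpu code itself):

/* Build: gcc -Wall -o quirk_demo quirk_demo.c */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pci_ids {
	uint16_t vendor, device, subsys_vendor, subsys_device;
	uint8_t  revision;
};

/* An all-zero entry terminates the list, as in the patch's quirk table. */
static const struct pci_ids stutter_quirks[] = {
	{ 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc8 },	/* early Raven board */
	{ 0, 0, 0, 0, 0 },
};

static bool should_disable_stutter(const struct pci_ids *dev)
{
	for (const struct pci_ids *p = stutter_quirks; p->device; p++)
		if (dev->vendor == p->vendor &&
		    dev->device == p->device &&
		    dev->subsys_vendor == p->subsys_vendor &&
		    dev->subsys_device == p->subsys_device &&
		    dev->revision == p->revision)
			return true;
	return false;
}

int main(void)
{
	const struct pci_ids dev = { 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc8 };

	printf("disable stutter: %s\n",
	       should_disable_stutter(&dev) ? "yes" : "no");
	return 0;
}

Terminating the array with an all-zero entry keeps the lookup loop free of a separate length constant, which is why the matcher can simply stop when it reaches a zero device ID.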
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=214417 Fixes: 005440066f929b ("drm/amdgpu: enable gfxoff again on raven series (v2)") Reviewed-by: Harry Wentland Signed-off-by: Alex Deucher Signed-off-by: Sasha Levin --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 33 +++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index f069d0faba64..55ecc67592eb 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -922,6 +922,37 @@ static void amdgpu_check_debugfs_connector_property_change(struct amdgpu_device } } +struct amdgpu_stutter_quirk { + u16 chip_vendor; + u16 chip_device; + u16 subsys_vendor; + u16 subsys_device; + u8 revision; +}; + +static const struct amdgpu_stutter_quirk amdgpu_stutter_quirk_list[] = { + /* https://bugzilla.kernel.org/show_bug.cgi?id=214417 */ + { 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc8 }, + { 0, 0, 0, 0, 0 }, +}; + +static bool dm_should_disable_stutter(struct pci_dev *pdev) +{ + const struct amdgpu_stutter_quirk *p = amdgpu_stutter_quirk_list; + + while (p && p->chip_device != 0) { + if (pdev->vendor == p->chip_vendor && + pdev->device == p->chip_device && + pdev->subsystem_vendor == p->subsys_vendor && + pdev->subsystem_device == p->subsys_device && + pdev->revision == p->revision) { + return true; + } + ++p; + } + return false; +} + static int amdgpu_dm_init(struct amdgpu_device *adev) { struct dc_init_data init_data; @@ -1014,6 +1045,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev) if (adev->asic_type != CHIP_CARRIZO && adev->asic_type != CHIP_STONEY) adev->dm.dc->debug.disable_stutter = amdgpu_pp_feature_mask & PP_STUTTER_MODE ? false : true; + if (dm_should_disable_stutter(adev->pdev)) + adev->dm.dc->debug.disable_stutter = true; if (amdgpu_dc_debug_mask & DC_DISABLE_STUTTER) adev->dm.dc->debug.disable_stutter = true; From 77836dbe35382aaf8108489060c5c89530c77494 Mon Sep 17 00:00:00 2001 From: Lennert Buytenhek Date: Thu, 2 Jun 2022 18:58:11 +0300 Subject: [PATCH 1620/1842] igc: Reinstate IGC_REMOVED logic and implement it properly [ Upstream commit 7c1ddcee5311f3315096217881d2dbe47cc683f9 ] The initially merged version of the igc driver code (via commit 146740f9abc4, "igc: Add support for PF") contained the following IGC_REMOVED checks in the igc_rd32/wr32() MMIO accessors: u32 igc_rd32(struct igc_hw *hw, u32 reg) { u8 __iomem *hw_addr = READ_ONCE(hw->hw_addr); u32 value = 0; if (IGC_REMOVED(hw_addr)) return ~value; value = readl(&hw_addr[reg]); /* reads should not return all F's */ if (!(~value) && (!reg || !(~readl(hw_addr)))) hw->hw_addr = NULL; return value; } And: #define wr32(reg, val) \ do { \ u8 __iomem *hw_addr = READ_ONCE((hw)->hw_addr); \ if (!IGC_REMOVED(hw_addr)) \ writel((val), &hw_addr[(reg)]); \ } while (0) E.g. igb has similar checks in its MMIO accessors, and has a similar macro E1000_REMOVED, which is implemented as follows: #define E1000_REMOVED(h) unlikely(!(h)) These checks serve to detect and take note of an 0xffffffff MMIO read return from the device, which can be caused by a PCIe link flap or some other kind of PCI bus error, and to avoid performing MMIO reads and writes from that point onwards. 
However, the IGC_REMOVED macro was not originally implemented: #ifndef IGC_REMOVED #define IGC_REMOVED(a) (0) #endif /* IGC_REMOVED */ This led to the IGC_REMOVED logic to be removed entirely in a subsequent commit (commit 3c215fb18e70, "igc: remove IGC_REMOVED function"), with the rationale that such checks matter only for virtualization and that igc does not support virtualization -- but a PCIe device can become detached even without virtualization being in use, and without proper checks, a PCIe bus error affecting an igc adapter will lead to various NULL pointer dereferences, as the first access after the error will set hw->hw_addr to NULL, and subsequent accesses will blindly dereference this now-NULL pointer. This patch reinstates the IGC_REMOVED checks in igc_rd32/wr32(), and implements IGC_REMOVED the way it is done for igb, by checking for the unlikely() case of hw_addr being NULL. This change prevents the oopses seen when a PCIe link flap occurs on an igc adapter. Fixes: 146740f9abc4 ("igc: Add support for PF") Signed-off-by: Lennert Buytenhek Tested-by: Naama Meir Acked-by: Sasha Neftin Signed-off-by: Tony Nguyen Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/igc/igc_main.c | 3 +++ drivers/net/ethernet/intel/igc/igc_regs.h | 5 ++++- 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 53e31002ce52..e7ffe63925fd 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -4933,6 +4933,9 @@ u32 igc_rd32(struct igc_hw *hw, u32 reg) u8 __iomem *hw_addr = READ_ONCE(hw->hw_addr); u32 value = 0; + if (IGC_REMOVED(hw_addr)) + return ~value; + value = readl(&hw_addr[reg]); /* reads should not return all F's */ diff --git a/drivers/net/ethernet/intel/igc/igc_regs.h b/drivers/net/ethernet/intel/igc/igc_regs.h index b52dd9d737e8..a273e1c33b3f 100644 --- a/drivers/net/ethernet/intel/igc/igc_regs.h +++ b/drivers/net/ethernet/intel/igc/igc_regs.h @@ -252,7 +252,8 @@ u32 igc_rd32(struct igc_hw *hw, u32 reg); #define wr32(reg, val) \ do { \ u8 __iomem *hw_addr = READ_ONCE((hw)->hw_addr); \ - writel((val), &hw_addr[(reg)]); \ + if (!IGC_REMOVED(hw_addr)) \ + writel((val), &hw_addr[(reg)]); \ } while (0) #define rd32(reg) (igc_rd32(hw, reg)) @@ -264,4 +265,6 @@ do { \ #define array_rd32(reg, offset) (igc_rd32(hw, (reg) + ((offset) << 2))) +#define IGC_REMOVED(h) unlikely(!(h)) + #endif From 5e343e3ef464a5dbcc8fd3a99830908e31a4a61d Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:51:52 -0700 Subject: [PATCH 1621/1842] ip: Fix data-races around sysctl_ip_no_pmtu_disc. [ Upstream commit 0968d2a441bf6afb551fd99e60fa65ed67068963 ] While reading sysctl_ip_no_pmtu_disc, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/af_inet.c | 2 +- net/ipv4/icmp.c | 2 +- net/ipv6/af_inet6.c | 2 +- net/xfrm/xfrm_state.c | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index e77283069c7b..9d1ff3baa213 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -338,7 +338,7 @@ static int inet_create(struct net *net, struct socket *sock, int protocol, inet->hdrincl = 1; } - if (net->ipv4.sysctl_ip_no_pmtu_disc) + if (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc)) inet->pmtudisc = IP_PMTUDISC_DONT; else inet->pmtudisc = IP_PMTUDISC_WANT; diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c index 0fa0da1d71f5..a1aacf5e75a6 100644 --- a/net/ipv4/icmp.c +++ b/net/ipv4/icmp.c @@ -887,7 +887,7 @@ static bool icmp_unreach(struct sk_buff *skb) * values please see * Documentation/networking/ip-sysctl.rst */ - switch (net->ipv4.sysctl_ip_no_pmtu_disc) { + switch (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc)) { default: net_dbg_ratelimited("%pI4: fragmentation needed and DF set\n", &iph->daddr); diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c index 890a9cfc6ce2..d30c9d949c1b 100644 --- a/net/ipv6/af_inet6.c +++ b/net/ipv6/af_inet6.c @@ -225,7 +225,7 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol, inet->mc_list = NULL; inet->rcv_tos = 0; - if (net->ipv4.sysctl_ip_no_pmtu_disc) + if (READ_ONCE(net->ipv4.sysctl_ip_no_pmtu_disc)) inet->pmtudisc = IP_PMTUDISC_DONT; else inet->pmtudisc = IP_PMTUDISC_WANT; diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c index 717db5ecd0bd..bc0bbb1571ce 100644 --- a/net/xfrm/xfrm_state.c +++ b/net/xfrm/xfrm_state.c @@ -2587,7 +2587,7 @@ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload) int err; if (family == AF_INET && - xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc) + READ_ONCE(xs_net(x)->ipv4.sysctl_ip_no_pmtu_disc)) x->props.flags |= XFRM_STATE_NOPMTUDISC; err = -EPROTONOSUPPORT; From b96ed5ccb09ae71103023ed13acefb194f609794 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:51:53 -0700 Subject: [PATCH 1622/1842] ip: Fix data-races around sysctl_ip_fwd_use_pmtu. [ Upstream commit 60c158dc7b1f0558f6cadd5b50d0386da0000d50 ] While reading sysctl_ip_fwd_use_pmtu, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: f87c10a8aa1e ("ipv4: introduce ip_dst_mtu_maybe_forward and protect forwarding path against pmtu spoofing") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- include/net/ip.h | 2 +- net/ipv4/route.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/include/net/ip.h b/include/net/ip.h index 76aaa7eb5b82..a7e40ef02732 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -440,7 +440,7 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst, struct net *net = dev_net(dst->dev); unsigned int mtu; - if (net->ipv4.sysctl_ip_fwd_use_pmtu || + if (READ_ONCE(net->ipv4.sysctl_ip_fwd_use_pmtu) || ip_mtu_locked(dst) || !forwarding) return dst_mtu(dst); diff --git a/net/ipv4/route.c b/net/ipv4/route.c index aab8ac383d5d..374647693d7a 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -1442,7 +1442,7 @@ u32 ip_mtu_from_fib_result(struct fib_result *res, __be32 daddr) struct fib_info *fi = res->fi; u32 mtu = 0; - if (dev_net(dev)->ipv4.sysctl_ip_fwd_use_pmtu || + if (READ_ONCE(dev_net(dev)->ipv4.sysctl_ip_fwd_use_pmtu) || fi->fib_metrics->metrics[RTAX_LOCK - 1] & (1 << RTAX_MTU)) mtu = fi->fib_mtu; From 11038fa781ab916535c53351537b22d6d405667d Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:51:54 -0700 Subject: [PATCH 1623/1842] ip: Fix data-races around sysctl_ip_fwd_update_priority. [ Upstream commit 7bf9e18d9a5e99e3c83482973557e9f047b051e7 ] While reading sysctl_ip_fwd_update_priority, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 432e05d32892 ("net: ipv4: Control SKB reprioritization after forwarding") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c | 3 ++- net/ipv4/ip_forward.c | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c index 5f143ca16c01..d2887ae508bb 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c @@ -8038,13 +8038,14 @@ static int mlxsw_sp_dscp_init(struct mlxsw_sp *mlxsw_sp) static int __mlxsw_sp_router_init(struct mlxsw_sp *mlxsw_sp) { struct net *net = mlxsw_sp_net(mlxsw_sp); - bool usp = net->ipv4.sysctl_ip_fwd_update_priority; char rgcr_pl[MLXSW_REG_RGCR_LEN]; u64 max_rifs; + bool usp; if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, MAX_RIFS)) return -EIO; max_rifs = MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS); + usp = READ_ONCE(net->ipv4.sysctl_ip_fwd_update_priority); mlxsw_reg_rgcr_pack(rgcr_pl, true, true); mlxsw_reg_rgcr_max_router_interfaces_set(rgcr_pl, max_rifs); diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c index 00ec819f949b..29730edda220 100644 --- a/net/ipv4/ip_forward.c +++ b/net/ipv4/ip_forward.c @@ -151,7 +151,7 @@ int ip_forward(struct sk_buff *skb) !skb_sec_path(skb)) ip_rt_send_redirect(skb); - if (net->ipv4.sysctl_ip_fwd_update_priority) + if (READ_ONCE(net->ipv4.sysctl_ip_fwd_update_priority)) skb->priority = rt_tos2priority(iph->tos); return NF_HOOK(NFPROTO_IPV4, NF_INET_FORWARD, From 94269132d0fc9ee357e555c17a8f85fd6660649b Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:51:55 -0700 Subject: [PATCH 1624/1842] ip: Fix data-races around sysctl_ip_nonlocal_bind. [ Upstream commit 289d3b21fb0bfc94c4e98f10635bba1824e5f83c ] While reading sysctl_ip_nonlocal_bind, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. 
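The same READ_ONCE() pattern repeats across the long run of sysctl data-race fixes that follows. As a user-space analogy (an assumption-laden sketch built on a volatile cast, not the kernel's actual macros; builds with gcc -pthread), the annotation makes the compiler emit exactly one load for each marked access of a variable another thread may be writing, instead of caching, tearing or re-reading a plain access:

/* Build: gcc -Wall -pthread -o read_once_demo read_once_demo.c */
#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

static int sysctl_flag;			/* stands in for a sysctl knob */

static void *writer(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000000; i++)
		WRITE_ONCE(sysctl_flag, i & 1);	/* concurrent updates */
	return NULL;
}

int main(void)
{
	pthread_t t;
	long hits = 0;

	pthread_create(&t, NULL, writer, NULL);

	for (int i = 0; i < 1000000; i++) {
		/* One stable snapshot per iteration: the volatile access
		 * cannot be hoisted out of the loop or reloaded per use. */
		int v = READ_ONCE(sysctl_flag);
		if (v)
			hits++;
	}

	pthread_join(t, NULL);
	printf("saw flag set %ld times\n", hits);
	return 0;
}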
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/inet_sock.h | 2 +- net/sctp/protocol.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h index 89163ef8cf4b..f374946734b9 100644 --- a/include/net/inet_sock.h +++ b/include/net/inet_sock.h @@ -369,7 +369,7 @@ static inline bool inet_get_convert_csum(struct sock *sk) static inline bool inet_can_nonlocal_bind(struct net *net, struct inet_sock *inet) { - return net->ipv4.sysctl_ip_nonlocal_bind || + return READ_ONCE(net->ipv4.sysctl_ip_nonlocal_bind) || inet->freebind || inet->transparent; } diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c index 940f1e257a90..6e4ca837e91d 100644 --- a/net/sctp/protocol.c +++ b/net/sctp/protocol.c @@ -358,7 +358,7 @@ static int sctp_v4_available(union sctp_addr *addr, struct sctp_sock *sp) if (addr->v4.sin_addr.s_addr != htonl(INADDR_ANY) && ret != RTN_LOCAL && !sp->inet.freebind && - !net->ipv4.sysctl_ip_nonlocal_bind) + !READ_ONCE(net->ipv4.sysctl_ip_nonlocal_bind)) return 0; if (ipv6_only_sock(sctp_opt2sk(sp))) From 611ba70e5aca252ef43374dda97ed4cf1c47a07c Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:51:56 -0700 Subject: [PATCH 1625/1842] ip: Fix a data-race around sysctl_ip_autobind_reuse. [ Upstream commit 0db232765887d9807df8bcb7b6f29b2871539eab ] While reading sysctl_ip_autobind_reuse, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 4b01a9674231 ("tcp: bind(0) remove the SO_REUSEADDR restriction when ephemeral ports are exhausted.") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/inet_connection_sock.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c index 7785a4775e58..4d9713324003 100644 --- a/net/ipv4/inet_connection_sock.c +++ b/net/ipv4/inet_connection_sock.c @@ -251,7 +251,7 @@ inet_csk_find_open_port(struct sock *sk, struct inet_bind_bucket **tb_ret, int * goto other_half_scan; } - if (net->ipv4.sysctl_ip_autobind_reuse && !relax) { + if (READ_ONCE(net->ipv4.sysctl_ip_autobind_reuse) && !relax) { /* We still have a chance to connect to different destinations */ relax = true; goto ports_exhausted; From 0ee76fe01ff3c0b4efaa500aecc90d7c8d3a8860 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:51:57 -0700 Subject: [PATCH 1626/1842] ip: Fix a data-race around sysctl_fwmark_reflect. [ Upstream commit 85d0b4dbd74b95cc492b1f4e34497d3f894f5d9a ] While reading sysctl_fwmark_reflect, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: e110861f8609 ("net: add a sysctl to reflect the fwmark on replies") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/ip.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/ip.h b/include/net/ip.h index a7e40ef02732..d715b25a8dc4 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -379,7 +379,7 @@ void ipfrag_init(void); void ip_static_sysctl_init(void); #define IP4_REPLY_MARK(net, mark) \ - ((net)->ipv4.sysctl_fwmark_reflect ? (mark) : 0) + (READ_ONCE((net)->ipv4.sysctl_fwmark_reflect) ? 
(mark) : 0) static inline bool ip_is_fragment(const struct iphdr *iph) { From d4f65615db7fca3df9f7e79eadf937e6ddb03c54 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:51:58 -0700 Subject: [PATCH 1627/1842] tcp/dccp: Fix a data-race around sysctl_tcp_fwmark_accept. [ Upstream commit 1a0008f9df59451d0a17806c1ee1a19857032fa8 ] While reading sysctl_tcp_fwmark_accept, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 84f39b08d786 ("net: support marking accepting TCP sockets") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/inet_sock.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h index f374946734b9..3c039d4b0e48 100644 --- a/include/net/inet_sock.h +++ b/include/net/inet_sock.h @@ -107,7 +107,8 @@ static inline struct inet_request_sock *inet_rsk(const struct request_sock *sk) static inline u32 inet_request_mark(const struct sock *sk, struct sk_buff *skb) { - if (!sk->sk_mark && sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept) + if (!sk->sk_mark && + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fwmark_accept)) return skb->mark; return sk->sk_mark; From 77a04845f0d28a3561494a5f3121488470a968a4 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:52:00 -0700 Subject: [PATCH 1628/1842] tcp: Fix data-races around sysctl_tcp_mtu_probing. [ Upstream commit f47d00e077e7d61baf69e46dde3210c886360207 ] While reading sysctl_tcp_mtu_probing, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 5d424d5a674f ("[TCP]: MTU probing") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 2 +- net/ipv4/tcp_timer.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 8634a5c853f5..423ec09ad831 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1763,7 +1763,7 @@ void tcp_mtup_init(struct sock *sk) struct inet_connection_sock *icsk = inet_csk(sk); struct net *net = sock_net(sk); - icsk->icsk_mtup.enabled = net->ipv4.sysctl_tcp_mtu_probing > 1; + icsk->icsk_mtup.enabled = READ_ONCE(net->ipv4.sysctl_tcp_mtu_probing) > 1; icsk->icsk_mtup.search_high = tp->rx_opt.mss_clamp + sizeof(struct tcphdr) + icsk->icsk_af_ops->net_header_len; icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, net->ipv4.sysctl_tcp_base_mss); diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 4ef08079ccfa..3c0d689cafac 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -163,7 +163,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk) int mss; /* Black hole detection */ - if (!net->ipv4.sysctl_tcp_mtu_probing) + if (!READ_ONCE(net->ipv4.sysctl_tcp_mtu_probing)) return; if (!icsk->icsk_mtup.enabled) { From 514d2254c7b8aa2d257f5ffc79f0d96be2d6bfda Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:52:01 -0700 Subject: [PATCH 1629/1842] tcp: Fix data-races around sysctl_tcp_base_mss. [ Upstream commit 88d78bc097cd8ebc6541e93316c9d9bf651b13e8 ] While reading sysctl_tcp_base_mss, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 5d424d5a674f ("[TCP]: MTU probing") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 2 +- net/ipv4/tcp_timer.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 423ec09ad831..9f3eec8e7e4c 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1766,7 +1766,7 @@ void tcp_mtup_init(struct sock *sk) icsk->icsk_mtup.enabled = READ_ONCE(net->ipv4.sysctl_tcp_mtu_probing) > 1; icsk->icsk_mtup.search_high = tp->rx_opt.mss_clamp + sizeof(struct tcphdr) + icsk->icsk_af_ops->net_header_len; - icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, net->ipv4.sysctl_tcp_base_mss); + icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, READ_ONCE(net->ipv4.sysctl_tcp_base_mss)); icsk->icsk_mtup.probe_size = 0; if (icsk->icsk_mtup.enabled) icsk->icsk_mtup.probe_timestamp = tcp_jiffies32; diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 3c0d689cafac..795716fd3761 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -171,7 +171,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk) icsk->icsk_mtup.probe_timestamp = tcp_jiffies32; } else { mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1; - mss = min(net->ipv4.sysctl_tcp_base_mss, mss); + mss = min(READ_ONCE(net->ipv4.sysctl_tcp_base_mss), mss); mss = max(mss, net->ipv4.sysctl_tcp_mtu_probe_floor); mss = max(mss, net->ipv4.sysctl_tcp_min_snd_mss); icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss); From 97992e8feff33b3ae154a113ec398546bbacda80 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:52:02 -0700 Subject: [PATCH 1630/1842] tcp: Fix data-races around sysctl_tcp_min_snd_mss. [ Upstream commit 78eb166cdefcc3221c8c7c1e2d514e91a2eb5014 ] While reading sysctl_tcp_min_snd_mss, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 5f3e2bf008c2 ("tcp: add tcp_min_snd_mss sysctl") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 3 ++- net/ipv4/tcp_timer.c | 2 +- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 9f3eec8e7e4c..b0d49317b221 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1720,7 +1720,8 @@ static inline int __tcp_mtu_to_mss(struct sock *sk, int pmtu) mss_now -= icsk->icsk_ext_hdr_len; /* Then reserve room for full set of TCP options and 8 bytes of data */ - mss_now = max(mss_now, sock_net(sk)->ipv4.sysctl_tcp_min_snd_mss); + mss_now = max(mss_now, + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_snd_mss)); return mss_now; } diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 795716fd3761..953f36868369 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -173,7 +173,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk) mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1; mss = min(READ_ONCE(net->ipv4.sysctl_tcp_base_mss), mss); mss = max(mss, net->ipv4.sysctl_tcp_mtu_probe_floor); - mss = max(mss, net->ipv4.sysctl_tcp_min_snd_mss); + mss = max(mss, READ_ONCE(net->ipv4.sysctl_tcp_min_snd_mss)); icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss); } tcp_sync_mss(sk, icsk->icsk_pmtu_cookie); From d5bece4df6090395f891110ef52a6f82d16685db Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:52:03 -0700 Subject: [PATCH 1631/1842] tcp: Fix a data-race around sysctl_tcp_mtu_probe_floor. 
[ Upstream commit 8e92d4423615a5257d0d871fc067aa561f597deb ] While reading sysctl_tcp_mtu_probe_floor, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: c04b79b6cfd7 ("tcp: add new tcp_mtu_probe_floor sysctl") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_timer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index 953f36868369..da92c5d70b70 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -172,7 +172,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk) } else { mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1; mss = min(READ_ONCE(net->ipv4.sysctl_tcp_base_mss), mss); - mss = max(mss, net->ipv4.sysctl_tcp_mtu_probe_floor); + mss = max(mss, READ_ONCE(net->ipv4.sysctl_tcp_mtu_probe_floor)); mss = max(mss, READ_ONCE(net->ipv4.sysctl_tcp_min_snd_mss)); icsk->icsk_mtup.search_low = tcp_mss_to_mtu(sk, mss); } From d452ce36f2d4c402fa3f5275c9677f80166e7fc6 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:52:04 -0700 Subject: [PATCH 1632/1842] tcp: Fix a data-race around sysctl_tcp_probe_threshold. [ Upstream commit 92c0aa4175474483d6cf373314343d4e624e882a ] While reading sysctl_tcp_probe_threshold, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 6b58e0a5f32d ("ipv4: Use binary search to choose tcp PMTU probe_size") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index b0d49317b221..7a8c8de45818 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2360,7 +2360,7 @@ static int tcp_mtu_probe(struct sock *sk) * probing process by not resetting search range to its orignal. */ if (probe_size > tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_high) || - interval < net->ipv4.sysctl_tcp_probe_threshold) { + interval < READ_ONCE(net->ipv4.sysctl_tcp_probe_threshold)) { /* Check whether enough time has elaplased for * another round of probing. */ From c61aede097d350d890fa1edc9521b0072e14a0b8 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 13 Jul 2022 13:52:05 -0700 Subject: [PATCH 1633/1842] tcp: Fix a data-race around sysctl_tcp_probe_interval. [ Upstream commit 2a85388f1d94a9f8b5a529118a2c5eaa0520d85c ] While reading sysctl_tcp_probe_interval, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 05cbc0db03e8 ("ipv4: Create probe timer for tcp PMTU as per RFC4821") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 7a8c8de45818..b58697336bd4 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2278,7 +2278,7 @@ static inline void tcp_mtu_check_reprobe(struct sock *sk) u32 interval; s32 delta; - interval = net->ipv4.sysctl_tcp_probe_interval; + interval = READ_ONCE(net->ipv4.sysctl_tcp_probe_interval); delta = tcp_jiffies32 - icsk->icsk_mtup.probe_timestamp; if (unlikely(delta >= interval * HZ)) { int mss = tcp_current_mss(sk); From dd7b5ba44b67566ffde286f60a2f684a56c69e0d Mon Sep 17 00:00:00 2001 From: Biao Huang Date: Thu, 14 Jul 2022 14:00:14 +0800 Subject: [PATCH 1634/1842] net: stmmac: fix unbalanced ptp clock issue in suspend/resume flow [ Upstream commit f4c7d8948e866918d61493264dbbd67e45ef2bda ] Current stmmac driver will prepare/enable ptp_ref clock in stmmac_init_tstamp_counter(). The stmmac_pltfr_noirq_suspend will disable it once in suspend flow. But in resume flow, stmmac_pltfr_noirq_resume --> stmmac_init_tstamp_counter stmmac_resume --> stmmac_hw_setup --> stmmac_init_ptp --> stmmac_init_tstamp_counter ptp_ref clock reference counter increases twice, which leads to unbalance ptp clock when resume back. Move ptp_ref clock prepare/enable out of stmmac_init_tstamp_counter to fix it. Fixes: 0735e639f129d ("net: stmmac: skip only stmmac_ptp_register when resume from suspend") Signed-off-by: Biao Huang Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- .../net/ethernet/stmicro/stmmac/stmmac_main.c | 17 ++++++++--------- .../ethernet/stmicro/stmmac/stmmac_platform.c | 8 +++++++- 2 files changed, 15 insertions(+), 10 deletions(-) diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index e9aa9a5eba6b..27b7bb64a028 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -738,19 +738,10 @@ int stmmac_init_tstamp_counter(struct stmmac_priv *priv, u32 systime_flags) struct timespec64 now; u32 sec_inc = 0; u64 temp = 0; - int ret; if (!(priv->dma_cap.time_stamp || priv->dma_cap.atime_stamp)) return -EOPNOTSUPP; - ret = clk_prepare_enable(priv->plat->clk_ptp_ref); - if (ret < 0) { - netdev_warn(priv->dev, - "failed to enable PTP reference clock: %pe\n", - ERR_PTR(ret)); - return ret; - } - stmmac_config_hw_tstamping(priv, priv->ptpaddr, systime_flags); priv->systime_flags = systime_flags; @@ -2755,6 +2746,14 @@ static int stmmac_hw_setup(struct net_device *dev, bool ptp_register) stmmac_mmc_setup(priv); + if (ptp_register) { + ret = clk_prepare_enable(priv->plat->clk_ptp_ref); + if (ret < 0) + netdev_warn(priv->dev, + "failed to enable PTP reference clock: %pe\n", + ERR_PTR(ret)); + } + ret = stmmac_init_ptp(priv); if (ret == -EOPNOTSUPP) netdev_warn(priv->dev, "PTP not supported by HW\n"); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c index b40b962055fa..f70d8d1ce329 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c @@ -814,7 +814,13 @@ static int __maybe_unused stmmac_pltfr_noirq_resume(struct device *dev) if (ret) return ret; - stmmac_init_tstamp_counter(priv, priv->systime_flags); + ret = clk_prepare_enable(priv->plat->clk_ptp_ref); + if (ret < 0) { + netdev_warn(priv->dev, + "failed to enable PTP reference clock: 
%pe\n", + ERR_PTR(ret)); + return ret; + } } return 0; From f3da643d87636b2605190b292f7c5ea7222707fd Mon Sep 17 00:00:00 2001 From: Robert Hancock Date: Tue, 14 Jun 2022 17:29:19 -0600 Subject: [PATCH 1635/1842] i2c: cadence: Change large transfer count reset logic to be unconditional [ Upstream commit 4ca8ca873d454635c20d508261bfc0081af75cf8 ] Problems were observed on the Xilinx ZynqMP platform with large I2C reads. When a read of 277 bytes was performed, the controller NAKed the transfer after only 252 bytes were transferred and returned an ENXIO error on the transfer. There is some code in cdns_i2c_master_isr to handle this case by resetting the transfer count in the controller before it reaches 0, to allow larger transfers to work, but it was conditional on the CDNS_I2C_BROKEN_HOLD_BIT quirk being set on the controller, and ZynqMP uses the r1p14 version of the core where this quirk is not being set. The requirement to do this to support larger reads seems like an inherently required workaround due to the core only having an 8-bit transfer size register, so it does not appear that this should be conditional on the broken HOLD bit quirk which is used elsewhere in the driver. Remove the dependency on the CDNS_I2C_BROKEN_HOLD_BIT for this transfer size reset logic to fix this problem. Fixes: 63cab195bf49 ("i2c: removed work arounds in i2c driver for Zynq Ultrascale+ MPSoC") Signed-off-by: Robert Hancock Reviewed-by: Shubhrajyoti Datta Acked-by: Michal Simek Signed-off-by: Wolfram Sang Signed-off-by: Sasha Levin --- drivers/i2c/busses/i2c-cadence.c | 30 +++++------------------------- 1 file changed, 5 insertions(+), 25 deletions(-) diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c index 01564bd96c62..0abce487ead7 100644 --- a/drivers/i2c/busses/i2c-cadence.c +++ b/drivers/i2c/busses/i2c-cadence.c @@ -386,9 +386,9 @@ static irqreturn_t cdns_i2c_slave_isr(void *ptr) */ static irqreturn_t cdns_i2c_master_isr(void *ptr) { - unsigned int isr_status, avail_bytes, updatetx; + unsigned int isr_status, avail_bytes; unsigned int bytes_to_send; - bool hold_quirk; + bool updatetx; struct cdns_i2c *id = ptr; /* Signal completion only after everything is updated */ int done_flag = 0; @@ -408,11 +408,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr) * Check if transfer size register needs to be updated again for a * large data receive operation. */ - updatetx = 0; - if (id->recv_count > id->curr_recv_count) - updatetx = 1; - - hold_quirk = (id->quirks & CDNS_I2C_BROKEN_HOLD_BIT) && updatetx; + updatetx = id->recv_count > id->curr_recv_count; /* When receiving, handle data interrupt and completion interrupt */ if (id->p_recv_buf && @@ -443,7 +439,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr) break; } - if (cdns_is_holdquirk(id, hold_quirk)) + if (cdns_is_holdquirk(id, updatetx)) break; } @@ -454,7 +450,7 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr) * maintain transfer size non-zero while performing a large * receive operation. 
*/ - if (cdns_is_holdquirk(id, hold_quirk)) { + if (cdns_is_holdquirk(id, updatetx)) { /* wait while fifo is full */ while (cdns_i2c_readreg(CDNS_I2C_XFER_SIZE_OFFSET) != (id->curr_recv_count - CDNS_I2C_FIFO_DEPTH)) @@ -476,22 +472,6 @@ static irqreturn_t cdns_i2c_master_isr(void *ptr) CDNS_I2C_XFER_SIZE_OFFSET); id->curr_recv_count = id->recv_count; } - } else if (id->recv_count && !hold_quirk && - !id->curr_recv_count) { - - /* Set the slave address in address register*/ - cdns_i2c_writereg(id->p_msg->addr & CDNS_I2C_ADDR_MASK, - CDNS_I2C_ADDR_OFFSET); - - if (id->recv_count > CDNS_I2C_TRANSFER_SIZE) { - cdns_i2c_writereg(CDNS_I2C_TRANSFER_SIZE, - CDNS_I2C_XFER_SIZE_OFFSET); - id->curr_recv_count = CDNS_I2C_TRANSFER_SIZE; - } else { - cdns_i2c_writereg(id->recv_count, - CDNS_I2C_XFER_SIZE_OFFSET); - id->curr_recv_count = id->recv_count; - } } /* Clear hold (if not repeated start) and signal completion */ From a3ac79f38d354b10925824899cdbd2caadce55ba Mon Sep 17 00:00:00 2001 From: Junxiao Chang Date: Fri, 15 Jul 2022 15:47:01 +0800 Subject: [PATCH 1636/1842] net: stmmac: fix dma queue left shift overflow issue [ Upstream commit 613b065ca32e90209024ec4a6bb5ca887ee70980 ] When queue number is > 4, left shift overflows due to 32 bits integer variable. Mask calculation is wrong for MTL_RXQ_DMA_MAP1. If CONFIG_UBSAN is enabled, kernel dumps below warning: [ 10.363842] ================================================================== [ 10.363882] UBSAN: shift-out-of-bounds in /build/linux-intel-iotg-5.15-8e6Tf4/ linux-intel-iotg-5.15-5.15.0/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c:224:12 [ 10.363929] shift exponent 40 is too large for 32-bit type 'unsigned int' [ 10.363953] CPU: 1 PID: 599 Comm: NetworkManager Not tainted 5.15.0-1003-intel-iotg [ 10.363956] Hardware name: ADLINK Technology Inc. LEC-EL/LEC-EL, BIOS 0.15.11 12/22/2021 [ 10.363958] Call Trace: [ 10.363960] [ 10.363963] dump_stack_lvl+0x4a/0x5f [ 10.363971] dump_stack+0x10/0x12 [ 10.363974] ubsan_epilogue+0x9/0x45 [ 10.363976] __ubsan_handle_shift_out_of_bounds.cold+0x61/0x10e [ 10.363979] ? wake_up_klogd+0x4a/0x50 [ 10.363983] ? vprintk_emit+0x8f/0x240 [ 10.363986] dwmac4_map_mtl_dma.cold+0x42/0x91 [stmmac] [ 10.364001] stmmac_mtl_configuration+0x1ce/0x7a0 [stmmac] [ 10.364009] ? dwmac410_dma_init_channel+0x70/0x70 [stmmac] [ 10.364020] stmmac_hw_setup.cold+0xf/0xb14 [stmmac] [ 10.364030] ? page_pool_alloc_pages+0x4d/0x70 [ 10.364034] ? stmmac_clear_tx_descriptors+0x6e/0xe0 [stmmac] [ 10.364042] stmmac_open+0x39e/0x920 [stmmac] [ 10.364050] __dev_open+0xf0/0x1a0 [ 10.364054] __dev_change_flags+0x188/0x1f0 [ 10.364057] dev_change_flags+0x26/0x60 [ 10.364059] do_setlink+0x908/0xc40 [ 10.364062] ? do_setlink+0xb10/0xc40 [ 10.364064] ? __nla_validate_parse+0x4c/0x1a0 [ 10.364068] __rtnl_newlink+0x597/0xa10 [ 10.364072] ? __nla_reserve+0x41/0x50 [ 10.364074] ? __kmalloc_node_track_caller+0x1d0/0x4d0 [ 10.364079] ? pskb_expand_head+0x75/0x310 [ 10.364082] ? nla_reserve_64bit+0x21/0x40 [ 10.364086] ? skb_free_head+0x65/0x80 [ 10.364089] ? security_sock_rcv_skb+0x2c/0x50 [ 10.364094] ? __cond_resched+0x19/0x30 [ 10.364097] ? kmem_cache_alloc_trace+0x15a/0x420 [ 10.364100] rtnl_newlink+0x49/0x70 This change fixes MTL_RXQ_DMA_MAP1 mask issue and channel/queue mapping warning. Fixes: d43042f4da3e ("net: stmmac: mapping mtl rx to dma channel") BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=216195 Reported-by: Cedric Wassenaar Signed-off-by: Junxiao Chang Reviewed-by: Florian Fainelli Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c index 16c538cfaf59..2e71e510e127 100644 --- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c +++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c @@ -215,6 +215,9 @@ static void dwmac4_map_mtl_dma(struct mac_device_info *hw, u32 queue, u32 chan) if (queue == 0 || queue == 4) { value &= ~MTL_RXQ_DMA_Q04MDMACH_MASK; value |= MTL_RXQ_DMA_Q04MDMACH(chan); + } else if (queue > 4) { + value &= ~MTL_RXQ_DMA_QXMDMACH_MASK(queue - 4); + value |= MTL_RXQ_DMA_QXMDMACH(chan, queue - 4); } else { value &= ~MTL_RXQ_DMA_QXMDMACH_MASK(queue); value |= MTL_RXQ_DMA_QXMDMACH(chan, queue); From e80ff0b9661384d40e97a0a7d5cc8ae2a00c785d Mon Sep 17 00:00:00 2001 From: Tariq Toukan Date: Fri, 15 Jul 2022 11:42:16 +0300 Subject: [PATCH 1637/1842] net/tls: Fix race in TLS device down flow [ Upstream commit f08d8c1bb97c48f24a82afaa2fd8c140f8d3da8b ] Socket destruction flow and tls_device_down function sync against each other using tls_device_lock and the context refcount, to guarantee the device resources are freed via tls_dev_del() by the end of tls_device_down. In the following unfortunate flow, this won't happen: - refcount is decreased to zero in tls_device_sk_destruct. - tls_device_down starts, skips the context as refcount is zero, going all the way until it flushes the gc work, and returns without freeing the device resources. - only then, tls_device_queue_ctx_destruction is called, queues the gc work and frees the context's device resources. Solve it by decreasing the refcount in the socket's destruction flow under the tls_device_lock, for perfect synchronization. This does not slow down the common likely destructor flow, in which both the refcount is decreased and the spinlock is acquired, anyway. Fixes: e8f69799810c ("net/tls: Add generic NIC offload infrastructure") Reviewed-by: Maxim Mikityanskiy Signed-off-by: Tariq Toukan Reviewed-by: Jakub Kicinski Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/tls/tls_device.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index 6ae2ce411b4b..23eab7ac43ee 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -97,13 +97,16 @@ static void tls_device_queue_ctx_destruction(struct tls_context *ctx) unsigned long flags; spin_lock_irqsave(&tls_device_lock, flags); + if (unlikely(!refcount_dec_and_test(&ctx->refcount))) + goto unlock; + list_move_tail(&ctx->list, &tls_device_gc_list); /* schedule_work inside the spinlock * to make sure tls_device_down waits for that work. */ schedule_work(&tls_device_gc_work); - +unlock: spin_unlock_irqrestore(&tls_device_lock, flags); } @@ -194,8 +197,7 @@ void tls_device_sk_destruct(struct sock *sk) clean_acked_data_disable(inet_csk(sk)); } - if (refcount_dec_and_test(&tls_ctx->refcount)) - tls_device_queue_ctx_destruction(tls_ctx); + tls_device_queue_ctx_destruction(tls_ctx); } EXPORT_SYMBOL_GPL(tls_device_sk_destruct); From 473aad9ad57ff760005377e6f45a2ad4210e08ce Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:41 -0700 Subject: [PATCH 1638/1842] igmp: Fix data-races around sysctl_igmp_llm_reports. [ Upstream commit f6da2267e71106474fbc0943dc24928b9cb79119 ] While reading sysctl_igmp_llm_reports, it can be changed concurrently. 
Thus, we need to add READ_ONCE() to its readers. This test can be packed into a helper, so such changes will be in the follow-up series after net is merged into net-next. if (ipv4_is_local_multicast(pmc->multiaddr) && !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) Fixes: df2cf4a78e48 ("IGMP: Inhibit reports for local multicast groups") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/igmp.c | 21 +++++++++++++-------- 1 file changed, 13 insertions(+), 8 deletions(-) diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index 3817988a5a1d..fd9306950a26 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -467,7 +467,8 @@ static struct sk_buff *add_grec(struct sk_buff *skb, struct ip_mc_list *pmc, if (pmc->multiaddr == IGMP_ALL_HOSTS) return skb; - if (ipv4_is_local_multicast(pmc->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(pmc->multiaddr) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return skb; mtu = READ_ONCE(dev->mtu); @@ -593,7 +594,7 @@ static int igmpv3_send_report(struct in_device *in_dev, struct ip_mc_list *pmc) if (pmc->multiaddr == IGMP_ALL_HOSTS) continue; if (ipv4_is_local_multicast(pmc->multiaddr) && - !net->ipv4.sysctl_igmp_llm_reports) + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) continue; spin_lock_bh(&pmc->lock); if (pmc->sfcount[MCAST_EXCLUDE]) @@ -736,7 +737,8 @@ static int igmp_send_report(struct in_device *in_dev, struct ip_mc_list *pmc, if (type == IGMPV3_HOST_MEMBERSHIP_REPORT) return igmpv3_send_report(in_dev, pmc); - if (ipv4_is_local_multicast(group) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(group) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return 0; if (type == IGMP_HOST_LEAVE_MESSAGE) @@ -920,7 +922,8 @@ static bool igmp_heard_report(struct in_device *in_dev, __be32 group) if (group == IGMP_ALL_HOSTS) return false; - if (ipv4_is_local_multicast(group) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(group) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return false; rcu_read_lock(); @@ -1045,7 +1048,7 @@ static bool igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb, if (im->multiaddr == IGMP_ALL_HOSTS) continue; if (ipv4_is_local_multicast(im->multiaddr) && - !net->ipv4.sysctl_igmp_llm_reports) + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) continue; spin_lock_bh(&im->lock); if (im->tm_running) @@ -1296,7 +1299,8 @@ static void __igmp_group_dropped(struct ip_mc_list *im, gfp_t gfp) #ifdef CONFIG_IP_MULTICAST if (im->multiaddr == IGMP_ALL_HOSTS) return; - if (ipv4_is_local_multicast(im->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(im->multiaddr) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return; reporter = im->reporter; @@ -1338,7 +1342,8 @@ static void igmp_group_added(struct ip_mc_list *im) #ifdef CONFIG_IP_MULTICAST if (im->multiaddr == IGMP_ALL_HOSTS) return; - if (ipv4_is_local_multicast(im->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports) + if (ipv4_is_local_multicast(im->multiaddr) && + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) return; if (in_dev->dead) @@ -1642,7 +1647,7 @@ static void ip_mc_rejoin_groups(struct in_device *in_dev) if (im->multiaddr == IGMP_ALL_HOSTS) continue; if (ipv4_is_local_multicast(im->multiaddr) && - !net->ipv4.sysctl_igmp_llm_reports) + !READ_ONCE(net->ipv4.sysctl_igmp_llm_reports)) continue; /* a failover is happening and switches From 7d26db0053548f66667da9de680fa45fa6819da5 Mon Sep 17 00:00:00 2001 From: Kuniyuki 
Iwashima Date: Fri, 15 Jul 2022 10:17:42 -0700 Subject: [PATCH 1639/1842] igmp: Fix a data-race around sysctl_igmp_max_memberships. [ Upstream commit 6305d821e3b9b5379d348528e5b5faf316383bc2 ] While reading sysctl_igmp_max_memberships, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/igmp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index fd9306950a26..1a70664dcb1a 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -2197,7 +2197,7 @@ static int __ip_mc_join_group(struct sock *sk, struct ip_mreqn *imr, count++; } err = -ENOBUFS; - if (count >= net->ipv4.sysctl_igmp_max_memberships) + if (count >= READ_ONCE(net->ipv4.sysctl_igmp_max_memberships)) goto done; iml = sock_kmalloc(sk, sizeof(*iml), GFP_KERNEL); if (!iml) From f85119fb3fd6985f0f68f3454a826f1e267b4b71 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:43 -0700 Subject: [PATCH 1640/1842] igmp: Fix data-races around sysctl_igmp_max_msf. [ Upstream commit 6ae0f2e553737b8cce49a1372573c81130ffa80e ] While reading sysctl_igmp_max_msf, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/igmp.c | 2 +- net/ipv4/ip_sockglue.c | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index 1a70664dcb1a..428cc3a4c36f 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -2384,7 +2384,7 @@ int ip_mc_source(int add, int omode, struct sock *sk, struct } /* else, add a new source to the filter */ - if (psl && psl->sl_count >= net->ipv4.sysctl_igmp_max_msf) { + if (psl && psl->sl_count >= READ_ONCE(net->ipv4.sysctl_igmp_max_msf)) { err = -ENOBUFS; goto done; } diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c index ec6036713e2c..22507a6a3f71 100644 --- a/net/ipv4/ip_sockglue.c +++ b/net/ipv4/ip_sockglue.c @@ -783,7 +783,7 @@ static int ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval, int optlen) /* numsrc >= (4G-140)/128 overflow in 32 bits */ err = -ENOBUFS; if (gsf->gf_numsrc >= 0x1ffffff || - gsf->gf_numsrc > sock_net(sk)->ipv4.sysctl_igmp_max_msf) + gsf->gf_numsrc > READ_ONCE(sock_net(sk)->ipv4.sysctl_igmp_max_msf)) goto out_free_gsf; err = -EINVAL; @@ -832,7 +832,7 @@ static int compat_ip_set_mcast_msfilter(struct sock *sk, sockptr_t optval, /* numsrc >= (4G-140)/128 overflow in 32 bits */ err = -ENOBUFS; - if (n > sock_net(sk)->ipv4.sysctl_igmp_max_msf) + if (n > READ_ONCE(sock_net(sk)->ipv4.sysctl_igmp_max_msf)) goto out_free_gsf; err = set_mcast_msfilter(sk, gf32->gf_interface, n, gf32->gf_fmode, &gf32->gf_group, gf32->gf_slist); @@ -1242,7 +1242,7 @@ static int do_ip_setsockopt(struct sock *sk, int level, int optname, } /* numsrc >= (1G-4) overflow in 32 bits */ if (msf->imsf_numsrc >= 0x3ffffffcU || - msf->imsf_numsrc > net->ipv4.sysctl_igmp_max_msf) { + msf->imsf_numsrc > READ_ONCE(net->ipv4.sysctl_igmp_max_msf)) { kfree(msf); err = -ENOBUFS; break; From fc489055e7e8abe409224ff18863999cede92fbf Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:45 -0700 Subject: [PATCH 1641/1842] tcp: Fix data-races around keepalive sysctl knobs. 
[ Upstream commit f2f316e287e6c2e3a1c5bab8d9b77ee03daa0463 ] While reading sysctl_tcp_keepalive_(time|probes|intvl), they can be changed concurrently. Thus, we need to add READ_ONCE() to their readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/tcp.h | 9 ++++++--- net/smc/smc_llc.c | 2 +- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index 2a28e0925573..9ef9fd0677b5 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1447,21 +1447,24 @@ static inline int keepalive_intvl_when(const struct tcp_sock *tp) { struct net *net = sock_net((struct sock *)tp); - return tp->keepalive_intvl ? : net->ipv4.sysctl_tcp_keepalive_intvl; + return tp->keepalive_intvl ? : + READ_ONCE(net->ipv4.sysctl_tcp_keepalive_intvl); } static inline int keepalive_time_when(const struct tcp_sock *tp) { struct net *net = sock_net((struct sock *)tp); - return tp->keepalive_time ? : net->ipv4.sysctl_tcp_keepalive_time; + return tp->keepalive_time ? : + READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time); } static inline int keepalive_probes(const struct tcp_sock *tp) { struct net *net = sock_net((struct sock *)tp); - return tp->keepalive_probes ? : net->ipv4.sysctl_tcp_keepalive_probes; + return tp->keepalive_probes ? : + READ_ONCE(net->ipv4.sysctl_tcp_keepalive_probes); } static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp) diff --git a/net/smc/smc_llc.c b/net/smc/smc_llc.c index ee1f0fdba085..0ef15f8fba90 100644 --- a/net/smc/smc_llc.c +++ b/net/smc/smc_llc.c @@ -1787,7 +1787,7 @@ void smc_llc_lgr_init(struct smc_link_group *lgr, struct smc_sock *smc) init_waitqueue_head(&lgr->llc_flow_waiter); init_waitqueue_head(&lgr->llc_msg_waiter); mutex_init(&lgr->llc_conf_mutex); - lgr->llc_testlink_time = net->ipv4.sysctl_tcp_keepalive_time; + lgr->llc_testlink_time = READ_ONCE(net->ipv4.sysctl_tcp_keepalive_time); } /* called after lgr was removed from lgr_list */ From dc1a78a2b274bad6b1be1561759ab670640af2d7 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:47 -0700 Subject: [PATCH 1642/1842] tcp: Fix data-races around sysctl_tcp_syncookies. [ Upstream commit f2e383b5bb6bbc60a0b94b87b3e49a2b1aefd11e ] While reading sysctl_tcp_syncookies, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/core/filter.c | 4 ++-- net/ipv4/syncookies.c | 3 ++- net/ipv4/tcp_input.c | 20 ++++++++++++-------- net/ipv6/syncookies.c | 3 ++- 4 files changed, 18 insertions(+), 12 deletions(-) diff --git a/net/core/filter.c b/net/core/filter.c index 34ae30503ac4..e2b491665775 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -6507,7 +6507,7 @@ BPF_CALL_5(bpf_tcp_check_syncookie, struct sock *, sk, void *, iph, u32, iph_len if (sk->sk_protocol != IPPROTO_TCP || sk->sk_state != TCP_LISTEN) return -EINVAL; - if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies)) return -EINVAL; if (!th->ack || th->rst || th->syn) @@ -6582,7 +6582,7 @@ BPF_CALL_5(bpf_tcp_gen_syncookie, struct sock *, sk, void *, iph, u32, iph_len, if (sk->sk_protocol != IPPROTO_TCP || sk->sk_state != TCP_LISTEN) return -EINVAL; - if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies)) return -ENOENT; if (!th->syn || th->ack || th->fin || th->rst) diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c index 10b469aee492..b52cc46bdadd 100644 --- a/net/ipv4/syncookies.c +++ b/net/ipv4/syncookies.c @@ -342,7 +342,8 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb) struct flowi4 fl4; u32 tsoff = 0; - if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies || !th->ack || th->rst) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) || + !th->ack || th->rst) goto out; if (tcp_synq_no_recent_overflow(sk)) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 54ed68e05b66..f514d0b4b1e0 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -6683,11 +6683,14 @@ static bool tcp_syn_flood_action(const struct sock *sk, const char *proto) { struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue; const char *msg = "Dropping request"; - bool want_cookie = false; struct net *net = sock_net(sk); + bool want_cookie = false; + u8 syncookies; + + syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies); #ifdef CONFIG_SYN_COOKIES - if (net->ipv4.sysctl_tcp_syncookies) { + if (syncookies) { msg = "Sending cookies"; want_cookie = true; __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPREQQFULLDOCOOKIES); @@ -6695,8 +6698,7 @@ static bool tcp_syn_flood_action(const struct sock *sk, const char *proto) #endif __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPREQQFULLDROP); - if (!queue->synflood_warned && - net->ipv4.sysctl_tcp_syncookies != 2 && + if (!queue->synflood_warned && syncookies != 2 && xchg(&queue->synflood_warned, 1) == 0) net_info_ratelimited("%s: Possible SYN flooding on port %d. %s. Check SNMP counters.\n", proto, sk->sk_num, msg); @@ -6745,7 +6747,7 @@ u16 tcp_get_syncookie_mss(struct request_sock_ops *rsk_ops, struct tcp_sock *tp = tcp_sk(sk); u16 mss; - if (sock_net(sk)->ipv4.sysctl_tcp_syncookies != 2 && + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) != 2 && !inet_csk_reqsk_queue_is_full(sk)) return 0; @@ -6779,13 +6781,15 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops, bool want_cookie = false; struct dst_entry *dst; struct flowi fl; + u8 syncookies; + + syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies); /* TW buckets are converted to open requests without * limitations, they conserve resources and peer is * evidently real one. 
*/ - if ((net->ipv4.sysctl_tcp_syncookies == 2 || - inet_csk_reqsk_queue_is_full(sk)) && !isn) { + if ((syncookies == 2 || inet_csk_reqsk_queue_is_full(sk)) && !isn) { want_cookie = tcp_syn_flood_action(sk, rsk_ops->slab_name); if (!want_cookie) goto drop; @@ -6840,7 +6844,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops, if (!want_cookie && !isn) { /* Kill the following clause, if you dislike this way. */ - if (!net->ipv4.sysctl_tcp_syncookies && + if (!syncookies && (net->ipv4.sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) < (net->ipv4.sysctl_max_syn_backlog >> 2)) && !tcp_peer_is_proven(req, dst)) { diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c index ca92dd6981de..12ae817aaf2e 100644 --- a/net/ipv6/syncookies.c +++ b/net/ipv6/syncookies.c @@ -141,7 +141,8 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb) __u8 rcv_wscale; u32 tsoff = 0; - if (!sock_net(sk)->ipv4.sysctl_tcp_syncookies || !th->ack || th->rst) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) || + !th->ack || th->rst) goto out; if (tcp_synq_no_recent_overflow(sk)) From 474510e174fb2bc9751e04885e4cc30023bcf9d7 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:49 -0700 Subject: [PATCH 1643/1842] tcp: Fix data-races around sysctl_tcp_reordering. [ Upstream commit 46778cd16e6a5ad1b2e3a91f6c057c907379418e ] While reading sysctl_tcp_reordering, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp.c | 2 +- net/ipv4/tcp_input.c | 10 +++++++--- net/ipv4/tcp_metrics.c | 3 ++- 3 files changed, 10 insertions(+), 5 deletions(-) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 19c13ad5c121..5582b05d0638 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -440,7 +440,7 @@ void tcp_init_sock(struct sock *sk) tp->snd_cwnd_clamp = ~0; tp->mss_cache = TCP_MSS_DEFAULT; - tp->reordering = sock_net(sk)->ipv4.sysctl_tcp_reordering; + tp->reordering = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reordering); tcp_assign_congestion_control(sk); tp->tsoffset = 0; diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index f514d0b4b1e0..070e7015e9c9 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -2099,6 +2099,7 @@ void tcp_enter_loss(struct sock *sk) struct tcp_sock *tp = tcp_sk(sk); struct net *net = sock_net(sk); bool new_recovery = icsk->icsk_ca_state < TCP_CA_Recovery; + u8 reordering; tcp_timeout_mark_lost(sk); @@ -2119,10 +2120,12 @@ void tcp_enter_loss(struct sock *sk) /* Timeout in disordered state after receiving substantial DUPACKs * suggests that the degree of reordering is over-estimated. */ + reordering = READ_ONCE(net->ipv4.sysctl_tcp_reordering); if (icsk->icsk_ca_state <= TCP_CA_Disorder && - tp->sacked_out >= net->ipv4.sysctl_tcp_reordering) + tp->sacked_out >= reordering) tp->reordering = min_t(unsigned int, tp->reordering, - net->ipv4.sysctl_tcp_reordering); + reordering); + tcp_set_ca_state(sk, TCP_CA_Loss); tp->high_seq = tp->snd_nxt; tcp_ecn_queue_cwr(tp); @@ -3411,7 +3414,8 @@ static inline bool tcp_may_raise_cwnd(const struct sock *sk, const int flag) * new SACK or ECE mark may first advance cwnd here and later reduce * cwnd in tcp_fastretrans_alert() based on more states. 
*/ - if (tcp_sk(sk)->reordering > sock_net(sk)->ipv4.sysctl_tcp_reordering) + if (tcp_sk(sk)->reordering > + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reordering)) return flag & FLAG_FORWARD_PROGRESS; return flag & FLAG_DATA_ACKED; diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index 6b27c481fe18..8d7e32f4abf6 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -428,7 +428,8 @@ void tcp_update_metrics(struct sock *sk) if (!tcp_metric_locked(tm, TCP_METRIC_REORDERING)) { val = tcp_metric_get(tm, TCP_METRIC_REORDERING); if (val < tp->reordering && - tp->reordering != net->ipv4.sysctl_tcp_reordering) + tp->reordering != + READ_ONCE(net->ipv4.sysctl_tcp_reordering)) tcp_metric_set(tm, TCP_METRIC_REORDERING, tp->reordering); } From 768e424607203ae0d6fcba708cb41e7962ed9d6a Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:50 -0700 Subject: [PATCH 1644/1842] tcp: Fix data-races around some timeout sysctl knobs. [ Upstream commit 39e24435a776e9de5c6dd188836cf2523547804b ] While reading these sysctl knobs, they can be changed concurrently. Thus, we need to add READ_ONCE() to their readers. - tcp_retries1 - tcp_retries2 - tcp_orphan_retries - tcp_fin_timeout Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/tcp.h | 3 ++- net/ipv4/tcp.c | 2 +- net/ipv4/tcp_output.c | 2 +- net/ipv4/tcp_timer.c | 10 +++++----- 4 files changed, 9 insertions(+), 8 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index 9ef9fd0677b5..da75513a77d4 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1477,7 +1477,8 @@ static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp) static inline int tcp_fin_time(const struct sock *sk) { - int fin_timeout = tcp_sk(sk)->linger2 ? : sock_net(sk)->ipv4.sysctl_tcp_fin_timeout; + int fin_timeout = tcp_sk(sk)->linger2 ? : + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fin_timeout); const int rto = inet_csk(sk)->icsk_rto; if (fin_timeout < (rto << 2) - (rto >> 1)) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 5582b05d0638..6cd5ce3eac0c 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -3727,7 +3727,7 @@ static int do_tcp_getsockopt(struct sock *sk, int level, case TCP_LINGER2: val = tp->linger2; if (val >= 0) - val = (val ? : net->ipv4.sysctl_tcp_fin_timeout) / HZ; + val = (val ? : READ_ONCE(net->ipv4.sysctl_tcp_fin_timeout)) / HZ; break; case TCP_DEFER_ACCEPT: val = retrans_to_secs(icsk->icsk_accept_queue.rskq_defer_accept, diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index b58697336bd4..e7348e70e6e3 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -4092,7 +4092,7 @@ void tcp_send_probe0(struct sock *sk) icsk->icsk_probes_out++; if (err <= 0) { - if (icsk->icsk_backoff < net->ipv4.sysctl_tcp_retries2) + if (icsk->icsk_backoff < READ_ONCE(net->ipv4.sysctl_tcp_retries2)) icsk->icsk_backoff++; timeout = tcp_probe0_when(sk, TCP_RTO_MAX); } else { diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index da92c5d70b70..e20fd86a2a89 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -143,7 +143,7 @@ static int tcp_out_of_resources(struct sock *sk, bool do_reset) */ static int tcp_orphan_retries(struct sock *sk, bool alive) { - int retries = sock_net(sk)->ipv4.sysctl_tcp_orphan_retries; /* May be zero. */ + int retries = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_orphan_retries); /* May be zero. */ /* We know from an ICMP that something is wrong. 
*/ if (sk->sk_err_soft && !alive) @@ -242,14 +242,14 @@ static int tcp_write_timeout(struct sock *sk) retry_until = icsk->icsk_syn_retries ? : net->ipv4.sysctl_tcp_syn_retries; expired = icsk->icsk_retransmits >= retry_until; } else { - if (retransmits_timed_out(sk, net->ipv4.sysctl_tcp_retries1, 0)) { + if (retransmits_timed_out(sk, READ_ONCE(net->ipv4.sysctl_tcp_retries1), 0)) { /* Black hole detection */ tcp_mtu_probing(icsk, sk); __dst_negative_advice(sk); } - retry_until = net->ipv4.sysctl_tcp_retries2; + retry_until = READ_ONCE(net->ipv4.sysctl_tcp_retries2); if (sock_flag(sk, SOCK_DEAD)) { const bool alive = icsk->icsk_rto < TCP_RTO_MAX; @@ -380,7 +380,7 @@ static void tcp_probe_timer(struct sock *sk) msecs_to_jiffies(icsk->icsk_user_timeout)) goto abort; - max_probes = sock_net(sk)->ipv4.sysctl_tcp_retries2; + max_probes = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_retries2); if (sock_flag(sk, SOCK_DEAD)) { const bool alive = inet_csk_rto_backoff(icsk, TCP_RTO_MAX) < TCP_RTO_MAX; @@ -585,7 +585,7 @@ void tcp_retransmit_timer(struct sock *sk) } inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, tcp_clamp_rto_to_user_timeout(sk), TCP_RTO_MAX); - if (retransmits_timed_out(sk, net->ipv4.sysctl_tcp_retries1 + 1, 0)) + if (retransmits_timed_out(sk, READ_ONCE(net->ipv4.sysctl_tcp_retries1) + 1, 0)) __sk_dst_reset(sk); out:; From fd6f1284e380c377932186042ff0b5c987fb2b92 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:51 -0700 Subject: [PATCH 1645/1842] tcp: Fix a data-race around sysctl_tcp_notsent_lowat. [ Upstream commit 55be873695ed8912eb77ff46d1d1cadf028bd0f3 ] While reading sysctl_tcp_notsent_lowat, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: c9bee3b7fdec ("tcp: TCP_NOTSENT_LOWAT socket option") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/tcp.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index da75513a77d4..aa46f4016245 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1973,7 +1973,7 @@ void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr); static inline u32 tcp_notsent_lowat(const struct tcp_sock *tp) { struct net *net = sock_net((struct sock *)tp); - return tp->notsent_lowat ?: net->ipv4.sysctl_tcp_notsent_lowat; + return tp->notsent_lowat ?: READ_ONCE(net->ipv4.sysctl_tcp_notsent_lowat); } /* @wake is one when sk_stream_write_space() calls us. From b6c189aa801a9c8749952c65038bc08d4fde8ce4 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:52 -0700 Subject: [PATCH 1646/1842] tcp: Fix a data-race around sysctl_tcp_tw_reuse. [ Upstream commit cbfc6495586a3f09f6f07d9fb3c7cafe807e3c55 ] While reading sysctl_tcp_tw_reuse, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_ipv4.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index 32c60122db9c..d5f13ff7d900 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -106,10 +106,10 @@ static u32 tcp_v4_init_ts_off(const struct net *net, const struct sk_buff *skb) int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp) { + int reuse = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_tw_reuse); const struct inet_timewait_sock *tw = inet_twsk(sktw); const struct tcp_timewait_sock *tcptw = tcp_twsk(sktw); struct tcp_sock *tp = tcp_sk(sk); - int reuse = sock_net(sk)->ipv4.sysctl_tcp_tw_reuse; if (reuse == 2) { /* Still does not detect *everything* that goes through From b3ce32e33ab71f5b6b69cba35825f43e5d979f55 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:53 -0700 Subject: [PATCH 1647/1842] tcp: Fix data-races around sysctl_max_syn_backlog. [ Upstream commit 79539f34743d3e14cc1fa6577d326a82cc64d62f ] While reading sysctl_max_syn_backlog, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 070e7015e9c9..5cbabe0e42c9 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -6847,10 +6847,12 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops, goto drop_and_free; if (!want_cookie && !isn) { + int max_syn_backlog = READ_ONCE(net->ipv4.sysctl_max_syn_backlog); + /* Kill the following clause, if you dislike this way. */ if (!syncookies && - (net->ipv4.sysctl_max_syn_backlog - inet_csk_reqsk_queue_len(sk) < - (net->ipv4.sysctl_max_syn_backlog >> 2)) && + (max_syn_backlog - inet_csk_reqsk_queue_len(sk) < + (max_syn_backlog >> 2)) && !tcp_peer_is_proven(req, dst)) { /* Without syncookies last quarter of * backlog is filled with destinations, From 22938534c611136f35e2ca545bb668073ca5ef49 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:54 -0700 Subject: [PATCH 1648/1842] tcp: Fix data-races around sysctl_tcp_fastopen. [ Upstream commit 5a54213318c43f4009ae158347aa6016e3b9b55a ] While reading sysctl_tcp_fastopen, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 2100c8d2d9db ("net-tcp: Fast Open base") Signed-off-by: Kuniyuki Iwashima Acked-by: Yuchung Cheng Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/af_inet.c | 2 +- net/ipv4/tcp.c | 6 ++++-- net/ipv4/tcp_fastopen.c | 4 ++-- 3 files changed, 7 insertions(+), 5 deletions(-) diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index 9d1ff3baa213..a733ce1a3f8f 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -220,7 +220,7 @@ int inet_listen(struct socket *sock, int backlog) * because the socket was in TCP_LISTEN state previously but * was shutdown() rather than close(). 
*/ - tcp_fastopen = sock_net(sk)->ipv4.sysctl_tcp_fastopen; + tcp_fastopen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen); if ((tcp_fastopen & TFO_SERVER_WO_SOCKOPT1) && (tcp_fastopen & TFO_SERVER_ENABLE) && !inet_csk(sk)->icsk_accept_queue.fastopenq.max_qlen) { diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 6cd5ce3eac0c..f1fd26bb199c 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1148,7 +1148,8 @@ static int tcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg, struct sockaddr *uaddr = msg->msg_name; int err, flags; - if (!(sock_net(sk)->ipv4.sysctl_tcp_fastopen & TFO_CLIENT_ENABLE) || + if (!(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen) & + TFO_CLIENT_ENABLE) || (uaddr && msg->msg_namelen >= sizeof(uaddr->sa_family) && uaddr->sa_family == AF_UNSPEC)) return -EOPNOTSUPP; @@ -3390,7 +3391,8 @@ static int do_tcp_setsockopt(struct sock *sk, int level, int optname, case TCP_FASTOPEN_CONNECT: if (val > 1 || val < 0) { err = -EINVAL; - } else if (net->ipv4.sysctl_tcp_fastopen & TFO_CLIENT_ENABLE) { + } else if (READ_ONCE(net->ipv4.sysctl_tcp_fastopen) & + TFO_CLIENT_ENABLE) { if (sk->sk_state == TCP_CLOSE) tp->fastopen_connect = val; else diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c index 107111984384..ed7aa6ae7b51 100644 --- a/net/ipv4/tcp_fastopen.c +++ b/net/ipv4/tcp_fastopen.c @@ -349,7 +349,7 @@ static bool tcp_fastopen_no_cookie(const struct sock *sk, const struct dst_entry *dst, int flag) { - return (sock_net(sk)->ipv4.sysctl_tcp_fastopen & flag) || + return (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen) & flag) || tcp_sk(sk)->fastopen_no_cookie || (dst && dst_metric(dst, RTAX_FASTOPEN_NO_COOKIE)); } @@ -364,7 +364,7 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb, const struct dst_entry *dst) { bool syn_data = TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq + 1; - int tcp_fastopen = sock_net(sk)->ipv4.sysctl_tcp_fastopen; + int tcp_fastopen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen); struct tcp_fastopen_cookie valid_foc = { .len = -1 }; struct sock *child; int ret = 0; From 0dc2f19d8c2636cebda7976b5ea40c6d69f0d891 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:55 -0700 Subject: [PATCH 1649/1842] tcp: Fix data-races around sysctl_tcp_fastopen_blackhole_timeout. [ Upstream commit 021266ec640c7a4527e6cd4b7349a512b351de1d ] While reading sysctl_tcp_fastopen_blackhole_timeout, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: cf1ef3f0719b ("net/tcp_fastopen: Disable active side TFO in certain scenarios") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_fastopen.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c index ed7aa6ae7b51..39fb037ce5f3 100644 --- a/net/ipv4/tcp_fastopen.c +++ b/net/ipv4/tcp_fastopen.c @@ -506,7 +506,7 @@ void tcp_fastopen_active_disable(struct sock *sk) { struct net *net = sock_net(sk); - if (!sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout)) return; /* Paired with READ_ONCE() in tcp_fastopen_active_should_disable() */ @@ -527,7 +527,8 @@ void tcp_fastopen_active_disable(struct sock *sk) */ bool tcp_fastopen_active_should_disable(struct sock *sk) { - unsigned int tfo_bh_timeout = sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout; + unsigned int tfo_bh_timeout = + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_fastopen_blackhole_timeout); unsigned long timeout; int tfo_da_times; int multiplier; From c6af94324911ef0846af1a5ce5e049ca736db34b Mon Sep 17 00:00:00 2001 From: Przemyslaw Patynowski Date: Fri, 24 Jun 2022 17:33:01 -0700 Subject: [PATCH 1650/1842] iavf: Fix handling of dummy receive descriptors [ Upstream commit a9f49e0060301a9bfebeca76739158d0cf91cdf6 ] Fix memory leak caused by not handling dummy receive descriptor properly. iavf_get_rx_buffer now sets the rx_buffer return value for dummy receive descriptors. Without this patch, when the hardware writes a dummy descriptor, iavf would not free the page allocated for the previous receive buffer. This is an unlikely event but can still happen. [Jesse: massaged commit message] Fixes: efa14c398582 ("iavf: allow null RX descriptors") Signed-off-by: Przemyslaw Patynowski Signed-off-by: Jesse Brandeburg Tested-by: Konrad Jankowski Signed-off-by: Tony Nguyen Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/iavf/iavf_txrx.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c index 256fa07d54d5..99983f7a0ce0 100644 --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c @@ -1263,11 +1263,10 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring, { struct iavf_rx_buffer *rx_buffer; - if (!size) - return NULL; - rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean]; prefetchw(rx_buffer->page); + if (!size) + return rx_buffer; /* we are reusing so sync this buffer for CPU use */ dma_sync_single_range_for_cpu(rx_ring->dev, From 6f949e5615f89558416ee18892232305d57a5f38 Mon Sep 17 00:00:00 2001 From: Dawid Lukwinski Date: Fri, 15 Jul 2022 14:45:41 -0700 Subject: [PATCH 1651/1842] i40e: Fix erroneous adapter reinitialization during recovery process [ Upstream commit f838a63369818faadec4ad1736cfbd20ab5da00e ] Fix an issue when driver incorrectly detects state of recovery process and erroneously reinitializes interrupts, which results in a kernel error and call trace message. The issue was caused by a combination of two factors: 1. Assuming the EMP reset issued after completing firmware recovery means the whole recovery process is complete. 2. Erroneous reinitialization of interrupt vector after detecting the above mentioned EMP reset. Fixes (1) by changing how recovery state change is detected and (2) by adjusting the conditional expression to ensure using proper interrupt reinitialization method, depending on the situation. 
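The pattern behind that fix is worth spelling out: sample the recovery report once, keep it in a local, and let every later decision use that same snapshot instead of re-testing state that can flip mid-rebuild. The sketch below is a standalone C illustration, not driver code; hw_reports_recovery() and rebuild() are invented stand-ins for i40e_check_recovery_mode() and the rebuild path.

/* Illustrative only: snapshot a volatile condition once and make all
 * dependent decisions from that single snapshot. */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for a hardware query such as i40e_check_recovery_mode(). */
static bool hw_reports_recovery(void)
{
        return true;
}

static void rebuild(void)
{
        const bool in_recovery = hw_reports_recovery(); /* one snapshot */

        if (in_recovery)
                printf("recovery rebuild: reinit only the misc interrupt vector\n");
        else
                printf("normal rebuild: reinit all interrupt vectors\n");

        /* Re-querying the hardware here instead of reusing the snapshot
         * could return a different answer mid-rebuild and mix the two
         * paths, which is the situation the patch avoids. */
}

int main(void)
{
        rebuild();
        return 0;
}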
Fixes: 4ff0ee1af016 ("i40e: Introduce recovery mode support") Signed-off-by: Dawid Lukwinski Signed-off-by: Jan Sokolowski Tested-by: Konrad Jankowski Signed-off-by: Tony Nguyen Link: https://lore.kernel.org/r/20220715214542.2968762-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/i40e/i40e_main.c | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 58453f7958df..11d4e3ba9af4 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -10187,7 +10187,7 @@ static int i40e_reset(struct i40e_pf *pf) **/ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) { - int old_recovery_mode_bit = test_bit(__I40E_RECOVERY_MODE, pf->state); + const bool is_recovery_mode_reported = i40e_check_recovery_mode(pf); struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi]; struct i40e_hw *hw = &pf->hw; i40e_status ret; @@ -10195,13 +10195,11 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) int v; if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) && - i40e_check_recovery_mode(pf)) { + is_recovery_mode_reported) i40e_set_ethtool_ops(pf->vsi[pf->lan_vsi]->netdev); - } if (test_bit(__I40E_DOWN, pf->state) && - !test_bit(__I40E_RECOVERY_MODE, pf->state) && - !old_recovery_mode_bit) + !test_bit(__I40E_RECOVERY_MODE, pf->state)) goto clear_recovery; dev_dbg(&pf->pdev->dev, "Rebuilding internal switch\n"); @@ -10228,13 +10226,12 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) * accordingly with regard to resources initialization * and deinitialization */ - if (test_bit(__I40E_RECOVERY_MODE, pf->state) || - old_recovery_mode_bit) { + if (test_bit(__I40E_RECOVERY_MODE, pf->state)) { if (i40e_get_capabilities(pf, i40e_aqc_opc_list_func_capabilities)) goto end_unlock; - if (test_bit(__I40E_RECOVERY_MODE, pf->state)) { + if (is_recovery_mode_reported) { /* we're staying in recovery mode so we'll reinitialize * misc vector here */ From b82de63f8f817b5735480293dda8e92ba8170c52 Mon Sep 17 00:00:00 2001 From: Piotr Skajewski Date: Fri, 15 Jul 2022 14:44:56 -0700 Subject: [PATCH 1652/1842] ixgbe: Add locking to prevent panic when setting sriov_numvfs to zero [ Upstream commit 1e53834ce541d4fe271cdcca7703e50be0a44f8a ] It is possible to disable VFs while the PF driver is processing requests from the VF driver. This can result in a panic. BUG: unable to handle kernel paging request at 000000000000106c PGD 0 P4D 0 Oops: 0000 [#1] SMP NOPTI CPU: 8 PID: 0 Comm: swapper/8 Kdump: loaded Tainted: G I --------- - Hardware name: Dell Inc. 
PowerEdge R740/06WXJT, BIOS 2.8.2 08/27/2020 RIP: 0010:ixgbe_msg_task+0x4c8/0x1690 [ixgbe] Code: 00 00 48 8d 04 40 48 c1 e0 05 89 7c 24 24 89 fd 48 89 44 24 10 83 ff 01 0f 84 b8 04 00 00 4c 8b 64 24 10 4d 03 a5 48 22 00 00 <41> 80 7c 24 4c 00 0f 84 8a 03 00 00 0f b7 c7 83 f8 08 0f 84 8f 0a RSP: 0018:ffffb337869f8df8 EFLAGS: 00010002 RAX: 0000000000001020 RBX: 0000000000000000 RCX: 000000000000002b RDX: 0000000000000002 RSI: 0000000000000008 RDI: 0000000000000006 RBP: 0000000000000006 R08: 0000000000000002 R09: 0000000000029780 R10: 00006957d8f42832 R11: 0000000000000000 R12: 0000000000001020 R13: ffff8a00e8978ac0 R14: 000000000000002b R15: ffff8a00e8979c80 FS: 0000000000000000(0000) GS:ffff8a07dfd00000(0000) knlGS:00000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 000000000000106c CR3: 0000000063e10004 CR4: 00000000007726e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 PKRU: 55555554 Call Trace: ? ttwu_do_wakeup+0x19/0x140 ? try_to_wake_up+0x1cd/0x550 ? ixgbevf_update_xcast_mode+0x71/0xc0 [ixgbevf] ixgbe_msix_other+0x17e/0x310 [ixgbe] __handle_irq_event_percpu+0x40/0x180 handle_irq_event_percpu+0x30/0x80 handle_irq_event+0x36/0x53 handle_edge_irq+0x82/0x190 handle_irq+0x1c/0x30 do_IRQ+0x49/0xd0 common_interrupt+0xf/0xf This can be eventually be reproduced with the following script: while : do echo 63 > /sys/class/net//device/sriov_numvfs sleep 1 echo 0 > /sys/class/net//device/sriov_numvfs sleep 1 done Add lock when disabling SR-IOV to prevent process VF mailbox communication. Fixes: d773d1310625 ("ixgbe: Fix memory leak when SR-IOV VFs are direct assigned") Signed-off-by: Piotr Skajewski Tested-by: Marek Szlosek Signed-off-by: Tony Nguyen Link: https://lore.kernel.org/r/20220715214456.2968711-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/ixgbe/ixgbe.h | 1 + drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 3 +++ drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 6 ++++++ 3 files changed, 10 insertions(+) diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h index de0fc6ecf491..27c6f911737b 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h @@ -769,6 +769,7 @@ struct ixgbe_adapter { #ifdef CONFIG_IXGBE_IPSEC struct ixgbe_ipsec *ipsec; #endif /* CONFIG_IXGBE_IPSEC */ + spinlock_t vfs_lock; }; static inline u8 ixgbe_max_rss_indices(struct ixgbe_adapter *adapter) diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index a3a02e2f92f6..b5b8be4672aa 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -6403,6 +6403,9 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter, /* n-tuple support exists, always init our spinlock */ spin_lock_init(&adapter->fdir_perfect_lock); + /* init spinlock to avoid concurrency of VF resources */ + spin_lock_init(&adapter->vfs_lock); + #ifdef CONFIG_IXGBE_DCB ixgbe_init_dcb(adapter); #endif diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c index aaebdae8b5ff..0078ae592616 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c @@ -204,10 +204,13 @@ void ixgbe_enable_sriov(struct ixgbe_adapter *adapter, unsigned int max_vfs) int ixgbe_disable_sriov(struct ixgbe_adapter 
*adapter) { unsigned int num_vfs = adapter->num_vfs, vf; + unsigned long flags; int rss; + spin_lock_irqsave(&adapter->vfs_lock, flags); /* set num VFs to 0 to prevent access to vfinfo */ adapter->num_vfs = 0; + spin_unlock_irqrestore(&adapter->vfs_lock, flags); /* put the reference to all of the vf devices */ for (vf = 0; vf < num_vfs; ++vf) { @@ -1305,8 +1308,10 @@ static void ixgbe_rcv_ack_from_vf(struct ixgbe_adapter *adapter, u32 vf) void ixgbe_msg_task(struct ixgbe_adapter *adapter) { struct ixgbe_hw *hw = &adapter->hw; + unsigned long flags; u32 vf; + spin_lock_irqsave(&adapter->vfs_lock, flags); for (vf = 0; vf < adapter->num_vfs; vf++) { /* process any reset requests */ if (!ixgbe_check_for_rst(hw, vf)) @@ -1320,6 +1325,7 @@ void ixgbe_msg_task(struct ixgbe_adapter *adapter) if (!ixgbe_check_for_ack(hw, vf)) ixgbe_rcv_ack_from_vf(adapter, vf); } + spin_unlock_irqrestore(&adapter->vfs_lock, flags); } void ixgbe_disable_tx_rx(struct ixgbe_adapter *adapter) From 928ded3fc1b9f87fb60b9e977182b5087fed0218 Mon Sep 17 00:00:00 2001 From: Haibo Chen Date: Mon, 18 Jul 2022 16:31:41 +0800 Subject: [PATCH 1653/1842] gpio: pca953x: only use single read/write for No AI mode [ Upstream commit db8edaa09d7461ec08672a92a2eef63d5882bb79 ] For the device use NO AI mode(not support auto address increment), only use the single read/write when config the regmap. We meet issue on PCA9557PW on i.MX8QXP/DXL evk board, this device do not support AI mode, but when do the regmap sync, regmap will sync 3 byte data to register 1, logically this means write first data to register 1, write second data to register 2, write third data to register 3. But this device do not support AI mode, finally, these three data write only into register 1 one by one. the reault is the value of register 1 alway equal to the latest data, here is the third data, no operation happened on register 2 and register 3. This is not what we expect. Fixes: 49427232764d ("gpio: pca953x: Perform basic regmap conversion") Signed-off-by: Haibo Chen Reviewed-by: Andy Shevchenko Signed-off-by: Bartosz Golaszewski Signed-off-by: Sasha Levin --- drivers/gpio/gpio-pca953x.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c index bb4ca064447e..bd7828e0f2f5 100644 --- a/drivers/gpio/gpio-pca953x.c +++ b/drivers/gpio/gpio-pca953x.c @@ -350,6 +350,9 @@ static const struct regmap_config pca953x_i2c_regmap = { .reg_bits = 8, .val_bits = 8, + .use_single_read = true, + .use_single_write = true, + .readable_reg = pca953x_readable_register, .writeable_reg = pca953x_writeable_register, .volatile_reg = pca953x_volatile_register, From a941e6d5ba3bf989b48490abed9e68d135b44c9b Mon Sep 17 00:00:00 2001 From: Haibo Chen Date: Mon, 18 Jul 2022 16:31:42 +0800 Subject: [PATCH 1654/1842] gpio: pca953x: use the correct range when do regmap sync [ Upstream commit 2abc17a93867dc816f0ed9d32021dda8078e7330 ] regmap will sync a range of registers, here use the correct range to make sure the sync do not touch other unexpected registers. Find on pca9557pw on imx8qxp/dxl evk board, this device support 8 pin, so only need one register(8 bits) to cover all the 8 pins's property setting. But when sync the output, we find it actually update two registers, output register and the following register. 
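The arithmetic behind the change is the usual inclusive-range off-by-one: a sync helper that takes a [min, max] pair must be given base + NBANK - 1 as the upper bound, or it spills into the register after the bank. A standalone sketch, not driver code, with made-up register numbers:

/* Illustrative only: inclusive [min, max] register ranges. */
#include <stdio.h>

/* Models the semantics of an inclusive-range cache sync. */
static void sync_region(unsigned int min, unsigned int max)
{
        for (unsigned int reg = min; reg <= max; reg++)
                printf("  touches register 0x%02x\n", reg);
}

int main(void)
{
        const unsigned int output_base = 0x01;  /* hypothetical output register */
        const unsigned int nbank = 1;           /* 8-pin chip, one 8-bit bank */

        printf("base + NBANK (wrong, also hits the next register):\n");
        sync_region(output_base, output_base + nbank);

        printf("base + NBANK - 1 (right, exactly one bank):\n");
        sync_region(output_base, output_base + nbank - 1);
        return 0;
}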
Fixes: b76574300504 ("gpio: pca953x: Restore registers after suspend/resume cycle") Fixes: ec82d1eba346 ("gpio: pca953x: Zap ad-hoc reg_output cache") Fixes: 0f25fda840a9 ("gpio: pca953x: Zap ad-hoc reg_direction cache") Signed-off-by: Haibo Chen Reviewed-by: Andy Shevchenko Signed-off-by: Bartosz Golaszewski Signed-off-by: Sasha Levin --- drivers/gpio/gpio-pca953x.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c index bd7828e0f2f5..b63ac46beceb 100644 --- a/drivers/gpio/gpio-pca953x.c +++ b/drivers/gpio/gpio-pca953x.c @@ -899,12 +899,12 @@ static int device_pca95xx_init(struct pca953x_chip *chip, u32 invert) int ret; ret = regcache_sync_region(chip->regmap, chip->regs->output, - chip->regs->output + NBANK(chip)); + chip->regs->output + NBANK(chip) - 1); if (ret) goto out; ret = regcache_sync_region(chip->regmap, chip->regs->direction, - chip->regs->direction + NBANK(chip)); + chip->regs->direction + NBANK(chip) - 1); if (ret) goto out; @@ -1117,14 +1117,14 @@ static int pca953x_regcache_sync(struct device *dev) * sync these registers first and only then sync the rest. */ regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0); - ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip)); + ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip) - 1); if (ret) { dev_err(dev, "Failed to sync GPIO dir registers: %d\n", ret); return ret; } regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0); - ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip)); + ret = regcache_sync_region(chip->regmap, regaddr, regaddr + NBANK(chip) - 1); if (ret) { dev_err(dev, "Failed to sync GPIO out registers: %d\n", ret); return ret; @@ -1134,7 +1134,7 @@ static int pca953x_regcache_sync(struct device *dev) if (chip->driver_data & PCA_PCAL) { regaddr = pca953x_recalc_addr(chip, PCAL953X_IN_LATCH, 0); ret = regcache_sync_region(chip->regmap, regaddr, - regaddr + NBANK(chip)); + regaddr + NBANK(chip) - 1); if (ret) { dev_err(dev, "Failed to sync INT latch registers: %d\n", ret); @@ -1143,7 +1143,7 @@ static int pca953x_regcache_sync(struct device *dev) regaddr = pca953x_recalc_addr(chip, PCAL953X_INT_MASK, 0); ret = regcache_sync_region(chip->regmap, regaddr, - regaddr + NBANK(chip)); + regaddr + NBANK(chip) - 1); if (ret) { dev_err(dev, "Failed to sync INT mask registers: %d\n", ret); From 47523928557e4988072edbaed4ffae2e0d41b71c Mon Sep 17 00:00:00 2001 From: Haibo Chen Date: Mon, 18 Jul 2022 16:31:43 +0800 Subject: [PATCH 1655/1842] gpio: pca953x: use the correct register address when regcache sync during init [ Upstream commit b8c768ccdd8338504fb78370747728d5002b1b5a ] For regcache_sync_region, we need to use pca953x_recalc_addr() to get the real register address. 
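A minimal sketch of that point, with an invented logical-to-physical mapping standing in for pca953x_recalc_addr(): the cache sync has to be handed the device address, not the driver's logical register index.

/* Illustrative only: translate a logical register index to a device
 * address before passing a range to the cache-sync helper. The shift
 * used here is made up for the example. */
#include <stdio.h>

static unsigned int recalc_addr(unsigned int reg, unsigned int bank_shift)
{
        return reg << bank_shift;       /* hypothetical logical -> physical mapping */
}

static void sync_region(unsigned int min, unsigned int max)
{
        printf("sync device addresses 0x%02x..0x%02x\n", min, max);
}

int main(void)
{
        const unsigned int reg_direction = 0x03; /* logical index, made up */
        const unsigned int bank_shift = 1;
        const unsigned int nbank = 2;
        const unsigned int addr = recalc_addr(reg_direction, bank_shift);

        /* Passing reg_direction directly would sync the wrong addresses. */
        sync_region(addr, addr + nbank - 1);
        return 0;
}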
Fixes: ec82d1eba346 ("gpio: pca953x: Zap ad-hoc reg_output cache") Fixes: 0f25fda840a9 ("gpio: pca953x: Zap ad-hoc reg_direction cache") Signed-off-by: Haibo Chen Reviewed-by: Andy Shevchenko Signed-off-by: Bartosz Golaszewski Signed-off-by: Sasha Levin --- drivers/gpio/gpio-pca953x.c | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c index b63ac46beceb..957be5f69406 100644 --- a/drivers/gpio/gpio-pca953x.c +++ b/drivers/gpio/gpio-pca953x.c @@ -896,15 +896,18 @@ static int pca953x_irq_setup(struct pca953x_chip *chip, static int device_pca95xx_init(struct pca953x_chip *chip, u32 invert) { DECLARE_BITMAP(val, MAX_LINE); + u8 regaddr; int ret; - ret = regcache_sync_region(chip->regmap, chip->regs->output, - chip->regs->output + NBANK(chip) - 1); + regaddr = pca953x_recalc_addr(chip, chip->regs->output, 0); + ret = regcache_sync_region(chip->regmap, regaddr, + regaddr + NBANK(chip) - 1); if (ret) goto out; - ret = regcache_sync_region(chip->regmap, chip->regs->direction, - chip->regs->direction + NBANK(chip) - 1); + regaddr = pca953x_recalc_addr(chip, chip->regs->direction, 0); + ret = regcache_sync_region(chip->regmap, regaddr, + regaddr + NBANK(chip) - 1); if (ret) goto out; From 665cbe91de2f7c97c51ca8fce39aae26477c1948 Mon Sep 17 00:00:00 2001 From: Hristo Venev Date: Sat, 16 Jul 2022 11:51:34 +0300 Subject: [PATCH 1656/1842] be2net: Fix buffer overflow in be_get_module_eeprom [ Upstream commit d7241f679a59cfe27f92cb5c6272cb429fb1f7ec ] be_cmd_read_port_transceiver_data assumes that it is given a buffer that is at least PAGE_DATA_LEN long, or twice that if the module supports SFF 8472. However, this is not always the case. Fix this by passing the desired offset and length to be_cmd_read_port_transceiver_data so that we only copy the bytes once. 
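The clamping can be shown in isolation. The sketch below is not the driver code; PAGE_LEN and read_page() are stand-ins for PAGE_DATA_LEN and the transceiver-read command, but the begin/end arithmetic is the same idea: read only the requested bytes and split the request at the page boundary so nothing is copied twice.

/* Illustrative only: split an (offset, len) request across two
 * fixed-size pages so each byte is read exactly once. */
#include <stdio.h>

#define PAGE_LEN 256u   /* assumed page size for the example */

static void read_page(int page, unsigned int off, unsigned int len)
{
        printf("read page A%d, offset %u, length %u\n", page ? 2 : 0, off, len);
}

static void read_eeprom(unsigned int offset, unsigned int len)
{
        unsigned int begin = offset;
        unsigned int end = offset + len;

        if (begin < PAGE_LEN) {
                unsigned int chunk = (end < PAGE_LEN ? end : PAGE_LEN) - begin;

                read_page(0, begin, chunk);     /* lower page */
                begin += chunk;
        }
        if (end > PAGE_LEN)
                read_page(1, begin - PAGE_LEN, end - begin); /* upper page */
}

int main(void)
{
        read_eeprom(200, 100);  /* crosses the boundary: 56 + 44 bytes */
        return 0;
}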
Fixes: e36edd9d26cf ("be2net: add ethtool "-m" option support") Signed-off-by: Hristo Venev Link: https://lore.kernel.org/r/20220716085134.6095-1-hristo@venev.name Signed-off-by: Paolo Abeni Signed-off-by: Sasha Levin --- drivers/net/ethernet/emulex/benet/be_cmds.c | 10 +++--- drivers/net/ethernet/emulex/benet/be_cmds.h | 2 +- .../net/ethernet/emulex/benet/be_ethtool.c | 31 ++++++++++++------- 3 files changed, 25 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c index 649c5c429bd7..1288b5e3d220 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.c +++ b/drivers/net/ethernet/emulex/benet/be_cmds.c @@ -2287,7 +2287,7 @@ int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state) /* Uses sync mcc */ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, - u8 page_num, u8 *data) + u8 page_num, u32 off, u32 len, u8 *data) { struct be_dma_mem cmd; struct be_mcc_wrb *wrb; @@ -2321,10 +2321,10 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, req->port = cpu_to_le32(adapter->hba_port_num); req->page_num = cpu_to_le32(page_num); status = be_mcc_notify_wait(adapter); - if (!status) { + if (!status && len > 0) { struct be_cmd_resp_port_type *resp = cmd.va; - memcpy(data, resp->page_data, PAGE_DATA_LEN); + memcpy(data, resp->page_data + off, len); } err: mutex_unlock(&adapter->mcc_lock); @@ -2415,7 +2415,7 @@ int be_cmd_query_cable_type(struct be_adapter *adapter) int status; status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, - page_data); + 0, PAGE_DATA_LEN, page_data); if (!status) { switch (adapter->phy.interface_type) { case PHY_TYPE_QSFP: @@ -2440,7 +2440,7 @@ int be_cmd_query_sfp_info(struct be_adapter *adapter) int status; status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, - page_data); + 0, PAGE_DATA_LEN, page_data); if (!status) { strlcpy(adapter->phy.vendor_name, page_data + SFP_VENDOR_NAME_OFFSET, SFP_VENDOR_NAME_LEN - 1); diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.h b/drivers/net/ethernet/emulex/benet/be_cmds.h index c30d6d6f0f3a..9e17d6a7ab8c 100644 --- a/drivers/net/ethernet/emulex/benet/be_cmds.h +++ b/drivers/net/ethernet/emulex/benet/be_cmds.h @@ -2427,7 +2427,7 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num, u8 beacon, int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state); int be_cmd_read_port_transceiver_data(struct be_adapter *adapter, - u8 page_num, u8 *data); + u8 page_num, u32 off, u32 len, u8 *data); int be_cmd_query_cable_type(struct be_adapter *adapter); int be_cmd_query_sfp_info(struct be_adapter *adapter); int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd, diff --git a/drivers/net/ethernet/emulex/benet/be_ethtool.c b/drivers/net/ethernet/emulex/benet/be_ethtool.c index 99cc1c46fb30..d90bf457e49c 100644 --- a/drivers/net/ethernet/emulex/benet/be_ethtool.c +++ b/drivers/net/ethernet/emulex/benet/be_ethtool.c @@ -1338,7 +1338,7 @@ static int be_get_module_info(struct net_device *netdev, return -EOPNOTSUPP; status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, - page_data); + 0, PAGE_DATA_LEN, page_data); if (!status) { if (!page_data[SFP_PLUS_SFF_8472_COMP]) { modinfo->type = ETH_MODULE_SFF_8079; @@ -1356,25 +1356,32 @@ static int be_get_module_eeprom(struct net_device *netdev, { struct be_adapter *adapter = netdev_priv(netdev); int status; + u32 begin, end; if (!check_privilege(adapter, MAX_PRIVILEGES)) return 
-EOPNOTSUPP; - status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, - data); - if (status) - goto err; + begin = eeprom->offset; + end = eeprom->offset + eeprom->len; - if (eeprom->offset + eeprom->len > PAGE_DATA_LEN) { - status = be_cmd_read_port_transceiver_data(adapter, - TR_PAGE_A2, - data + - PAGE_DATA_LEN); + if (begin < PAGE_DATA_LEN) { + status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A0, begin, + min_t(u32, end, PAGE_DATA_LEN) - begin, + data); + if (status) + goto err; + + data += PAGE_DATA_LEN - begin; + begin = PAGE_DATA_LEN; + } + + if (end > PAGE_DATA_LEN) { + status = be_cmd_read_port_transceiver_data(adapter, TR_PAGE_A2, + begin - PAGE_DATA_LEN, + end - begin, data); if (status) goto err; } - if (eeprom->offset) - memcpy(data, data + eeprom->offset, eeprom->len); err: return be_cmd_status(status); } From 36f1d9c607f9457e2d8ff26eb96eb40db74b92fd Mon Sep 17 00:00:00 2001 From: Liang He Date: Thu, 14 Jul 2022 16:13:37 +0800 Subject: [PATCH 1657/1842] drm/imx/dcss: Add missing of_node_put() in fail path [ Upstream commit 02c87df2480ac855d88ee308ce3fa857d9bd55a8 ] In dcss_dev_create() and dcss_dev_destroy(), we should call of_node_put() in fail path or before the dcss's destroy as of_graph_get_port_by_id() has increased the refcount. Fixes: 9021c317b770 ("drm/imx: Add initial support for DCSS on iMX8MQ") Signed-off-by: Liang He Reviewed-by: Laurentiu Palcu Signed-off-by: Laurentiu Palcu Link: https://patchwork.freedesktop.org/patch/msgid/20220714081337.374761-1-windhl@126.com Signed-off-by: Sasha Levin --- drivers/gpu/drm/imx/dcss/dcss-dev.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/gpu/drm/imx/dcss/dcss-dev.c b/drivers/gpu/drm/imx/dcss/dcss-dev.c index c849533ca83e..3f5750cc2673 100644 --- a/drivers/gpu/drm/imx/dcss/dcss-dev.c +++ b/drivers/gpu/drm/imx/dcss/dcss-dev.c @@ -207,6 +207,7 @@ struct dcss_dev *dcss_dev_create(struct device *dev, bool hdmi_output) ret = dcss_submodules_init(dcss); if (ret) { + of_node_put(dcss->of_port); dev_err(dev, "submodules initialization failed\n"); goto clks_err; } @@ -237,6 +238,8 @@ void dcss_dev_destroy(struct dcss_dev *dcss) dcss_clocks_disable(dcss); } + of_node_put(dcss->of_port); + pm_runtime_disable(dcss->dev); dcss_submodules_stop(dcss); From e045d672ba06e1d35bacb56374d350de0ac99066 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:39 -0700 Subject: [PATCH 1658/1842] ipv4: Fix a data-race around sysctl_fib_multipath_use_neigh. [ Upstream commit 87507bcb4f5de16bb419e9509d874f4db6c0ad0f ] While reading sysctl_fib_multipath_use_neigh, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: a6db4494d218 ("net: ipv4: Consider failed nexthops in multipath routes") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/fib_semantics.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c index 70c866308abe..3824b7abecf7 100644 --- a/net/ipv4/fib_semantics.c +++ b/net/ipv4/fib_semantics.c @@ -2232,7 +2232,7 @@ void fib_select_multipath(struct fib_result *res, int hash) } change_nexthops(fi) { - if (net->ipv4.sysctl_fib_multipath_use_neigh) { + if (READ_ONCE(net->ipv4.sysctl_fib_multipath_use_neigh)) { if (!fib_good_nh(nexthop_nh)) continue; if (!first) { From 9add240f76af6d141d2eebd3a1558a0e503a993d Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:42 -0700 Subject: [PATCH 1659/1842] ip: Fix data-races around sysctl_ip_prot_sock. [ Upstream commit 9b55c20f83369dd54541d9ddbe3a018a8377f451 ] sysctl_ip_prot_sock is accessed concurrently, and there is always a chance of data-race. So, all readers and writers need some basic protection to avoid load/store-tearing. Fixes: 4548b683b781 ("Introduce a sysctl that modifies the value of PROT_SOCK.") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/ip.h | 2 +- net/ipv4/sysctl_net_ipv4.c | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include/net/ip.h b/include/net/ip.h index d715b25a8dc4..c5822d7824cd 100644 --- a/include/net/ip.h +++ b/include/net/ip.h @@ -352,7 +352,7 @@ static inline bool sysctl_dev_name_is_allowed(const char *name) static inline bool inet_port_requires_bind_service(struct net *net, unsigned short port) { - return port < net->ipv4.sysctl_ip_prot_sock; + return port < READ_ONCE(net->ipv4.sysctl_ip_prot_sock); } #else diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c index 08829809e88b..86f553864f98 100644 --- a/net/ipv4/sysctl_net_ipv4.c +++ b/net/ipv4/sysctl_net_ipv4.c @@ -95,7 +95,7 @@ static int ipv4_local_port_range(struct ctl_table *table, int write, * port limit. */ if ((range[1] < range[0]) || - (range[0] < net->ipv4.sysctl_ip_prot_sock)) + (range[0] < READ_ONCE(net->ipv4.sysctl_ip_prot_sock))) ret = -EINVAL; else set_local_port_range(net, range); @@ -121,7 +121,7 @@ static int ipv4_privileged_ports(struct ctl_table *table, int write, .extra2 = &ip_privileged_port_max, }; - pports = net->ipv4.sysctl_ip_prot_sock; + pports = READ_ONCE(net->ipv4.sysctl_ip_prot_sock); ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos); @@ -133,7 +133,7 @@ static int ipv4_privileged_ports(struct ctl_table *table, int write, if (range[0] < pports) ret = -EINVAL; else - net->ipv4.sysctl_ip_prot_sock = pports; + WRITE_ONCE(net->ipv4.sysctl_ip_prot_sock, pports); } return ret; From fcaef69c79ec222e55643e666b80b221e70fa6a8 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:43 -0700 Subject: [PATCH 1660/1842] udp: Fix a data-race around sysctl_udp_l3mdev_accept. [ Upstream commit 3d72bb4188c708bb16758c60822fc4dda7a95174 ] While reading sysctl_udp_l3mdev_accept, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 63a6fff353d0 ("net: Avoid receiving packets with an l3mdev on unbound UDP sockets") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- include/net/udp.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/udp.h b/include/net/udp.h index 4017f257628f..010bc324f860 100644 --- a/include/net/udp.h +++ b/include/net/udp.h @@ -259,7 +259,7 @@ static inline bool udp_sk_bound_dev_eq(struct net *net, int bound_dev_if, int dif, int sdif) { #if IS_ENABLED(CONFIG_NET_L3_MASTER_DEV) - return inet_bound_dev_eq(!!net->ipv4.sysctl_udp_l3mdev_accept, + return inet_bound_dev_eq(!!READ_ONCE(net->ipv4.sysctl_udp_l3mdev_accept), bound_dev_if, dif, sdif); #else return inet_bound_dev_eq(true, bound_dev_if, dif, sdif); From ffc388f6f0d65f2a59798d883c63205919e6d452 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:44 -0700 Subject: [PATCH 1661/1842] tcp: Fix data-races around sysctl knobs related to SYN option. [ Upstream commit 3666f666e99600518ab20982af04a078bbdad277 ] While reading these knobs, they can be changed concurrently. Thus, we need to add READ_ONCE() to their readers. - tcp_sack - tcp_window_scaling - tcp_timestamps Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- .../ethernet/chelsio/inline_crypto/chtls/chtls_cm.c | 6 +++--- net/core/secure_seq.c | 4 ++-- net/ipv4/syncookies.c | 6 +++--- net/ipv4/tcp_input.c | 6 +++--- net/ipv4/tcp_output.c | 10 +++++----- 5 files changed, 16 insertions(+), 16 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c index 51e071c20e39..cd6e016e6210 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c @@ -1235,8 +1235,8 @@ static struct sock *chtls_recv_sock(struct sock *lsk, csk->sndbuf = newsk->sk_sndbuf; csk->smac_idx = ((struct port_info *)netdev_priv(ndev))->smt_idx; RCV_WSCALE(tp) = select_rcv_wscale(tcp_full_space(newsk), - sock_net(newsk)-> - ipv4.sysctl_tcp_window_scaling, + READ_ONCE(sock_net(newsk)-> + ipv4.sysctl_tcp_window_scaling), tp->window_clamp); neigh_release(n); inet_inherit_port(&tcp_hashinfo, lsk, newsk); @@ -1383,7 +1383,7 @@ static void chtls_pass_accept_request(struct sock *sk, #endif } if (req->tcpopt.wsf <= 14 && - sock_net(sk)->ipv4.sysctl_tcp_window_scaling) { + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling)) { inet_rsk(oreq)->wscale_ok = 1; inet_rsk(oreq)->snd_wscale = req->tcpopt.wsf; } diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c index 7131cd1fb2ad..189eea1372d5 100644 --- a/net/core/secure_seq.c +++ b/net/core/secure_seq.c @@ -64,7 +64,7 @@ u32 secure_tcpv6_ts_off(const struct net *net, .daddr = *(struct in6_addr *)daddr, }; - if (net->ipv4.sysctl_tcp_timestamps != 1) + if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1) return 0; ts_secret_init(); @@ -120,7 +120,7 @@ EXPORT_SYMBOL(secure_ipv6_port_ephemeral); #ifdef CONFIG_INET u32 secure_tcp_ts_off(const struct net *net, __be32 saddr, __be32 daddr) { - if (net->ipv4.sysctl_tcp_timestamps != 1) + if (READ_ONCE(net->ipv4.sysctl_tcp_timestamps) != 1) return 0; ts_secret_init(); diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c index b52cc46bdadd..41afc9155f31 100644 --- a/net/ipv4/syncookies.c +++ b/net/ipv4/syncookies.c @@ -249,12 +249,12 @@ bool cookie_timestamp_decode(const struct net *net, return true; } - if (!net->ipv4.sysctl_tcp_timestamps) + if (!READ_ONCE(net->ipv4.sysctl_tcp_timestamps)) return false; tcp_opt->sack_ok = 
(options & TS_OPT_SACK) ? TCP_SACK_SEEN : 0; - if (tcp_opt->sack_ok && !net->ipv4.sysctl_tcp_sack) + if (tcp_opt->sack_ok && !READ_ONCE(net->ipv4.sysctl_tcp_sack)) return false; if ((options & TS_OPT_WSCALE_MASK) == TS_OPT_WSCALE_MASK) @@ -263,7 +263,7 @@ bool cookie_timestamp_decode(const struct net *net, tcp_opt->wscale_ok = 1; tcp_opt->snd_wscale = options & TS_OPT_WSCALE_MASK; - return net->ipv4.sysctl_tcp_window_scaling != 0; + return READ_ONCE(net->ipv4.sysctl_tcp_window_scaling) != 0; } EXPORT_SYMBOL(cookie_timestamp_decode); diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 5cbabe0e42c9..8ac3acde08b4 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -4007,7 +4007,7 @@ void tcp_parse_options(const struct net *net, break; case TCPOPT_WINDOW: if (opsize == TCPOLEN_WINDOW && th->syn && - !estab && net->ipv4.sysctl_tcp_window_scaling) { + !estab && READ_ONCE(net->ipv4.sysctl_tcp_window_scaling)) { __u8 snd_wscale = *(__u8 *)ptr; opt_rx->wscale_ok = 1; if (snd_wscale > TCP_MAX_WSCALE) { @@ -4023,7 +4023,7 @@ void tcp_parse_options(const struct net *net, case TCPOPT_TIMESTAMP: if ((opsize == TCPOLEN_TIMESTAMP) && ((estab && opt_rx->tstamp_ok) || - (!estab && net->ipv4.sysctl_tcp_timestamps))) { + (!estab && READ_ONCE(net->ipv4.sysctl_tcp_timestamps)))) { opt_rx->saw_tstamp = 1; opt_rx->rcv_tsval = get_unaligned_be32(ptr); opt_rx->rcv_tsecr = get_unaligned_be32(ptr + 4); @@ -4031,7 +4031,7 @@ void tcp_parse_options(const struct net *net, break; case TCPOPT_SACK_PERM: if (opsize == TCPOLEN_SACK_PERM && th->syn && - !estab && net->ipv4.sysctl_tcp_sack) { + !estab && READ_ONCE(net->ipv4.sysctl_tcp_sack)) { opt_rx->sack_ok = TCP_SACK_SEEN; tcp_sack_reset(opt_rx); } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index e7348e70e6e3..772dd6241b70 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -789,18 +789,18 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb, opts->mss = tcp_advertise_mss(sk); remaining -= TCPOLEN_MSS_ALIGNED; - if (likely(sock_net(sk)->ipv4.sysctl_tcp_timestamps && !*md5)) { + if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps) && !*md5)) { opts->options |= OPTION_TS; opts->tsval = tcp_skb_timestamp(skb) + tp->tsoffset; opts->tsecr = tp->rx_opt.ts_recent; remaining -= TCPOLEN_TSTAMP_ALIGNED; } - if (likely(sock_net(sk)->ipv4.sysctl_tcp_window_scaling)) { + if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling))) { opts->ws = tp->rx_opt.rcv_wscale; opts->options |= OPTION_WSCALE; remaining -= TCPOLEN_WSCALE_ALIGNED; } - if (likely(sock_net(sk)->ipv4.sysctl_tcp_sack)) { + if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_sack))) { opts->options |= OPTION_SACK_ADVERTISE; if (unlikely(!(OPTION_TS & opts->options))) remaining -= TCPOLEN_SACKPERM_ALIGNED; @@ -3648,7 +3648,7 @@ static void tcp_connect_init(struct sock *sk) * See tcp_input.c:tcp_rcv_state_process case TCP_SYN_SENT. */ tp->tcp_header_len = sizeof(struct tcphdr); - if (sock_net(sk)->ipv4.sysctl_tcp_timestamps) + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps)) tp->tcp_header_len += TCPOLEN_TSTAMP_ALIGNED; #ifdef CONFIG_TCP_MD5SIG @@ -3684,7 +3684,7 @@ static void tcp_connect_init(struct sock *sk) tp->advmss - (tp->rx_opt.ts_recent_stamp ? 
tp->tcp_header_len - sizeof(struct tcphdr) : 0), &tp->rcv_wnd, &tp->window_clamp, - sock_net(sk)->ipv4.sysctl_tcp_window_scaling, + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_window_scaling), &rcv_wscale, rcv_wnd); From 11e8b013d16e5db63f8f76acceb5b86964098aaa Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:45 -0700 Subject: [PATCH 1662/1842] tcp: Fix a data-race around sysctl_tcp_early_retrans. [ Upstream commit 52e65865deb6a36718a463030500f16530eaab74 ] While reading sysctl_tcp_early_retrans, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: eed530b6c676 ("tcp: early retransmit") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 772dd6241b70..0cbf3d859745 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2735,7 +2735,7 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto) if (rcu_access_pointer(tp->fastopen_rsk)) return false; - early_retrans = sock_net(sk)->ipv4.sysctl_tcp_early_retrans; + early_retrans = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_early_retrans); /* Schedule a loss probe in 2*RTT for SACK capable connections * not in loss recovery, that are either limited by cwnd or application. */ From d8781f7cd04091744f474a2bada74772084b9dc9 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:46 -0700 Subject: [PATCH 1663/1842] tcp: Fix data-races around sysctl_tcp_recovery. [ Upstream commit e7d2ef837e14a971a05f60ea08c47f3fed1a36e4 ] While reading sysctl_tcp_recovery, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 4f41b1c58a32 ("tcp: use RACK to detect losses") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 3 ++- net/ipv4/tcp_recovery.c | 6 ++++-- 2 files changed, 6 insertions(+), 3 deletions(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 8ac3acde08b4..1dc1d62093b3 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -2055,7 +2055,8 @@ static inline void tcp_init_undo(struct tcp_sock *tp) static bool tcp_is_rack(const struct sock *sk) { - return sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_LOSS_DETECTION; + return READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) & + TCP_RACK_LOSS_DETECTION; } /* If we detect SACK reneging, forget all SACK information diff --git a/net/ipv4/tcp_recovery.c b/net/ipv4/tcp_recovery.c index 31fc178f42c0..21fc9859d421 100644 --- a/net/ipv4/tcp_recovery.c +++ b/net/ipv4/tcp_recovery.c @@ -19,7 +19,8 @@ static u32 tcp_rack_reo_wnd(const struct sock *sk) return 0; if (tp->sacked_out >= tp->reordering && - !(sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_NO_DUPTHRESH)) + !(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) & + TCP_RACK_NO_DUPTHRESH)) return 0; } @@ -190,7 +191,8 @@ void tcp_rack_update_reo_wnd(struct sock *sk, struct rate_sample *rs) { struct tcp_sock *tp = tcp_sk(sk); - if (sock_net(sk)->ipv4.sysctl_tcp_recovery & TCP_RACK_STATIC_REO_WND || + if ((READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_recovery) & + TCP_RACK_STATIC_REO_WND) || !rs->prior_delivered) return; From cc133e4f4bc225079198192623945bb872c08143 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:47 -0700 Subject: [PATCH 1664/1842] tcp: Fix a data-race around sysctl_tcp_thin_linear_timeouts. 
[ Upstream commit 7c6f2a86ca590d5187a073d987e9599985fb1c7c ] While reading sysctl_tcp_thin_linear_timeouts, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 36e31b0af587 ("net: TCP thin linear timeouts") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_timer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c index e20fd86a2a89..888683f2ff3e 100644 --- a/net/ipv4/tcp_timer.c +++ b/net/ipv4/tcp_timer.c @@ -574,7 +574,7 @@ void tcp_retransmit_timer(struct sock *sk) * linear-timeout retransmissions into a black hole */ if (sk->sk_state == TCP_ESTABLISHED && - (tp->thin_lto || net->ipv4.sysctl_tcp_thin_linear_timeouts) && + (tp->thin_lto || READ_ONCE(net->ipv4.sysctl_tcp_thin_linear_timeouts)) && tcp_stream_is_thin(tp) && icsk->icsk_retransmits <= TCP_THIN_LINEAR_RETRIES) { icsk->icsk_backoff = 0; From 0e3f82a03ec8c3808e87283e12946227415706c9 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:48 -0700 Subject: [PATCH 1665/1842] tcp: Fix data-races around sysctl_tcp_slow_start_after_idle. [ Upstream commit 4845b5713ab18a1bb6e31d1fbb4d600240b8b691 ] While reading sysctl_tcp_slow_start_after_idle, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 35089bb203f4 ("[TCP]: Add tcp_slow_start_after_idle sysctl.") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- include/net/tcp.h | 4 ++-- net/ipv4/tcp_output.c | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index aa46f4016245..44bfb22069c1 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1380,8 +1380,8 @@ static inline void tcp_slow_start_after_idle_check(struct sock *sk) struct tcp_sock *tp = tcp_sk(sk); s32 delta; - if (!sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle || tp->packets_out || - ca_ops->cong_control) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle) || + tp->packets_out || ca_ops->cong_control) return; delta = tcp_jiffies32 - tp->lsndtime; if (delta > inet_csk(sk)->icsk_rto) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 0cbf3d859745..ef64ee4c902a 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1899,7 +1899,7 @@ static void tcp_cwnd_validate(struct sock *sk, bool is_cwnd_limited) if (tp->packets_out > tp->snd_cwnd_used) tp->snd_cwnd_used = tp->packets_out; - if (sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle && + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_slow_start_after_idle) && (s32)(tcp_jiffies32 - tp->snd_cwnd_stamp) >= inet_csk(sk)->icsk_rto && !ca_ops->cong_control) tcp_cwnd_application_limited(sk); From daa8b5b8694c05a383fe43ffea679aa2153277cf Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:49 -0700 Subject: [PATCH 1666/1842] tcp: Fix a data-race around sysctl_tcp_retrans_collapse. [ Upstream commit 1a63cb91f0c2fcdeced6d6edee8d1d886583d139 ] While reading sysctl_tcp_retrans_collapse, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index ef64ee4c902a..9b67c61576e4 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -3100,7 +3100,7 @@ static void tcp_retrans_try_collapse(struct sock *sk, struct sk_buff *to, struct sk_buff *skb = to, *tmp; bool first = true; - if (!sock_net(sk)->ipv4.sysctl_tcp_retrans_collapse) + if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_retrans_collapse)) return; if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_SYN) return; From 03bb3892f3f18fd98717aa6763ab3885416e69d0 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:50 -0700 Subject: [PATCH 1667/1842] tcp: Fix a data-race around sysctl_tcp_stdurg. [ Upstream commit 4e08ed41cb1194009fc1a916a59ce3ed4afd77cd ] While reading sysctl_tcp_stdurg, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 1dc1d62093b3..c89452761b3f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5492,7 +5492,7 @@ static void tcp_check_urg(struct sock *sk, const struct tcphdr *th) struct tcp_sock *tp = tcp_sk(sk); u32 ptr = ntohs(th->urg_ptr); - if (ptr && !sock_net(sk)->ipv4.sysctl_tcp_stdurg) + if (ptr && !READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_stdurg)) ptr--; ptr += ntohl(th->seq); From 805f1c7ce47030dbb663b7d2a84c9473a27df774 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:51 -0700 Subject: [PATCH 1668/1842] tcp: Fix a data-race around sysctl_tcp_rfc1337. [ Upstream commit 0b484c91911e758e53656d570de58c2ed81ec6f2 ] While reading sysctl_tcp_rfc1337, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_minisocks.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c index 62f5ef9e6f93..e42312321191 100644 --- a/net/ipv4/tcp_minisocks.c +++ b/net/ipv4/tcp_minisocks.c @@ -180,7 +180,7 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb, * Oh well... nobody has a sufficient solution to this * protocol bug yet. */ - if (twsk_net(tw)->ipv4.sysctl_tcp_rfc1337 == 0) { + if (!READ_ONCE(twsk_net(tw)->ipv4.sysctl_tcp_rfc1337)) { kill: inet_twsk_deschedule_put(tw); return TCP_TW_SUCCESS; From 064852663308c801861bd54789d81421fa4c2928 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Mon, 18 Jul 2022 10:26:53 -0700 Subject: [PATCH 1669/1842] tcp: Fix data-races around sysctl_tcp_max_reordering. [ Upstream commit a11e5b3e7a59fde1a90b0eaeaa82320495cf8cae ] While reading sysctl_tcp_max_reordering, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: dca145ffaa8d ("tcp: allow for bigger reordering level") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index c89452761b3f..d817f8c31c9c 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -1011,7 +1011,7 @@ static void tcp_check_sack_reordering(struct sock *sk, const u32 low_seq, tp->undo_marker ? tp->undo_retrans : 0); #endif tp->reordering = min_t(u32, (metric + mss - 1) / mss, - sock_net(sk)->ipv4.sysctl_tcp_max_reordering); + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_max_reordering)); } /* This exciting event is worth to be remembered. 8) */ @@ -1990,7 +1990,7 @@ static void tcp_check_reno_reordering(struct sock *sk, const int addend) return; tp->reordering = min_t(u32, tp->packets_out + addend, - sock_net(sk)->ipv4.sysctl_tcp_max_reordering); + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_max_reordering)); tp->reord_seen++; NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPRENOREORDER); } From 684896e675edd8b669fd3e9f547c5038222d85bc Mon Sep 17 00:00:00 2001 From: Marc Kleine-Budde Date: Tue, 19 Jul 2022 09:22:35 +0200 Subject: [PATCH 1670/1842] spi: bcm2835: bcm2835_spi_handle_err(): fix NULL pointer deref for non DMA transfers commit 4ceaa684459d414992acbefb4e4c31f2dfc50641 upstream. In case a IRQ based transfer times out the bcm2835_spi_handle_err() function is called. Since commit 1513ceee70f2 ("spi: bcm2835: Drop dma_pending flag") the TX and RX DMA transfers are unconditionally canceled, leading to NULL pointer derefs if ctlr->dma_tx or ctlr->dma_rx are not set. Fix the NULL pointer deref by checking that ctlr->dma_tx and ctlr->dma_rx are valid pointers before accessing them. Fixes: 1513ceee70f2 ("spi: bcm2835: Drop dma_pending flag") Cc: Lukas Wunner Signed-off-by: Marc Kleine-Budde Link: https://lore.kernel.org/r/20220719072234.2782764-1-mkl@pengutronix.de Signed-off-by: Mark Brown Signed-off-by: Greg Kroah-Hartman --- drivers/spi/spi-bcm2835.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c index 33c32e931767..bb9d8386ba3b 100644 --- a/drivers/spi/spi-bcm2835.c +++ b/drivers/spi/spi-bcm2835.c @@ -1174,10 +1174,14 @@ static void bcm2835_spi_handle_err(struct spi_controller *ctlr, struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr); /* if an error occurred and we have an active dma, then terminate */ - dmaengine_terminate_sync(ctlr->dma_tx); - bs->tx_dma_active = false; - dmaengine_terminate_sync(ctlr->dma_rx); - bs->rx_dma_active = false; + if (ctlr->dma_tx) { + dmaengine_terminate_sync(ctlr->dma_tx); + bs->tx_dma_active = false; + } + if (ctlr->dma_rx) { + dmaengine_terminate_sync(ctlr->dma_rx); + bs->rx_dma_active = false; + } bcm2835_spi_undo_prologue(bs); /* and reset */ From 3616776bc51cd3262bb1be60cc01c72e0a1959cf Mon Sep 17 00:00:00 2001 From: Alexey Kardashevskiy Date: Wed, 1 Jun 2022 03:43:28 +0200 Subject: [PATCH 1671/1842] KVM: Don't null dereference ops->destroy commit e8bc2427018826e02add7b0ed0fc625a60390ae5 upstream. A KVM device cleanup happens in either of two callbacks: 1) destroy() which is called when the VM is being destroyed; 2) release() which is called when a device fd is closed. Most KVM devices use 1) but Book3s's interrupt controller KVM devices (XICS, XIVE, XIVE-native) use 2) as they need to close and reopen during the machine execution. The error handling in kvm_ioctl_create_device() assumes destroy() is always defined which leads to NULL dereference as discovered by Syzkaller. 
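The guarded-callback shape of the fix can be sketched in a few lines of plain userspace C; the struct and function names below are illustrative stand-ins, not the KVM device API:

#include <stdio.h>
#include <stdlib.h>

struct dev_ops {
        void (*destroy)(void *priv);    /* may be NULL */
        void (*release)(void *priv);    /* may be NULL */
};

/* Error path: call release() when the backend defines it, and only
 * ever call destroy() when it is actually provided. */
static void cleanup_device(const struct dev_ops *ops, void *priv)
{
        if (ops->release)
                ops->release(priv);
        if (ops->destroy)
                ops->destroy(priv);
}

static void demo_release(void *priv) { free(priv); puts("released"); }

int main(void)
{
        struct dev_ops only_release = { .destroy = NULL, .release = demo_release };

        cleanup_device(&only_release, malloc(16));      /* no NULL dereference */
        return 0;
}
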
This adds a checks for destroy!=NULL and adds a missing release(). This is not changing kvm_destroy_devices() as devices with defined release() should have been removed from the KVM devices list by then. Suggested-by: Paolo Bonzini Signed-off-by: Alexey Kardashevskiy Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman --- virt/kvm/kvm_main.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 9cd8ca2d8bc1..c5dbac10c372 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -3644,8 +3644,11 @@ static int kvm_ioctl_create_device(struct kvm *kvm, kvm_put_kvm_no_destroy(kvm); mutex_lock(&kvm->lock); list_del(&dev->vm_node); + if (ops->release) + ops->release(dev); mutex_unlock(&kvm->lock); - ops->destroy(dev); + if (ops->destroy) + ops->destroy(dev); return ret; } From ddb3f0b68863bd1c5f43177eea476bce316d4993 Mon Sep 17 00:00:00 2001 From: Wang Cheng Date: Thu, 19 May 2022 14:08:54 -0700 Subject: [PATCH 1672/1842] mm/mempolicy: fix uninit-value in mpol_rebind_policy() commit 018160ad314d75b1409129b2247b614a9f35894c upstream. mpol_set_nodemask()(mm/mempolicy.c) does not set up nodemask when pol->mode is MPOL_LOCAL. Check pol->mode before access pol->w.cpuset_mems_allowed in mpol_rebind_policy()(mm/mempolicy.c). BUG: KMSAN: uninit-value in mpol_rebind_policy mm/mempolicy.c:352 [inline] BUG: KMSAN: uninit-value in mpol_rebind_task+0x2ac/0x2c0 mm/mempolicy.c:368 mpol_rebind_policy mm/mempolicy.c:352 [inline] mpol_rebind_task+0x2ac/0x2c0 mm/mempolicy.c:368 cpuset_change_task_nodemask kernel/cgroup/cpuset.c:1711 [inline] cpuset_attach+0x787/0x15e0 kernel/cgroup/cpuset.c:2278 cgroup_migrate_execute+0x1023/0x1d20 kernel/cgroup/cgroup.c:2515 cgroup_migrate kernel/cgroup/cgroup.c:2771 [inline] cgroup_attach_task+0x540/0x8b0 kernel/cgroup/cgroup.c:2804 __cgroup1_procs_write+0x5cc/0x7a0 kernel/cgroup/cgroup-v1.c:520 cgroup1_tasks_write+0x94/0xb0 kernel/cgroup/cgroup-v1.c:539 cgroup_file_write+0x4c2/0x9e0 kernel/cgroup/cgroup.c:3852 kernfs_fop_write_iter+0x66a/0x9f0 fs/kernfs/file.c:296 call_write_iter include/linux/fs.h:2162 [inline] new_sync_write fs/read_write.c:503 [inline] vfs_write+0x1318/0x2030 fs/read_write.c:590 ksys_write+0x28b/0x510 fs/read_write.c:643 __do_sys_write fs/read_write.c:655 [inline] __se_sys_write fs/read_write.c:652 [inline] __x64_sys_write+0xdb/0x120 fs/read_write.c:652 do_syscall_x64 arch/x86/entry/common.c:51 [inline] do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82 entry_SYSCALL_64_after_hwframe+0x44/0xae Uninit was created at: slab_post_alloc_hook mm/slab.h:524 [inline] slab_alloc_node mm/slub.c:3251 [inline] slab_alloc mm/slub.c:3259 [inline] kmem_cache_alloc+0x902/0x11c0 mm/slub.c:3264 mpol_new mm/mempolicy.c:293 [inline] do_set_mempolicy+0x421/0xb70 mm/mempolicy.c:853 kernel_set_mempolicy mm/mempolicy.c:1504 [inline] __do_sys_set_mempolicy mm/mempolicy.c:1510 [inline] __se_sys_set_mempolicy+0x44c/0xb60 mm/mempolicy.c:1507 __x64_sys_set_mempolicy+0xd8/0x110 mm/mempolicy.c:1507 do_syscall_x64 arch/x86/entry/common.c:51 [inline] do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82 entry_SYSCALL_64_after_hwframe+0x44/0xae KMSAN: uninit-value in mpol_rebind_task (2) https://syzkaller.appspot.com/bug?id=d6eb90f952c2a5de9ea718a1b873c55cb13b59dc This patch seems to fix below bug too. KMSAN: uninit-value in mpol_rebind_mm (2) https://syzkaller.appspot.com/bug?id=f2fecd0d7013f54ec4162f60743a2b28df40926b The uninit-value is pol->w.cpuset_mems_allowed in mpol_rebind_policy(). 
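The underlying rule, check the mode tag before touching a field that is only initialized for some modes, can be sketched in ordinary C; the names below are illustrative, not the mempolicy structures:

#include <stdio.h>

enum pol_mode { MODE_LOCAL, MODE_PREFERRED };

struct policy {
        enum pol_mode mode;
        unsigned long allowed_mask;     /* only valid when mode != MODE_LOCAL */
};

static void rebind(struct policy *pol, unsigned long newmask)
{
        /* Bail out early for MODE_LOCAL, where allowed_mask was never set up. */
        if (!pol || pol->mode == MODE_LOCAL)
                return;
        if (pol->allowed_mask == newmask)
                return;
        pol->allowed_mask = newmask;
}

int main(void)
{
        struct policy local;

        local.mode = MODE_LOCAL;        /* allowed_mask deliberately left uninitialized */
        rebind(&local, 0xf);            /* safe: the field is never read for MODE_LOCAL */
        printf("mode=%d\n", local.mode);
        return 0;
}
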
When syzkaller reproducer runs to the beginning of mpol_new(), mpol_new() mm/mempolicy.c do_mbind() mm/mempolicy.c kernel_mbind() mm/mempolicy.c `mode` is 1(MPOL_PREFERRED), nodes_empty(*nodes) is `true` and `flags` is 0. Then mode = MPOL_LOCAL; ... policy->mode = mode; policy->flags = flags; will be executed. So in mpol_set_nodemask(), mpol_set_nodemask() mm/mempolicy.c do_mbind() kernel_mbind() pol->mode is 4 (MPOL_LOCAL), that `nodemask` in `pol` is not initialized, which will be accessed in mpol_rebind_policy(). Link: https://lkml.kernel.org/r/20220512123428.fq3wofedp6oiotd4@ppc.localdomain Signed-off-by: Wang Cheng Reported-by: Tested-by: Cc: David Rientjes Cc: Vlastimil Babka Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- mm/mempolicy.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 7315b978e834..f9f47449e8dd 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -374,7 +374,7 @@ static void mpol_rebind_preferred(struct mempolicy *pol, */ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask) { - if (!pol) + if (!pol || pol->mode == MPOL_LOCAL) return; if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) && nodes_equal(pol->w.cpuset_mems_allowed, *newmask)) From 0c722a32f29abc4231e928a1d15b112c4c084934 Mon Sep 17 00:00:00 2001 From: Eric Dumazet Date: Thu, 7 Jul 2022 12:39:00 +0000 Subject: [PATCH 1673/1842] bpf: Make sure mac_header was set before using it commit 0326195f523a549e0a9d7fd44c70b26fd7265090 upstream. Classic BPF has a way to load bytes starting from the mac header. Some skbs do not have a mac header, and skb_mac_header() in this case is returning a pointer that 65535 bytes after skb->head. Existing range check in bpf_internal_load_pointer_neg_helper() was properly kicking and no illegal access was happening. New sanity check in skb_mac_header() is firing, so we need to avoid it. 
WARNING: CPU: 1 PID: 28990 at include/linux/skbuff.h:2785 skb_mac_header include/linux/skbuff.h:2785 [inline] WARNING: CPU: 1 PID: 28990 at include/linux/skbuff.h:2785 bpf_internal_load_pointer_neg_helper+0x1b1/0x1c0 kernel/bpf/core.c:74 Modules linked in: CPU: 1 PID: 28990 Comm: syz-executor.0 Not tainted 5.19.0-rc4-syzkaller-00865-g4874fb9484be #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/29/2022 RIP: 0010:skb_mac_header include/linux/skbuff.h:2785 [inline] RIP: 0010:bpf_internal_load_pointer_neg_helper+0x1b1/0x1c0 kernel/bpf/core.c:74 Code: ff ff 45 31 f6 e9 5a ff ff ff e8 aa 27 40 00 e9 3b ff ff ff e8 90 27 40 00 e9 df fe ff ff e8 86 27 40 00 eb 9e e8 2f 2c f3 ff <0f> 0b eb b1 e8 96 27 40 00 e9 79 fe ff ff 90 41 57 41 56 41 55 41 RSP: 0018:ffffc9000309f668 EFLAGS: 00010216 RAX: 0000000000000118 RBX: ffffffffffeff00c RCX: ffffc9000e417000 RDX: 0000000000040000 RSI: ffffffff81873f21 RDI: 0000000000000003 RBP: ffff8880842878c0 R08: 0000000000000003 R09: 000000000000ffff R10: 000000000000ffff R11: 0000000000000001 R12: 0000000000000004 R13: ffff88803ac56c00 R14: 000000000000ffff R15: dffffc0000000000 FS: 00007f5c88a16700(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fdaa9f6c058 CR3: 000000003a82c000 CR4: 00000000003506e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ____bpf_skb_load_helper_32 net/core/filter.c:276 [inline] bpf_skb_load_helper_32+0x191/0x220 net/core/filter.c:264 Fixes: f9aefd6b2aa3 ("net: warn if mac header was not set") Reported-by: syzbot Signed-off-by: Eric Dumazet Signed-off-by: Daniel Borkmann Link: https://lore.kernel.org/bpf/20220707123900.945305-1-edumazet@google.com Signed-off-by: Greg Kroah-Hartman --- kernel/bpf/core.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 845a4c052433..fd2aa6b9909e 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -66,11 +66,13 @@ void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb, int k, uns { u8 *ptr = NULL; - if (k >= SKF_NET_OFF) + if (k >= SKF_NET_OFF) { ptr = skb_network_header(skb) + k - SKF_NET_OFF; - else if (k >= SKF_LL_OFF) + } else if (k >= SKF_LL_OFF) { + if (unlikely(!skb_mac_header_was_set(skb))) + return NULL; ptr = skb_mac_header(skb) + k - SKF_LL_OFF; - + } if (ptr >= skb->head && ptr + size <= skb_tail_pointer(skb)) return ptr; From 26d5eb3c25c3b89bfd62f81f6afea71b350adaf2 Mon Sep 17 00:00:00 2001 From: Juri Lelli Date: Thu, 14 Jul 2022 17:19:08 +0200 Subject: [PATCH 1674/1842] sched/deadline: Fix BUG_ON condition for deboosted tasks commit ddfc710395cccc61247348df9eb18ea50321cbed upstream. Tasks the are being deboosted from SCHED_DEADLINE might enter enqueue_task_dl() one last time and hit an erroneous BUG_ON condition: since they are not boosted anymore, the if (is_dl_boosted()) branch is not taken, but the else if (!dl_prio) is and inside this one we BUG_ON(!is_dl_boosted), which is of course false (BUG_ON triggered) otherwise we had entered the if branch above. Long story short, the current condition doesn't make sense and always leads to triggering of a BUG. Fix this by only checking enqueue flags, properly: ENQUEUE_REPLENISH has to be present, but additional flags are not a problem. 
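The difference between an exact-match test and a bit-presence test is easy to see in a few lines of C; the flag names echo the scheduler ones, but the numeric values and the program are only an illustration:

#include <stdio.h>

/* Values are arbitrary for this demo. */
#define ENQUEUE_REPLENISH       0x02
#define ENQUEUE_RESTORE         0x08    /* some unrelated flag that may also be set */

int main(void)
{
        int flags = ENQUEUE_REPLENISH | ENQUEUE_RESTORE;

        /* Exact-match test: fires as soon as any extra flag is present. */
        printf("flags != ENQUEUE_REPLENISH   -> %d\n", flags != ENQUEUE_REPLENISH);

        /* Bit test: only cares that REPLENISH itself is present. */
        printf("!(flags & ENQUEUE_REPLENISH) -> %d\n", !(flags & ENQUEUE_REPLENISH));
        return 0;
}

With both flags set the exact-match test reports a problem even though a replenish was requested, which is exactly the bogus condition the patch replaces with a plain bit check.
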
Fixes: 64be6f1f5f71 ("sched/deadline: Don't replenish from a !SCHED_DEADLINE entity") Signed-off-by: Juri Lelli Signed-off-by: Peter Zijlstra (Intel) Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20220714151908.533052-1-juri.lelli@redhat.com Signed-off-by: Greg Kroah-Hartman --- kernel/sched/deadline.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index a3ae00c348a8..933706106b98 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -1563,7 +1563,10 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags) * the throttle. */ p->dl.dl_throttled = 0; - BUG_ON(!is_dl_boosted(&p->dl) || flags != ENQUEUE_REPLENISH); + if (!(flags & ENQUEUE_REPLENISH)) + printk_deferred_once("sched: DL de-boosted task PID %d: REPLENISH flag missing\n", + task_pid_nr(p)); + return; } From cdcd20aa2cd44b83433aa6bf3f86cb48dc58a20a Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Thu, 14 Jul 2022 16:15:35 -0700 Subject: [PATCH 1675/1842] x86/bugs: Warn when "ibrs" mitigation is selected on Enhanced IBRS parts commit eb23b5ef9131e6d65011de349a4d25ef1b3d4314 upstream. IBRS mitigation for spectre_v2 forces write to MSR_IA32_SPEC_CTRL at every kernel entry/exit. On Enhanced IBRS parts setting MSR_IA32_SPEC_CTRL[IBRS] only once at boot is sufficient. MSR writes at every kernel entry/exit incur unnecessary performance loss. When Enhanced IBRS feature is present, print a warning about this unnecessary performance loss. Signed-off-by: Pawan Gupta Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Thadeu Lima de Souza Cascardo Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/2a5eaf54583c2bfe0edc4fea64006656256cca17.1657814857.git.pawan.kumar.gupta@linux.intel.com Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index bc6382f5ec27..c2f78f893d98 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -931,6 +931,7 @@ static inline const char *spectre_v2_module_string(void) { return ""; } #define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n" #define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n" #define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n" +#define SPECTRE_V2_IBRS_PERF_MSG "WARNING: IBRS mitigation selected on Enhanced IBRS CPU, this may cause unnecessary performance loss\n" #ifdef CONFIG_BPF_SYSCALL void unpriv_ebpf_notify(int new_state) @@ -1371,6 +1372,8 @@ static void __init spectre_v2_select_mitigation(void) case SPECTRE_V2_IBRS: setup_force_cpu_cap(X86_FEATURE_KERNEL_IBRS); + if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) + pr_warn(SPECTRE_V2_IBRS_PERF_MSG); break; case SPECTRE_V2_LFENCE: From 577b624689aa717704eefa858747cc40391d113f Mon Sep 17 00:00:00 2001 From: Alexander Aring Date: Wed, 6 Apr 2022 13:34:16 -0400 Subject: [PATCH 1676/1842] dlm: fix pending remove if msg allocation fails [ Upstream commit ba58995909b5098ca4003af65b0ccd5a8d13dd25 ] This patch unsets ls_remove_len and ls_remove_name if a message allocation of a remove messages fails. In this case we never send a remove message out but set the per ls ls_remove_len ls_remove_name variable for a pending remove. 
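The shape of the fix, jump to a common tail that always clears the pending state instead of returning early, can be sketched in plain C with hypothetical names (this is not the dlm code):

#include <stdio.h>
#include <string.h>

static int pending_len;
static char pending_name[64];

static int create_message(int fail) { return fail ? -1 : 0; }

static void send_remove(const char *name, int len, int fail_alloc)
{
        pending_len = len;
        memcpy(pending_name, name, len);

        if (create_message(fail_alloc))
                goto out;               /* allocation failed: skip sending ... */

        printf("sent remove for %.*s\n", len, name);
out:
        /* ... but always clear the pending state so waiters are not stuck */
        pending_len = 0;
        memset(pending_name, 0, sizeof(pending_name));
}

int main(void)
{
        send_remove("res1", 4, 1);      /* failure path */
        send_remove("res2", 4, 0);      /* success path */
        printf("pending_len=%d\n", pending_len);
        return 0;
}
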
Unset those variable should indicate possible waiters in wait_pending_remove() that no pending remove is going on at this moment. Cc: stable@vger.kernel.org Signed-off-by: Alexander Aring Signed-off-by: David Teigland Signed-off-by: Sasha Levin --- fs/dlm/lock.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c index 2ce96a9ce63c..eaa28d654e9f 100644 --- a/fs/dlm/lock.c +++ b/fs/dlm/lock.c @@ -4067,13 +4067,14 @@ static void send_repeat_remove(struct dlm_ls *ls, char *ms_name, int len) rv = _create_message(ls, sizeof(struct dlm_message) + len, dir_nodeid, DLM_MSG_REMOVE, &ms, &mh); if (rv) - return; + goto out; memcpy(ms->m_extra, name, len); ms->m_hash = hash; send_message(mh, ms); +out: spin_lock(&ls->ls_remove_spin); ls->ls_remove_len = 0; memset(ls->ls_remove_name, 0, DLM_RESNAME_MAXLEN); From f62ffdb5e2ee31e04c7b9819eb1e8ef63060eda8 Mon Sep 17 00:00:00 2001 From: Wang ShaoBo Date: Fri, 11 Sep 2020 09:44:14 +0800 Subject: [PATCH 1677/1842] drm/imx/dcss: fix unused but set variable warnings MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit [ Upstream commit 523be44c334bc4e4c014032738dc277b8909d009 ] Fix unused but set variable warning building with `make W=1`: drivers/gpu/drm/imx/dcss/dcss-plane.c:270:6: warning: variable ‘pixel_format’ set but not used [-Wunused-but-set-variable] u32 pixel_format; ^~~~~~~~~~~~ Fixes: 9021c317b770 ("drm/imx: Add initial support for DCSS on iMX8MQ") Reported-by: Hulk Robot Signed-off-by: Wang ShaoBo Reviewed-by: Laurentiu Palcu Signed-off-by: Lucas Stach Link: https://patchwork.freedesktop.org/patch/msgid/20200911014414.4663-1-bobo.shaobowang@huawei.com Signed-off-by: Sasha Levin --- drivers/gpu/drm/imx/dcss/dcss-plane.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/drivers/gpu/drm/imx/dcss/dcss-plane.c b/drivers/gpu/drm/imx/dcss/dcss-plane.c index f54087ac44d3..46a188dd02ad 100644 --- a/drivers/gpu/drm/imx/dcss/dcss-plane.c +++ b/drivers/gpu/drm/imx/dcss/dcss-plane.c @@ -268,7 +268,6 @@ static void dcss_plane_atomic_update(struct drm_plane *plane, struct dcss_plane *dcss_plane = to_dcss_plane(plane); struct dcss_dev *dcss = plane->dev->dev_private; struct drm_framebuffer *fb = state->fb; - u32 pixel_format; struct drm_crtc_state *crtc_state; bool modifiers_present; u32 src_w, src_h, dst_w, dst_h; @@ -279,7 +278,6 @@ static void dcss_plane_atomic_update(struct drm_plane *plane, if (!fb || !state->crtc || !state->visible) return; - pixel_format = state->fb->format->format; crtc_state = state->crtc->state; modifiers_present = !!(fb->flags & DRM_MODE_FB_MODIFIERS); From d1dc861cd68cc83de52b01bec91e7362f8cca1b1 Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Wed, 10 Nov 2021 11:01:03 +0100 Subject: [PATCH 1678/1842] bitfield.h: Fix "type of reg too small for mask" test [ Upstream commit bff8c3848e071d387d8b0784dc91fa49cd563774 ] The test: 'mask > (typeof(_reg))~0ull' only works correctly when both sides are unsigned, consider: - 0xff000000 vs (int)~0ull - 0x000000ff vs (int)~0ull Signed-off-by: Peter Zijlstra (Intel) Reviewed-by: Josh Poimboeuf Link: https://lore.kernel.org/r/20211110101324.950210584@infradead.org Signed-off-by: Sasha Levin --- include/linux/bitfield.h | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h index 4e035aca6f7e..6093fa6db260 100644 --- a/include/linux/bitfield.h +++ b/include/linux/bitfield.h @@ -41,6 +41,22 @@ #define __bf_shf(x) (__builtin_ffsll(x) 
- 1) +#define __scalar_type_to_unsigned_cases(type) \ + unsigned type: (unsigned type)0, \ + signed type: (unsigned type)0 + +#define __unsigned_scalar_typeof(x) typeof( \ + _Generic((x), \ + char: (unsigned char)0, \ + __scalar_type_to_unsigned_cases(char), \ + __scalar_type_to_unsigned_cases(short), \ + __scalar_type_to_unsigned_cases(int), \ + __scalar_type_to_unsigned_cases(long), \ + __scalar_type_to_unsigned_cases(long long), \ + default: (x))) + +#define __bf_cast_unsigned(type, x) ((__unsigned_scalar_typeof(type))(x)) + #define __BF_FIELD_CHECK(_mask, _reg, _val, _pfx) \ ({ \ BUILD_BUG_ON_MSG(!__builtin_constant_p(_mask), \ @@ -49,7 +65,8 @@ BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \ ~((_mask) >> __bf_shf(_mask)) & (_val) : 0, \ _pfx "value too large for the field"); \ - BUILD_BUG_ON_MSG((_mask) > (typeof(_reg))~0ull, \ + BUILD_BUG_ON_MSG(__bf_cast_unsigned(_mask, _mask) > \ + __bf_cast_unsigned(_reg, ~0ull), \ _pfx "type of reg too small for mask"); \ __BUILD_BUG_ON_NOT_POWER_OF_2((_mask) + \ (1ULL << __bf_shf(_mask))); \ From 4faf4bbc2d600a921052ff45b1b5914d583d9046 Mon Sep 17 00:00:00 2001 From: Takashi Iwai Date: Fri, 18 Dec 2020 15:56:24 +0100 Subject: [PATCH 1679/1842] ALSA: memalloc: Align buffer allocations in page size commit 5c1733e33c888a3cb7f576564d8ad543d5ad4a9e upstream. Currently the standard memory allocator (snd_dma_malloc_pages*()) passes the byte size to allocate as is. Most of the backends allocates real pages, hence the actual allocations are aligned in page size. However, the genalloc doesn't seem assuring the size alignment, hence it may result in the access outside the buffer when the whole memory pages are exposed via mmap. For avoiding such inconsistencies, this patch makes the allocation size always to be aligned in page size. Note that, after this change, snd_dma_buffer.bytes field contains the aligned size, not the originally requested size. This value is also used for releasing the pages in return. Reviewed-by: Lars-Peter Clausen Link: https://lore.kernel.org/r/20201218145625.2045-2-tiwai@suse.de Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman --- sound/core/memalloc.c | 1 + 1 file changed, 1 insertion(+) diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c index 0f335162f87c..966bef5acc75 100644 --- a/sound/core/memalloc.c +++ b/sound/core/memalloc.c @@ -133,6 +133,7 @@ int snd_dma_alloc_pages(int type, struct device *device, size_t size, if (WARN_ON(!dmab)) return -ENXIO; + size = PAGE_ALIGN(size); dmab->dev.type = type; dmab->dev.dev = device; dmab->bytes = 0; From c87b2bc9d74ab550fbc7dc6e5b2bd22aa9da40eb Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Fri, 3 Sep 2021 15:27:29 -0700 Subject: [PATCH 1680/1842] Bluetooth: Add bt_skb_sendmsg helper commit 38f64f650dc0e44c146ff88d15a7339efa325918 upstream. bt_skb_sendmsg helps takes care of allocation the skb and copying the the contents of msg over to the skb while checking for possible errors so it should be safe to call it without holding lock_sock. 
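The helper follows the usual pointer-or-encoded-errno convention; a compact userspace sketch of that convention, with ERR_PTR()/IS_ERR()/PTR_ERR() re-implemented only for the demo:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

/* Userspace stand-ins for the kernel's ERR_PTR()/PTR_ERR()/IS_ERR(). */
#define ERR_PTR(err)    ((void *)(long)(err))
#define PTR_ERR(ptr)    ((long)(ptr))
#define IS_ERR(ptr)     ((unsigned long)(ptr) >= (unsigned long)-4095)

static void *build_buf(const char *msg, size_t len, size_t mtu)
{
        char *buf;

        if (len > mtu)
                len = mtu;                      /* clamp one chunk to the MTU */
        buf = malloc(len + 1);
        if (!buf)
                return ERR_PTR(-ENOMEM);        /* encode the error in the pointer */
        memcpy(buf, msg, len);
        buf[len] = '\0';
        return buf;
}

int main(void)
{
        void *buf = build_buf("hello world", 11, 5);

        if (IS_ERR(buf))
                return (int)-PTR_ERR(buf);
        printf("chunk: %s\n", (char *)buf);     /* prints "hello" */
        free(buf);
        return 0;
}
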
Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann Cc: Harshit Mogalapalli Signed-off-by: Greg Kroah-Hartman --- include/net/bluetooth/bluetooth.h | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h index 3fecc4a411a1..83eaa42896bb 100644 --- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -422,6 +422,34 @@ static inline struct sk_buff *bt_skb_send_alloc(struct sock *sk, return NULL; } +/* Shall not be called with lock_sock held */ +static inline struct sk_buff *bt_skb_sendmsg(struct sock *sk, + struct msghdr *msg, + size_t len, size_t mtu, + size_t headroom, size_t tailroom) +{ + struct sk_buff *skb; + size_t size = min_t(size_t, len, mtu); + int err; + + skb = bt_skb_send_alloc(sk, size + headroom + tailroom, + msg->msg_flags & MSG_DONTWAIT, &err); + if (!skb) + return ERR_PTR(err); + + skb_reserve(skb, headroom); + skb_tailroom_reserve(skb, mtu, tailroom); + + if (!copy_from_iter_full(skb_put(skb, size), size, &msg->msg_iter)) { + kfree_skb(skb); + return ERR_PTR(-EFAULT); + } + + skb->priority = sk->sk_priority; + + return skb; +} + int bt_to_errno(u16 code); void hci_sock_set_flag(struct sock *sk, int nr); From 3cce0e771fb5db6494136d83a8e0790865d370be Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Fri, 3 Sep 2021 15:27:30 -0700 Subject: [PATCH 1681/1842] Bluetooth: Add bt_skb_sendmmsg helper commit 97e4e80299844bb5f6ce5a7540742ffbffae3d97 upstream. This works similarly to bt_skb_sendmsg but can split the msg into multiple skb fragments which is useful for stream sockets. Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann Cc: Harshit Mogalapalli Signed-off-by: Greg Kroah-Hartman --- include/net/bluetooth/bluetooth.h | 38 +++++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h index 83eaa42896bb..3275d5737285 100644 --- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -450,6 +450,44 @@ static inline struct sk_buff *bt_skb_sendmsg(struct sock *sk, return skb; } +/* Similar to bt_skb_sendmsg but can split the msg into multiple fragments + * accourding to the MTU. + */ +static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk, + struct msghdr *msg, + size_t len, size_t mtu, + size_t headroom, size_t tailroom) +{ + struct sk_buff *skb, **frag; + + skb = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); + if (IS_ERR_OR_NULL(skb)) + return skb; + + len -= skb->len; + if (!len) + return skb; + + /* Add remaining data over MTU as continuation fragments */ + frag = &skb_shinfo(skb)->frag_list; + while (len) { + struct sk_buff *tmp; + + tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); + if (IS_ERR_OR_NULL(tmp)) { + kfree_skb(skb); + return tmp; + } + + len -= tmp->len; + + *frag = tmp; + frag = &(*frag)->next; + } + + return skb; +} + int bt_to_errno(u16 code); void hci_sock_set_flag(struct sock *sk, int nr); From 341a029cf39cb8492b156535f6b6cbd369fabd14 Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Fri, 3 Sep 2021 15:27:31 -0700 Subject: [PATCH 1682/1842] Bluetooth: SCO: Replace use of memcpy_from_msg with bt_skb_sendmsg commit 0771cbb3b97d3c1d68eecd7f00055f599954c34e upstream. This makes use of bt_skb_sendmsg instead of allocating a different buffer to be used with memcpy_from_msg which cause one extra copy. 
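The saving is simply one copy: fill the final buffer directly instead of staging the data in a temporary first. A minimal, purely illustrative C sketch of the two shapes:

#include <stdio.h>
#include <string.h>

#define FRAME_LEN 16

/* Two copies: user data -> temporary buffer -> frame. */
static void send_copy_twice(char frame[FRAME_LEN], const char *msg, size_t len)
{
        char tmp[FRAME_LEN];

        memcpy(tmp, msg, len);          /* first copy */
        memcpy(frame, tmp, len);        /* second copy */
}

/* One copy: user data goes straight into the frame. */
static void send_copy_once(char frame[FRAME_LEN], const char *msg, size_t len)
{
        memcpy(frame, msg, len);
}

int main(void)
{
        char a[FRAME_LEN] = { 0 }, b[FRAME_LEN] = { 0 };

        send_copy_twice(a, "data", 4);
        send_copy_once(b, "data", 4);
        printf("%s %s\n", a, b);
        return 0;
}
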
Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann Cc: Harshit Mogalapalli Signed-off-by: Greg Kroah-Hartman --- net/bluetooth/sco.c | 34 +++++++++++----------------------- 1 file changed, 11 insertions(+), 23 deletions(-) diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c index df254c7de2dd..2702a0240e84 100644 --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -280,27 +280,19 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk) return err; } -static int sco_send_frame(struct sock *sk, void *buf, int len, - unsigned int msg_flags) +static int sco_send_frame(struct sock *sk, struct sk_buff *skb) { struct sco_conn *conn = sco_pi(sk)->conn; - struct sk_buff *skb; - int err; /* Check outgoing MTU */ - if (len > conn->mtu) + if (skb->len > conn->mtu) return -EINVAL; - BT_DBG("sk %p len %d", sk, len); + BT_DBG("sk %p len %d", sk, skb->len); - skb = bt_skb_send_alloc(sk, len, msg_flags & MSG_DONTWAIT, &err); - if (!skb) - return err; - - memcpy(skb_put(skb, len), buf, len); hci_send_sco(conn->hcon, skb); - return len; + return skb->len; } static void sco_recv_frame(struct sco_conn *conn, struct sk_buff *skb) @@ -727,7 +719,7 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg, size_t len) { struct sock *sk = sock->sk; - void *buf; + struct sk_buff *skb; int err; BT_DBG("sock %p, sk %p", sock, sk); @@ -739,24 +731,20 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg, if (msg->msg_flags & MSG_OOB) return -EOPNOTSUPP; - buf = kmalloc(len, GFP_KERNEL); - if (!buf) - return -ENOMEM; - - if (memcpy_from_msg(buf, msg, len)) { - kfree(buf); - return -EFAULT; - } + skb = bt_skb_sendmsg(sk, msg, len, len, 0, 0); + if (IS_ERR_OR_NULL(skb)) + return PTR_ERR(skb); lock_sock(sk); if (sk->sk_state == BT_CONNECTED) - err = sco_send_frame(sk, buf, len, msg->msg_flags); + err = sco_send_frame(sk, skb); else err = -ENOTCONN; release_sock(sk); - kfree(buf); + if (err) + kfree_skb(skb); return err; } From 5283591c84fae76c3dd7f5a9a2f4db95eae24569 Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Fri, 3 Sep 2021 15:27:32 -0700 Subject: [PATCH 1683/1842] Bluetooth: RFCOMM: Replace use of memcpy_from_msg with bt_skb_sendmmsg commit 81be03e026dc0c16dc1c64e088b2a53b73caa895 upstream. This makes use of bt_skb_sendmmsg instead using memcpy_from_msg which is not considered safe to be used when lock_sock is held. Also make rfcomm_dlc_send handle skb with fragments and queue them all atomically. 
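The splitting itself is a plain chunk-by-MTU loop, much like the frag_list loop in bt_skb_sendmmsg but without skbs; a small userspace sketch:

#include <stdio.h>
#include <string.h>

/* Split a message into MTU-sized chunks, one "fragment" per iteration. */
static void send_in_fragments(const char *msg, size_t len, size_t mtu)
{
        size_t off = 0;

        while (off < len) {
                size_t chunk = len - off;

                if (chunk > mtu)
                        chunk = mtu;
                printf("fragment: %.*s\n", (int)chunk, msg + off);
                off += chunk;
        }
}

int main(void)
{
        const char *m = "a stream that is longer than one MTU";

        send_in_fragments(m, strlen(m), 10);
        return 0;
}
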
Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann Cc: Harshit Mogalapalli Signed-off-by: Greg Kroah-Hartman --- net/bluetooth/rfcomm/core.c | 52 +++++++++++++++++++++++++++++++------ net/bluetooth/rfcomm/sock.c | 50 +++++++++-------------------------- 2 files changed, 56 insertions(+), 46 deletions(-) diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c index f2bacb464ccf..7324764384b6 100644 --- a/net/bluetooth/rfcomm/core.c +++ b/net/bluetooth/rfcomm/core.c @@ -549,22 +549,58 @@ struct rfcomm_dlc *rfcomm_dlc_exists(bdaddr_t *src, bdaddr_t *dst, u8 channel) return dlc; } -int rfcomm_dlc_send(struct rfcomm_dlc *d, struct sk_buff *skb) +static int rfcomm_dlc_send_frag(struct rfcomm_dlc *d, struct sk_buff *frag) { - int len = skb->len; - - if (d->state != BT_CONNECTED) - return -ENOTCONN; + int len = frag->len; BT_DBG("dlc %p mtu %d len %d", d, d->mtu, len); if (len > d->mtu) return -EINVAL; - rfcomm_make_uih(skb, d->addr); - skb_queue_tail(&d->tx_queue, skb); + rfcomm_make_uih(frag, d->addr); + __skb_queue_tail(&d->tx_queue, frag); - if (!test_bit(RFCOMM_TX_THROTTLED, &d->flags)) + return len; +} + +int rfcomm_dlc_send(struct rfcomm_dlc *d, struct sk_buff *skb) +{ + unsigned long flags; + struct sk_buff *frag, *next; + int len; + + if (d->state != BT_CONNECTED) + return -ENOTCONN; + + frag = skb_shinfo(skb)->frag_list; + skb_shinfo(skb)->frag_list = NULL; + + /* Queue all fragments atomically. */ + spin_lock_irqsave(&d->tx_queue.lock, flags); + + len = rfcomm_dlc_send_frag(d, skb); + if (len < 0 || !frag) + goto unlock; + + for (; frag; frag = next) { + int ret; + + next = frag->next; + + ret = rfcomm_dlc_send_frag(d, frag); + if (ret < 0) { + kfree_skb(frag); + goto unlock; + } + + len += ret; + } + +unlock: + spin_unlock_irqrestore(&d->tx_queue.lock, flags); + + if (len > 0 && !test_bit(RFCOMM_TX_THROTTLED, &d->flags)) rfcomm_schedule(); return len; } diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c index ae6f80730561..97f10f05ae19 100644 --- a/net/bluetooth/rfcomm/sock.c +++ b/net/bluetooth/rfcomm/sock.c @@ -575,47 +575,21 @@ static int rfcomm_sock_sendmsg(struct socket *sock, struct msghdr *msg, lock_sock(sk); sent = bt_sock_wait_ready(sk, msg->msg_flags); - if (sent) - goto done; - while (len) { - size_t size = min_t(size_t, len, d->mtu); - int err; - - skb = sock_alloc_send_skb(sk, size + RFCOMM_SKB_RESERVE, - msg->msg_flags & MSG_DONTWAIT, &err); - if (!skb) { - if (sent == 0) - sent = err; - break; - } - skb_reserve(skb, RFCOMM_SKB_HEAD_RESERVE); - - err = memcpy_from_msg(skb_put(skb, size), msg, size); - if (err) { - kfree_skb(skb); - if (sent == 0) - sent = err; - break; - } - - skb->priority = sk->sk_priority; - - err = rfcomm_dlc_send(d, skb); - if (err < 0) { - kfree_skb(skb); - if (sent == 0) - sent = err; - break; - } - - sent += size; - len -= size; - } - -done: release_sock(sk); + if (sent) + return sent; + + skb = bt_skb_sendmmsg(sk, msg, len, d->mtu, RFCOMM_SKB_HEAD_RESERVE, + RFCOMM_SKB_TAIL_RESERVE); + if (IS_ERR_OR_NULL(skb)) + return PTR_ERR(skb); + + sent = rfcomm_dlc_send(d, skb); + if (sent < 0) + kfree_skb(skb); + return sent; } From a8feae8bd22757637921dfbfb74731dac31da461 Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Thu, 16 Sep 2021 13:10:48 -0700 Subject: [PATCH 1684/1842] Bluetooth: Fix passing NULL to PTR_ERR commit 266191aa8d14b84958aaeb5e96ee4e97839e3d87 upstream. 
Passing NULL to PTR_ERR will result in 0 (success), also since the likes of bt_skb_sendmsg does never return NULL it is safe to replace the instances of IS_ERR_OR_NULL with IS_ERR when checking its return. Reported-by: Dan Carpenter Tested-by: Tedd Ho-Jeong An Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann Cc: Harshit Mogalapalli Signed-off-by: Greg Kroah-Hartman --- include/net/bluetooth/bluetooth.h | 2 +- net/bluetooth/rfcomm/sock.c | 2 +- net/bluetooth/sco.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h index 3275d5737285..b85e6d9ba39f 100644 --- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -474,7 +474,7 @@ static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk, struct sk_buff *tmp; tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); - if (IS_ERR_OR_NULL(tmp)) { + if (IS_ERR(tmp)) { kfree_skb(skb); return tmp; } diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c index 97f10f05ae19..4cf1fa9900ca 100644 --- a/net/bluetooth/rfcomm/sock.c +++ b/net/bluetooth/rfcomm/sock.c @@ -583,7 +583,7 @@ static int rfcomm_sock_sendmsg(struct socket *sock, struct msghdr *msg, skb = bt_skb_sendmmsg(sk, msg, len, d->mtu, RFCOMM_SKB_HEAD_RESERVE, RFCOMM_SKB_TAIL_RESERVE); - if (IS_ERR_OR_NULL(skb)) + if (IS_ERR(skb)) return PTR_ERR(skb); sent = rfcomm_dlc_send(d, skb); diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c index 2702a0240e84..f63d50b4b00b 100644 --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -732,7 +732,7 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg, return -EOPNOTSUPP; skb = bt_skb_sendmsg(sk, msg, len, len, 0, 0); - if (IS_ERR_OR_NULL(skb)) + if (IS_ERR(skb)) return PTR_ERR(skb); lock_sock(sk); From f44e65e6f0ee38b124ff4becdce6f8f566f1d1c5 Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Thu, 16 Sep 2021 13:10:49 -0700 Subject: [PATCH 1685/1842] Bluetooth: SCO: Fix sco_send_frame returning skb->len commit 037ce005af6b8a3e40ee07c6e9266c8997e6a4d6 upstream. The skb in modified by hci_send_sco which pushes SCO headers thus changing skb->len causing sco_sock_sendmsg to fail. 
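The general pattern is to capture the length before handing the buffer to a callee that may rewrite or free it; a short C sketch with a hypothetical frame structure:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct frame {
        size_t len;
        char data[32];
};

/* The callee takes ownership and, like hci_send_sco(), may change len. */
static void transmit(struct frame *f)
{
        f->len += 3;            /* pretend a header was pushed */
        free(f);                /* ... and the frame is consumed */
}

static int send_frame(struct frame *f)
{
        size_t len = f->len;    /* capture before the handoff */

        transmit(f);
        return (int)len;        /* report what the caller asked to send */
}

int main(void)
{
        struct frame *f = malloc(sizeof(*f));

        if (!f)
                return 1;
        f->len = 5;
        memcpy(f->data, "hello", 5);
        printf("sent %d bytes\n", send_frame(f));
        return 0;
}
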
Fixes: 0771cbb3b97d ("Bluetooth: SCO: Replace use of memcpy_from_msg with bt_skb_sendmsg") Tested-by: Tedd Ho-Jeong An Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Marcel Holtmann Cc: Harshit Mogalapalli Signed-off-by: Greg Kroah-Hartman --- net/bluetooth/sco.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c index f63d50b4b00b..8244d3ae185b 100644 --- a/net/bluetooth/sco.c +++ b/net/bluetooth/sco.c @@ -283,16 +283,17 @@ static int sco_connect(struct hci_dev *hdev, struct sock *sk) static int sco_send_frame(struct sock *sk, struct sk_buff *skb) { struct sco_conn *conn = sco_pi(sk)->conn; + int len = skb->len; /* Check outgoing MTU */ - if (skb->len > conn->mtu) + if (len > conn->mtu) return -EINVAL; - BT_DBG("sk %p len %d", sk, skb->len); + BT_DBG("sk %p len %d", sk, len); hci_send_sco(conn->hcon, skb); - return skb->len; + return len; } static void sco_recv_frame(struct sco_conn *conn, struct sk_buff *skb) @@ -743,7 +744,8 @@ static int sco_sock_sendmsg(struct socket *sock, struct msghdr *msg, err = -ENOTCONN; release_sock(sk); - if (err) + + if (err < 0) kfree_skb(skb); return err; } From 14fc9233aa73e896d21a4998cdd453349a4baa8b Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Mon, 14 Feb 2022 17:59:38 -0800 Subject: [PATCH 1686/1842] Bluetooth: Fix bt_skb_sendmmsg not allocating partial chunks commit 29fb608396d6a62c1b85acc421ad7a4399085b9f upstream. Since bt_skb_sendmmsg can be used with the likes of SOCK_STREAM it shall return the partial chunks it could allocate instead of freeing everything as otherwise it can cause problems like bellow. Fixes: 81be03e026dc ("Bluetooth: RFCOMM: Replace use of memcpy_from_msg with bt_skb_sendmmsg") Reported-by: Paul Menzel Link: https://lore.kernel.org/r/d7206e12-1b99-c3be-84f4-df22af427ef5@molgen.mpg.de BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215594 Signed-off-by: Luiz Augusto von Dentz Tested-by: Paul Menzel (Nokia N9 (MeeGo/Harmattan) Signed-off-by: Marcel Holtmann Cc: Harshit Mogalapalli Signed-off-by: Greg Kroah-Hartman --- include/net/bluetooth/bluetooth.h | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h index b85e6d9ba39f..355835639ae5 100644 --- a/include/net/bluetooth/bluetooth.h +++ b/include/net/bluetooth/bluetooth.h @@ -475,8 +475,7 @@ static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk, tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); if (IS_ERR(tmp)) { - kfree_skb(skb); - return tmp; + return skb; } len -= tmp->len; From b7b9e5cc8b24d6c800b2b7f8bba44c5914bd6c8f Mon Sep 17 00:00:00 2001 From: Peter Zijlstra Date: Mon, 18 Jul 2022 13:41:37 +0200 Subject: [PATCH 1687/1842] x86/amd: Use IBPB for firmware calls commit 28a99e95f55c61855983d36a88c05c178d966bb7 upstream. On AMD IBRS does not prevent Retbleed; as such use IBPB before a firmware call to flush the branch history state. And because in order to do an EFI call, the kernel maps a whole lot of the kernel page table into the EFI page table, do an IBPB just in case in order to prevent the scenario of poisoning the BTB and causing an EFI call using the unprotected RET there. [ bp: Massage. 
] Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Borislav Petkov Link: https://lore.kernel.org/r/20220715194550.793957-1-cascardo@canonical.com Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/nospec-branch.h | 2 ++ arch/x86/kernel/cpu/bugs.c | 11 ++++++++++- 3 files changed, 13 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index e077f4fe7c6d..2a51ee2f5a0f 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -298,6 +298,7 @@ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */ #define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */ +#define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 6f2adaf53f46..c3e8e50633ea 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -298,6 +298,8 @@ do { \ alternative_msr_write(MSR_IA32_SPEC_CTRL, \ spec_ctrl_current() | SPEC_CTRL_IBRS, \ X86_FEATURE_USE_IBRS_FW); \ + alternative_msr_write(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, \ + X86_FEATURE_USE_IBPB_FW); \ } while (0) #define firmware_restrict_branch_speculation_end() \ diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index c2f78f893d98..7896b67dda42 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1475,7 +1475,16 @@ static void __init spectre_v2_select_mitigation(void) * the CPU supports Enhanced IBRS, kernel might un-intentionally not * enable IBRS around firmware calls. */ - if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) { + if (boot_cpu_has_bug(X86_BUG_RETBLEED) && + (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) { + + if (retbleed_cmd != RETBLEED_CMD_IBPB) { + setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW); + pr_info("Enabling Speculation Barrier for firmware calls\n"); + } + + } else if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) { setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW); pr_info("Enabling Restricted Speculation for firmware calls\n"); } From c0a3a9eb262a5e86a36fd5fc4f9fc38470713f13 Mon Sep 17 00:00:00 2001 From: Kees Cook Date: Wed, 13 Jul 2022 14:38:19 -0700 Subject: [PATCH 1688/1842] x86/alternative: Report missing return thunk details commit 65cdf0d623bedf0e069bb64ed52e8bb20105e2ba upstream. Debugging missing return thunks is easier if we can see where they're happening. 
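The added WARN_ONCE() reports both the patch site and the first bytes found there; the same style of diagnostic can be sketched in userspace with a tiny hex-dump helper (no kernel %pS/%*ph formatting, just the idea):

#include <stdio.h>

static void report_site(const char *what, const void *addr, int n)
{
        const unsigned char *p = addr;

        printf("%s at %p:", what, addr);
        for (int i = 0; i < n; i++)
                printf(" %02x", p[i]);
        printf("\n");
}

int main(void)
{
        static const unsigned char insn[5] = { 0xe9, 0x01, 0x02, 0x03, 0x04 };

        /* Dump the first 5 bytes at the "patch site". */
        report_site("missing return thunk", insn, 5);
        return 0;
}
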
Suggested-by: Peter Zijlstra Signed-off-by: Kees Cook Signed-off-by: Peter Zijlstra (Intel) Link: https://lore.kernel.org/lkml/Ys66hwtFcGbYmoiZ@hirez.programming.kicks-ass.net/ Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/alternative.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 77eefa0b0f32..a85fb17f1180 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -709,7 +709,9 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end) dest = addr + insn.length + insn.immediate.value; if (__static_call_fixup(addr, op, dest) || - WARN_ON_ONCE(dest != &__x86_return_thunk)) + WARN_ONCE(dest != &__x86_return_thunk, + "missing return thunk: %pS-%pS: %*ph", + addr, dest, 5, addr)) continue; DPRINTK("return thunk at: %pS (%px) len: %d to: %pS", From 0adf21eec59040b31af113e626efd85eb153c728 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Tue, 19 Jul 2022 11:09:01 -0700 Subject: [PATCH 1689/1842] watchqueue: make sure to serialize 'wqueue->defunct' properly commit 353f7988dd8413c47718f7ca79c030b6fb62cfe5 upstream. When the pipe is closed, we mark the associated watchqueue defunct by calling watch_queue_clear(). However, while that is protected by the watchqueue lock, new watchqueue entries aren't actually added under that lock at all: they use the pipe->rd_wait.lock instead, and looking up that pipe happens without any locking. The watchqueue code uses the RCU read-side section to make sure that the wqueue entry itself hasn't disappeared, but that does not protect the pipe_info in any way. So make sure to actually hold the wqueue lock when posting watch events, properly serializing against the pipe being torn down. Reported-by: Noam Rathaus Cc: Greg KH Cc: David Howells Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- kernel/watch_queue.c | 53 +++++++++++++++++++++++++++++++------------- 1 file changed, 37 insertions(+), 16 deletions(-) diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c index 249ed3259144..dcf1e676797d 100644 --- a/kernel/watch_queue.c +++ b/kernel/watch_queue.c @@ -34,6 +34,27 @@ MODULE_LICENSE("GPL"); #define WATCH_QUEUE_NOTE_SIZE 128 #define WATCH_QUEUE_NOTES_PER_PAGE (PAGE_SIZE / WATCH_QUEUE_NOTE_SIZE) +/* + * This must be called under the RCU read-lock, which makes + * sure that the wqueue still exists. It can then take the lock, + * and check that the wqueue hasn't been destroyed, which in + * turn makes sure that the notification pipe still exists. + */ +static inline bool lock_wqueue(struct watch_queue *wqueue) +{ + spin_lock_bh(&wqueue->lock); + if (unlikely(wqueue->defunct)) { + spin_unlock_bh(&wqueue->lock); + return false; + } + return true; +} + +static inline void unlock_wqueue(struct watch_queue *wqueue) +{ + spin_unlock_bh(&wqueue->lock); +} + static void watch_queue_pipe_buf_release(struct pipe_inode_info *pipe, struct pipe_buffer *buf) { @@ -69,6 +90,10 @@ static const struct pipe_buf_operations watch_queue_pipe_buf_ops = { /* * Post a notification to a watch queue. + * + * Must be called with the RCU lock for reading, and the + * watch_queue lock held, which guarantees that the pipe + * hasn't been released. 
*/ static bool post_one_notification(struct watch_queue *wqueue, struct watch_notification *n) @@ -85,9 +110,6 @@ static bool post_one_notification(struct watch_queue *wqueue, spin_lock_irq(&pipe->rd_wait.lock); - if (wqueue->defunct) - goto out; - mask = pipe->ring_size - 1; head = pipe->head; tail = pipe->tail; @@ -203,7 +225,10 @@ void __post_watch_notification(struct watch_list *wlist, if (security_post_notification(watch->cred, cred, n) < 0) continue; - post_one_notification(wqueue, n); + if (lock_wqueue(wqueue)) { + post_one_notification(wqueue, n); + unlock_wqueue(wqueue);; + } } rcu_read_unlock(); @@ -465,11 +490,12 @@ int add_watch_to_object(struct watch *watch, struct watch_list *wlist) return -EAGAIN; } - spin_lock_bh(&wqueue->lock); - kref_get(&wqueue->usage); - kref_get(&watch->usage); - hlist_add_head(&watch->queue_node, &wqueue->watches); - spin_unlock_bh(&wqueue->lock); + if (lock_wqueue(wqueue)) { + kref_get(&wqueue->usage); + kref_get(&watch->usage); + hlist_add_head(&watch->queue_node, &wqueue->watches); + unlock_wqueue(wqueue); + } hlist_add_head(&watch->list_node, &wlist->watchers); return 0; @@ -523,20 +549,15 @@ int remove_watch_from_object(struct watch_list *wlist, struct watch_queue *wq, wqueue = rcu_dereference(watch->queue); - /* We don't need the watch list lock for the next bit as RCU is - * protecting *wqueue from deallocation. - */ - if (wqueue) { + if (lock_wqueue(wqueue)) { post_one_notification(wqueue, &n.watch); - spin_lock_bh(&wqueue->lock); - if (!hlist_unhashed(&watch->queue_node)) { hlist_del_init_rcu(&watch->queue_node); put_watch(watch); } - spin_unlock_bh(&wqueue->lock); + unlock_wqueue(wqueue); } if (wlist->release_watch) { From 6a81848252869d929354a879e08807c932444929 Mon Sep 17 00:00:00 2001 From: Jiri Slaby Date: Mon, 22 Nov 2021 12:16:46 +0100 Subject: [PATCH 1690/1842] tty: drivers/tty/, stop using tty_schedule_flip() commit 5f6a85158ccacc3f09744b3aafe8b11ab3b6c6f6 upstream. Since commit a9c3f68f3cd8d (tty: Fix low_latency BUG) in 2014, tty_flip_buffer_push() is only a wrapper to tty_schedule_flip(). We are going to remove the latter (as it is used less), so call the former in drivers/tty/. 
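tty_flip_buffer_push() is essentially publish-then-kick: the producer commits the used count with a release store and the flush worker reads it with an acquire load. A userspace analogy using C11 atomics; the thread and variable names are illustrative:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static char buf[64];
static atomic_size_t committed;         /* how many bytes the consumer may read */

static void *flush_worker(void *arg)
{
        size_t n;

        /* Acquire pairs with the producer's release: once we observe the
         * count, we also observe the bytes written before it. */
        while ((n = atomic_load_explicit(&committed, memory_order_acquire)) == 0)
                ;
        printf("flushed %zu bytes: %.*s\n", n, (int)n, buf);
        return NULL;
}

int main(void)
{
        pthread_t worker;

        pthread_create(&worker, NULL, flush_worker, NULL);

        /* Producer: fill the buffer, then publish with a release store. */
        buf[0] = 'h';
        buf[1] = 'i';
        atomic_store_explicit(&committed, 2, memory_order_release);

        pthread_join(worker, NULL);
        return 0;
}

Compile with -pthread; the release/acquire pairing is what guarantees the worker sees the buffered bytes, mirroring the smp_store_release() done by tty_flip_buffer_push() and the acquire read in flush_to_ldisc().
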
Cc: Vladimir Zapolskiy Reviewed-by: Johan Hovold Signed-off-by: Jiri Slaby Link: https://lore.kernel.org/r/20211122111648.30379-2-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman --- drivers/tty/cyclades.c | 6 +++--- drivers/tty/goldfish.c | 2 +- drivers/tty/moxa.c | 4 ++-- drivers/tty/serial/lpc32xx_hs.c | 2 +- drivers/tty/vt/keyboard.c | 6 +++--- drivers/tty/vt/vt.c | 2 +- 6 files changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/tty/cyclades.c b/drivers/tty/cyclades.c index 097266342e5e..4bcaab250676 100644 --- a/drivers/tty/cyclades.c +++ b/drivers/tty/cyclades.c @@ -556,7 +556,7 @@ static void cyy_chip_rx(struct cyclades_card *cinfo, int chip, } info->idle_stats.recv_idle = jiffies; } - tty_schedule_flip(port); + tty_flip_buffer_push(port); /* end of service */ cyy_writeb(info, CyRIR, save_xir & 0x3f); @@ -996,7 +996,7 @@ static void cyz_handle_rx(struct cyclades_port *info) mod_timer(&info->rx_full_timer, jiffies + 1); #endif info->idle_stats.recv_idle = jiffies; - tty_schedule_flip(&info->port); + tty_flip_buffer_push(&info->port); /* Update rx_get */ cy_writel(&buf_ctrl->rx_get, new_rx_get); @@ -1172,7 +1172,7 @@ static void cyz_handle_cmd(struct cyclades_card *cinfo) if (delta_count) wake_up_interruptible(&info->port.delta_msr_wait); if (special_count) - tty_schedule_flip(&info->port); + tty_flip_buffer_push(&info->port); } } diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c index 9180ca5e4dcd..d6e82eb61fc2 100644 --- a/drivers/tty/goldfish.c +++ b/drivers/tty/goldfish.c @@ -151,7 +151,7 @@ static irqreturn_t goldfish_tty_interrupt(int irq, void *dev_id) address = (unsigned long)(void *)buf; goldfish_tty_rw(qtty, address, count, 0); - tty_schedule_flip(&qtty->port); + tty_flip_buffer_push(&qtty->port); return IRQ_HANDLED; } diff --git a/drivers/tty/moxa.c b/drivers/tty/moxa.c index f9f14104bd2c..d6528f3bb9b9 100644 --- a/drivers/tty/moxa.c +++ b/drivers/tty/moxa.c @@ -1385,7 +1385,7 @@ static int moxa_poll_port(struct moxa_port *p, unsigned int handle, if (inited && !tty_throttled(tty) && MoxaPortRxQueue(p) > 0) { /* RX */ MoxaPortReadData(p); - tty_schedule_flip(&p->port); + tty_flip_buffer_push(&p->port); } } else { clear_bit(EMPTYWAIT, &p->statusflags); @@ -1410,7 +1410,7 @@ static int moxa_poll_port(struct moxa_port *p, unsigned int handle, if (tty && (intr & IntrBreak) && !I_IGNBRK(tty)) { /* BREAK */ tty_insert_flip_char(&p->port, 0, TTY_BREAK); - tty_schedule_flip(&p->port); + tty_flip_buffer_push(&p->port); } if (intr & IntrLine) diff --git a/drivers/tty/serial/lpc32xx_hs.c b/drivers/tty/serial/lpc32xx_hs.c index b5898c932036..a9802308ff60 100644 --- a/drivers/tty/serial/lpc32xx_hs.c +++ b/drivers/tty/serial/lpc32xx_hs.c @@ -344,7 +344,7 @@ static irqreturn_t serial_lpc32xx_interrupt(int irq, void *dev_id) LPC32XX_HSUART_IIR(port->membase)); port->icount.overrun++; tty_insert_flip_char(tport, 0, TTY_OVERRUN); - tty_schedule_flip(tport); + tty_flip_buffer_push(tport); } /* Data received? 
*/ diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c index 78acc270e39a..aa0026a9839c 100644 --- a/drivers/tty/vt/keyboard.c +++ b/drivers/tty/vt/keyboard.c @@ -311,7 +311,7 @@ int kbd_rate(struct kbd_repeat *rpt) static void put_queue(struct vc_data *vc, int ch) { tty_insert_flip_char(&vc->port, ch, 0); - tty_schedule_flip(&vc->port); + tty_flip_buffer_push(&vc->port); } static void puts_queue(struct vc_data *vc, char *cp) @@ -320,7 +320,7 @@ static void puts_queue(struct vc_data *vc, char *cp) tty_insert_flip_char(&vc->port, *cp, 0); cp++; } - tty_schedule_flip(&vc->port); + tty_flip_buffer_push(&vc->port); } static void applkey(struct vc_data *vc, int key, char mode) @@ -565,7 +565,7 @@ static void fn_inc_console(struct vc_data *vc) static void fn_send_intr(struct vc_data *vc) { tty_insert_flip_char(&vc->port, 0, TTY_BREAK); - tty_schedule_flip(&vc->port); + tty_flip_buffer_push(&vc->port); } static void fn_scroll_forw(struct vc_data *vc) diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c index f043fd7e0f92..2ebe73b116dc 100644 --- a/drivers/tty/vt/vt.c +++ b/drivers/tty/vt/vt.c @@ -1834,7 +1834,7 @@ static void csi_m(struct vc_data *vc) static void respond_string(const char *p, size_t len, struct tty_port *port) { tty_insert_flip_string(port, p, len); - tty_schedule_flip(port); + tty_flip_buffer_push(port); } static void cursor_report(struct vc_data *vc, struct tty_struct *tty) From 4d374625cca21ce4f9cdd58170d070b400910ae2 Mon Sep 17 00:00:00 2001 From: Jiri Slaby Date: Mon, 22 Nov 2021 12:16:47 +0100 Subject: [PATCH 1691/1842] tty: the rest, stop using tty_schedule_flip() commit b68b914494df4f79b4e9b58953110574af1cb7a2 upstream. Since commit a9c3f68f3cd8d (tty: Fix low_latency BUG) in 2014, tty_flip_buffer_push() is only a wrapper to tty_schedule_flip(). We are going to remove the latter (as it is used less), so call the former in the rest of the users. 
Cc: Richard Henderson Cc: Ivan Kokshaysky Cc: Matt Turner Cc: William Hubbs Cc: Chris Brannon Cc: Kirk Reiser Cc: Samuel Thibault Cc: Heiko Carstens Cc: Vasily Gorbik Cc: Christian Borntraeger Cc: Alexander Gordeev Reviewed-by: Johan Hovold Signed-off-by: Jiri Slaby Link: https://lore.kernel.org/r/20211122111648.30379-3-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman --- arch/alpha/kernel/srmcons.c | 2 +- drivers/accessibility/speakup/spk_ttyio.c | 4 ++-- drivers/s390/char/keyboard.h | 4 ++-- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/alpha/kernel/srmcons.c b/arch/alpha/kernel/srmcons.c index 438b10c44d73..2b7a314b8452 100644 --- a/arch/alpha/kernel/srmcons.c +++ b/arch/alpha/kernel/srmcons.c @@ -59,7 +59,7 @@ srmcons_do_receive_chars(struct tty_port *port) } while((result.bits.status & 1) && (++loops < 10)); if (count) - tty_schedule_flip(port); + tty_flip_buffer_push(port); return count; } diff --git a/drivers/accessibility/speakup/spk_ttyio.c b/drivers/accessibility/speakup/spk_ttyio.c index 6284aff434a1..da3a24557a03 100644 --- a/drivers/accessibility/speakup/spk_ttyio.c +++ b/drivers/accessibility/speakup/spk_ttyio.c @@ -88,7 +88,7 @@ static int spk_ttyio_receive_buf2(struct tty_struct *tty, } if (!ldisc_data->buf_free) - /* ttyio_in will tty_schedule_flip */ + /* ttyio_in will tty_flip_buffer_push */ return 0; /* Make sure the consumer has read buf before we have seen @@ -334,7 +334,7 @@ static unsigned char ttyio_in(int timeout) mb(); ldisc_data->buf_free = true; /* Let TTY push more characters */ - tty_schedule_flip(speakup_tty->port); + tty_flip_buffer_push(speakup_tty->port); return rv; } diff --git a/drivers/s390/char/keyboard.h b/drivers/s390/char/keyboard.h index c467589c7f45..c06d399b9b1f 100644 --- a/drivers/s390/char/keyboard.h +++ b/drivers/s390/char/keyboard.h @@ -56,7 +56,7 @@ static inline void kbd_put_queue(struct tty_port *port, int ch) { tty_insert_flip_char(port, ch, 0); - tty_schedule_flip(port); + tty_flip_buffer_push(port); } static inline void @@ -64,5 +64,5 @@ kbd_puts_queue(struct tty_port *port, char *cp) { while (*cp) tty_insert_flip_char(port, *cp++, 0); - tty_schedule_flip(port); + tty_flip_buffer_push(port); } From c84986d097451203bb79a8bff8d37e56488fbf1d Mon Sep 17 00:00:00 2001 From: Jiri Slaby Date: Mon, 22 Nov 2021 12:16:48 +0100 Subject: [PATCH 1692/1842] tty: drop tty_schedule_flip() commit 5db96ef23bda6c2a61a51693c85b78b52d03f654 upstream. Since commit a9c3f68f3cd8d (tty: Fix low_latency BUG) in 2014, tty_flip_buffer_push() is only a wrapper to tty_schedule_flip(). All users were converted in the previous patches, so remove tty_schedule_flip() completely while inlining its body into tty_flip_buffer_push(). One less exported function. Reviewed-by: Johan Hovold Signed-off-by: Jiri Slaby Link: https://lore.kernel.org/r/20211122111648.30379-4-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman --- drivers/tty/tty_buffer.c | 30 ++++++++---------------------- include/linux/tty_flip.h | 1 - 2 files changed, 8 insertions(+), 23 deletions(-) diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c index 6c4a50addadd..40ddcaae2194 100644 --- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -394,27 +394,6 @@ int __tty_insert_flip_char(struct tty_port *port, unsigned char ch, char flag) } EXPORT_SYMBOL(__tty_insert_flip_char); -/** - * tty_schedule_flip - push characters to ldisc - * @port: tty port to push from - * - * Takes any pending buffers and transfers their ownership to the - * ldisc side of the queue. 
It then schedules those characters for - * processing by the line discipline. - */ - -void tty_schedule_flip(struct tty_port *port) -{ - struct tty_bufhead *buf = &port->buf; - - /* paired w/ acquire in flush_to_ldisc(); ensures - * flush_to_ldisc() sees buffer data. - */ - smp_store_release(&buf->tail->commit, buf->tail->used); - queue_work(system_unbound_wq, &buf->work); -} -EXPORT_SYMBOL(tty_schedule_flip); - /** * tty_prepare_flip_string - make room for characters * @port: tty port @@ -557,7 +536,14 @@ static void flush_to_ldisc(struct work_struct *work) void tty_flip_buffer_push(struct tty_port *port) { - tty_schedule_flip(port); + struct tty_bufhead *buf = &port->buf; + + /* + * Paired w/ acquire in flush_to_ldisc(); ensures flush_to_ldisc() sees + * buffer data. + */ + smp_store_release(&buf->tail->commit, buf->tail->used); + queue_work(system_unbound_wq, &buf->work); } EXPORT_SYMBOL(tty_flip_buffer_push); diff --git a/include/linux/tty_flip.h b/include/linux/tty_flip.h index 767f62086bd9..e183f74b19a0 100644 --- a/include/linux/tty_flip.h +++ b/include/linux/tty_flip.h @@ -12,7 +12,6 @@ extern int tty_insert_flip_string_fixed_flag(struct tty_port *port, extern int tty_prepare_flip_string(struct tty_port *port, unsigned char **chars, size_t size); extern void tty_flip_buffer_push(struct tty_port *port); -void tty_schedule_flip(struct tty_port *port); int __tty_insert_flip_char(struct tty_port *port, unsigned char ch, char flag); static inline int tty_insert_flip_char(struct tty_port *port, From a4bb7ef2d6f6d7158539f95b2fa97d658ea3cf75 Mon Sep 17 00:00:00 2001 From: Jiri Slaby Date: Thu, 7 Jul 2022 10:25:57 +0200 Subject: [PATCH 1693/1842] tty: extract tty_flip_buffer_commit() from tty_flip_buffer_push() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 716b10580283fda66f2b88140e3964f8a7f9da89 upstream. We will need this new helper in the next patch. Cc: Hillf Danton Cc: 一只狗 Cc: Dan Carpenter Signed-off-by: Jiri Slaby Link: https://lore.kernel.org/r/20220707082558.9250-1-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman --- drivers/tty/tty_buffer.c | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c index 40ddcaae2194..bf8d4160411c 100644 --- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -523,6 +523,15 @@ static void flush_to_ldisc(struct work_struct *work) } +static inline void tty_flip_buffer_commit(struct tty_buffer *tail) +{ + /* + * Paired w/ acquire in flush_to_ldisc(); ensures flush_to_ldisc() sees + * buffer data. + */ + smp_store_release(&tail->commit, tail->used); +} + /** * tty_flip_buffer_push - terminal * @port: tty port to push @@ -538,11 +547,7 @@ void tty_flip_buffer_push(struct tty_port *port) { struct tty_bufhead *buf = &port->buf; - /* - * Paired w/ acquire in flush_to_ldisc(); ensures flush_to_ldisc() sees - * buffer data. - */ - smp_store_release(&buf->tail->commit, buf->tail->used); + tty_flip_buffer_commit(buf->tail); queue_work(system_unbound_wq, &buf->work); } EXPORT_SYMBOL(tty_flip_buffer_push); From 08afa87f58d83dfe040572ed591b47e8cb9e225c Mon Sep 17 00:00:00 2001 From: Jiri Slaby Date: Thu, 7 Jul 2022 10:25:58 +0200 Subject: [PATCH 1694/1842] tty: use new tty_insert_flip_string_and_push_buffer() in pty_write() MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit a501ab75e7624d133a5a3c7ec010687c8b961d23 upstream. There is a race in pty_write(). 
pty_write() can be called in parallel with e.g. ioctl(TIOCSTI) or ioctl(TCXONC) which also inserts chars to the buffer. Provided, tty_flip_buffer_push() in pty_write() is called outside the lock, it can commit inconsistent tail. This can lead to out of bounds writes and other issues. See the Link below. To fix this, we have to introduce a new helper called tty_insert_flip_string_and_push_buffer(). It does both tty_insert_flip_string() and tty_flip_buffer_commit() under the port lock. It also calls queue_work(), but outside the lock. See 71a174b39f10 (pty: do tty_flip_buffer_push without port->lock in pty_write) for the reasons. Keep the helper internal-only (in drivers' tty.h). It is not intended to be used widely. Link: https://seclists.org/oss-sec/2022/q2/155 Fixes: 71a174b39f10 (pty: do tty_flip_buffer_push without port->lock in pty_write) Cc: 一只狗 Cc: Dan Carpenter Suggested-by: Hillf Danton Signed-off-by: Jiri Slaby Link: https://lore.kernel.org/r/20220707082558.9250-2-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman --- drivers/tty/pty.c | 14 ++------------ drivers/tty/tty_buffer.c | 31 +++++++++++++++++++++++++++++++ include/linux/tty_flip.h | 3 +++ 3 files changed, 36 insertions(+), 12 deletions(-) diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c index 23368cec7ee8..16498f5fba64 100644 --- a/drivers/tty/pty.c +++ b/drivers/tty/pty.c @@ -111,21 +111,11 @@ static void pty_unthrottle(struct tty_struct *tty) static int pty_write(struct tty_struct *tty, const unsigned char *buf, int c) { struct tty_struct *to = tty->link; - unsigned long flags; - if (tty->stopped) + if (tty->stopped || !c) return 0; - if (c > 0) { - spin_lock_irqsave(&to->port->lock, flags); - /* Stuff the data into the input queue of the other end */ - c = tty_insert_flip_string(to->port, buf, c); - spin_unlock_irqrestore(&to->port->lock, flags); - /* And shovel */ - if (c) - tty_flip_buffer_push(to->port); - } - return c; + return tty_insert_flip_string_and_push_buffer(to->port, buf, c); } /** diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c index bf8d4160411c..5bbc2e010b48 100644 --- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -552,6 +552,37 @@ void tty_flip_buffer_push(struct tty_port *port) } EXPORT_SYMBOL(tty_flip_buffer_push); +/** + * tty_insert_flip_string_and_push_buffer - add characters to the tty buffer and + * push + * @port: tty port + * @chars: characters + * @size: size + * + * The function combines tty_insert_flip_string() and tty_flip_buffer_push() + * with the exception of properly holding the @port->lock. + * + * To be used only internally (by pty currently). + * + * Returns: the number added. 
+ */ +int tty_insert_flip_string_and_push_buffer(struct tty_port *port, + const unsigned char *chars, size_t size) +{ + struct tty_bufhead *buf = &port->buf; + unsigned long flags; + + spin_lock_irqsave(&port->lock, flags); + size = tty_insert_flip_string(port, chars, size); + if (size) + tty_flip_buffer_commit(buf->tail); + spin_unlock_irqrestore(&port->lock, flags); + + queue_work(system_unbound_wq, &buf->work); + + return size; +} + /** * tty_buffer_init - prepare a tty buffer structure * @port: tty port to initialise diff --git a/include/linux/tty_flip.h b/include/linux/tty_flip.h index e183f74b19a0..c326bfdb5ec2 100644 --- a/include/linux/tty_flip.h +++ b/include/linux/tty_flip.h @@ -39,4 +39,7 @@ static inline int tty_insert_flip_string(struct tty_port *port, extern void tty_buffer_lock_exclusive(struct tty_port *port); extern void tty_buffer_unlock_exclusive(struct tty_port *port); +int tty_insert_flip_string_and_push_buffer(struct tty_port *port, + const unsigned char *chars, size_t cnt); + #endif /* _LINUX_TTY_FLIP_H */ From f7c1fc0dec97b882c105a0887bef585471cf1262 Mon Sep 17 00:00:00 2001 From: Jose Alonso Date: Mon, 13 Jun 2022 15:32:44 -0300 Subject: [PATCH 1695/1842] net: usb: ax88179_178a needs FLAG_SEND_ZLP commit 36a15e1cb134c0395261ba1940762703f778438c upstream. The extra byte inserted by usbnet.c when (length % dev->maxpacket == 0) is causing problems to device. This patch sets FLAG_SEND_ZLP to avoid this. Tested with: 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet Problems observed: ====================================================================== 1) Using ssh/sshfs. The remote sshd daemon can abort with the message: "message authentication code incorrect" This happens because the tcp message sent is corrupted during the USB "Bulk out". The device calculate the tcp checksum and send a valid tcp message to the remote sshd. Then the encryption detects the error and aborts. 2) NETDEV WATCHDOG: ... (ax88179_178a): transmit queue 0 timed out 3) Stop normal work without any log message. The "Bulk in" continue receiving packets normally. The host sends "Bulk out" and the device responds with -ECONNRESET. (The netusb.c code tx_complete ignore -ECONNRESET) Under normal conditions these errors take days to happen and in intense usage take hours. A test with ping gives packet loss, showing that something is wrong: ping -4 -s 462 {destination} # 462 = 512 - 42 - 8 Not all packets fail. My guess is that the device tries to find another packet starting at the extra byte and will fail or not depending on the next bytes (old buffer content). ====================================================================== Signed-off-by: Jose Alonso Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- drivers/net/usb/ax88179_178a.c | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c index 79a53fe245e5..0ac4f59e3f18 100644 --- a/drivers/net/usb/ax88179_178a.c +++ b/drivers/net/usb/ax88179_178a.c @@ -1796,7 +1796,7 @@ static const struct driver_info ax88179_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1809,7 +1809,7 @@ static const struct driver_info ax88178a_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1822,7 +1822,7 @@ static const struct driver_info cypress_GX3_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1835,7 +1835,7 @@ static const struct driver_info dlink_dub1312_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1848,7 +1848,7 @@ static const struct driver_info sitecom_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1861,7 +1861,7 @@ static const struct driver_info samsung_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1874,7 +1874,7 @@ static const struct driver_info lenovo_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1887,7 +1887,7 @@ static const struct driver_info belkin_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1900,7 +1900,7 @@ static const struct driver_info toshiba_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; @@ -1913,7 +1913,7 @@ static const struct driver_info mct_info = { .link_reset = ax88179_link_reset, .reset = ax88179_reset, .stop = ax88179_stop, - .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_SEND_ZLP, .rx_fixup = ax88179_rx_fixup, .tx_fixup = ax88179_tx_fixup, }; From bb1990a3005e3655e84f9a57b3f30ce96d597dee Mon Sep 17 00:00:00 2001 From: Linus 
Torvalds Date: Thu, 21 Jul 2022 10:30:14 -0700 Subject: [PATCH 1696/1842] watch-queue: remove spurious double semicolon commit 44e29e64cf1ac0cffb152e0532227ea6d002aa28 upstream. Sedat Dilek noticed that I had an extraneous semicolon at the end of a line in the previous patch. It's harmless, but unintentional, and while compilers just treat it as an extra empty statement, for all I know some other tooling might warn about it. So clean it up before other people notice too ;) Fixes: 353f7988dd84 ("watchqueue: make sure to serialize 'wqueue->defunct' properly") Reported-by: Sedat Dilek Signed-off-by: Linus Torvalds Reported-by: Sedat Dilek Signed-off-by: Greg Kroah-Hartman --- kernel/watch_queue.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c index dcf1e676797d..e5d22af43fa0 100644 --- a/kernel/watch_queue.c +++ b/kernel/watch_queue.c @@ -227,7 +227,7 @@ void __post_watch_notification(struct watch_list *wlist, if (lock_wqueue(wqueue)) { post_one_notification(wqueue, n); - unlock_wqueue(wqueue);; + unlock_wqueue(wqueue); } } From 7a62a4b6212a7f04251fdaf8b049c47aec59c54a Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Fri, 29 Jul 2022 17:19:29 +0200 Subject: [PATCH 1697/1842] Linux 5.10.134 Link: https://lore.kernel.org/r/20220727161012.056867467@linuxfoundation.org Tested-by: Florian Fainelli Tested-by: Linux Kernel Functional Testing Tested-by: Pavel Machek (CIP) Tested-by: Jon Hunter Tested-by: Shuah Khan Tested-by: Sudip Mukherjee Link: https://lore.kernel.org/r/20220728150340.045826831@linuxfoundation.org Tested-by: Guenter Roeck Tested-by: Linux Kernel Functional Testing Tested-by: Jon Hunter Tested-by: Sudip Mukherjee Tested-by: Rudi Heitbaum Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index fbd330e58c3b..00dddc2ac804 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 133 +SUBLEVEL = 134 EXTRAVERSION = NAME = Dare mighty things From 3f05c6dd1307884bfae727da2a198f745d50d031 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Sat, 30 Jul 2022 15:34:05 +0200 Subject: [PATCH 1698/1842] ANDROID: fix up 5.10.132 merge with the virtio_mmio.c driver The merge of 5.10.132 into the android12-5.10-lts branch caused duplicate functions to be merged into the same file. 
Fix this up by removing the out-of-tree versions and accepting the versions that came in as part of 5.10.132 Fixes: 0c724b692df3 ("Merge 5.10.132 into android12-5.10-lts") Signed-off-by: Greg Kroah-Hartman Change-Id: I3d4770e22d4e96b07e1e1d0f312f6fa4bb440e59 --- drivers/virtio/virtio_mmio.c | 20 -------------------- 1 file changed, 20 deletions(-) diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c index 5cb8045a27aa..e8ef0c66e558 100644 --- a/drivers/virtio/virtio_mmio.c +++ b/drivers/virtio/virtio_mmio.c @@ -787,26 +787,6 @@ static void vm_unregister_cmdline_devices(void) #endif -#ifdef CONFIG_PM_SLEEP -static int virtio_mmio_freeze(struct device *dev) -{ - struct platform_device *pdev = to_platform_device(dev); - struct virtio_mmio_device *vm_dev = platform_get_drvdata(pdev); - - return virtio_device_freeze(&vm_dev->vdev); -} - -static int virtio_mmio_restore(struct device *dev) -{ - struct platform_device *pdev = to_platform_device(dev); - struct virtio_mmio_device *vm_dev = platform_get_drvdata(pdev); - - return virtio_device_restore(&vm_dev->vdev); -} -#endif - -static SIMPLE_DEV_PM_OPS(virtio_mmio_pm_ops, virtio_mmio_freeze, virtio_mmio_restore); - /* Platform driver */ static const struct of_device_id virtio_mmio_match[] = { From de5d4654ac6c22b1be756fdf7db18471e7df01ea Mon Sep 17 00:00:00 2001 From: Luiz Augusto von Dentz Date: Thu, 21 Jul 2022 09:10:50 -0700 Subject: [PATCH 1699/1842] Bluetooth: L2CAP: Fix use-after-free caused by l2cap_chan_put commit d0be8347c623e0ac4202a1d4e0373882821f56b0 upstream. This fixes the following trace which is caused by hci_rx_work starting up *after* the final channel reference has been put() during sock_close() but *before* the references to the channel have been destroyed, so instead the code now rely on kref_get_unless_zero/l2cap_chan_hold_unless_zero to prevent referencing a channel that is about to be destroyed. refcount_t: increment on 0; use-after-free. BUG: KASAN: use-after-free in refcount_dec_and_test+0x20/0xd0 Read of size 4 at addr ffffffc114f5bf18 by task kworker/u17:14/705 CPU: 4 PID: 705 Comm: kworker/u17:14 Tainted: G S W 4.14.234-00003-g1fb6d0bd49a4-dirty #28 Hardware name: Qualcomm Technologies, Inc. SM8150 V2 PM8150 Google Inc. 
MSM sm8150 Flame DVT (DT) Workqueue: hci0 hci_rx_work Call trace: dump_backtrace+0x0/0x378 show_stack+0x20/0x2c dump_stack+0x124/0x148 print_address_description+0x80/0x2e8 __kasan_report+0x168/0x188 kasan_report+0x10/0x18 __asan_load4+0x84/0x8c refcount_dec_and_test+0x20/0xd0 l2cap_chan_put+0x48/0x12c l2cap_recv_frame+0x4770/0x6550 l2cap_recv_acldata+0x44c/0x7a4 hci_acldata_packet+0x100/0x188 hci_rx_work+0x178/0x23c process_one_work+0x35c/0x95c worker_thread+0x4cc/0x960 kthread+0x1a8/0x1c4 ret_from_fork+0x10/0x18 Cc: stable@kernel.org Reported-by: Lee Jones Signed-off-by: Luiz Augusto von Dentz Tested-by: Lee Jones Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Greg Kroah-Hartman --- include/net/bluetooth/l2cap.h | 1 + net/bluetooth/l2cap_core.c | 61 +++++++++++++++++++++++++++-------- 2 files changed, 49 insertions(+), 13 deletions(-) diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h index 1d1232917de7..9b8000869b07 100644 --- a/include/net/bluetooth/l2cap.h +++ b/include/net/bluetooth/l2cap.h @@ -845,6 +845,7 @@ enum { }; void l2cap_chan_hold(struct l2cap_chan *c); +struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c); void l2cap_chan_put(struct l2cap_chan *c); static inline void l2cap_chan_lock(struct l2cap_chan *chan) diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 2557cd917f5e..6a5ff5dcc09a 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -111,7 +111,8 @@ static struct l2cap_chan *__l2cap_get_chan_by_scid(struct l2cap_conn *conn, } /* Find channel with given SCID. - * Returns locked channel. */ + * Returns a reference locked channel. + */ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn, u16 cid) { @@ -119,15 +120,19 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn, mutex_lock(&conn->chan_lock); c = __l2cap_get_chan_by_scid(conn, cid); - if (c) - l2cap_chan_lock(c); + if (c) { + /* Only lock if chan reference is not 0 */ + c = l2cap_chan_hold_unless_zero(c); + if (c) + l2cap_chan_lock(c); + } mutex_unlock(&conn->chan_lock); return c; } /* Find channel with given DCID. - * Returns locked channel. + * Returns a reference locked channel. 
*/ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn, u16 cid) @@ -136,8 +141,12 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn, mutex_lock(&conn->chan_lock); c = __l2cap_get_chan_by_dcid(conn, cid); - if (c) - l2cap_chan_lock(c); + if (c) { + /* Only lock if chan reference is not 0 */ + c = l2cap_chan_hold_unless_zero(c); + if (c) + l2cap_chan_lock(c); + } mutex_unlock(&conn->chan_lock); return c; @@ -162,8 +171,12 @@ static struct l2cap_chan *l2cap_get_chan_by_ident(struct l2cap_conn *conn, mutex_lock(&conn->chan_lock); c = __l2cap_get_chan_by_ident(conn, ident); - if (c) - l2cap_chan_lock(c); + if (c) { + /* Only lock if chan reference is not 0 */ + c = l2cap_chan_hold_unless_zero(c); + if (c) + l2cap_chan_lock(c); + } mutex_unlock(&conn->chan_lock); return c; @@ -497,6 +510,16 @@ void l2cap_chan_hold(struct l2cap_chan *c) kref_get(&c->kref); } +struct l2cap_chan *l2cap_chan_hold_unless_zero(struct l2cap_chan *c) +{ + BT_DBG("chan %p orig refcnt %u", c, kref_read(&c->kref)); + + if (!kref_get_unless_zero(&c->kref)) + return NULL; + + return c; +} + void l2cap_chan_put(struct l2cap_chan *c) { BT_DBG("chan %p orig refcnt %d", c, kref_read(&c->kref)); @@ -1965,7 +1988,10 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm, src_match = !bacmp(&c->src, src); dst_match = !bacmp(&c->dst, dst); if (src_match && dst_match) { - l2cap_chan_hold(c); + c = l2cap_chan_hold_unless_zero(c); + if (!c) + continue; + read_unlock(&chan_list_lock); return c; } @@ -1980,7 +2006,7 @@ static struct l2cap_chan *l2cap_global_chan_by_psm(int state, __le16 psm, } if (c1) - l2cap_chan_hold(c1); + c1 = l2cap_chan_hold_unless_zero(c1); read_unlock(&chan_list_lock); @@ -4460,6 +4486,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn, unlock: l2cap_chan_unlock(chan); + l2cap_chan_put(chan); return err; } @@ -4573,6 +4600,7 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn, done: l2cap_chan_unlock(chan); + l2cap_chan_put(chan); return err; } @@ -5300,6 +5328,7 @@ static inline int l2cap_move_channel_req(struct l2cap_conn *conn, l2cap_send_move_chan_rsp(chan, result); l2cap_chan_unlock(chan); + l2cap_chan_put(chan); return 0; } @@ -5392,6 +5421,7 @@ static void l2cap_move_continue(struct l2cap_conn *conn, u16 icid, u16 result) } l2cap_chan_unlock(chan); + l2cap_chan_put(chan); } static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid, @@ -5421,6 +5451,7 @@ static void l2cap_move_fail(struct l2cap_conn *conn, u8 ident, u16 icid, l2cap_send_move_chan_cfm(chan, L2CAP_MC_UNCONFIRMED); l2cap_chan_unlock(chan); + l2cap_chan_put(chan); } static int l2cap_move_channel_rsp(struct l2cap_conn *conn, @@ -5484,6 +5515,7 @@ static int l2cap_move_channel_confirm(struct l2cap_conn *conn, l2cap_send_move_chan_cfm_rsp(conn, cmd->ident, icid); l2cap_chan_unlock(chan); + l2cap_chan_put(chan); return 0; } @@ -5519,6 +5551,7 @@ static inline int l2cap_move_channel_confirm_rsp(struct l2cap_conn *conn, } l2cap_chan_unlock(chan); + l2cap_chan_put(chan); return 0; } @@ -5891,12 +5924,11 @@ static inline int l2cap_le_credits(struct l2cap_conn *conn, if (credits > max_credits) { BT_ERR("LE credits overflow"); l2cap_send_disconn_req(chan, ECONNRESET); - l2cap_chan_unlock(chan); /* Return 0 so that we don't trigger an unnecessary * command reject packet. 
*/ - return 0; + goto unlock; } chan->tx_credits += credits; @@ -5907,7 +5939,9 @@ static inline int l2cap_le_credits(struct l2cap_conn *conn, if (chan->tx_credits) chan->ops->resume(chan); +unlock: l2cap_chan_unlock(chan); + l2cap_chan_put(chan); return 0; } @@ -7587,6 +7621,7 @@ static void l2cap_data_channel(struct l2cap_conn *conn, u16 cid, done: l2cap_chan_unlock(chan); + l2cap_chan_put(chan); } static void l2cap_conless_channel(struct l2cap_conn *conn, __le16 psm, @@ -8074,7 +8109,7 @@ static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c, if (src_type != c->src_type) continue; - l2cap_chan_hold(c); + c = l2cap_chan_hold_unless_zero(c); read_unlock(&chan_list_lock); return c; } From 5528990512a22c66a62affaa1a81e5a496e88053 Mon Sep 17 00:00:00 2001 From: Junxiao Bi Date: Fri, 3 Jun 2022 15:28:01 -0700 Subject: [PATCH 1700/1842] Revert "ocfs2: mount shared volume without ha stack" commit c80af0c250c8f8a3c978aa5aafbe9c39b336b813 upstream. This reverts commit 912f655d78c5d4ad05eac287f23a435924df7144. This commit introduced a regression that can cause mount hung. The changes in __ocfs2_find_empty_slot causes that any node with none-zero node number can grab the slot that was already taken by node 0, so node 1 will access the same journal with node 0, when it try to grab journal cluster lock, it will hung because it was already acquired by node 0. It's very easy to reproduce this, in one cluster, mount node 0 first, then node 1, you will see the following call trace from node 1. [13148.735424] INFO: task mount.ocfs2:53045 blocked for more than 122 seconds. [13148.739691] Not tainted 5.15.0-2148.0.4.el8uek.mountracev2.x86_64 #2 [13148.742560] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [13148.745846] task:mount.ocfs2 state:D stack: 0 pid:53045 ppid: 53044 flags:0x00004000 [13148.749354] Call Trace: [13148.750718] [13148.752019] ? usleep_range+0x90/0x89 [13148.753882] __schedule+0x210/0x567 [13148.755684] schedule+0x44/0xa8 [13148.757270] schedule_timeout+0x106/0x13c [13148.759273] ? __prepare_to_swait+0x53/0x78 [13148.761218] __wait_for_common+0xae/0x163 [13148.763144] __ocfs2_cluster_lock.constprop.0+0x1d6/0x870 [ocfs2] [13148.765780] ? ocfs2_inode_lock_full_nested+0x18d/0x398 [ocfs2] [13148.768312] ocfs2_inode_lock_full_nested+0x18d/0x398 [ocfs2] [13148.770968] ocfs2_journal_init+0x91/0x340 [ocfs2] [13148.773202] ocfs2_check_volume+0x39/0x461 [ocfs2] [13148.775401] ? iput+0x69/0xba [13148.777047] ocfs2_mount_volume.isra.0.cold+0x40/0x1f5 [ocfs2] [13148.779646] ocfs2_fill_super+0x54b/0x853 [ocfs2] [13148.781756] mount_bdev+0x190/0x1b7 [13148.783443] ? ocfs2_remount+0x440/0x440 [ocfs2] [13148.785634] legacy_get_tree+0x27/0x48 [13148.787466] vfs_get_tree+0x25/0xd0 [13148.789270] do_new_mount+0x18c/0x2d9 [13148.791046] __x64_sys_mount+0x10e/0x142 [13148.792911] do_syscall_64+0x3b/0x89 [13148.794667] entry_SYSCALL_64_after_hwframe+0x170/0x0 [13148.797051] RIP: 0033:0x7f2309f6e26e [13148.798784] RSP: 002b:00007ffdcee7d408 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5 [13148.801974] RAX: ffffffffffffffda RBX: 00007ffdcee7d4a0 RCX: 00007f2309f6e26e [13148.804815] RDX: 0000559aa762a8ae RSI: 0000559aa939d340 RDI: 0000559aa93a22b0 [13148.807719] RBP: 00007ffdcee7d5b0 R08: 0000559aa93a2290 R09: 00007f230a0b4820 [13148.810659] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffdcee7d420 [13148.813609] R13: 0000000000000000 R14: 0000559aa939f000 R15: 0000000000000000 [13148.816564] To fix it, we can just fix __ocfs2_find_empty_slot. 
But original commit introduced the feature to mount ocfs2 locally even it is cluster based, that is a very dangerous, it can easily cause serious data corruption, there is no way to stop other nodes mounting the fs and corrupting it. Setup ha or other cluster-aware stack is just the cost that we have to take for avoiding corruption, otherwise we have to do it in kernel. Link: https://lkml.kernel.org/r/20220603222801.42488-1-junxiao.bi@oracle.com Fixes: 912f655d78c5("ocfs2: mount shared volume without ha stack") Signed-off-by: Junxiao Bi Acked-by: Joseph Qi Cc: Mark Fasheh Cc: Joel Becker Cc: Changwei Ge Cc: Gang He Cc: Jun Piao Cc: Cc: Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- fs/ocfs2/ocfs2.h | 4 +--- fs/ocfs2/slot_map.c | 46 +++++++++++++++++++-------------------------- fs/ocfs2/super.c | 21 --------------------- 3 files changed, 20 insertions(+), 51 deletions(-) diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h index 7993d527edae..0a8cd8e59a92 100644 --- a/fs/ocfs2/ocfs2.h +++ b/fs/ocfs2/ocfs2.h @@ -279,7 +279,6 @@ enum ocfs2_mount_options OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT = 1 << 15, /* Journal Async Commit */ OCFS2_MOUNT_ERRORS_CONT = 1 << 16, /* Return EIO to the calling process on error */ OCFS2_MOUNT_ERRORS_ROFS = 1 << 17, /* Change filesystem to read-only on error */ - OCFS2_MOUNT_NOCLUSTER = 1 << 18, /* No cluster aware filesystem mount */ }; #define OCFS2_OSB_SOFT_RO 0x0001 @@ -675,8 +674,7 @@ static inline int ocfs2_cluster_o2cb_global_heartbeat(struct ocfs2_super *osb) static inline int ocfs2_mount_local(struct ocfs2_super *osb) { - return ((osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_LOCAL_MOUNT) - || (osb->s_mount_opt & OCFS2_MOUNT_NOCLUSTER)); + return (osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_LOCAL_MOUNT); } static inline int ocfs2_uses_extended_slot_map(struct ocfs2_super *osb) diff --git a/fs/ocfs2/slot_map.c b/fs/ocfs2/slot_map.c index 4da0e4b1e79b..8caeceeaeda7 100644 --- a/fs/ocfs2/slot_map.c +++ b/fs/ocfs2/slot_map.c @@ -254,16 +254,14 @@ static int __ocfs2_find_empty_slot(struct ocfs2_slot_info *si, int i, ret = -ENOSPC; if ((preferred >= 0) && (preferred < si->si_num_slots)) { - if (!si->si_slots[preferred].sl_valid || - !si->si_slots[preferred].sl_node_num) { + if (!si->si_slots[preferred].sl_valid) { ret = preferred; goto out; } } for(i = 0; i < si->si_num_slots; i++) { - if (!si->si_slots[i].sl_valid || - !si->si_slots[i].sl_node_num) { + if (!si->si_slots[i].sl_valid) { ret = i; break; } @@ -458,30 +456,24 @@ int ocfs2_find_slot(struct ocfs2_super *osb) spin_lock(&osb->osb_lock); ocfs2_update_slot_info(si); - if (ocfs2_mount_local(osb)) - /* use slot 0 directly in local mode */ - slot = 0; - else { - /* search for ourselves first and take the slot if it already - * exists. Perhaps we need to mark this in a variable for our - * own journal recovery? Possibly not, though we certainly - * need to warn to the user */ - slot = __ocfs2_node_num_to_slot(si, osb->node_num); + /* search for ourselves first and take the slot if it already + * exists. Perhaps we need to mark this in a variable for our + * own journal recovery? Possibly not, though we certainly + * need to warn to the user */ + slot = __ocfs2_node_num_to_slot(si, osb->node_num); + if (slot < 0) { + /* if no slot yet, then just take 1st available + * one. */ + slot = __ocfs2_find_empty_slot(si, osb->preferred_slot); if (slot < 0) { - /* if no slot yet, then just take 1st available - * one. 
*/ - slot = __ocfs2_find_empty_slot(si, osb->preferred_slot); - if (slot < 0) { - spin_unlock(&osb->osb_lock); - mlog(ML_ERROR, "no free slots available!\n"); - status = -EINVAL; - goto bail; - } - } else - printk(KERN_INFO "ocfs2: Slot %d on device (%s) was " - "already allocated to this node!\n", - slot, osb->dev_str); - } + spin_unlock(&osb->osb_lock); + mlog(ML_ERROR, "no free slots available!\n"); + status = -EINVAL; + goto bail; + } + } else + printk(KERN_INFO "ocfs2: Slot %d on device (%s) was already " + "allocated to this node!\n", slot, osb->dev_str); ocfs2_set_slot(si, slot, osb->node_num); osb->slot_num = slot; diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c index 477ad05a34ea..c0e5f1bad499 100644 --- a/fs/ocfs2/super.c +++ b/fs/ocfs2/super.c @@ -175,7 +175,6 @@ enum { Opt_dir_resv_level, Opt_journal_async_commit, Opt_err_cont, - Opt_nocluster, Opt_err, }; @@ -209,7 +208,6 @@ static const match_table_t tokens = { {Opt_dir_resv_level, "dir_resv_level=%u"}, {Opt_journal_async_commit, "journal_async_commit"}, {Opt_err_cont, "errors=continue"}, - {Opt_nocluster, "nocluster"}, {Opt_err, NULL} }; @@ -621,13 +619,6 @@ static int ocfs2_remount(struct super_block *sb, int *flags, char *data) goto out; } - tmp = OCFS2_MOUNT_NOCLUSTER; - if ((osb->s_mount_opt & tmp) != (parsed_options.mount_opt & tmp)) { - ret = -EINVAL; - mlog(ML_ERROR, "Cannot change nocluster option on remount\n"); - goto out; - } - tmp = OCFS2_MOUNT_HB_LOCAL | OCFS2_MOUNT_HB_GLOBAL | OCFS2_MOUNT_HB_NONE; if ((osb->s_mount_opt & tmp) != (parsed_options.mount_opt & tmp)) { @@ -868,7 +859,6 @@ static int ocfs2_verify_userspace_stack(struct ocfs2_super *osb, } if (ocfs2_userspace_stack(osb) && - !(osb->s_mount_opt & OCFS2_MOUNT_NOCLUSTER) && strncmp(osb->osb_cluster_stack, mopt->cluster_stack, OCFS2_STACK_LABEL_LEN)) { mlog(ML_ERROR, @@ -1149,11 +1139,6 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent) osb->s_mount_opt & OCFS2_MOUNT_DATA_WRITEBACK ? "writeback" : "ordered"); - if ((osb->s_mount_opt & OCFS2_MOUNT_NOCLUSTER) && - !(osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_LOCAL_MOUNT)) - printk(KERN_NOTICE "ocfs2: The shared device (%s) is mounted " - "without cluster aware mode.\n", osb->dev_str); - atomic_set(&osb->vol_state, VOLUME_MOUNTED); wake_up(&osb->osb_mount_event); @@ -1460,9 +1445,6 @@ static int ocfs2_parse_options(struct super_block *sb, case Opt_journal_async_commit: mopt->mount_opt |= OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT; break; - case Opt_nocluster: - mopt->mount_opt |= OCFS2_MOUNT_NOCLUSTER; - break; default: mlog(ML_ERROR, "Unrecognized mount option \"%s\" " @@ -1574,9 +1556,6 @@ static int ocfs2_show_options(struct seq_file *s, struct dentry *root) if (opts & OCFS2_MOUNT_JOURNAL_ASYNC_COMMIT) seq_printf(s, ",journal_async_commit"); - if (opts & OCFS2_MOUNT_NOCLUSTER) - seq_printf(s, ",nocluster"); - return 0; } From 1228934cf259da00cb0b5e1313a3b3fb8f588d67 Mon Sep 17 00:00:00 2001 From: ChenXiaoSong Date: Thu, 7 Jul 2022 18:53:29 +0800 Subject: [PATCH 1701/1842] ntfs: fix use-after-free in ntfs_ucsncmp() commit 38c9c22a85aeed28d0831f230136e9cf6fa2ed44 upstream. 
Syzkaller reported use-after-free bug as follows: ================================================================== BUG: KASAN: use-after-free in ntfs_ucsncmp+0x123/0x130 Read of size 2 at addr ffff8880751acee8 by task a.out/879 CPU: 7 PID: 879 Comm: a.out Not tainted 5.19.0-rc4-next-20220630-00001-gcc5218c8bd2c-dirty #7 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014 Call Trace: dump_stack_lvl+0x1c0/0x2b0 print_address_description.constprop.0.cold+0xd4/0x484 print_report.cold+0x55/0x232 kasan_report+0xbf/0xf0 ntfs_ucsncmp+0x123/0x130 ntfs_are_names_equal.cold+0x2b/0x41 ntfs_attr_find+0x43b/0xb90 ntfs_attr_lookup+0x16d/0x1e0 ntfs_read_locked_attr_inode+0x4aa/0x2360 ntfs_attr_iget+0x1af/0x220 ntfs_read_locked_inode+0x246c/0x5120 ntfs_iget+0x132/0x180 load_system_files+0x1cc6/0x3480 ntfs_fill_super+0xa66/0x1cf0 mount_bdev+0x38d/0x460 legacy_get_tree+0x10d/0x220 vfs_get_tree+0x93/0x300 do_new_mount+0x2da/0x6d0 path_mount+0x496/0x19d0 __x64_sys_mount+0x284/0x300 do_syscall_64+0x3b/0xc0 entry_SYSCALL_64_after_hwframe+0x46/0xb0 RIP: 0033:0x7f3f2118d9ea Code: 48 8b 0d a9 f4 0b 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 76 f4 0b 00 f7 d8 64 89 01 48 RSP: 002b:00007ffc269deac8 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f3f2118d9ea RDX: 0000000020000000 RSI: 0000000020000100 RDI: 00007ffc269dec00 RBP: 00007ffc269dec80 R08: 00007ffc269deb00 R09: 00007ffc269dec44 R10: 0000000000000000 R11: 0000000000000202 R12: 000055f81ab1d220 R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 The buggy address belongs to the physical page: page:0000000085430378 refcount:1 mapcount:1 mapping:0000000000000000 index:0x555c6a81d pfn:0x751ac memcg:ffff888101f7e180 anon flags: 0xfffffc00a0014(uptodate|lru|mappedtodisk|swapbacked|node=0|zone=1|lastcpupid=0x1fffff) raw: 000fffffc00a0014 ffffea0001bf2988 ffffea0001de2448 ffff88801712e201 raw: 0000000555c6a81d 0000000000000000 0000000100000000 ffff888101f7e180 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff8880751acd80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ffff8880751ace00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 >ffff8880751ace80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ^ ffff8880751acf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ffff8880751acf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ================================================================== The reason is that struct ATTR_RECORD->name_offset is 6485, end address of name string is out of bounds. Fix this by adding sanity check on end address of attribute name string. 
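For illustration only, the shape of the added check as a generic, self-contained C sketch (invented struct and field names, not the real NTFS on-disk layout or the kernel code): a variable-length, untrusted name field has to be proven to end inside its containing record before it is dereferenced.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical record/attribute layout, for illustration only. */
struct rec_hdr {
	uint32_t bytes_allocated;	/* total size of the record buffer */
};

struct attr {
	uint32_t length;		/* length of this attribute */
	uint16_t name_offset;		/* offset of the name inside the attribute */
	uint8_t  name_length;		/* name length in 2-byte characters */
};

/* Accept the attribute only if the attribute itself and its name string
 * both end inside the record that is supposed to contain them. */
static bool attr_fits_record(const struct rec_hdr *rec, const struct attr *a)
{
	const uint8_t *rec_start = (const uint8_t *)rec;
	const uint8_t *rec_end = rec_start + rec->bytes_allocated;
	const uint8_t *attr_end = (const uint8_t *)a + a->length;
	const uint8_t *name_end = (const uint8_t *)a + a->name_offset +
				  a->name_length * 2;

	return (const uint8_t *)a >= rec_start &&
	       attr_end <= rec_end && name_end <= rec_end;
}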
[akpm@linux-foundation.org: coding-style cleanups] [chenxiaosong2@huawei.com: cleanup suggested by Hawkins Jiawei] Link: https://lkml.kernel.org/r/20220709064511.3304299-1-chenxiaosong2@huawei.com Link: https://lkml.kernel.org/r/20220707105329.4020708-1-chenxiaosong2@huawei.com Signed-off-by: ChenXiaoSong Signed-off-by: Hawkins Jiawei Cc: Anton Altaparmakov Cc: ChenXiaoSong Cc: Yongqiang Liu Cc: Zhang Yi Cc: Zhang Xiaoxu Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- fs/ntfs/attrib.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/fs/ntfs/attrib.c b/fs/ntfs/attrib.c index d563abc3e136..914e99173130 100644 --- a/fs/ntfs/attrib.c +++ b/fs/ntfs/attrib.c @@ -592,8 +592,12 @@ static int ntfs_attr_find(const ATTR_TYPE type, const ntfschar *name, a = (ATTR_RECORD*)((u8*)ctx->attr + le32_to_cpu(ctx->attr->length)); for (;; a = (ATTR_RECORD*)((u8*)a + le32_to_cpu(a->length))) { - if ((u8*)a < (u8*)ctx->mrec || (u8*)a > (u8*)ctx->mrec + - le32_to_cpu(ctx->mrec->bytes_allocated)) + u8 *mrec_end = (u8 *)ctx->mrec + + le32_to_cpu(ctx->mrec->bytes_allocated); + u8 *name_end = (u8 *)a + le16_to_cpu(a->name_offset) + + a->name_length * sizeof(ntfschar); + if ((u8*)a < (u8*)ctx->mrec || (u8*)a > mrec_end || + name_end > mrec_end) break; ctx->attr = a; if (unlikely(le32_to_cpu(a->type) > le32_to_cpu(type) || From bd46ca41461b65702cdea84378f2c755eab947e9 Mon Sep 17 00:00:00 2001 From: Harald Freudenberger Date: Wed, 13 Jul 2022 15:17:21 +0200 Subject: [PATCH 1702/1842] s390/archrandom: prevent CPACF trng invocations in interrupt context commit 918e75f77af7d2e049bb70469ec0a2c12782d96a upstream. This patch slightly reworks the s390 arch_get_random_seed_{int,long} implementation: Make sure the CPACF trng instruction is never called in any interrupt context. This is done by adding an additional condition in_task(). Justification: There are some constrains to satisfy for the invocation of the arch_get_random_seed_{int,long}() functions: - They should provide good random data during kernel initialization. - They should not be called in interrupt context as the TRNG instruction is relatively heavy weight and may for example make some network loads cause to timeout and buck. However, it was not clear what kind of interrupt context is exactly encountered during kernel init or network traffic eventually calling arch_get_random_seed_long(). After some days of investigations it is clear that the s390 start_kernel function is not running in any interrupt context and so the trng is called: Jul 11 18:33:39 t35lp54 kernel: [<00000001064e90ca>] arch_get_random_seed_long.part.0+0x32/0x70 Jul 11 18:33:39 t35lp54 kernel: [<000000010715f246>] random_init+0xf6/0x238 Jul 11 18:33:39 t35lp54 kernel: [<000000010712545c>] start_kernel+0x4a4/0x628 Jul 11 18:33:39 t35lp54 kernel: [<000000010590402a>] startup_continue+0x2a/0x40 The condition in_task() is true and the CPACF trng provides random data during kernel startup. The network traffic however, is more difficult. 
A typical call stack looks like this: Jul 06 17:37:07 t35lp54 kernel: [<000000008b5600fc>] extract_entropy.constprop.0+0x23c/0x240 Jul 06 17:37:07 t35lp54 kernel: [<000000008b560136>] crng_reseed+0x36/0xd8 Jul 06 17:37:07 t35lp54 kernel: [<000000008b5604b8>] crng_make_state+0x78/0x340 Jul 06 17:37:07 t35lp54 kernel: [<000000008b5607e0>] _get_random_bytes+0x60/0xf8 Jul 06 17:37:07 t35lp54 kernel: [<000000008b56108a>] get_random_u32+0xda/0x248 Jul 06 17:37:07 t35lp54 kernel: [<000000008aefe7a8>] kfence_guarded_alloc+0x48/0x4b8 Jul 06 17:37:07 t35lp54 kernel: [<000000008aeff35e>] __kfence_alloc+0x18e/0x1b8 Jul 06 17:37:07 t35lp54 kernel: [<000000008aef7f10>] __kmalloc_node_track_caller+0x368/0x4d8 Jul 06 17:37:07 t35lp54 kernel: [<000000008b611eac>] kmalloc_reserve+0x44/0xa0 Jul 06 17:37:07 t35lp54 kernel: [<000000008b611f98>] __alloc_skb+0x90/0x178 Jul 06 17:37:07 t35lp54 kernel: [<000000008b6120dc>] __napi_alloc_skb+0x5c/0x118 Jul 06 17:37:07 t35lp54 kernel: [<000000008b8f06b4>] qeth_extract_skb+0x13c/0x680 Jul 06 17:37:07 t35lp54 kernel: [<000000008b8f6526>] qeth_poll+0x256/0x3f8 Jul 06 17:37:07 t35lp54 kernel: [<000000008b63d76e>] __napi_poll.constprop.0+0x46/0x2f8 Jul 06 17:37:07 t35lp54 kernel: [<000000008b63dbec>] net_rx_action+0x1cc/0x408 Jul 06 17:37:07 t35lp54 kernel: [<000000008b937302>] __do_softirq+0x132/0x6b0 Jul 06 17:37:07 t35lp54 kernel: [<000000008abf46ce>] __irq_exit_rcu+0x13e/0x170 Jul 06 17:37:07 t35lp54 kernel: [<000000008abf531a>] irq_exit_rcu+0x22/0x50 Jul 06 17:37:07 t35lp54 kernel: [<000000008b922506>] do_io_irq+0xe6/0x198 Jul 06 17:37:07 t35lp54 kernel: [<000000008b935826>] io_int_handler+0xd6/0x110 Jul 06 17:37:07 t35lp54 kernel: [<000000008b9358a6>] psw_idle_exit+0x0/0xa Jul 06 17:37:07 t35lp54 kernel: ([<000000008ab9c59a>] arch_cpu_idle+0x52/0xe0) Jul 06 17:37:07 t35lp54 kernel: [<000000008b933cfe>] default_idle_call+0x6e/0xd0 Jul 06 17:37:07 t35lp54 kernel: [<000000008ac59f4e>] do_idle+0xf6/0x1b0 Jul 06 17:37:07 t35lp54 kernel: [<000000008ac5a28e>] cpu_startup_entry+0x36/0x40 Jul 06 17:37:07 t35lp54 kernel: [<000000008abb0d90>] smp_start_secondary+0x148/0x158 Jul 06 17:37:07 t35lp54 kernel: [<000000008b935b9e>] restart_int_handler+0x6e/0x90 which confirms that the call is in softirq context. So in_task() covers exactly the cases where we want to have CPACF trng called: not in nmi, not in hard irq, not in soft irq but in normal task context and during kernel init. Signed-off-by: Harald Freudenberger Acked-by: Jason A. Donenfeld Reviewed-by: Juergen Christ Link: https://lore.kernel.org/r/20220713131721.257907-1-freude@linux.ibm.com Fixes: e4f74400308c ("s390/archrandom: simplify back to earlier design and initialize earlier") [agordeev@linux.ibm.com changed desc, added Fixes and Link, removed -stable] Signed-off-by: Alexander Gordeev Signed-off-by: Greg Kroah-Hartman --- arch/s390/include/asm/archrandom.h | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/s390/include/asm/archrandom.h b/arch/s390/include/asm/archrandom.h index 2c6e1c6ecbe7..4120c428dc37 100644 --- a/arch/s390/include/asm/archrandom.h +++ b/arch/s390/include/asm/archrandom.h @@ -2,7 +2,7 @@ /* * Kernel interface for the s390 arch_random_* functions * - * Copyright IBM Corp. 2017, 2020 + * Copyright IBM Corp. 
2017, 2022 * * Author: Harald Freudenberger * @@ -14,6 +14,7 @@ #ifdef CONFIG_ARCH_RANDOM #include +#include #include #include @@ -32,7 +33,8 @@ static inline bool __must_check arch_get_random_int(unsigned int *v) static inline bool __must_check arch_get_random_seed_long(unsigned long *v) { - if (static_branch_likely(&s390_arch_random_available)) { + if (static_branch_likely(&s390_arch_random_available) && + in_task()) { cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v)); atomic64_add(sizeof(*v), &s390_arch_random_counter); return true; @@ -42,7 +44,8 @@ static inline bool __must_check arch_get_random_seed_long(unsigned long *v) static inline bool __must_check arch_get_random_seed_int(unsigned int *v) { - if (static_branch_likely(&s390_arch_random_available)) { + if (static_branch_likely(&s390_arch_random_available) && + in_task()) { cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v)); atomic64_add(sizeof(*v), &s390_arch_random_counter); return true; From b38a8802c52d5ed9a0256b6ff95229e9f454e93e Mon Sep 17 00:00:00 2001 From: Alistair Popple Date: Wed, 20 Jul 2022 16:27:45 +1000 Subject: [PATCH 1703/1842] nouveau/svm: Fix to migrate all requested pages commit 66cee9097e2b74ff3c8cc040ce5717c521a0c3fa upstream. Users may request that pages from an OpenCL SVM allocation be migrated to the GPU with clEnqueueSVMMigrateMem(). In Nouveau this will call into nouveau_dmem_migrate_vma() to do the migration. If the total range to be migrated exceeds SG_MAX_SINGLE_ALLOC the pages will be migrated in chunks of size SG_MAX_SINGLE_ALLOC. However a typo in updating the starting address means that only the first chunk will get migrated. Fix the calculation so that the entire range will get migrated if possible. Signed-off-by: Alistair Popple Fixes: e3d8b0890469 ("drm/nouveau/svm: map pages after migration") Reviewed-by: Ralph Campbell Reviewed-by: Lyude Paul Signed-off-by: Lyude Paul Link: https://patchwork.freedesktop.org/patch/msgid/20220720062745.960701-1-apopple@nvidia.com Cc: # v5.8+ Signed-off-by: Greg Kroah-Hartman --- drivers/gpu/drm/nouveau/nouveau_dmem.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c index 92987daa5e17..5e72e6cb2f84 100644 --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c @@ -679,7 +679,11 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm, goto out_free_dma; for (i = 0; i < npages; i += max) { - args.end = start + (max << PAGE_SHIFT); + if (args.start + (max << PAGE_SHIFT) > end) + args.end = end; + else + args.end = args.start + (max << PAGE_SHIFT); + ret = migrate_vma_setup(&args); if (ret) goto out_free_pfns; From 45a84f04a9a08a43a4ef7f6d5a965fbf664c802f Mon Sep 17 00:00:00 2001 From: David Howells Date: Thu, 28 Jul 2022 10:31:06 +0100 Subject: [PATCH 1704/1842] watch_queue: Fix missing rcu annotation commit e0339f036ef4beb9b20f0b6532a1e0ece7f594c6 upstream. Since __post_watch_notification() walks wlist->watchers with only the RCU read lock held, we need to use RCU methods to add to the list (we already use RCU methods to remove from the list). Fix add_watch_to_object() to use hlist_add_head_rcu() instead of hlist_add_head() for that list. 
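For background, the general rule the fix applies, shown as a minimal kernel-style sketch with invented names (not the watch_queue code itself): a list that readers walk under rcu_read_lock() may only be modified through the _rcu list helpers, so a new node is fully initialized before it becomes visible to a concurrent reader.

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct item {
	int id;
	struct hlist_node node;
};

static HLIST_HEAD(items);
static DEFINE_SPINLOCK(items_lock);

/* Writer side: hlist_add_head_rcu() publishes the node with the memory
 * barrier needed so readers never observe a half-initialized entry. */
static void item_add(struct item *it)
{
	spin_lock(&items_lock);
	hlist_add_head_rcu(&it->node, &items);
	spin_unlock(&items_lock);
}

/* Reader side: lockless traversal inside an RCU read-side critical section. */
static bool item_exists(int id)
{
	struct item *it;
	bool found = false;

	rcu_read_lock();
	hlist_for_each_entry_rcu(it, &items, node) {
		if (it->id == id) {
			found = true;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}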
Fixes: c73be61cede5 ("pipe: Add general notification queue support") Signed-off-by: David Howells Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- kernel/watch_queue.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c index e5d22af43fa0..f78f90910398 100644 --- a/kernel/watch_queue.c +++ b/kernel/watch_queue.c @@ -497,7 +497,7 @@ int add_watch_to_object(struct watch *watch, struct watch_list *wlist) unlock_wqueue(wqueue); } - hlist_add_head(&watch->list_node, &wlist->watchers); + hlist_add_head_rcu(&watch->list_node, &wlist->watchers); return 0; } EXPORT_SYMBOL(add_watch_to_object); From 7fa8999b31674dc7697fa37a1eee088767cd84f3 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Thu, 28 Jul 2022 10:31:12 +0100 Subject: [PATCH 1705/1842] watch_queue: Fix missing locking in add_watch_to_object() commit e64ab2dbd882933b65cd82ff6235d705ad65dbb6 upstream. If a watch is being added to a queue, it needs to guard against interference from addition of a new watch, manual removal of a watch and removal of a watch due to some other queue being destroyed. KEYCTL_WATCH_KEY guards against this for the same {key,queue} pair by holding the key->sem writelocked and by holding refs on both the key and the queue - but that doesn't prevent interaction from other {key,queue} pairs. While add_watch_to_object() does take the spinlock on the event queue, it doesn't take the lock on the source's watch list. The assumption was that the caller would prevent that (say by taking key->sem) - but that doesn't prevent interference from the destruction of another queue. Fix this by locking the watcher list in add_watch_to_object(). Fixes: c73be61cede5 ("pipe: Add general notification queue support") Reported-by: syzbot+03d7b43290037d1f87ca@syzkaller.appspotmail.com Signed-off-by: David Howells cc: keyrings@vger.kernel.org Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman --- kernel/watch_queue.c | 58 +++++++++++++++++++++++++++----------------- 1 file changed, 36 insertions(+), 22 deletions(-) diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c index f78f90910398..d29731a30b8e 100644 --- a/kernel/watch_queue.c +++ b/kernel/watch_queue.c @@ -457,6 +457,33 @@ void init_watch(struct watch *watch, struct watch_queue *wqueue) rcu_assign_pointer(watch->queue, wqueue); } +static int add_one_watch(struct watch *watch, struct watch_list *wlist, struct watch_queue *wqueue) +{ + const struct cred *cred; + struct watch *w; + + hlist_for_each_entry(w, &wlist->watchers, list_node) { + struct watch_queue *wq = rcu_access_pointer(w->queue); + if (wqueue == wq && watch->id == w->id) + return -EBUSY; + } + + cred = current_cred(); + if (atomic_inc_return(&cred->user->nr_watches) > task_rlimit(current, RLIMIT_NOFILE)) { + atomic_dec(&cred->user->nr_watches); + return -EAGAIN; + } + + watch->cred = get_cred(cred); + rcu_assign_pointer(watch->watch_list, wlist); + + kref_get(&wqueue->usage); + kref_get(&watch->usage); + hlist_add_head(&watch->queue_node, &wqueue->watches); + hlist_add_head_rcu(&watch->list_node, &wlist->watchers); + return 0; +} + /** * add_watch_to_object - Add a watch on an object to a watch list * @watch: The watch to add @@ -471,34 +498,21 @@ void init_watch(struct watch *watch, struct watch_queue *wqueue) */ int add_watch_to_object(struct watch *watch, struct watch_list *wlist) { - struct watch_queue *wqueue = rcu_access_pointer(watch->queue); - struct watch *w; + struct watch_queue *wqueue; + int ret = -ENOENT; - 
hlist_for_each_entry(w, &wlist->watchers, list_node) { - struct watch_queue *wq = rcu_access_pointer(w->queue); - if (wqueue == wq && watch->id == w->id) - return -EBUSY; - } - - watch->cred = get_current_cred(); - rcu_assign_pointer(watch->watch_list, wlist); - - if (atomic_inc_return(&watch->cred->user->nr_watches) > - task_rlimit(current, RLIMIT_NOFILE)) { - atomic_dec(&watch->cred->user->nr_watches); - put_cred(watch->cred); - return -EAGAIN; - } + rcu_read_lock(); + wqueue = rcu_access_pointer(watch->queue); if (lock_wqueue(wqueue)) { - kref_get(&wqueue->usage); - kref_get(&watch->usage); - hlist_add_head(&watch->queue_node, &wqueue->watches); + spin_lock(&wlist->lock); + ret = add_one_watch(watch, wlist, wqueue); + spin_unlock(&wlist->lock); unlock_wqueue(wqueue); } - hlist_add_head_rcu(&watch->list_node, &wlist->watchers); - return 0; + rcu_read_unlock(); + return ret; } EXPORT_SYMBOL(add_watch_to_object); From f10a5f905a97cb31e0d8a151c5af5fd2e4494f91 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:12 -0700 Subject: [PATCH 1706/1842] tcp: Fix data-races around sysctl_tcp_dsack. commit 58ebb1c8b35a8ef38cd6927431e0fa7b173a632d upstream. While reading sysctl_tcp_dsack, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_input.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index d817f8c31c9c..e50da18912dd 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -4367,7 +4367,7 @@ static void tcp_dsack_set(struct sock *sk, u32 seq, u32 end_seq) { struct tcp_sock *tp = tcp_sk(sk); - if (tcp_is_sack(tp) && sock_net(sk)->ipv4.sysctl_tcp_dsack) { + if (tcp_is_sack(tp) && READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_dsack)) { int mib_idx; if (before(seq, tp->rcv_nxt)) @@ -4414,7 +4414,7 @@ static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb) NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKLOST); tcp_enter_quickack_mode(sk, TCP_MAX_QUICKACKS); - if (tcp_is_sack(tp) && sock_net(sk)->ipv4.sysctl_tcp_dsack) { + if (tcp_is_sack(tp) && READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_dsack)) { u32 end_seq = TCP_SKB_CB(skb)->end_seq; tcp_rcv_spurious_retrans(sk, skb); From 3cddb7a7a5d5ceaf02354b127e64985d09f83e9a Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:13 -0700 Subject: [PATCH 1707/1842] tcp: Fix a data-race around sysctl_tcp_app_win. commit 02ca527ac5581cf56749db9fd03d854e842253dd upstream. While reading sysctl_tcp_app_win, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_input.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index e50da18912dd..875064ddb9ee 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -503,7 +503,7 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb) */ static void tcp_init_buffer_space(struct sock *sk) { - int tcp_app_win = sock_net(sk)->ipv4.sysctl_tcp_app_win; + int tcp_app_win = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_app_win); struct tcp_sock *tp = tcp_sk(sk); int maxwin; From 312ce3901fd8c1194601457fc22e0f7932f0f28b Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:14 -0700 Subject: [PATCH 1708/1842] tcp: Fix a data-race around sysctl_tcp_adv_win_scale. commit 36eeee75ef0157e42fb6593dcc65daab289b559e upstream. While reading sysctl_tcp_adv_win_scale, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- include/net/tcp.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/net/tcp.h b/include/net/tcp.h index 44bfb22069c1..8129ce9a0771 100644 --- a/include/net/tcp.h +++ b/include/net/tcp.h @@ -1396,7 +1396,7 @@ void tcp_select_initial_window(const struct sock *sk, int __space, static inline int tcp_win_from_space(const struct sock *sk, int space) { - int tcp_adv_win_scale = sock_net(sk)->ipv4.sysctl_tcp_adv_win_scale; + int tcp_adv_win_scale = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_adv_win_scale); return tcp_adv_win_scale <= 0 ? (space>>(-tcp_adv_win_scale)) : From 81c45f49e678c999985ef315e0cfe200192d1c2d Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:15 -0700 Subject: [PATCH 1709/1842] tcp: Fix a data-race around sysctl_tcp_frto. commit 706c6202a3589f290e1ef9be0584a8f4a3cc0507 upstream. While reading sysctl_tcp_frto, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_input.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 875064ddb9ee..7db10f74c8fb 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -2135,7 +2135,7 @@ void tcp_enter_loss(struct sock *sk) * loss recovery is underway except recurring timeout(s) on * the same SND.UNA (sec 3.2). Disable F-RTO on path MTU probing */ - tp->frto = net->ipv4.sysctl_tcp_frto && + tp->frto = READ_ONCE(net->ipv4.sysctl_tcp_frto) && (new_recovery || icsk->icsk_retransmits) && !inet_csk(sk)->icsk_mtup.probe_size; } From 3fb21b67c0fce00dcb6aac5da2aa3f4ab0e89d59 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:16 -0700 Subject: [PATCH 1710/1842] tcp: Fix a data-race around sysctl_tcp_nometrics_save. commit 8499a2454d9e8a55ce616ede9f9580f36fd5b0f3 upstream. While reading sysctl_tcp_nometrics_save, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_metrics.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index 8d7e32f4abf6..699e0ae60561 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -329,7 +329,7 @@ void tcp_update_metrics(struct sock *sk) int m; sk_dst_confirm(sk); - if (net->ipv4.sysctl_tcp_nometrics_save || !dst) + if (READ_ONCE(net->ipv4.sysctl_tcp_nometrics_save) || !dst) return; rcu_read_lock(); From 2b4b373271e5a39117f520dfa931f2f801f3076f Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:17 -0700 Subject: [PATCH 1711/1842] tcp: Fix data-races around sysctl_tcp_no_ssthresh_metrics_save. commit ab1ba21b523ab496b1a4a8e396333b24b0a18f9a upstream. While reading sysctl_tcp_no_ssthresh_metrics_save, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 65e6d90168f3 ("net-tcp: Disable TCP ssthresh metrics cache by default") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_metrics.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c index 699e0ae60561..f3ca6eea2ca3 100644 --- a/net/ipv4/tcp_metrics.c +++ b/net/ipv4/tcp_metrics.c @@ -385,7 +385,7 @@ void tcp_update_metrics(struct sock *sk) if (tcp_in_initial_slowstart(tp)) { /* Slow start still did not finish. */ - if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save && + if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) && !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) { val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH); if (val && (tp->snd_cwnd >> 1) > val) @@ -401,7 +401,7 @@ void tcp_update_metrics(struct sock *sk) } else if (!tcp_in_slow_start(tp) && icsk->icsk_ca_state == TCP_CA_Open) { /* Cong. avoidance phase, cwnd is reliable. */ - if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save && + if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) && !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) tcp_metric_set(tm, TCP_METRIC_SSTHRESH, max(tp->snd_cwnd >> 1, tp->snd_ssthresh)); @@ -418,7 +418,7 @@ void tcp_update_metrics(struct sock *sk) tcp_metric_set(tm, TCP_METRIC_CWND, (val + tp->snd_ssthresh) >> 1); } - if (!net->ipv4.sysctl_tcp_no_ssthresh_metrics_save && + if (!READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) && !tcp_metric_locked(tm, TCP_METRIC_SSTHRESH)) { val = tcp_metric_get(tm, TCP_METRIC_SSTHRESH); if (val && tp->snd_ssthresh > val) @@ -463,7 +463,7 @@ void tcp_init_metrics(struct sock *sk) if (tcp_metric_locked(tm, TCP_METRIC_CWND)) tp->snd_cwnd_clamp = tcp_metric_get(tm, TCP_METRIC_CWND); - val = net->ipv4.sysctl_tcp_no_ssthresh_metrics_save ? + val = READ_ONCE(net->ipv4.sysctl_tcp_no_ssthresh_metrics_save) ? 0 : tcp_metric_get(tm, TCP_METRIC_SSTHRESH); if (val) { tp->snd_ssthresh = val; From 9ed6f97c8d77d0c06a70bc0189bf4c0cc8fd0144 Mon Sep 17 00:00:00 2001 From: Maciej Fijalkowski Date: Thu, 7 Jul 2022 12:20:42 +0200 Subject: [PATCH 1712/1842] ice: check (DD | EOF) bits on Rx descriptor rather than (EOP | RS) commit 283d736ff7c7e96ac5b32c6c0de40372f8eb171e upstream. Tx side sets EOP and RS bits on descriptors to indicate that a particular descriptor is the last one and needs to generate an irq when it was sent. These bits should not be checked on completion path regardless whether it's the Tx or the Rx. 
The DD bit serves this purpose and indicates that a particular descriptor is either for Rx or was successfully Txed. EOF is also set, as the loopback test does not transmit fragmented frames. Check the (DD | EOF) bits in ice_lbtest_receive_frames() instead of the EOP and RS pair. Fixes: 0e674aeb0b77 ("ice: Add handler for ethtool selftest") Signed-off-by: Maciej Fijalkowski Tested-by: George Kuruvinakunnel Signed-off-by: Tony Nguyen Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/intel/ice/ice_ethtool.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 060897eb9cab..7f1bf71844bc 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -652,7 +652,8 @@ static int ice_lbtest_receive_frames(struct ice_ring *rx_ring) rx_desc = ICE_RX_DESC(rx_ring, i); if (!(rx_desc->wb.status_error0 & - cpu_to_le16(ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS))) + (cpu_to_le16(BIT(ICE_RX_FLEX_DESC_STATUS0_DD_S)) | + cpu_to_le16(BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S))))) continue; rx_buf = &rx_ring->rx_buf[i]; From 160f79561e8746d69023a1cad6c80ebaf6be81f7 Mon Sep 17 00:00:00 2001 From: Maciej Fijalkowski Date: Thu, 7 Jul 2022 12:20:43 +0200 Subject: [PATCH 1713/1842] ice: do not setup vlan for loopback VSI commit cc019545a238518fa9da1e2a889f6e1bb1005a63 upstream. Currently the loopback test is failing due to the error returned from ice_vsi_vlan_setup(). Skip calling it when preparing the loopback VSI. Fixes: 0e674aeb0b77 ("ice: Add handler for ethtool selftest") Signed-off-by: Maciej Fijalkowski Tested-by: George Kuruvinakunnel Signed-off-by: Tony Nguyen Signed-off-by: Greg Kroah-Hartman --- drivers/net/ethernet/intel/ice/ice_main.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index aae79fdd5172..810f2bdb9164 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -5203,10 +5203,12 @@ int ice_vsi_cfg(struct ice_vsi *vsi) if (vsi->netdev) { ice_set_rx_mode(vsi->netdev); - err = ice_vsi_vlan_setup(vsi); + if (vsi->type != ICE_VSI_LB) { + err = ice_vsi_vlan_setup(vsi); - if (err) - return err; + if (err) + return err; + } } ice_vsi_cfg_dcb_rings(vsi); From 54a73d65440e8e4683306ef5c25a4525fc8b701e Mon Sep 17 00:00:00 2001 From: Liang He Date: Tue, 19 Jul 2022 15:15:29 +0800 Subject: [PATCH 1714/1842] scsi: ufs: host: Hold reference returned by of_parse_phandle() commit a3435afba87dc6cd83f5595e7607f3c40f93ef01 upstream. In ufshcd_populate_vreg(), we should hold the reference returned by of_parse_phandle() and then use it to call of_node_put() for refcount balance. Link: https://lore.kernel.org/r/20220719071529.1081166-1-windhl@126.com Fixes: aa4976130934 ("ufs: Add regulator enable support") Reviewed-by: Bart Van Assche Signed-off-by: Liang He Signed-off-by: Martin K.
Petersen Signed-off-by: Greg Kroah-Hartman --- drivers/scsi/ufs/ufshcd-pltfrm.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c index 0f2430fb398d..576cc39077f3 100644 --- a/drivers/scsi/ufs/ufshcd-pltfrm.c +++ b/drivers/scsi/ufs/ufshcd-pltfrm.c @@ -107,9 +107,20 @@ static int ufshcd_parse_clock_info(struct ufs_hba *hba) return ret; } +static bool phandle_exists(const struct device_node *np, + const char *phandle_name, int index) +{ + struct device_node *parse_np = of_parse_phandle(np, phandle_name, index); + + if (parse_np) + of_node_put(parse_np); + + return parse_np != NULL; +} + #define MAX_PROP_SIZE 32 static int ufshcd_populate_vreg(struct device *dev, const char *name, - struct ufs_vreg **out_vreg) + struct ufs_vreg **out_vreg) { int ret = 0; char prop_name[MAX_PROP_SIZE]; @@ -122,7 +133,7 @@ static int ufshcd_populate_vreg(struct device *dev, const char *name, } snprintf(prop_name, MAX_PROP_SIZE, "%s-supply", name); - if (!of_parse_phandle(np, prop_name, 0)) { + if (!phandle_exists(np, prop_name, 0)) { dev_info(dev, "%s: Unable to find %s regulator, assuming enabled\n", __func__, prop_name); goto out; From 77ac046a9ad3b9ee94d02f999d30db3ac106c98c Mon Sep 17 00:00:00 2001 From: Wei Wang Date: Thu, 21 Jul 2022 20:44:04 +0000 Subject: [PATCH 1715/1842] Revert "tcp: change pingpong threshold to 3" commit 4d8f24eeedc58d5f87b650ddda73c16e8ba56559 upstream. This reverts commit 4a41f453bedfd5e9cd040bad509d9da49feb3e2c. This to-be-reverted commit was meant to apply a stricter rule for the stack to enter pingpong mode. However, the condition used to check for interactive session "before(tp->lsndtime, icsk->icsk_ack.lrcvtime)" is jiffy based and might be too coarse, which delays the stack entering pingpong mode. We revert this patch so that we no longer use the above condition to determine interactive session, and also reduce pingpong threshold to 1. 
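As a rough illustration only (not the kernel code), here is a minimal userspace model of the restored behaviour: with TCP_PINGPONG_THRESH back at 1, a single data packet sent within the delayed-ACK timeout (ato) of the last receive is enough to put the connection into interactive (pingpong) mode. The toy struct and harness below are hypothetical stand-ins for the icsk fields.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define TCP_PINGPONG_THRESH 1

struct toy_icsk {
    uint32_t lrcvtime;  /* time of the last received data packet */
    uint32_t ato;       /* delayed-ACK timeout */
    uint8_t  pingpong;  /* >= threshold means interactive session */
};

/* Models the restored tcp_event_data_sent() heuristic: a reply sent
 * within ato of the last receive enters pingpong mode directly.
 */
static void toy_event_data_sent(struct toy_icsk *icsk, uint32_t now)
{
    if (now - icsk->lrcvtime < icsk->ato)
        icsk->pingpong = TCP_PINGPONG_THRESH;
}

static bool toy_in_pingpong_mode(const struct toy_icsk *icsk)
{
    return icsk->pingpong >= TCP_PINGPONG_THRESH;
}

int main(void)
{
    struct toy_icsk icsk = { .lrcvtime = 1000, .ato = 40, .pingpong = 0 };

    toy_event_data_sent(&icsk, 1010);   /* reply 10 ticks after last receive */
    printf("pingpong mode: %s\n", toy_in_pingpong_mode(&icsk) ? "yes" : "no");
    return 0;
}

With the threshold at 3 plus the jiffy-based before() check, the same exchange would have needed several timely replies before delayed ACKs kicked in, which is the delay the revert removes.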
Fixes: 4a41f453bedf ("tcp: change pingpong threshold to 3") Reported-by: LemmyHuang Suggested-by: Neal Cardwell Signed-off-by: Wei Wang Acked-by: Neal Cardwell Reviewed-by: Eric Dumazet Link: https://lore.kernel.org/r/20220721204404.388396-1-weiwan@google.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- include/net/inet_connection_sock.h | 10 +--------- net/ipv4/tcp_output.c | 15 ++++++--------- 2 files changed, 7 insertions(+), 18 deletions(-) diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h index 0b1864a82d4a..ff901aade442 100644 --- a/include/net/inet_connection_sock.h +++ b/include/net/inet_connection_sock.h @@ -317,7 +317,7 @@ void inet_csk_update_fastreuse(struct inet_bind_bucket *tb, struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu); -#define TCP_PINGPONG_THRESH 3 +#define TCP_PINGPONG_THRESH 1 static inline void inet_csk_enter_pingpong_mode(struct sock *sk) { @@ -334,14 +334,6 @@ static inline bool inet_csk_in_pingpong_mode(struct sock *sk) return inet_csk(sk)->icsk_ack.pingpong >= TCP_PINGPONG_THRESH; } -static inline void inet_csk_inc_pingpong_cnt(struct sock *sk) -{ - struct inet_connection_sock *icsk = inet_csk(sk); - - if (icsk->icsk_ack.pingpong < U8_MAX) - icsk->icsk_ack.pingpong++; -} - static inline bool inet_csk_has_ulp(struct sock *sk) { return inet_sk(sk)->is_icsk && !!inet_csk(sk)->icsk_ulp_ops; diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 9b67c61576e4..dfa41d73deab 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -167,16 +167,13 @@ static void tcp_event_data_sent(struct tcp_sock *tp, if (tcp_packets_in_flight(tp) == 0) tcp_ca_event(sk, CA_EVENT_TX_START); - /* If this is the first data packet sent in response to the - * previous received data, - * and it is a reply for ato after last received packet, - * increase pingpong count. - */ - if (before(tp->lsndtime, icsk->icsk_ack.lrcvtime) && - (u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato) - inet_csk_inc_pingpong_cnt(sk); - tp->lsndtime = now; + + /* If it is a reply for ato after last received + * packet, enter pingpong mode. + */ + if ((u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato) + inet_csk_enter_pingpong_mode(sk); } /* Account for an ACK we sent. */ From 3e933125830a6f53de6725cf50174df6c3384368 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:18 -0700 Subject: [PATCH 1716/1842] tcp: Fix data-races around sysctl_tcp_moderate_rcvbuf. commit 780476488844e070580bfc9e3bc7832ec1cea883 upstream. While reading sysctl_tcp_moderate_rcvbuf, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_input.c | 2 +- net/mptcp/protocol.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 7db10f74c8fb..08f3fe8b9654 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -693,7 +693,7 @@ void tcp_rcv_space_adjust(struct sock *sk) * */ - if (sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf && + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) && !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) { int rcvmem, rcvbuf; u64 rcvwin, grow; diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index 8123c79e2791..d0e91aa7b30e 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -1421,7 +1421,7 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied) if (msk->rcvq_space.copied <= msk->rcvq_space.space) goto new_measure; - if (sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf && + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) && !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) { int rcvmem, rcvbuf; u64 rcvwin, grow; From 9ffb4fdfd80a93c7e2a5e3bc649cd3b219ecd2d9 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:20 -0700 Subject: [PATCH 1717/1842] tcp: Fix a data-race around sysctl_tcp_limit_output_bytes. commit 9fb90193fbd66b4c5409ef729fd081861f8b6351 upstream. While reading sysctl_tcp_limit_output_bytes, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 46d3ceabd8d9 ("tcp: TCP Small Queues") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index dfa41d73deab..1a144c38039c 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -2499,7 +2499,7 @@ static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb, sk->sk_pacing_rate >> READ_ONCE(sk->sk_pacing_shift)); if (sk->sk_pacing_status == SK_PACING_NONE) limit = min_t(unsigned long, limit, - sock_net(sk)->ipv4.sysctl_tcp_limit_output_bytes); + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_limit_output_bytes)); limit <<= factor; if (static_branch_unlikely(&tcp_tx_delay_enabled) && From c37c7f35d7b707c2db656b0ab60419d4fb45b5c5 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:21 -0700 Subject: [PATCH 1718/1842] tcp: Fix a data-race around sysctl_tcp_challenge_ack_limit. commit db3815a2fa691da145cfbe834584f31ad75df9ff upstream. While reading sysctl_tcp_challenge_ack_limit, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 282f23c6ee34 ("tcp: implement RFC 5961 3.2") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- net/ipv4/tcp_input.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 08f3fe8b9654..c31db58b93a6 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -3576,7 +3576,7 @@ static void tcp_send_challenge_ack(struct sock *sk, const struct sk_buff *skb) /* Then check host-wide RFC 5961 rate limit. 
*/ now = jiffies / HZ; if (now != challenge_timestamp) { - u32 ack_limit = net->ipv4.sysctl_tcp_challenge_ack_limit; + u32 ack_limit = READ_ONCE(net->ipv4.sysctl_tcp_challenge_ack_limit); u32 half = (ack_limit + 1) >> 1; challenge_timestamp = now; From a84b8b53a50b85e9b48a10ea6d61dd930577d833 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 27 Jul 2022 18:22:20 -0700 Subject: [PATCH 1719/1842] net: ping6: Fix memleak in ipv6_renew_options(). commit e27326009a3d247b831eda38878c777f6f4eb3d1 upstream. When we close ping6 sockets, some resources are left unfreed because pingv6_prot is missing sk->sk_prot->destroy(). As reported by syzbot [0], just three syscalls leak 96 bytes and easily cause OOM. struct ipv6_sr_hdr *hdr; char data[24] = {0}; int fd; hdr = (struct ipv6_sr_hdr *)data; hdr->hdrlen = 2; hdr->type = IPV6_SRCRT_TYPE_4; fd = socket(AF_INET6, SOCK_DGRAM, NEXTHDR_ICMP); setsockopt(fd, IPPROTO_IPV6, IPV6_RTHDR, data, 24); close(fd); To fix memory leaks, let's add a destroy function. Note the socket() syscall checks if the GID is within the range of net.ipv4.ping_group_range. The default value is [1, 0] so that no GID meets the condition (1 <= GID <= 0). Thus, the local DoS does not succeed until we change the default value. However, at least Ubuntu/Fedora/RHEL loosen it. $ cat /usr/lib/sysctl.d/50-default.conf ... -net.ipv4.ping_group_range = 0 2147483647 Also, there could be another path reported with these options, and some of them require CAP_NET_RAW. setsockopt IPV6_ADDRFORM (inet6_sk(sk)->pktoptions) IPV6_RECVPATHMTU (inet6_sk(sk)->rxpmtu) IPV6_HOPOPTS (inet6_sk(sk)->opt) IPV6_RTHDRDSTOPTS (inet6_sk(sk)->opt) IPV6_RTHDR (inet6_sk(sk)->opt) IPV6_DSTOPTS (inet6_sk(sk)->opt) IPV6_2292PKTOPTIONS (inet6_sk(sk)->opt) getsockopt IPV6_FLOWLABEL_MGR (inet6_sk(sk)->ipv6_fl_list) For the record, I left a different splat with syzbot's one. unreferenced object 0xffff888006270c60 (size 96): comm "repro2", pid 231, jiffies 4294696626 (age 13.118s) hex dump (first 32 bytes): 01 00 00 00 44 00 00 00 00 00 00 00 00 00 00 00 ....D........... 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 
backtrace: [<00000000f6bc7ea9>] sock_kmalloc (net/core/sock.c:2564 net/core/sock.c:2554) [<000000006d699550>] do_ipv6_setsockopt.constprop.0 (net/ipv6/ipv6_sockglue.c:715) [<00000000c3c3b1f5>] ipv6_setsockopt (net/ipv6/ipv6_sockglue.c:1024) [<000000007096a025>] __sys_setsockopt (net/socket.c:2254) [<000000003a8ff47b>] __x64_sys_setsockopt (net/socket.c:2265 net/socket.c:2262 net/socket.c:2262) [<000000007c409dcb>] do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80) [<00000000e939c4a9>] entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:120) [0]: https://syzkaller.appspot.com/bug?extid=a8430774139ec3ab7176 Fixes: 6d0bfe226116 ("net: ipv6: Add IPv6 support to the ping socket.") Reported-by: syzbot+a8430774139ec3ab7176@syzkaller.appspotmail.com Reported-by: Ayushman Dutta Signed-off-by: Kuniyuki Iwashima Reviewed-by: David Ahern Reviewed-by: Eric Dumazet Link: https://lore.kernel.org/r/20220728012220.46918-1-kuniyu@amazon.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- net/ipv6/ping.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c index 6ac88fe24a8e..135e3a060caa 100644 --- a/net/ipv6/ping.c +++ b/net/ipv6/ping.c @@ -22,6 +22,11 @@ #include #include +static void ping_v6_destroy(struct sock *sk) +{ + inet6_destroy_sock(sk); +} + /* Compatibility glue so we can support IPv6 when it's compiled as a module */ static int dummy_ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len) @@ -166,6 +171,7 @@ struct proto pingv6_prot = { .owner = THIS_MODULE, .init = ping_init_sock, .close = ping_close, + .destroy = ping_v6_destroy, .connect = ip6_datagram_connect_v6_only, .disconnect = __udp_disconnect, .setsockopt = ipv6_setsockopt, From 8008e797ec6f8537f1c5c216c005b977a75043f6 Mon Sep 17 00:00:00 2001 From: Ziyang Xuan Date: Thu, 28 Jul 2022 09:33:07 +0800 Subject: [PATCH 1720/1842] ipv6/addrconf: fix a null-ptr-deref bug for ip6_ptr commit 85f0173df35e5462d89947135a6a5599c6c3ef6f upstream. Change net device's MTU to smaller than IPV6_MIN_MTU or unregister device while matching route. That may trigger null-ptr-deref bug for ip6_ptr probability as following. 
========================================================= BUG: KASAN: null-ptr-deref in find_match.part.0+0x70/0x134 Read of size 4 at addr 0000000000000308 by task ping6/263 CPU: 2 PID: 263 Comm: ping6 Not tainted 5.19.0-rc7+ #14 Call trace: dump_backtrace+0x1a8/0x230 show_stack+0x20/0x70 dump_stack_lvl+0x68/0x84 print_report+0xc4/0x120 kasan_report+0x84/0x120 __asan_load4+0x94/0xd0 find_match.part.0+0x70/0x134 __find_rr_leaf+0x408/0x470 fib6_table_lookup+0x264/0x540 ip6_pol_route+0xf4/0x260 ip6_pol_route_output+0x58/0x70 fib6_rule_lookup+0x1a8/0x330 ip6_route_output_flags_noref+0xd8/0x1a0 ip6_route_output_flags+0x58/0x160 ip6_dst_lookup_tail+0x5b4/0x85c ip6_dst_lookup_flow+0x98/0x120 rawv6_sendmsg+0x49c/0xc70 inet_sendmsg+0x68/0x94 Reproducer as following: Firstly, prepare conditions: $ip netns add ns1 $ip netns add ns2 $ip link add veth1 type veth peer name veth2 $ip link set veth1 netns ns1 $ip link set veth2 netns ns2 $ip netns exec ns1 ip -6 addr add 2001:0db8:0:f101::1/64 dev veth1 $ip netns exec ns2 ip -6 addr add 2001:0db8:0:f101::2/64 dev veth2 $ip netns exec ns1 ifconfig veth1 up $ip netns exec ns2 ifconfig veth2 up $ip netns exec ns1 ip -6 route add 2000::/64 dev veth1 metric 1 $ip netns exec ns2 ip -6 route add 2001::/64 dev veth2 metric 1 Secondly, execute the following two commands in two ssh windows respectively: $ip netns exec ns1 sh $while true; do ip -6 addr add 2001:0db8:0:f101::1/64 dev veth1; ip -6 route add 2000::/64 dev veth1 metric 1; ping6 2000::2; done $ip netns exec ns1 sh $while true; do ip link set veth1 mtu 1000; ip link set veth1 mtu 1500; sleep 5; done It is because ip6_ptr has been assigned to NULL in addrconf_ifdown() firstly, then ip6_ignore_linkdown() accesses ip6_ptr directly without NULL check. cpu0 cpu1 fib6_table_lookup __find_rr_leaf addrconf_notify [ NETDEV_CHANGEMTU ] addrconf_ifdown RCU_INIT_POINTER(dev->ip6_ptr, NULL) find_match ip6_ignore_linkdown So we can add NULL check for ip6_ptr before using in ip6_ignore_linkdown() to fix the null-ptr-deref bug. Fixes: dcd1f572954f ("net/ipv6: Remove fib6_idev") Signed-off-by: Ziyang Xuan Reviewed-by: David Ahern Link: https://lore.kernel.org/r/20220728013307.656257-1-william.xuanziyang@huawei.com Signed-off-by: Jakub Kicinski Signed-off-by: Greg Kroah-Hartman --- include/net/addrconf.h | 3 +++ 1 file changed, 3 insertions(+) diff --git a/include/net/addrconf.h b/include/net/addrconf.h index e7ce719838b5..edba74a53683 100644 --- a/include/net/addrconf.h +++ b/include/net/addrconf.h @@ -405,6 +405,9 @@ static inline bool ip6_ignore_linkdown(const struct net_device *dev) { const struct inet6_dev *idev = __in6_dev_get(dev); + if (unlikely(!idev)) + return true; + return !!idev->cnf.ignore_routes_with_linkdown; } From 4c1318dabeb98ad9650909e9abb650b523aada15 Mon Sep 17 00:00:00 2001 From: Maxim Mikityanskiy Date: Thu, 21 Jul 2022 12:11:27 +0300 Subject: [PATCH 1721/1842] net/tls: Remove the context from the list in tls_device_down commit f6336724a4d4220c89a4ec38bca84b03b178b1a3 upstream. tls_device_down takes a reference on all contexts it's going to move to the degraded state (software fallback). If sk_destruct runs afterwards, it can reduce the reference counter back to 1 and return early without destroying the context. Then tls_device_down will release the reference it took and call tls_device_free_ctx. However, the context will still stay in tls_device_down_list forever. The list will contain an item, memory for which is released, making a memory corruption possible. 
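For illustration, a minimal userspace sketch of this bug class, using a toy doubly-linked list rather than the kernel's tls structures: freeing a node that is still linked leaves dangling pointers in the list head, so the node has to be unlinked first, which is what the fix described next does for the context.

#include <stdio.h>
#include <stdlib.h>

struct node {
    struct node *prev, *next;
    int payload;
};

static void list_init(struct node *head)
{
    head->prev = head->next = head;
}

static void list_add(struct node *head, struct node *n)
{
    n->next = head->next;
    n->prev = head;
    head->next->prev = n;
    head->next = n;
}

static void list_del(struct node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->prev = n->next = n;    /* node is no longer reachable from the list */
}

int main(void)
{
    struct node head, *ctx = malloc(sizeof(*ctx));

    if (!ctx)
        return 1;
    list_init(&head);
    ctx->payload = 42;
    list_add(&head, ctx);

    list_del(ctx);            /* unlink before the free, as the fix does */
    free(ctx);

    printf("list empty after free: %d\n", head.next == &head);
    return 0;
}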
Fix the above bug by properly removing the context from all lists before any call to tls_device_free_ctx. Fixes: 3740651bf7e2 ("tls: Fix context leak on tls_device_down") Signed-off-by: Maxim Mikityanskiy Reviewed-by: Tariq Toukan Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman --- net/tls/tls_device.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index 23eab7ac43ee..5cb6846544cc 100644 --- a/net/tls/tls_device.c +++ b/net/tls/tls_device.c @@ -1349,8 +1349,13 @@ static int tls_device_down(struct net_device *netdev) * by tls_device_free_ctx. rx_conf and tx_conf stay in TLS_HW. * Now release the ref taken above. */ - if (refcount_dec_and_test(&ctx->refcount)) + if (refcount_dec_and_test(&ctx->refcount)) { + /* sk_destruct ran after tls_device_down took a ref, and + * it returned early. Complete the destruction here. + */ + list_del(&ctx->list); tls_device_free_ctx(ctx); + } } up_write(&device_offload_lock); From b399ffafffba39f47b731b26a5da1dc0ffc4b3ad Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 15 Jul 2022 10:17:44 -0700 Subject: [PATCH 1722/1842] igmp: Fix data-races around sysctl_igmp_qrv. [ Upstream commit 8ebcc62c738f68688ee7c6fec2efe5bc6d3d7e60 ] While reading sysctl_igmp_qrv, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. This test can be packed into a helper, so such changes will be in the follow-up series after net is merged into net-next. qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv); Fixes: a9fe8e29945d ("ipv4: implement igmp_qrv sysctl to tune igmp robustness variable") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/igmp.c | 24 +++++++++++++----------- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c index 428cc3a4c36f..c71b863093ac 100644 --- a/net/ipv4/igmp.c +++ b/net/ipv4/igmp.c @@ -827,7 +827,7 @@ static void igmp_ifc_event(struct in_device *in_dev) struct net *net = dev_net(in_dev->dev); if (IGMP_V1_SEEN(in_dev) || IGMP_V2_SEEN(in_dev)) return; - WRITE_ONCE(in_dev->mr_ifc_count, in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv); + WRITE_ONCE(in_dev->mr_ifc_count, in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv)); igmp_ifc_start_timer(in_dev, 1); } @@ -1009,7 +1009,7 @@ static bool igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb, * received value was zero, use the default or statically * configured value. */ - in_dev->mr_qrv = ih3->qrv ?: net->ipv4.sysctl_igmp_qrv; + in_dev->mr_qrv = ih3->qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv); in_dev->mr_qi = IGMPV3_QQIC(ih3->qqic)*HZ ?: IGMP_QUERY_INTERVAL; /* RFC3376, 8.3. 
Query Response Interval: @@ -1189,7 +1189,7 @@ static void igmpv3_add_delrec(struct in_device *in_dev, struct ip_mc_list *im, pmc->interface = im->interface; in_dev_hold(in_dev); pmc->multiaddr = im->multiaddr; - pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; + pmc->crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv); pmc->sfmode = im->sfmode; if (pmc->sfmode == MCAST_INCLUDE) { struct ip_sf_list *psf; @@ -1240,9 +1240,11 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im) swap(im->tomb, pmc->tomb); swap(im->sources, pmc->sources); for (psf = im->sources; psf; psf = psf->sf_next) - psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; + psf->sf_crcount = in_dev->mr_qrv ?: + READ_ONCE(net->ipv4.sysctl_igmp_qrv); } else { - im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; + im->crcount = in_dev->mr_qrv ?: + READ_ONCE(net->ipv4.sysctl_igmp_qrv); } in_dev_put(pmc->interface); kfree_pmc(pmc); @@ -1349,7 +1351,7 @@ static void igmp_group_added(struct ip_mc_list *im) if (in_dev->dead) return; - im->unsolicit_count = net->ipv4.sysctl_igmp_qrv; + im->unsolicit_count = READ_ONCE(net->ipv4.sysctl_igmp_qrv); if (IGMP_V1_SEEN(in_dev) || IGMP_V2_SEEN(in_dev)) { spin_lock_bh(&im->lock); igmp_start_timer(im, IGMP_INITIAL_REPORT_DELAY); @@ -1363,7 +1365,7 @@ static void igmp_group_added(struct ip_mc_list *im) * IN() to IN(A). */ if (im->sfmode == MCAST_EXCLUDE) - im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; + im->crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv); igmp_ifc_event(in_dev); #endif @@ -1754,7 +1756,7 @@ static void ip_mc_reset(struct in_device *in_dev) in_dev->mr_qi = IGMP_QUERY_INTERVAL; in_dev->mr_qri = IGMP_QUERY_RESPONSE_INTERVAL; - in_dev->mr_qrv = net->ipv4.sysctl_igmp_qrv; + in_dev->mr_qrv = READ_ONCE(net->ipv4.sysctl_igmp_qrv); } #else static void ip_mc_reset(struct in_device *in_dev) @@ -1888,7 +1890,7 @@ static int ip_mc_del1_src(struct ip_mc_list *pmc, int sfmode, #ifdef CONFIG_IP_MULTICAST if (psf->sf_oldin && !IGMP_V1_SEEN(in_dev) && !IGMP_V2_SEEN(in_dev)) { - psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; + psf->sf_crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv); psf->sf_next = pmc->tomb; pmc->tomb = psf; rv = 1; @@ -1952,7 +1954,7 @@ static int ip_mc_del_src(struct in_device *in_dev, __be32 *pmca, int sfmode, /* filter mode change */ pmc->sfmode = MCAST_INCLUDE; #ifdef CONFIG_IP_MULTICAST - pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; + pmc->crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv); WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount); for (psf = pmc->sources; psf; psf = psf->sf_next) psf->sf_crcount = 0; @@ -2131,7 +2133,7 @@ static int ip_mc_add_src(struct in_device *in_dev, __be32 *pmca, int sfmode, #ifdef CONFIG_IP_MULTICAST /* else no filters; keep old mode for reports */ - pmc->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; + pmc->crcount = in_dev->mr_qrv ?: READ_ONCE(net->ipv4.sysctl_igmp_qrv); WRITE_ONCE(in_dev->mr_ifc_count, pmc->crcount); for (psf = pmc->sources; psf; psf = psf->sf_next) psf->sf_crcount = 0; From 5584fe9718a4d546a6ee531d0481ed3cdaca1e25 Mon Sep 17 00:00:00 2001 From: Liang He Date: Wed, 20 Jul 2022 21:10:03 +0800 Subject: [PATCH 1723/1842] net: sungem_phy: Add of_node_put() for reference returned by of_get_parent() [ Upstream commit ebbbe23fdf6070e31509638df3321688358cc211 ] In bcm5421_init(), we should call of_node_put() for the reference returned by of_get_parent() which has 
increased the refcount. Fixes: 3c326fe9cb7a ("[PATCH] ppc64: Add new PHY to sungem") Signed-off-by: Liang He Link: https://lore.kernel.org/r/20220720131003.1287426-1-windhl@126.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/sungem_phy.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/sungem_phy.c b/drivers/net/sungem_phy.c index 291fa449993f..45f295403cb5 100644 --- a/drivers/net/sungem_phy.c +++ b/drivers/net/sungem_phy.c @@ -454,6 +454,7 @@ static int bcm5421_init(struct mii_phy* phy) int can_low_power = 1; if (np == NULL || of_get_property(np, "no-autolowpower", NULL)) can_low_power = 0; + of_node_put(np); if (can_low_power) { /* Enable automatic low-power */ sungem_phy_write(phy, 0x1c, 0x9002); From f47e7e5b49e3f14a7dff5c1085f7a9e6f5a18b7b Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:22 -0700 Subject: [PATCH 1724/1842] tcp: Fix a data-race around sysctl_tcp_min_tso_segs. [ Upstream commit e0bb4ab9dfddd872622239f49fb2bd403b70853b ] While reading sysctl_tcp_min_tso_segs, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 95bd09eb2750 ("tcp: TSO packets automatic sizing") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_output.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 1a144c38039c..657b0a4d9359 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -1984,7 +1984,7 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now) min_tso = ca_ops->min_tso_segs ? ca_ops->min_tso_segs(sk) : - sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs; + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs); tso_segs = tcp_tso_autosize(sk, mss_now, min_tso); return min_t(u32, tso_segs, sk->sk_gso_max_segs); From 83edb788e69a0d5ad40ba5be6a60d6db265d0500 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:24 -0700 Subject: [PATCH 1725/1842] tcp: Fix a data-race around sysctl_tcp_min_rtt_wlen. [ Upstream commit 1330ffacd05fc9ac4159d19286ce119e22450ed2 ] While reading sysctl_tcp_min_rtt_wlen, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: f672258391b4 ("tcp: track min RTT using windowed min-filter") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index c31db58b93a6..79539ef5eb90 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -3004,7 +3004,7 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una, static void tcp_update_rtt_min(struct sock *sk, u32 rtt_us, const int flag) { - u32 wlen = sock_net(sk)->ipv4.sysctl_tcp_min_rtt_wlen * HZ; + u32 wlen = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_min_rtt_wlen) * HZ; struct tcp_sock *tp = tcp_sk(sk); if ((flag & FLAG_ACK_MAYBE_DELAYED) && rtt_us > tcp_min_rtt(tp)) { From c4e6029a85c8459482284f0edabd1b8fe2909011 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:25 -0700 Subject: [PATCH 1726/1842] tcp: Fix a data-race around sysctl_tcp_autocorking. [ Upstream commit 85225e6f0a76e6745bc841c9f25169c509b573d8 ] While reading sysctl_tcp_autocorking, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. 
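The same pattern recurs throughout this series, so a hedged userspace approximation (not the kernel macros) may help show what the annotation buys: a single, non-torn load of a value that another thread updates without any lock. It uses the GNU C typeof extension, as the kernel's own macros do; compile with -pthread. The sysctl variable here is just a stand-in global.

#include <stdio.h>
#include <pthread.h>

#define READ_ONCE(x)     (*(const volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

static int sysctl_tcp_autocorking = 1;   /* stand-in for the per-netns knob */

static void *writer(void *arg)
{
    (void)arg;
    WRITE_ONCE(sysctl_tcp_autocorking, 0);   /* e.g. a concurrent sysctl -w */
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, writer, NULL);

    /* Reader side: take one snapshot and use it consistently, instead of
     * letting the compiler re-read the variable at each use.
     */
    int autocork = READ_ONCE(sysctl_tcp_autocorking);
    printf("autocorking enabled: %d\n", autocork);

    pthread_join(t, NULL);
    return 0;
}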
Fixes: f54b311142a9 ("tcp: auto corking") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index f1fd26bb199c..78460eb39b3a 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -698,7 +698,7 @@ static bool tcp_should_autocork(struct sock *sk, struct sk_buff *skb, int size_goal) { return skb->len < size_goal && - sock_net(sk)->ipv4.sysctl_tcp_autocorking && + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_autocorking) && !tcp_rtx_queue_empty(sk) && refcount_read(&sk->sk_wmem_alloc) > skb->truesize; } From 4aea33f404594dac8b98b9308999970c693ca959 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Wed, 20 Jul 2022 09:50:26 -0700 Subject: [PATCH 1727/1842] tcp: Fix a data-race around sysctl_tcp_invalid_ratelimit. [ Upstream commit 2afdbe7b8de84c28e219073a6661080e1b3ded48 ] While reading sysctl_tcp_invalid_ratelimit, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 032ee4236954 ("tcp: helpers to mitigate ACK loops by rate-limiting out-of-window dupacks") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 79539ef5eb90..716bc95ebfb0 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -3528,7 +3528,8 @@ static bool __tcp_oow_rate_limited(struct net *net, int mib_idx, if (*last_oow_ack_time) { s32 elapsed = (s32)(tcp_jiffies32 - *last_oow_ack_time); - if (0 <= elapsed && elapsed < net->ipv4.sysctl_tcp_invalid_ratelimit) { + if (0 <= elapsed && + elapsed < READ_ONCE(net->ipv4.sysctl_tcp_invalid_ratelimit)) { NET_INC_STATS(net, mib_idx); return true; /* rate-limited: don't send yet! */ } From 034bfadc8f511397d55ad5a6851d796342d3c1f5 Mon Sep 17 00:00:00 2001 From: Xin Long Date: Thu, 21 Jul 2022 10:35:46 -0400 Subject: [PATCH 1728/1842] Documentation: fix sctp_wmem in ip-sysctl.rst [ Upstream commit aa709da0e032cee7c202047ecd75f437bb0126ed ] Since commit 1033990ac5b2 ("sctp: implement memory accounting on tx path"), SCTP has supported memory accounting on tx path where 'sctp_wmem' is used by sk_wmem_schedule(). So we should fix the description for this option in ip-sysctl.rst accordingly. v1->v2: - Improve the description as Marcelo suggested. Fixes: 1033990ac5b2 ("sctp: implement memory accounting on tx path") Signed-off-by: Xin Long Acked-by: Marcelo Ricardo Leitner Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- Documentation/networking/ip-sysctl.rst | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst index 0b1f3235aa77..0158dff63887 100644 --- a/Documentation/networking/ip-sysctl.rst +++ b/Documentation/networking/ip-sysctl.rst @@ -2629,7 +2629,14 @@ sctp_rmem - vector of 3 INTEGERs: min, default, max Default: 4K sctp_wmem - vector of 3 INTEGERs: min, default, max - Currently this tunable has no effect. + Only the first value ("min") is used, "default" and "max" are + ignored. + + min: Minimum size of send buffer that can be used by SCTP sockets. + It is guaranteed to each SCTP socket (but not association) even + under moderate memory pressure. 
+ + Default: 4K addr_scope_policy - INTEGER Control IPv4 address scoping - draft-stewart-tsvwg-sctp-ipv4-00 From 54c295a30f000fcb5525c9385144058987356697 Mon Sep 17 00:00:00 2001 From: Sabrina Dubroca Date: Fri, 22 Jul 2022 11:16:27 +0200 Subject: [PATCH 1729/1842] macsec: fix NULL deref in macsec_add_rxsa [ Upstream commit f46040eeaf2e523a4096199fd93a11e794818009 ] Commit 48ef50fa866a added a test on tb_sa[MACSEC_SA_ATTR_PN], but nothing guarantees that it's not NULL at this point. The same code was added to macsec_add_txsa, but there it's not a problem because validate_add_txsa checks that the MACSEC_SA_ATTR_PN attribute is present. Note: it's not possible to reproduce with iproute, because iproute doesn't allow creating an SA without specifying the PN. Fixes: 48ef50fa866a ("macsec: Netlink support of XPN cipher suites (IEEE 802.1AEbw)") Link: https://bugzilla.kernel.org/show_bug.cgi?id=208315 Reported-by: Frantisek Sumsal Signed-off-by: Sabrina Dubroca Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/macsec.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index 789a124809e3..0b53c7cadd87 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -1750,7 +1750,8 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info) } pn_len = secy->xpn ? MACSEC_XPN_PN_LEN : MACSEC_DEFAULT_PN_LEN; - if (nla_len(tb_sa[MACSEC_SA_ATTR_PN]) != pn_len) { + if (tb_sa[MACSEC_SA_ATTR_PN] && + nla_len(tb_sa[MACSEC_SA_ATTR_PN]) != pn_len) { pr_notice("macsec: nl: add_rxsa: bad pn length: %d != %d\n", nla_len(tb_sa[MACSEC_SA_ATTR_PN]), pn_len); rtnl_unlock(); From 0755c9d05ab28002d5af6b5aa2275abce34db1be Mon Sep 17 00:00:00 2001 From: Sabrina Dubroca Date: Fri, 22 Jul 2022 11:16:28 +0200 Subject: [PATCH 1730/1842] macsec: fix error message in macsec_add_rxsa and _txsa [ Upstream commit 3240eac4ff20e51b87600dbd586ed814daf313db ] The expected length is MACSEC_SALT_LEN, not MACSEC_SA_ATTR_SALT. Fixes: 48ef50fa866a ("macsec: Netlink support of XPN cipher suites (IEEE 802.1AEbw)") Signed-off-by: Sabrina Dubroca Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/macsec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index 0b53c7cadd87..c2d8bcda2503 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -1767,7 +1767,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info) if (nla_len(tb_sa[MACSEC_SA_ATTR_SALT]) != MACSEC_SALT_LEN) { pr_notice("macsec: nl: add_rxsa: bad salt length: %d != %d\n", nla_len(tb_sa[MACSEC_SA_ATTR_SALT]), - MACSEC_SA_ATTR_SALT); + MACSEC_SALT_LEN); rtnl_unlock(); return -EINVAL; } @@ -2009,7 +2009,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info) if (nla_len(tb_sa[MACSEC_SA_ATTR_SALT]) != MACSEC_SALT_LEN) { pr_notice("macsec: nl: add_txsa: bad salt length: %d != %d\n", nla_len(tb_sa[MACSEC_SA_ATTR_SALT]), - MACSEC_SA_ATTR_SALT); + MACSEC_SALT_LEN); rtnl_unlock(); return -EINVAL; } From 2daf0a1261c7b43416200e59bd35ccea66951797 Mon Sep 17 00:00:00 2001 From: Sabrina Dubroca Date: Fri, 22 Jul 2022 11:16:29 +0200 Subject: [PATCH 1731/1842] macsec: limit replay window size with XPN [ Upstream commit b07a0e2044057f201d694ab474f5c42a02b6465b ] IEEE 802.1AEbw-2013 (section 10.7.8) specifies that the maximum value of the replay window is 2^30-1, to help with recovery of the upper bits of the PN. 
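A minimal userspace sketch of that bound (stand-in structures, not the macsec code): once an XPN cipher suite is selected, any replay window larger than 2^30 - 1 is rejected with -EINVAL and the previous value is kept.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>
#include <errno.h>

#define MACSEC_XPN_MAX_REPLAY_WINDOW ((1u << 30) - 1)

struct toy_secy {
    bool     xpn;             /* true when an XPN cipher suite was chosen */
    uint32_t replay_window;
};

static int toy_set_replay_window(struct toy_secy *secy, uint32_t window)
{
    if (secy->xpn && window > MACSEC_XPN_MAX_REPLAY_WINDOW)
        return -EINVAL;       /* reject, leaving the old window in place */

    secy->replay_window = window;
    return 0;
}

int main(void)
{
    struct toy_secy secy = { .xpn = true, .replay_window = 0 };

    printf("window 2^20:   %d\n", toy_set_replay_window(&secy, 1u << 20));
    printf("window 2^31-1: %d\n", toy_set_replay_window(&secy, 0x7fffffffu));
    return 0;
}

The ordering point in the changelog matters for the real driver: secy->xpn is only known after IFLA_MACSEC_CIPHER_SUITE has been handled, so the window check has to run after it.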
To avoid leaving the existing macsec device in an inconsistent state if this test fails during changelink, reuse the cleanup mechanism introduced for HW offload. This wasn't needed until now because macsec_changelink_common could not fail during changelink, as modifying the cipher suite was not allowed. Finally, this must happen after handling IFLA_MACSEC_CIPHER_SUITE so that secy->xpn is set. Fixes: 48ef50fa866a ("macsec: Netlink support of XPN cipher suites (IEEE 802.1AEbw)") Signed-off-by: Sabrina Dubroca Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/macsec.c | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index c2d8bcda2503..96dc7bd4813d 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -240,6 +240,7 @@ static struct macsec_cb *macsec_skb_cb(struct sk_buff *skb) #define DEFAULT_SEND_SCI true #define DEFAULT_ENCRYPT false #define DEFAULT_ENCODING_SA 0 +#define MACSEC_XPN_MAX_REPLAY_WINDOW (((1 << 30) - 1)) static bool send_sci(const struct macsec_secy *secy) { @@ -3738,9 +3739,6 @@ static int macsec_changelink_common(struct net_device *dev, secy->operational = tx_sa && tx_sa->active; } - if (data[IFLA_MACSEC_WINDOW]) - secy->replay_window = nla_get_u32(data[IFLA_MACSEC_WINDOW]); - if (data[IFLA_MACSEC_ENCRYPT]) tx_sc->encrypt = !!nla_get_u8(data[IFLA_MACSEC_ENCRYPT]); @@ -3786,6 +3784,16 @@ static int macsec_changelink_common(struct net_device *dev, } } + if (data[IFLA_MACSEC_WINDOW]) { + secy->replay_window = nla_get_u32(data[IFLA_MACSEC_WINDOW]); + + /* IEEE 802.1AEbw-2013 10.7.8 - maximum replay window + * for XPN cipher suites */ + if (secy->xpn && + secy->replay_window > MACSEC_XPN_MAX_REPLAY_WINDOW) + return -EINVAL; + } + return 0; } @@ -3815,7 +3823,7 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[], ret = macsec_changelink_common(dev, data); if (ret) - return ret; + goto cleanup; /* If h/w offloading is available, propagate to the device */ if (macsec_is_offloaded(macsec)) { From 6e0e0464f1da3eb76695ba231ab7e7e64c2c154f Mon Sep 17 00:00:00 2001 From: Sabrina Dubroca Date: Fri, 22 Jul 2022 11:16:30 +0200 Subject: [PATCH 1732/1842] macsec: always read MACSEC_SA_ATTR_PN as a u64 [ Upstream commit c630d1fe6219769049c87d1a6a0e9a6de55328a1 ] Currently, MACSEC_SA_ATTR_PN is handled inconsistently, sometimes as a u32, sometimes forced into a u64 without checking the actual length of the attribute. Instead, we can use nla_get_u64 everywhere, which will read up to 64 bits into a u64, capped by the actual length of the attribute coming from userspace. This fixes several issues: - the check in validate_add_rxsa doesn't work with 32-bit attributes - the checks in validate_add_txsa and validate_upd_sa incorrectly reject X << 32 (with X != 0) Fixes: 48ef50fa866a ("macsec: Netlink support of XPN cipher suites (IEEE 802.1AEbw)") Signed-off-by: Sabrina Dubroca Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- drivers/net/macsec.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index 96dc7bd4813d..8d73b72d6179 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -1695,7 +1695,7 @@ static bool validate_add_rxsa(struct nlattr **attrs) return false; if (attrs[MACSEC_SA_ATTR_PN] && - *(u64 *)nla_data(attrs[MACSEC_SA_ATTR_PN]) == 0) + nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0) return false; if (attrs[MACSEC_SA_ATTR_ACTIVE]) { @@ -1938,7 +1938,7 @@ static bool validate_add_txsa(struct nlattr **attrs) if (nla_get_u8(attrs[MACSEC_SA_ATTR_AN]) >= MACSEC_NUM_AN) return false; - if (nla_get_u32(attrs[MACSEC_SA_ATTR_PN]) == 0) + if (nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0) return false; if (attrs[MACSEC_SA_ATTR_ACTIVE]) { @@ -2292,7 +2292,7 @@ static bool validate_upd_sa(struct nlattr **attrs) if (nla_get_u8(attrs[MACSEC_SA_ATTR_AN]) >= MACSEC_NUM_AN) return false; - if (attrs[MACSEC_SA_ATTR_PN] && nla_get_u32(attrs[MACSEC_SA_ATTR_PN]) == 0) + if (attrs[MACSEC_SA_ATTR_PN] && nla_get_u64(attrs[MACSEC_SA_ATTR_PN]) == 0) return false; if (attrs[MACSEC_SA_ATTR_ACTIVE]) { From 530a4da37ece1dd071eeb2e93fb353c56254e407 Mon Sep 17 00:00:00 2001 From: Jianglei Nie Date: Fri, 22 Jul 2022 17:29:02 +0800 Subject: [PATCH 1733/1842] net: macsec: fix potential resource leak in macsec_add_rxsa() and macsec_add_txsa() [ Upstream commit c7b205fbbf3cffa374721bb7623f7aa8c46074f1 ] init_rx_sa() allocates relevant resource for rx_sa->stats and rx_sa-> key.tfm with alloc_percpu() and macsec_alloc_tfm(). When some error occurs after init_rx_sa() is called in macsec_add_rxsa(), the function released rx_sa with kfree() without releasing rx_sa->stats and rx_sa-> key.tfm, which will lead to a resource leak. We should call macsec_rxsa_put() instead of kfree() to decrease the ref count of rx_sa and release the relevant resource if the refcount is 0. The same bug exists in macsec_add_txsa() for tx_sa as well. This patch fixes the above two bugs. Fixes: 3cf3227a21d1 ("net: macsec: hardware offloading infrastructure") Signed-off-by: Jianglei Nie Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/macsec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c index 8d73b72d6179..70c5905a916b 100644 --- a/drivers/net/macsec.c +++ b/drivers/net/macsec.c @@ -1841,7 +1841,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info) return 0; cleanup: - kfree(rx_sa); + macsec_rxsa_put(rx_sa); rtnl_unlock(); return err; } @@ -2084,7 +2084,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info) cleanup: secy->operational = was_operational; - kfree(tx_sa); + macsec_txsa_put(tx_sa); rtnl_unlock(); return err; } From 48323978912754eb589edd2dfd4fb5f5140a2cdd Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 22 Jul 2022 11:22:01 -0700 Subject: [PATCH 1734/1842] tcp: Fix a data-race around sysctl_tcp_comp_sack_delay_ns. [ Upstream commit 4866b2b0f7672b6d760c4b8ece6fb56f965dcc8a ] While reading sysctl_tcp_comp_sack_delay_ns, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 6d82aa242092 ("tcp: add tcp_comp_sack_delay_ns sysctl") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 716bc95ebfb0..b86d98c07cdf 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5461,7 +5461,8 @@ static void __tcp_ack_snd_check(struct sock *sk, int ofo_possible) if (tp->srtt_us && tp->srtt_us < rtt) rtt = tp->srtt_us; - delay = min_t(unsigned long, sock_net(sk)->ipv4.sysctl_tcp_comp_sack_delay_ns, + delay = min_t(unsigned long, + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_delay_ns), rtt * (NSEC_PER_USEC >> 3)/20); sock_hold(sk); hrtimer_start_range_ns(&tp->compressed_ack_timer, ns_to_ktime(delay), From d2476f2059c240b016e815f27b1a5a7bdb79effa Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 22 Jul 2022 11:22:02 -0700 Subject: [PATCH 1735/1842] tcp: Fix a data-race around sysctl_tcp_comp_sack_slack_ns. [ Upstream commit 22396941a7f343d704738360f9ef0e6576489d43 ] While reading sysctl_tcp_comp_sack_slack_ns, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: a70437cc09a1 ("tcp: add hrtimer slack to sack compression") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index b86d98c07cdf..72a339d3f18f 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5466,7 +5466,7 @@ static void __tcp_ack_snd_check(struct sock *sk, int ofo_possible) rtt * (NSEC_PER_USEC >> 3)/20); sock_hold(sk); hrtimer_start_range_ns(&tp->compressed_ack_timer, ns_to_ktime(delay), - sock_net(sk)->ipv4.sysctl_tcp_comp_sack_slack_ns, + READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_slack_ns), HRTIMER_MODE_REL_PINNED_SOFT); } From f310fb69a0a833b325bf36b21587ae0e52450d58 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 22 Jul 2022 11:22:03 -0700 Subject: [PATCH 1736/1842] tcp: Fix a data-race around sysctl_tcp_comp_sack_nr. [ Upstream commit 79f55473bfc8ac51bd6572929a679eeb4da22251 ] While reading sysctl_tcp_comp_sack_nr, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader. Fixes: 9c21d2fc41c0 ("tcp: add tcp_comp_sack_nr sysctl") Signed-off-by: Kuniyuki Iwashima Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_input.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c index 72a339d3f18f..d35e88b5ffcb 100644 --- a/net/ipv4/tcp_input.c +++ b/net/ipv4/tcp_input.c @@ -5440,7 +5440,7 @@ static void __tcp_ack_snd_check(struct sock *sk, int ofo_possible) } if (!tcp_is_sack(tp) || - tp->compressed_ack >= sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr) + tp->compressed_ack >= READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_comp_sack_nr)) goto send_now; if (tp->compressed_ack_rcv_nxt != tp->rcv_nxt) { From e4a7acd6b443c15dcf269df4e04992408573e674 Mon Sep 17 00:00:00 2001 From: Kuniyuki Iwashima Date: Fri, 22 Jul 2022 11:22:04 -0700 Subject: [PATCH 1737/1842] tcp: Fix data-races around sysctl_tcp_reflect_tos. [ Upstream commit 870e3a634b6a6cb1543b359007aca73fe6a03ac5 ] While reading sysctl_tcp_reflect_tos, it can be changed concurrently. Thus, we need to add READ_ONCE() to its readers. Fixes: ac8f1710c12b ("tcp: reflect tos value received in SYN to the socket") Signed-off-by: Kuniyuki Iwashima Acked-by: Wei Wang Signed-off-by: David S. 
Miller Signed-off-by: Sasha Levin --- net/ipv4/tcp_ipv4.c | 4 ++-- net/ipv6/tcp_ipv6.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index d5f13ff7d900..0d165ce2d80a 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -983,7 +983,7 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst, if (skb) { __tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr); - tos = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ? + tos = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) ? (tcp_rsk(req)->syn_tos & ~INET_ECN_MASK) | (inet_sk(sk)->tos & INET_ECN_MASK) : inet_sk(sk)->tos; @@ -1558,7 +1558,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, /* Set ToS of the new socket based upon the value of incoming SYN. * ECT bits are set later in tcp_init_transfer(). */ - if (sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos)) newinet->tos = tcp_rsk(req)->syn_tos & ~INET_ECN_MASK; if (!dst) { diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c index 303b54414a6c..8d91f36cb11b 100644 --- a/net/ipv6/tcp_ipv6.c +++ b/net/ipv6/tcp_ipv6.c @@ -542,7 +542,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst, if (np->repflow && ireq->pktopts) fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts)); - tclass = sock_net(sk)->ipv4.sysctl_tcp_reflect_tos ? + tclass = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) ? (tcp_rsk(req)->syn_tos & ~INET_ECN_MASK) | (np->tclass & INET_ECN_MASK) : np->tclass; @@ -1344,7 +1344,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff * /* Set ToS of the new socket based upon the value of incoming SYN. * ECT bits are set later in tcp_init_transfer(). */ - if (sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) + if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos)) newnp->tclass = tcp_rsk(req)->syn_tos & ~INET_ECN_MASK; /* Clone native IPv6 options from listening socket (if any) From fad6caf9b19f037e74bec404e40e18a0f1c7b1d1 Mon Sep 17 00:00:00 2001 From: Michal Maloszewski Date: Fri, 22 Jul 2022 10:54:01 -0700 Subject: [PATCH 1738/1842] i40e: Fix interface init with MSI interrupts (no MSI-X) [ Upstream commit 5fcbb711024aac6d4db385623e6f2fdf019f7782 ] Fix the inability to bring an interface up on a setup with only MSI interrupts enabled (no MSI-X). Solution is to add a default number of QPs = 1. This is enough, since without MSI-X support driver enables only a basic feature set. 
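A simplified userspace rendering of that queue-count fallback (field and flag names are stand-ins for the i40e ones): honour an explicit ethtool request first, then the MSI-X vector budget, and otherwise fall back to a single queue pair so the interface can still come up with plain MSI.

#include <stdio.h>
#include <stdbool.h>

struct toy_vsi {
    int req_queue_pairs;      /* ethtool -L request, 0 if none was made */
    int num_queue_pairs;
};

static void toy_setup_queue_map(struct toy_vsi *vsi, bool msix_enabled,
                                int num_lan_msix)
{
    if (vsi->req_queue_pairs > 0)
        vsi->num_queue_pairs = vsi->req_queue_pairs;
    else if (msix_enabled)
        vsi->num_queue_pairs = num_lan_msix;
    else
        vsi->num_queue_pairs = 1;   /* MSI only: one usable queue pair */
}

int main(void)
{
    struct toy_vsi vsi = { .req_queue_pairs = 0, .num_queue_pairs = 0 };

    toy_setup_queue_map(&vsi, false, 0);   /* no MSI-X, no explicit request */
    printf("queue pairs: %d\n", vsi.num_queue_pairs);
    return 0;
}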
Fixes: bc6d33c8d93f ("i40e: Fix the number of queues available to be mapped for use") Signed-off-by: Dawid Lukwinski Signed-off-by: Michal Maloszewski Tested-by: Dave Switzer Signed-off-by: Tony Nguyen Link: https://lore.kernel.org/r/20220722175401.112572-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/intel/i40e/i40e_main.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 11d4e3ba9af4..1dad62ecb8a3 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -1907,11 +1907,15 @@ static void i40e_vsi_setup_queue_map(struct i40e_vsi *vsi, * non-zero req_queue_pairs says that user requested a new * queue count via ethtool's set_channels, so use this * value for queues distribution across traffic classes + * We need at least one queue pair for the interface + * to be usable as we see in else statement. */ if (vsi->req_queue_pairs > 0) vsi->num_queue_pairs = vsi->req_queue_pairs; else if (pf->flags & I40E_FLAG_MSIX_ENABLED) vsi->num_queue_pairs = pf->num_lan_msix; + else + vsi->num_queue_pairs = 1; } /* Number of queues per enabled TC */ From aeb2ff9f9f7059021f93313da3cc0c3f7f2f1641 Mon Sep 17 00:00:00 2001 From: Duoming Zhou Date: Sat, 23 Jul 2022 09:58:09 +0800 Subject: [PATCH 1739/1842] sctp: fix sleep in atomic context bug in timer handlers [ Upstream commit b89fc26f741d9f9efb51cba3e9b241cf1380ec5a ] There are sleep in atomic context bugs in timer handlers of sctp such as sctp_generate_t3_rtx_event(), sctp_generate_probe_event(), sctp_generate_t1_init_event(), sctp_generate_timeout_event(), sctp_generate_t3_rtx_event() and so on. The root cause is sctp_sched_prio_init_sid() with GFP_KERNEL parameter that may sleep could be called by different timer handlers which is in interrupt context. One of the call paths that could trigger bug is shown below: (interrupt context) sctp_generate_probe_event sctp_do_sm sctp_side_effects sctp_cmd_interpreter sctp_outq_teardown sctp_outq_init sctp_sched_set_sched n->init_sid(..,GFP_KERNEL) sctp_sched_prio_init_sid //may sleep This patch changes gfp_t parameter of init_sid in sctp_sched_set_sched() from GFP_KERNEL to GFP_ATOMIC in order to prevent sleep in atomic context bugs. Fixes: 5bbbbe32a431 ("sctp: introduce stream scheduler foundations") Signed-off-by: Duoming Zhou Acked-by: Marcelo Ricardo Leitner Link: https://lore.kernel.org/r/20220723015809.11553-1-duoming@zju.edu.cn Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/sctp/stream_sched.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/sctp/stream_sched.c b/net/sctp/stream_sched.c index 99e5f69fbb74..a2e1d34f52c5 100644 --- a/net/sctp/stream_sched.c +++ b/net/sctp/stream_sched.c @@ -163,7 +163,7 @@ int sctp_sched_set_sched(struct sctp_association *asoc, if (!SCTP_SO(&asoc->stream, i)->ext) continue; - ret = n->init_sid(&asoc->stream, i, GFP_KERNEL); + ret = n->init_sid(&asoc->stream, i, GFP_ATOMIC); if (ret) goto err; } From 440dccd80f627e0e11ceb0429e4cdab61857d17e Mon Sep 17 00:00:00 2001 From: Florian Westphal Date: Tue, 26 Jul 2022 12:42:06 +0200 Subject: [PATCH 1740/1842] netfilter: nf_queue: do not allow packet truncation below transport header offset [ Upstream commit 99a63d36cb3ed5ca3aa6fcb64cffbeaf3b0fb164 ] Domingo Dirutigliano and Nicola Guerrera report kernel panic when sending nf_queue verdict with 1-byte nfta_payload attribute. 
The IP/IPv6 stack pulls the IP(v6) header from the packet after the input hook. If user truncates the packet below the header size, this skb_pull() will result in a malformed skb (skb->len < 0). Fixes: 7af4cc3fa158 ("[NETFILTER]: Add "nfnetlink_queue" netfilter queue handler over nfnetlink") Reported-by: Domingo Dirutigliano Signed-off-by: Florian Westphal Reviewed-by: Pablo Neira Ayuso Signed-off-by: Sasha Levin --- net/netfilter/nfnetlink_queue.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c index 1640da5c5077..72d30922ed29 100644 --- a/net/netfilter/nfnetlink_queue.c +++ b/net/netfilter/nfnetlink_queue.c @@ -838,11 +838,16 @@ nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum) } static int -nfqnl_mangle(void *data, int data_len, struct nf_queue_entry *e, int diff) +nfqnl_mangle(void *data, unsigned int data_len, struct nf_queue_entry *e, int diff) { struct sk_buff *nskb; if (diff < 0) { + unsigned int min_len = skb_transport_offset(e->skb); + + if (data_len < min_len) + return -EINVAL; + if (pskb_trim(e->skb, data_len)) return -ENOMEM; } else if (diff > 0) { From 6807897695d4bccffed439029a78ed3d03a37e79 Mon Sep 17 00:00:00 2001 From: Jason Wang Date: Mon, 25 Jul 2022 15:21:59 +0800 Subject: [PATCH 1741/1842] virtio-net: fix the race between refill work and close [ Upstream commit 5a159128faff151b7fe5f4eb0f310b1e0a2d56bf ] We try using cancel_delayed_work_sync() to prevent the work from enabling NAPI. This is insufficient since we don't disable the source of the refill work scheduling. This means an NAPI poll callback after cancel_delayed_work_sync() can schedule the refill work then can re-enable the NAPI that leads to use-after-free [1]. Since the work can enable NAPI, we can't simply disable NAPI before calling cancel_delayed_work_sync(). So fix this by introducing a dedicated boolean to control whether or not the work could be scheduled from NAPI. [1] ================================================================== BUG: KASAN: use-after-free in refill_work+0x43/0xd4 Read of size 2 at addr ffff88810562c92e by task kworker/2:1/42 CPU: 2 PID: 42 Comm: kworker/2:1 Not tainted 5.19.0-rc1+ #480 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014 Workqueue: events refill_work Call Trace: dump_stack_lvl+0x34/0x44 print_report.cold+0xbb/0x6ac ? _printk+0xad/0xde ? refill_work+0x43/0xd4 kasan_report+0xa8/0x130 ? refill_work+0x43/0xd4 refill_work+0x43/0xd4 process_one_work+0x43d/0x780 worker_thread+0x2a0/0x6f0 ? process_one_work+0x780/0x780 kthread+0x167/0x1a0 ? kthread_exit+0x50/0x50 ret_from_fork+0x22/0x30 ... Fixes: b2baed69e605c ("virtio_net: set/cancel work on ndo_open/ndo_stop") Signed-off-by: Jason Wang Acked-by: Michael S. Tsirkin Reviewed-by: Xuan Zhuo Signed-off-by: David S. Miller Signed-off-by: Sasha Levin --- drivers/net/virtio_net.c | 37 ++++++++++++++++++++++++++++++++++--- 1 file changed, 34 insertions(+), 3 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 37178b078ee3..0a07c05a610d 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -213,9 +213,15 @@ struct virtnet_info { /* Packet virtio header size */ u8 hdr_len; - /* Work struct for refilling if we run low on memory. */ + /* Work struct for delayed refilling if we run low on memory. */ struct delayed_work refill; + /* Is delayed refill enabled? 
*/ + bool refill_enabled; + + /* The lock to synchronize the access to refill_enabled */ + spinlock_t refill_lock; + /* Work struct for config space updates */ struct work_struct config_work; @@ -319,6 +325,20 @@ static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask) return p; } +static void enable_delayed_refill(struct virtnet_info *vi) +{ + spin_lock_bh(&vi->refill_lock); + vi->refill_enabled = true; + spin_unlock_bh(&vi->refill_lock); +} + +static void disable_delayed_refill(struct virtnet_info *vi) +{ + spin_lock_bh(&vi->refill_lock); + vi->refill_enabled = false; + spin_unlock_bh(&vi->refill_lock); +} + static void virtqueue_napi_schedule(struct napi_struct *napi, struct virtqueue *vq) { @@ -1403,8 +1423,12 @@ static int virtnet_receive(struct receive_queue *rq, int budget, } if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) { - if (!try_fill_recv(vi, rq, GFP_ATOMIC)) - schedule_delayed_work(&vi->refill, 0); + if (!try_fill_recv(vi, rq, GFP_ATOMIC)) { + spin_lock(&vi->refill_lock); + if (vi->refill_enabled) + schedule_delayed_work(&vi->refill, 0); + spin_unlock(&vi->refill_lock); + } } u64_stats_update_begin(&rq->stats.syncp); @@ -1523,6 +1547,8 @@ static int virtnet_open(struct net_device *dev) struct virtnet_info *vi = netdev_priv(dev); int i, err; + enable_delayed_refill(vi); + for (i = 0; i < vi->max_queue_pairs; i++) { if (i < vi->curr_queue_pairs) /* Make sure we have some buffers: if oom use wq. */ @@ -1893,6 +1919,8 @@ static int virtnet_close(struct net_device *dev) struct virtnet_info *vi = netdev_priv(dev); int i; + /* Make sure NAPI doesn't schedule refill work */ + disable_delayed_refill(vi); /* Make sure refill_work doesn't re-enable napi! */ cancel_delayed_work_sync(&vi->refill); @@ -2390,6 +2418,8 @@ static int virtnet_restore_up(struct virtio_device *vdev) virtio_device_ready(vdev); + enable_delayed_refill(vi); + if (netif_running(vi->dev)) { err = virtnet_open(vi->dev); if (err) @@ -3092,6 +3122,7 @@ static int virtnet_probe(struct virtio_device *vdev) vdev->priv = vi; INIT_WORK(&vi->config_work, virtnet_config_changed_work); + spin_lock_init(&vi->refill_lock); /* If we can receive ANY GSO packets, we must allocate large ones. */ if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || From 3ec42508a67b74de643cba7d96e755b79fc2837a Mon Sep 17 00:00:00 2001 From: Leo Yan Date: Sun, 24 Jul 2022 14:00:12 +0800 Subject: [PATCH 1742/1842] perf symbol: Correct address for bss symbols [ Upstream commit 2d86612aacb7805f72873691a2644d7279ed0630 ] When using 'perf mem' and 'perf c2c', an issue is observed that tool reports the wrong offset for global data symbols. This is a common issue on both x86 and Arm64 platforms. Let's see an example, for a test program, below is the disassembly for its .bss section which is dumped with objdump: ... Disassembly of section .bss: 0000000000004040 : ... 0000000000004080 : ... 00000000000040c0 : ... 0000000000004100 : ... First we used 'perf mem record' to run the test program and then used 'perf --debug verbose=4 mem report' to observe what's the symbol info for 'buf1' and 'buf2' structures. # ./perf mem record -e ldlat-loads,ldlat-stores -- false_sharing.exe 8 # ./perf --debug verbose=4 mem report ... dso__load_sym_internal: adjusting symbol: st_value: 0x40c0 sh_addr: 0x4040 sh_offset: 0x3028 symbol__new: buf2 0x30a8-0x30e8 ... dso__load_sym_internal: adjusting symbol: st_value: 0x4080 sh_addr: 0x4040 sh_offset: 0x3028 symbol__new: buf1 0x3068-0x30a8 ... 
The perf tool relies on libelf to parse symbols, in executable and shared object files, 'st_value' holds a virtual address; 'sh_addr' is the address at which section's first byte should reside in memory, and 'sh_offset' is the byte offset from the beginning of the file to the first byte in the section. The perf tool uses below formula to convert a symbol's memory address to a file address: file_address = st_value - sh_addr + sh_offset ^ ` Memory address We can see the final adjusted address ranges for buf1 and buf2 are [0x30a8-0x30e8) and [0x3068-0x30a8) respectively, apparently this is incorrect, in the code, the structure for 'buf1' and 'buf2' specifies compiler attribute with 64-byte alignment. The problem happens for 'sh_offset', libelf returns it as 0x3028 which is not 64-byte aligned, combining with disassembly, it's likely libelf doesn't respect the alignment for .bss section, therefore, it doesn't return the aligned value for 'sh_offset'. Suggested by Fangrui Song, ELF file contains program header which contains PT_LOAD segments, the fields p_vaddr and p_offset in PT_LOAD segments contain the execution info. A better choice for converting memory address to file address is using the formula: file_address = st_value - p_vaddr + p_offset This patch introduces elf_read_program_header() which returns the program header based on the passed 'st_value', then it uses the formula above to calculate the symbol file address; and the debugging log is updated respectively. After applying the change: # ./perf --debug verbose=4 mem report ... dso__load_sym_internal: adjusting symbol: st_value: 0x40c0 p_vaddr: 0x3d28 p_offset: 0x2d28 symbol__new: buf2 0x30c0-0x3100 ... dso__load_sym_internal: adjusting symbol: st_value: 0x4080 p_vaddr: 0x3d28 p_offset: 0x2d28 symbol__new: buf1 0x3080-0x30c0 ... Fixes: f17e04afaff84b5c ("perf report: Fix ELF symbol parsing") Reported-by: Chang Rui Suggested-by: Fangrui Song Signed-off-by: Leo Yan Acked-by: Namhyung Kim Cc: Alexander Shishkin Cc: Ian Rogers Cc: Ingo Molnar Cc: Jiri Olsa Cc: Mark Rutland Cc: Peter Zijlstra Link: https://lore.kernel.org/r/20220724060013.171050-2-leo.yan@linaro.org Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin --- tools/perf/util/symbol-elf.c | 45 ++++++++++++++++++++++++++++++++---- 1 file changed, 41 insertions(+), 4 deletions(-) diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c index 94809aed8b44..1cab29d45bfb 100644 --- a/tools/perf/util/symbol-elf.c +++ b/tools/perf/util/symbol-elf.c @@ -232,6 +232,33 @@ Elf_Scn *elf_section_by_name(Elf *elf, GElf_Ehdr *ep, return NULL; } +static int elf_read_program_header(Elf *elf, u64 vaddr, GElf_Phdr *phdr) +{ + size_t i, phdrnum; + u64 sz; + + if (elf_getphdrnum(elf, &phdrnum)) + return -1; + + for (i = 0; i < phdrnum; i++) { + if (gelf_getphdr(elf, i, phdr) == NULL) + return -1; + + if (phdr->p_type != PT_LOAD) + continue; + + sz = max(phdr->p_memsz, phdr->p_filesz); + if (!sz) + continue; + + if (vaddr >= phdr->p_vaddr && (vaddr < phdr->p_vaddr + sz)) + return 0; + } + + /* Not found any valid program header */ + return -1; +} + static bool want_demangle(bool is_kernel_sym) { return is_kernel_sym ? 
symbol_conf.demangle_kernel : symbol_conf.demangle; @@ -1181,6 +1208,7 @@ int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss, sym.st_value); used_opd = true; } + /* * When loading symbols in a data mapping, ABS symbols (which * has a value of SHN_ABS in its st_shndx) failed at @@ -1217,11 +1245,20 @@ int dso__load_sym(struct dso *dso, struct map *map, struct symsrc *syms_ss, goto out_elf_end; } else if ((used_opd && runtime_ss->adjust_symbols) || (!used_opd && syms_ss->adjust_symbols)) { + GElf_Phdr phdr; + + if (elf_read_program_header(syms_ss->elf, + (u64)sym.st_value, &phdr)) { + pr_warning("%s: failed to find program header for " + "symbol: %s st_value: %#" PRIx64 "\n", + __func__, elf_name, (u64)sym.st_value); + continue; + } pr_debug4("%s: adjusting symbol: st_value: %#" PRIx64 " " - "sh_addr: %#" PRIx64 " sh_offset: %#" PRIx64 "\n", __func__, - (u64)sym.st_value, (u64)shdr.sh_addr, - (u64)shdr.sh_offset); - sym.st_value -= shdr.sh_addr - shdr.sh_offset; + "p_vaddr: %#" PRIx64 " p_offset: %#" PRIx64 "\n", + __func__, (u64)sym.st_value, (u64)phdr.p_vaddr, + (u64)phdr.p_offset); + sym.st_value -= phdr.p_vaddr - phdr.p_offset; } demangled = demangle_sym(dso, kmodule, elf_name); From 510e5b3791f67b28be4fd8eedc15183db1c7920d Mon Sep 17 00:00:00 2001 From: Alejandro Lucero Date: Tue, 26 Jul 2022 08:45:04 +0200 Subject: [PATCH 1743/1842] sfc: disable softirqs for ptp TX [ Upstream commit 67c3b611d92fc238c43734878bc3e232ab570c79 ] Sending a PTP packet can imply to use the normal TX driver datapath but invoked from the driver's ptp worker. The kernel generic TX code disables softirqs and preemption before calling specific driver TX code, but the ptp worker does not. Although current ptp driver functionality does not require it, there are several reasons for doing so: 1) The invoked code is always executed with softirqs disabled for non PTP packets. 2) Better if a ptp packet transmission is not interrupted by softirq handling which could lead to high latencies. 3) netdev_xmit_more used by the TX code requires preemption to be disabled. Indeed a solution for dealing with kernel preemption state based on static kernel configuration is not possible since the introduction of dynamic preemption level configuration at boot time using the static calls functionality. Fixes: f79c957a0b537 ("drivers: net: sfc: use netdev_xmit_more helper") Signed-off-by: Alejandro Lucero Link: https://lore.kernel.org/r/20220726064504.49613-1-alejandro.lucero-palau@amd.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- drivers/net/ethernet/sfc/ptp.c | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c index 725b0f38813a..a2b4e3befa59 100644 --- a/drivers/net/ethernet/sfc/ptp.c +++ b/drivers/net/ethernet/sfc/ptp.c @@ -1100,7 +1100,29 @@ static void efx_ptp_xmit_skb_queue(struct efx_nic *efx, struct sk_buff *skb) tx_queue = efx_channel_get_tx_queue(ptp_data->channel, type); if (tx_queue && tx_queue->timestamping) { + /* This code invokes normal driver TX code which is always + * protected from softirqs when called from generic TX code, + * which in turn disables preemption. Look at __dev_queue_xmit + * which uses rcu_read_lock_bh disabling preemption for RCU + * plus disabling softirqs. We do not need RCU reader + * protection here. 
+ * + * Although it is theoretically safe for current PTP TX/RX code + * running without disabling softirqs, there are three good + * reasons for doing so: + * + * 1) The code invoked is mainly implemented for non-PTP + * packets and it is always executed with softirqs + * disabled. + * 2) This being a single PTP packet, better to not + * interrupt its processing by softirqs which can lead + * to high latencies. + * 3) netdev_xmit_more checks preemption is disabled and + * triggers a BUG_ON if not. + */ + local_bh_disable(); efx_enqueue_skb(tx_queue, skb); + local_bh_enable(); } else { WARN_ONCE(1, "PTP channel has no timestamped tx queue\n"); dev_kfree_skb_any(skb); From 6f3505588d66b27220f07d0cab18da380fae2e2d Mon Sep 17 00:00:00 2001 From: Xin Long Date: Mon, 25 Jul 2022 18:11:06 -0400 Subject: [PATCH 1744/1842] sctp: leave the err path free in sctp_stream_init to sctp_stream_free [ Upstream commit 181d8d2066c000ba0a0e6940a7ad80f1a0e68e9d ] A NULL pointer dereference was reported by Wei Chen: BUG: kernel NULL pointer dereference, address: 0000000000000000 RIP: 0010:__list_del_entry_valid+0x26/0x80 Call Trace: sctp_sched_dequeue_common+0x1c/0x90 sctp_sched_prio_dequeue+0x67/0x80 __sctp_outq_teardown+0x299/0x380 sctp_outq_free+0x15/0x20 sctp_association_free+0xc3/0x440 sctp_do_sm+0x1ca7/0x2210 sctp_assoc_bh_rcv+0x1f6/0x340 This happens when calling sctp_sendmsg without connecting to the server first. In this case, a data chunk is already queued up in the client's send queue when the INIT_ACK from the server is processed in sctp_process_init(), which calls sctp_stream_init() to alloc stream_in. If it fails to alloc stream_in, all stream_out will be freed in sctp_stream_init's err path. Then, when the asoc is freed, it will crash while dequeuing this data chunk because stream_out is missing. As we can't free stream_out before dequeuing all data from the send queue, this patch fixes it by moving the err path stream_out/in freeing from sctp_stream_init() to sctp_stream_free(), which is eventually called when freeing the asoc in sctp_association_free(). This fix also makes the code in sctp_process_init() clearer. Note that in sctp_association_init() when it fails in sctp_stream_init(), sctp_association_free() will not be called, and in that case it should go to the 'stream_free' err path to free the stream instead of 'fail_init'. Fixes: 5bbbbe32a431 ("sctp: introduce stream scheduler foundations") Reported-by: Wei Chen Signed-off-by: Xin Long Link: https://lore.kernel.org/r/831a3dc100c4908ff76e5bcc363be97f2778bc0b.1658787066.git.lucien.xin@gmail.com Signed-off-by: Jakub Kicinski Signed-off-by: Sasha Levin --- net/sctp/associola.c | 5 ++--- net/sctp/stream.c | 19 +++---------------- 2 files changed, 5 insertions(+), 19 deletions(-) diff --git a/net/sctp/associola.c b/net/sctp/associola.c index fdb69d46276d..2d4ec6187755 100644 --- a/net/sctp/associola.c +++ b/net/sctp/associola.c @@ -226,9 +226,8 @@ static struct sctp_association *sctp_association_init( if (!sctp_ulpq_init(&asoc->ulpq, asoc)) goto fail_init; - if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams, - 0, gfp)) - goto fail_init; + if (sctp_stream_init(&asoc->stream, asoc->c.sinit_num_ostreams, 0, gfp)) + goto stream_free; /* Initialize default path MTU.
*/ asoc->pathmtu = sp->pathmtu; diff --git a/net/sctp/stream.c b/net/sctp/stream.c index 6dc95dcc0ff4..ef9fceadef8d 100644 --- a/net/sctp/stream.c +++ b/net/sctp/stream.c @@ -137,7 +137,7 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt, ret = sctp_stream_alloc_out(stream, outcnt, gfp); if (ret) - goto out_err; + return ret; for (i = 0; i < stream->outcnt; i++) SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN; @@ -145,22 +145,9 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt, handle_in: sctp_stream_interleave_init(stream); if (!incnt) - goto out; + return 0; - ret = sctp_stream_alloc_in(stream, incnt, gfp); - if (ret) - goto in_err; - - goto out; - -in_err: - sched->free(stream); - genradix_free(&stream->in); -out_err: - genradix_free(&stream->out); - stream->outcnt = 0; -out: - return ret; + return sctp_stream_alloc_in(stream, incnt, gfp); } int sctp_stream_init_ext(struct sctp_stream *stream, __u16 sid) From 8014246694bb8bc6e09ceacdf00492a490a8cb08 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Sun, 31 Jul 2022 12:05:51 +0200 Subject: [PATCH 1745/1842] ARM: crypto: comment out gcc warning that breaks clang builds The gcc build warning prevents all clang-built kernels from working properly, so comment it out to fix the build. This is a -stable kernel only patch for now; it will be resolved differently in mainline releases in the future. Cc: "Jason A. Donenfeld" Cc: "Justin M. Forbes" Cc: Ard Biesheuvel Acked-by: Arnd Bergmann Cc: Nicolas Pitre Cc: Nathan Chancellor Cc: Nick Desaulniers Signed-off-by: Greg Kroah-Hartman --- arch/arm/lib/xor-neon.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/arm/lib/xor-neon.c b/arch/arm/lib/xor-neon.c index b99dd8e1c93f..7ba6cf826162 100644 --- a/arch/arm/lib/xor-neon.c +++ b/arch/arm/lib/xor-neon.c @@ -26,8 +26,9 @@ MODULE_LICENSE("GPL"); * While older versions of GCC do not generate incorrect code, they fail to * recognize the parallel nature of these functions, and emit plain ARM code, * which is known to be slower than the optimized ARM code in asm-arm/xor.h. + * + * #warning This code requires at least version 4.6 of GCC */ -#warning This code requires at least version 4.6 of GCC #endif #pragma GCC diagnostic ignored "-Wunused-variable" From 2670f76a563124478d0d14e603b38b73b99c389c Mon Sep 17 00:00:00 2001 From: Jaewon Kim Date: Mon, 25 Jul 2022 18:52:12 +0900 Subject: [PATCH 1746/1842] page_alloc: fix invalid watermark check on a negative value commit 9282012fc0aa248b77a69f5eb802b67c5a16bb13 upstream. There was a report that a task was waiting in throttle_direct_reclaim while the pgscan_direct_throttle count in vmstat kept increasing. This is a bug where zone_watermark_fast returns true even when free pages are very low. The commit f27ce0e14088 ("page_alloc: consider highatomic reserve in watermark fast") changed the watermark fast path to consider the highatomic reserve. But it did not handle the negative value case, which can happen when the reserved_highatomic pageblock count is bigger than the actual number of free pages. If the watermark is considered OK for a negative value, order-0 allocating contexts will consume all free pages without direct reclaim, and eventually free pages may become depleted except for the highatomic reserve. Allocating contexts may then fall into throttle_direct_reclaim. This symptom can easily happen on a system where the min watermark is low and other reclaimers such as kswapd do not free pages quickly. Handle the negative case by using MIN.
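For illustration only (this sketch is not part of the patch): a minimal standalone C program showing the arithmetic hazard described above, with made-up numbers, assuming the watermark is carried in an unsigned type as in zone_watermark_fast(). When the estimated reserve exceeds the actual free count, the naive subtraction goes negative and the signed result is converted to a huge unsigned value in the comparison, so the old check wrongly passes; clamping with min(), as the hunk below does, keeps the usable count non-negative.

#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	long free_pages = 100;     /* actual free pages in the zone (made-up number) */
	long reserved = 150;       /* over-estimated high-atomic reserve (made-up number) */
	unsigned long mark = 50;   /* watermark threshold, unsigned as in the kernel */

	/* Old logic: the subtraction can go negative... */
	long fast_free = free_pages - reserved;        /* -50 */

	/*
	 * ...and the signed value is converted to unsigned long in the
	 * comparison, so -50 becomes a huge number and the check passes
	 * even though the zone is nearly out of pages.
	 */
	printf("old check: %s\n", fast_free > mark ? "pass (wrong)" : "fail");

	/* New logic: clamp the reserve so the usable count never goes negative. */
	long usable_free = free_pages;
	usable_free -= MIN(usable_free, reserved);     /* 0 */
	printf("new check: %s\n", usable_free > mark ? "pass" : "fail");

	return 0;
}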
Link: https://lkml.kernel.org/r/20220725095212.25388-1-jaewon31.kim@samsung.com Fixes: f27ce0e14088 ("page_alloc: consider highatomic reserve in watermark fast") Signed-off-by: Jaewon Kim Reported-by: GyeongHwan Hong Acked-by: Mel Gorman Cc: Minchan Kim Cc: Baoquan He Cc: Vlastimil Babka Cc: Johannes Weiner Cc: Michal Hocko Cc: Yong-Taek Lee Cc: Signed-off-by: Andrew Morton Signed-off-by: Greg Kroah-Hartman --- mm/page_alloc.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index f3418edb136b..43ff22ce7632 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -3679,11 +3679,15 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order, * need to be calculated. */ if (!order) { - long fast_free; + long usable_free; + long reserved; - fast_free = free_pages; - fast_free -= __zone_watermark_unusable_free(z, 0, alloc_flags); - if (fast_free > mark + z->lowmem_reserve[highest_zoneidx]) + usable_free = free_pages; + reserved = __zone_watermark_unusable_free(z, 0, alloc_flags); + + /* reserved may over estimate high-atomic reserves. */ + usable_free -= min(usable_free, reserved); + if (usable_free > mark + z->lowmem_reserve[highest_zoneidx]) return true; } From e500aa9f2d76d63592ec09752a2fa7d058bc9dde Mon Sep 17 00:00:00 2001 From: Wei Mingzhi Date: Sat, 19 Jun 2021 00:08:40 +0800 Subject: [PATCH 1747/1842] mt7601u: add USB device ID for some versions of XiaoDu WiFi Dongle. commit 829eea7c94e0bac804e65975639a2f2e5f147033 upstream. USB device ID of some versions of XiaoDu WiFi Dongle is 2955:1003 instead of 2955:1001. Both are the same mt7601u hardware. Signed-off-by: Wei Mingzhi Acked-by: Jakub Kicinski Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/20210618160840.305024-1-whistler@member.fsf.org Cc: Yan Xinyu Signed-off-by: Greg Kroah-Hartman --- drivers/net/wireless/mediatek/mt7601u/usb.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/wireless/mediatek/mt7601u/usb.c b/drivers/net/wireless/mediatek/mt7601u/usb.c index 6bcc4a13ae6c..cc772045d526 100644 --- a/drivers/net/wireless/mediatek/mt7601u/usb.c +++ b/drivers/net/wireless/mediatek/mt7601u/usb.c @@ -26,6 +26,7 @@ static const struct usb_device_id mt7601u_device_table[] = { { USB_DEVICE(0x2717, 0x4106) }, { USB_DEVICE(0x2955, 0x0001) }, { USB_DEVICE(0x2955, 0x1001) }, + { USB_DEVICE(0x2955, 0x1003) }, { USB_DEVICE(0x2a5f, 0x1000) }, { USB_DEVICE(0x7392, 0x7710) }, { 0, } From c4546391720549ee50dae3253e36f214591270b1 Mon Sep 17 00:00:00 2001 From: Florian Fainelli Date: Tue, 19 Jul 2022 17:33:21 +0100 Subject: [PATCH 1748/1842] ARM: 9216/1: Fix MAX_DMA_ADDRESS overflow [ Upstream commit fb0fd3469ead5b937293c213daa1f589b4b7ce46 ] Commit 26f09e9b3a06 ("mm/memblock: add memblock memory allocation apis") added a check to determine whether arm_dma_zone_size is exceeding the amount of kernel virtual address space available between the upper 4GB virtual address limit and PAGE_OFFSET in order to provide a suitable definition of MAX_DMA_ADDRESS that should fit within the 32-bit virtual address space. The quantity used for comparison was off by a missing trailing 0, leading to MAX_DMA_ADDRESS to be overflowing a 32-bit quantity. This was caught thanks to CONFIG_DEBUG_VIRTUAL on the bcm2711 platform where we define a dma_zone_size of 1GB and we have a PAGE_OFFSET value of 0xc000_0000 (CONFIG_VMSPLIT_3G) leading to MAX_DMA_ADDRESS being 0x1_0000_0000 which overflows the unsigned long type used throughout __pa() and then __virt_addr_valid(). 
Because the virtual address passed to __virt_addr_valid() would now be 0, the function would loudly warn and flood the kernel log, thus making the platform unable to boot properly. Fixes: 26f09e9b3a06 ("mm/memblock: add memblock memory allocation apis") Signed-off-by: Florian Fainelli Reviewed-by: Linus Walleij Signed-off-by: Russell King (Oracle) Signed-off-by: Sasha Levin --- arch/arm/include/asm/dma.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/include/asm/dma.h b/arch/arm/include/asm/dma.h index a81dda65c576..45180a2cc47c 100644 --- a/arch/arm/include/asm/dma.h +++ b/arch/arm/include/asm/dma.h @@ -10,7 +10,7 @@ #else #define MAX_DMA_ADDRESS ({ \ extern phys_addr_t arm_dma_zone_size; \ - arm_dma_zone_size && arm_dma_zone_size < (0x10000000 - PAGE_OFFSET) ? \ + arm_dma_zone_size && arm_dma_zone_size < (0x100000000ULL - PAGE_OFFSET) ? \ (PAGE_OFFSET + arm_dma_zone_size) : 0xffffffffUL; }) #endif From c4cd52ab1e6dc881f9485a47b8fd242caf1d80d4 Mon Sep 17 00:00:00 2001 From: Toshi Kani Date: Thu, 21 Jul 2022 12:05:03 -0600 Subject: [PATCH 1749/1842] EDAC/ghes: Set the DIMM label unconditionally commit 5e2805d5379619c4a2e3ae4994e73b36439f4bad upstream. The commit cb51a371d08e ("EDAC/ghes: Setup DIMM label from DMI and use it in error reports") enforced that both the bank and device strings passed to dimm_setup_label() are not NULL. However, there are BIOSes, for example on a HPE ProLiant DL360 Gen10/ProLiant DL360 Gen10, BIOS U32 03/15/2019 which don't populate both strings: Handle 0x0020, DMI type 17, 84 bytes Memory Device Array Handle: 0x0013 Error Information Handle: Not Provided Total Width: 72 bits Data Width: 64 bits Size: 32 GB Form Factor: DIMM Set: None Locator: PROC 1 DIMM 1 <===== device Bank Locator: Not Specified <===== bank This results in a buffer overflow because ghes_edac_register() calls strlen() on an uninitialized label, which had non-zero values left over from krealloc_array(): detected buffer overflow in __fortify_strlen ------------[ cut here ]------------ kernel BUG at lib/string_helpers.c:983! invalid opcode: 0000 [#1] PREEMPT SMP NOPTI CPU: 1 PID: 1 Comm: swapper/0 Tainted: G I 5.18.6-200.fc36.x86_64 #1 Hardware name: HPE ProLiant DL360 Gen10/ProLiant DL360 Gen10, BIOS U32 03/15/2019 RIP: 0010:fortify_panic ... Call Trace: ghes_edac_register.cold ghes_probe platform_probe really_probe __driver_probe_device driver_probe_device __driver_attach ? __device_attach_driver bus_for_each_dev bus_add_driver driver_register acpi_ghes_init acpi_init ? acpi_sleep_proc_init do_one_initcall The label contains garbage because the commit in Fixes reallocs the DIMMs array while scanning the system but doesn't clear the newly allocated memory. Change dimm_setup_label() to always initialize the label to fix the issue. Set it to the empty string in case BIOS does not provide both bank and device so that ghes_edac_register() can keep the default label given by edac_mc_alloc_dimms(). [ bp: Rewrite commit message. 
] Fixes: b9cae27728d1f ("EDAC/ghes: Scan the system once on driver init") Co-developed-by: Robert Richter Signed-off-by: Robert Richter Signed-off-by: Toshi Kani Signed-off-by: Borislav Petkov Tested-by: Robert Elliott Cc: Link: https://lore.kernel.org/r/20220719220124.760359-1-toshi.kani@hpe.com Signed-off-by: Greg Kroah-Hartman --- drivers/edac/ghes_edac.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c index a918ca93e4f7..df5897c90bec 100644 --- a/drivers/edac/ghes_edac.c +++ b/drivers/edac/ghes_edac.c @@ -101,9 +101,14 @@ static void dimm_setup_label(struct dimm_info *dimm, u16 handle) dmi_memdev_name(handle, &bank, &device); - /* both strings must be non-zero */ - if (bank && *bank && device && *device) - snprintf(dimm->label, sizeof(dimm->label), "%s %s", bank, device); + /* + * Set to a NULL string when both bank and device are zero. In this case, + * the label assigned by default will be preserved. + */ + snprintf(dimm->label, sizeof(dimm->label), "%s%s%s", + (bank && *bank) ? bank : "", + (bank && *bank && device && *device) ? " " : "", + (device && *device) ? device : ""); } static void assign_dmi_dimm_info(struct dimm_info *dimm, struct memdev_dmi_entry *entry) From aadc39fd5b6d43e6d96a368d9f651d01598a4ec7 Mon Sep 17 00:00:00 2001 From: Eiichi Tsukata Date: Thu, 28 Jul 2022 04:39:07 +0000 Subject: [PATCH 1750/1842] docs/kernel-parameters: Update descriptions for "mitigations=" param with retbleed commit ea304a8b89fd0d6cf94ee30cb139dc23d9f1a62f upstream. Updates descriptions for "mitigations=off" and "mitigations=auto,nosmt" with the respective retbleed= settings. Signed-off-by: Eiichi Tsukata Signed-off-by: Borislav Petkov Cc: corbet@lwn.net Link: https://lore.kernel.org/r/20220728043907.165688-1-eiichi.tsukata@nutanix.com Signed-off-by: Greg Kroah-Hartman --- Documentation/admin-guide/kernel-parameters.txt | 2 ++ 1 file changed, 2 insertions(+) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 1a58c580b236..8b7c26d09045 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2873,6 +2873,7 @@ no_entry_flush [PPC] no_uaccess_flush [PPC] mmio_stale_data=off [X86] + retbleed=off [X86] Exceptions: This does not have any effect on @@ -2895,6 +2896,7 @@ mds=full,nosmt [X86] tsx_async_abort=full,nosmt [X86] mmio_stale_data=full,nosmt [X86] + retbleed=auto,nosmt [X86] mminit_loglevel= [KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this From 41fbfdaba94ae78a91c440d3ca6ad5b890e28ffa Mon Sep 17 00:00:00 2001 From: Christoph Hellwig Date: Fri, 29 Jul 2022 18:16:01 +0200 Subject: [PATCH 1751/1842] xfs: refactor xfs_file_fsync commit f22c7f87777361f94aa17f746fbadfa499248dc8 upstream. [backported for dependency] Factor out the log syncing logic into two helpers to make the code easier to read and more maintainable. Signed-off-by: Christoph Hellwig Reviewed-by: Brian Foster Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Reviewed-by: Dave Chinner Signed-off-by: Amir Goldstein Acked-by: Darrick J. 
Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_file.c | 81 +++++++++++++++++++++++++++++------------------ 1 file changed, 50 insertions(+), 31 deletions(-) diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 5b0f93f73837..414d856e2e75 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -118,6 +118,54 @@ xfs_dir_fsync( return xfs_log_force_inode(ip); } +static xfs_lsn_t +xfs_fsync_lsn( + struct xfs_inode *ip, + bool datasync) +{ + if (!xfs_ipincount(ip)) + return 0; + if (datasync && !(ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP)) + return 0; + return ip->i_itemp->ili_last_lsn; +} + +/* + * All metadata updates are logged, which means that we just have to flush the + * log up to the latest LSN that touched the inode. + * + * If we have concurrent fsync/fdatasync() calls, we need them to all block on + * the log force before we clear the ili_fsync_fields field. This ensures that + * we don't get a racing sync operation that does not wait for the metadata to + * hit the journal before returning. If we race with clearing ili_fsync_fields, + * then all that will happen is the log force will do nothing as the lsn will + * already be on disk. We can't race with setting ili_fsync_fields because that + * is done under XFS_ILOCK_EXCL, and that can't happen because we hold the lock + * shared until after the ili_fsync_fields is cleared. + */ +static int +xfs_fsync_flush_log( + struct xfs_inode *ip, + bool datasync, + int *log_flushed) +{ + int error = 0; + xfs_lsn_t lsn; + + xfs_ilock(ip, XFS_ILOCK_SHARED); + lsn = xfs_fsync_lsn(ip, datasync); + if (lsn) { + error = xfs_log_force_lsn(ip->i_mount, lsn, XFS_LOG_SYNC, + log_flushed); + + spin_lock(&ip->i_itemp->ili_lock); + ip->i_itemp->ili_fsync_fields = 0; + spin_unlock(&ip->i_itemp->ili_lock); + } + xfs_iunlock(ip, XFS_ILOCK_SHARED); + return error; +} + STATIC int xfs_file_fsync( struct file *file, @@ -125,13 +173,10 @@ xfs_file_fsync( loff_t end, int datasync) { - struct inode *inode = file->f_mapping->host; - struct xfs_inode *ip = XFS_I(inode); - struct xfs_inode_log_item *iip = ip->i_itemp; + struct xfs_inode *ip = XFS_I(file->f_mapping->host); struct xfs_mount *mp = ip->i_mount; int error = 0; int log_flushed = 0; - xfs_lsn_t lsn = 0; trace_xfs_file_fsync(ip); @@ -155,33 +200,7 @@ xfs_file_fsync( else if (mp->m_logdev_targp != mp->m_ddev_targp) xfs_blkdev_issue_flush(mp->m_ddev_targp); - /* - * All metadata updates are logged, which means that we just have to - * flush the log up to the latest LSN that touched the inode. If we have - * concurrent fsync/fdatasync() calls, we need them to all block on the - * log force before we clear the ili_fsync_fields field. This ensures - * that we don't get a racing sync operation that does not wait for the - * metadata to hit the journal before returning. If we race with - * clearing the ili_fsync_fields, then all that will happen is the log - * force will do nothing as the lsn will already be on disk. We can't - * race with setting ili_fsync_fields because that is done under - * XFS_ILOCK_EXCL, and that can't happen because we hold the lock shared - * until after the ili_fsync_fields is cleared. 
- */ - xfs_ilock(ip, XFS_ILOCK_SHARED); - if (xfs_ipincount(ip)) { - if (!datasync || - (iip->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP)) - lsn = iip->ili_last_lsn; - } - - if (lsn) { - error = xfs_log_force_lsn(mp, lsn, XFS_LOG_SYNC, &log_flushed); - spin_lock(&iip->ili_lock); - iip->ili_fsync_fields = 0; - spin_unlock(&iip->ili_lock); - } - xfs_iunlock(ip, XFS_ILOCK_SHARED); + error = xfs_fsync_flush_log(ip, datasync, &log_flushed); /* * If we only have a single device, and the log force about was From 6d3605f84edde4b28dd61855aacd9093906a65bf Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Fri, 29 Jul 2022 18:16:02 +0200 Subject: [PATCH 1752/1842] xfs: xfs_log_force_lsn isn't passed a LSN commit 5f9b4b0de8dc2fb8eb655463b438001c111570fe upstream. [backported from CIL scalability series for dependency] In doing an investigation into AIL push stalls, I was looking at the log force code to see if an async CIL push could be done instead. This lead me to xfs_log_force_lsn() and looking at how it works. xfs_log_force_lsn() is only called from inode synchronisation contexts such as fsync(), and it takes the ip->i_itemp->ili_last_lsn value as the LSN to sync the log to. This gets passed to xlog_cil_force_lsn() via xfs_log_force_lsn() to flush the CIL to the journal, and then used by xfs_log_force_lsn() to flush the iclogs to the journal. The problem is that ip->i_itemp->ili_last_lsn does not store a log sequence number. What it stores is passed to it from the ->iop_committing method, which is called by xfs_log_commit_cil(). The value this passes to the iop_committing method is the CIL context sequence number that the item was committed to. As it turns out, xlog_cil_force_lsn() converts the sequence to an actual commit LSN for the related context and returns that to xfs_log_force_lsn(). xfs_log_force_lsn() overwrites it's "lsn" variable that contained a sequence with an actual LSN and then uses that to sync the iclogs. This caused me some confusion for a while, even though I originally wrote all this code a decade ago. ->iop_committing is only used by a couple of log item types, and only inode items use the sequence number it is passed. Let's clean up the API, CIL structures and inode log item to call it a sequence number, and make it clear that the high level code is using CIL sequence numbers and not on-disk LSNs for integrity synchronisation purposes. Signed-off-by: Dave Chinner Reviewed-by: Brian Foster Reviewed-by: Darrick J. Wong Reviewed-by: Allison Henderson Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. 
Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/libxfs/xfs_types.h | 1 + fs/xfs/xfs_buf_item.c | 2 +- fs/xfs/xfs_dquot_item.c | 2 +- fs/xfs/xfs_file.c | 14 +++++++------- fs/xfs/xfs_inode.c | 10 +++++----- fs/xfs/xfs_inode_item.c | 4 ++-- fs/xfs/xfs_inode_item.h | 2 +- fs/xfs/xfs_log.c | 27 ++++++++++++++------------- fs/xfs/xfs_log.h | 4 +--- fs/xfs/xfs_log_cil.c | 30 +++++++++++------------------- fs/xfs/xfs_log_priv.h | 15 +++++++-------- fs/xfs/xfs_trans.c | 6 +++--- fs/xfs/xfs_trans.h | 4 ++-- 13 files changed, 56 insertions(+), 65 deletions(-) diff --git a/fs/xfs/libxfs/xfs_types.h b/fs/xfs/libxfs/xfs_types.h index 397d94775440..1ce06173c2f5 100644 --- a/fs/xfs/libxfs/xfs_types.h +++ b/fs/xfs/libxfs/xfs_types.h @@ -21,6 +21,7 @@ typedef int32_t xfs_suminfo_t; /* type of bitmap summary info */ typedef uint32_t xfs_rtword_t; /* word type for bitmap manipulations */ typedef int64_t xfs_lsn_t; /* log sequence number */ +typedef int64_t xfs_csn_t; /* CIL sequence number */ typedef uint32_t xfs_dablk_t; /* dir/attr block number (in file) */ typedef uint32_t xfs_dahash_t; /* dir/attr hash value */ diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c index 8c6e26d62ef2..5d6535370f87 100644 --- a/fs/xfs/xfs_buf_item.c +++ b/fs/xfs/xfs_buf_item.c @@ -632,7 +632,7 @@ xfs_buf_item_release( STATIC void xfs_buf_item_committing( struct xfs_log_item *lip, - xfs_lsn_t commit_lsn) + xfs_csn_t seq) { return xfs_buf_item_release(lip); } diff --git a/fs/xfs/xfs_dquot_item.c b/fs/xfs/xfs_dquot_item.c index 8c1fdf37ee8f..8ed47b739b6c 100644 --- a/fs/xfs/xfs_dquot_item.c +++ b/fs/xfs/xfs_dquot_item.c @@ -188,7 +188,7 @@ xfs_qm_dquot_logitem_release( STATIC void xfs_qm_dquot_logitem_committing( struct xfs_log_item *lip, - xfs_lsn_t commit_lsn) + xfs_csn_t seq) { return xfs_qm_dquot_logitem_release(lip); } diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 414d856e2e75..4d6bf8d4974f 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -118,8 +118,8 @@ xfs_dir_fsync( return xfs_log_force_inode(ip); } -static xfs_lsn_t -xfs_fsync_lsn( +static xfs_csn_t +xfs_fsync_seq( struct xfs_inode *ip, bool datasync) { @@ -127,7 +127,7 @@ xfs_fsync_lsn( return 0; if (datasync && !(ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP)) return 0; - return ip->i_itemp->ili_last_lsn; + return ip->i_itemp->ili_commit_seq; } /* @@ -150,12 +150,12 @@ xfs_fsync_flush_log( int *log_flushed) { int error = 0; - xfs_lsn_t lsn; + xfs_csn_t seq; xfs_ilock(ip, XFS_ILOCK_SHARED); - lsn = xfs_fsync_lsn(ip, datasync); - if (lsn) { - error = xfs_log_force_lsn(ip->i_mount, lsn, XFS_LOG_SYNC, + seq = xfs_fsync_seq(ip, datasync); + if (seq) { + error = xfs_log_force_seq(ip->i_mount, seq, XFS_LOG_SYNC, log_flushed); spin_lock(&ip->i_itemp->ili_lock); diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 03497741aef7..1f61e085676b 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -2754,7 +2754,7 @@ xfs_iunpin( trace_xfs_inode_unpin_nowait(ip, _RET_IP_); /* Give the log a push to start the unpinning I/O */ - xfs_log_force_lsn(ip->i_mount, ip->i_itemp->ili_last_lsn, 0, NULL); + xfs_log_force_seq(ip->i_mount, ip->i_itemp->ili_commit_seq, 0, NULL); } @@ -3716,16 +3716,16 @@ int xfs_log_force_inode( struct xfs_inode *ip) { - xfs_lsn_t lsn = 0; + xfs_csn_t seq = 0; xfs_ilock(ip, XFS_ILOCK_SHARED); if (xfs_ipincount(ip)) - lsn = ip->i_itemp->ili_last_lsn; + seq = ip->i_itemp->ili_commit_seq; xfs_iunlock(ip, XFS_ILOCK_SHARED); - if (!lsn) + if (!seq) return 0; - return xfs_log_force_lsn(ip->i_mount, lsn, XFS_LOG_SYNC, 
NULL); + return xfs_log_force_seq(ip->i_mount, seq, XFS_LOG_SYNC, NULL); } /* diff --git a/fs/xfs/xfs_inode_item.c b/fs/xfs/xfs_inode_item.c index 6ff91e5bf3cd..3aba4559469f 100644 --- a/fs/xfs/xfs_inode_item.c +++ b/fs/xfs/xfs_inode_item.c @@ -617,9 +617,9 @@ xfs_inode_item_committed( STATIC void xfs_inode_item_committing( struct xfs_log_item *lip, - xfs_lsn_t commit_lsn) + xfs_csn_t seq) { - INODE_ITEM(lip)->ili_last_lsn = commit_lsn; + INODE_ITEM(lip)->ili_commit_seq = seq; return xfs_inode_item_release(lip); } diff --git a/fs/xfs/xfs_inode_item.h b/fs/xfs/xfs_inode_item.h index 4b926e32831c..403b45ab9aa2 100644 --- a/fs/xfs/xfs_inode_item.h +++ b/fs/xfs/xfs_inode_item.h @@ -33,7 +33,7 @@ struct xfs_inode_log_item { unsigned int ili_fields; /* fields to be logged */ unsigned int ili_fsync_fields; /* logged since last fsync */ xfs_lsn_t ili_flush_lsn; /* lsn at last flush */ - xfs_lsn_t ili_last_lsn; /* lsn at last transaction */ + xfs_csn_t ili_commit_seq; /* last transaction commit */ }; static inline int xfs_inode_clean(struct xfs_inode *ip) diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index b445e63cbc3c..05791456adbb 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -3210,14 +3210,13 @@ xfs_log_force( } static int -__xfs_log_force_lsn( - struct xfs_mount *mp, +xlog_force_lsn( + struct xlog *log, xfs_lsn_t lsn, uint flags, int *log_flushed, bool already_slept) { - struct xlog *log = mp->m_log; struct xlog_in_core *iclog; spin_lock(&log->l_icloglock); @@ -3250,8 +3249,6 @@ __xfs_log_force_lsn( if (!already_slept && (iclog->ic_prev->ic_state == XLOG_STATE_WANT_SYNC || iclog->ic_prev->ic_state == XLOG_STATE_SYNCING)) { - XFS_STATS_INC(mp, xs_log_force_sleep); - xlog_wait(&iclog->ic_prev->ic_write_wait, &log->l_icloglock); return -EAGAIN; @@ -3289,25 +3286,29 @@ __xfs_log_force_lsn( * to disk, that thread will wake up all threads waiting on the queue. 
*/ int -xfs_log_force_lsn( +xfs_log_force_seq( struct xfs_mount *mp, - xfs_lsn_t lsn, + xfs_csn_t seq, uint flags, int *log_flushed) { + struct xlog *log = mp->m_log; + xfs_lsn_t lsn; int ret; - ASSERT(lsn != 0); + ASSERT(seq != 0); XFS_STATS_INC(mp, xs_log_force); - trace_xfs_log_force(mp, lsn, _RET_IP_); + trace_xfs_log_force(mp, seq, _RET_IP_); - lsn = xlog_cil_force_lsn(mp->m_log, lsn); + lsn = xlog_cil_force_seq(log, seq); if (lsn == NULLCOMMITLSN) return 0; - ret = __xfs_log_force_lsn(mp, lsn, flags, log_flushed, false); - if (ret == -EAGAIN) - ret = __xfs_log_force_lsn(mp, lsn, flags, log_flushed, true); + ret = xlog_force_lsn(log, lsn, flags, log_flushed, false); + if (ret == -EAGAIN) { + XFS_STATS_INC(mp, xs_log_force_sleep); + ret = xlog_force_lsn(log, lsn, flags, log_flushed, true); + } return ret; } diff --git a/fs/xfs/xfs_log.h b/fs/xfs/xfs_log.h index 98c913da7587..a1089f8b7169 100644 --- a/fs/xfs/xfs_log.h +++ b/fs/xfs/xfs_log.h @@ -106,7 +106,7 @@ struct xfs_item_ops; struct xfs_trans; int xfs_log_force(struct xfs_mount *mp, uint flags); -int xfs_log_force_lsn(struct xfs_mount *mp, xfs_lsn_t lsn, uint flags, +int xfs_log_force_seq(struct xfs_mount *mp, xfs_csn_t seq, uint flags, int *log_forced); int xfs_log_mount(struct xfs_mount *mp, struct xfs_buftarg *log_target, @@ -132,8 +132,6 @@ bool xfs_log_writable(struct xfs_mount *mp); struct xlog_ticket *xfs_log_ticket_get(struct xlog_ticket *ticket); void xfs_log_ticket_put(struct xlog_ticket *ticket); -void xfs_log_commit_cil(struct xfs_mount *mp, struct xfs_trans *tp, - xfs_lsn_t *commit_lsn, bool regrant); void xlog_cil_process_committed(struct list_head *list); bool xfs_log_item_in_current_chkpt(struct xfs_log_item *lip); diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c index cd5c04dabe2e..88730883bb70 100644 --- a/fs/xfs/xfs_log_cil.c +++ b/fs/xfs/xfs_log_cil.c @@ -777,7 +777,7 @@ xlog_cil_push_work( * that higher sequences will wait for us to write out a commit record * before they do. * - * xfs_log_force_lsn requires us to mirror the new sequence into the cil + * xfs_log_force_seq requires us to mirror the new sequence into the cil * structure atomically with the addition of this sequence to the * committing list. This also ensures that we can do unlocked checks * against the current sequence in log forces without risking @@ -1020,16 +1020,14 @@ xlog_cil_empty( * allowed again. */ void -xfs_log_commit_cil( - struct xfs_mount *mp, +xlog_cil_commit( + struct xlog *log, struct xfs_trans *tp, - xfs_lsn_t *commit_lsn, + xfs_csn_t *commit_seq, bool regrant) { - struct xlog *log = mp->m_log; struct xfs_cil *cil = log->l_cilp; struct xfs_log_item *lip, *next; - xfs_lsn_t xc_commit_lsn; /* * Do all necessary memory allocation before we lock the CIL. @@ -1043,10 +1041,6 @@ xfs_log_commit_cil( xlog_cil_insert_items(log, tp); - xc_commit_lsn = cil->xc_ctx->sequence; - if (commit_lsn) - *commit_lsn = xc_commit_lsn; - if (regrant && !XLOG_FORCED_SHUTDOWN(log)) xfs_log_ticket_regrant(log, tp->t_ticket); else @@ -1069,8 +1063,10 @@ xfs_log_commit_cil( list_for_each_entry_safe(lip, next, &tp->t_items, li_trans) { xfs_trans_del_item(lip); if (lip->li_ops->iop_committing) - lip->li_ops->iop_committing(lip, xc_commit_lsn); + lip->li_ops->iop_committing(lip, cil->xc_ctx->sequence); } + if (commit_seq) + *commit_seq = cil->xc_ctx->sequence; /* xlog_cil_push_background() releases cil->xc_ctx_lock */ xlog_cil_push_background(log); @@ -1087,9 +1083,9 @@ xfs_log_commit_cil( * iclog flush is necessary following this call. 
*/ xfs_lsn_t -xlog_cil_force_lsn( +xlog_cil_force_seq( struct xlog *log, - xfs_lsn_t sequence) + xfs_csn_t sequence) { struct xfs_cil *cil = log->l_cilp; struct xfs_cil_ctx *ctx; @@ -1185,21 +1181,17 @@ bool xfs_log_item_in_current_chkpt( struct xfs_log_item *lip) { - struct xfs_cil_ctx *ctx; + struct xfs_cil_ctx *ctx = lip->li_mountp->m_log->l_cilp->xc_ctx; if (list_empty(&lip->li_cil)) return false; - ctx = lip->li_mountp->m_log->l_cilp->xc_ctx; - /* * li_seq is written on the first commit of a log item to record the * first checkpoint it is written to. Hence if it is different to the * current sequence, we're in a new checkpoint. */ - if (XFS_LSN_CMP(lip->li_seq, ctx->sequence) != 0) - return false; - return true; + return lip->li_seq == ctx->sequence; } /* diff --git a/fs/xfs/xfs_log_priv.h b/fs/xfs/xfs_log_priv.h index 1c6fdbf3d506..42cd1602ac25 100644 --- a/fs/xfs/xfs_log_priv.h +++ b/fs/xfs/xfs_log_priv.h @@ -230,7 +230,7 @@ struct xfs_cil; struct xfs_cil_ctx { struct xfs_cil *cil; - xfs_lsn_t sequence; /* chkpt sequence # */ + xfs_csn_t sequence; /* chkpt sequence # */ xfs_lsn_t start_lsn; /* first LSN of chkpt commit */ xfs_lsn_t commit_lsn; /* chkpt commit record lsn */ struct xlog_ticket *ticket; /* chkpt ticket */ @@ -268,10 +268,10 @@ struct xfs_cil { struct xfs_cil_ctx *xc_ctx; spinlock_t xc_push_lock ____cacheline_aligned_in_smp; - xfs_lsn_t xc_push_seq; + xfs_csn_t xc_push_seq; struct list_head xc_committing; wait_queue_head_t xc_commit_wait; - xfs_lsn_t xc_current_sequence; + xfs_csn_t xc_current_sequence; struct work_struct xc_push_work; wait_queue_head_t xc_push_wait; /* background push throttle */ } ____cacheline_aligned_in_smp; @@ -547,19 +547,18 @@ int xlog_cil_init(struct xlog *log); void xlog_cil_init_post_recovery(struct xlog *log); void xlog_cil_destroy(struct xlog *log); bool xlog_cil_empty(struct xlog *log); +void xlog_cil_commit(struct xlog *log, struct xfs_trans *tp, + xfs_csn_t *commit_seq, bool regrant); /* * CIL force routines */ -xfs_lsn_t -xlog_cil_force_lsn( - struct xlog *log, - xfs_lsn_t sequence); +xfs_lsn_t xlog_cil_force_seq(struct xlog *log, xfs_csn_t sequence); static inline void xlog_cil_force(struct xlog *log) { - xlog_cil_force_lsn(log, log->l_cilp->xc_current_sequence); + xlog_cil_force_seq(log, log->l_cilp->xc_current_sequence); } /* diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c index 36166bae24a6..73a1de7ceefc 100644 --- a/fs/xfs/xfs_trans.c +++ b/fs/xfs/xfs_trans.c @@ -832,7 +832,7 @@ __xfs_trans_commit( bool regrant) { struct xfs_mount *mp = tp->t_mountp; - xfs_lsn_t commit_lsn = -1; + xfs_csn_t commit_seq = 0; int error = 0; int sync = tp->t_flags & XFS_TRANS_SYNC; @@ -874,7 +874,7 @@ __xfs_trans_commit( xfs_trans_apply_sb_deltas(tp); xfs_trans_apply_dquot_deltas(tp); - xfs_log_commit_cil(mp, tp, &commit_lsn, regrant); + xlog_cil_commit(mp->m_log, tp, &commit_seq, regrant); xfs_trans_free(tp); @@ -883,7 +883,7 @@ __xfs_trans_commit( * log out now and wait for it. 
*/ if (sync) { - error = xfs_log_force_lsn(mp, commit_lsn, XFS_LOG_SYNC, NULL); + error = xfs_log_force_seq(mp, commit_seq, XFS_LOG_SYNC, NULL); XFS_STATS_INC(mp, xs_trans_sync); } else { XFS_STATS_INC(mp, xs_trans_async); diff --git a/fs/xfs/xfs_trans.h b/fs/xfs/xfs_trans.h index 075eeade4f7d..97485559008b 100644 --- a/fs/xfs/xfs_trans.h +++ b/fs/xfs/xfs_trans.h @@ -43,7 +43,7 @@ struct xfs_log_item { struct list_head li_cil; /* CIL pointers */ struct xfs_log_vec *li_lv; /* active log vector */ struct xfs_log_vec *li_lv_shadow; /* standby vector */ - xfs_lsn_t li_seq; /* CIL commit seq */ + xfs_csn_t li_seq; /* CIL commit seq */ }; /* @@ -69,7 +69,7 @@ struct xfs_item_ops { void (*iop_pin)(struct xfs_log_item *); void (*iop_unpin)(struct xfs_log_item *, int remove); uint (*iop_push)(struct xfs_log_item *, struct list_head *); - void (*iop_committing)(struct xfs_log_item *, xfs_lsn_t commit_lsn); + void (*iop_committing)(struct xfs_log_item *lip, xfs_csn_t seq); void (*iop_release)(struct xfs_log_item *); xfs_lsn_t (*iop_committed)(struct xfs_log_item *, xfs_lsn_t); int (*iop_recover)(struct xfs_log_item *lip, From 17c8097fb0416896c6148d178531648e07e988f2 Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Fri, 29 Jul 2022 18:16:03 +0200 Subject: [PATCH 1753/1842] xfs: prevent UAF in xfs_log_item_in_current_chkpt commit f8d92a66e810acbef6ddbc0bd0cbd9b117ce8acd upstream. While I was running with KASAN and lockdep enabled, I stumbled upon an KASAN report about a UAF to a freed CIL checkpoint. Looking at the comment for xfs_log_item_in_current_chkpt, it seems pretty obvious to me that the original patch to xfs_defer_finish_noroll should have done something to lock the CIL to prevent it from switching the CIL contexts while the predicate runs. For upper level code that needs to know if a given log item is new enough not to need relogging, add a new wrapper that takes the CIL context lock long enough to sample the current CIL context. This is kind of racy in that the CIL can switch the contexts immediately after sampling, but that's ok because the consequence is that the defer ops code is a little slow to relog items. 
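To make the shape of the fix easier to see before the KASAN report and the full diff that follow, here is a condensed sketch of the reworked predicate (kernel types assumed, so it is not standalone): the item's sequence is compared against a sampled copy of the CIL's current sequence number, without dereferencing the CIL context at all.

/* Condensed sketch of the reworked predicate; see the diff further below. */
bool
xfs_log_item_in_current_chkpt(
	struct xfs_log_item	*lip)
{
	struct xfs_cil		*cil = lip->li_mountp->m_log->l_cilp;

	if (list_empty(&lip->li_cil))
		return false;

	/*
	 * Sample the current CIL sequence instead of dereferencing
	 * cil->xc_ctx, which may already have been freed by a CIL context
	 * switch. If the CIL switches right after the sample, the worst
	 * case is that defer ops relogs the item a little earlier than
	 * strictly necessary.
	 */
	return lip->li_seq == READ_ONCE(cil->xc_current_sequence);
}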
================================================================== BUG: KASAN: use-after-free in xfs_log_item_in_current_chkpt+0x139/0x160 [xfs] Read of size 8 at addr ffff88804ea5f608 by task fsstress/527999 CPU: 1 PID: 527999 Comm: fsstress Tainted: G D 5.16.0-rc4-xfsx #rc4 Call Trace: dump_stack_lvl+0x45/0x59 print_address_description.constprop.0+0x1f/0x140 kasan_report.cold+0x83/0xdf xfs_log_item_in_current_chkpt+0x139/0x160 xfs_defer_finish_noroll+0x3bb/0x1e30 __xfs_trans_commit+0x6c8/0xcf0 xfs_reflink_remap_extent+0x66f/0x10e0 xfs_reflink_remap_blocks+0x2dd/0xa90 xfs_file_remap_range+0x27b/0xc30 vfs_dedupe_file_range_one+0x368/0x420 vfs_dedupe_file_range+0x37c/0x5d0 do_vfs_ioctl+0x308/0x1260 __x64_sys_ioctl+0xa1/0x170 do_syscall_64+0x35/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae RIP: 0033:0x7f2c71a2950b Code: 0f 1e fa 48 8b 05 85 39 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 55 39 0d 00 f7 d8 64 89 01 48 RSP: 002b:00007ffe8c0e03c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 00005600862a8740 RCX: 00007f2c71a2950b RDX: 00005600862a7be0 RSI: 00000000c0189436 RDI: 0000000000000004 RBP: 000000000000000b R08: 0000000000000027 R09: 0000000000000003 R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000005a R13: 00005600862804a8 R14: 0000000000016000 R15: 00005600862a8a20 Allocated by task 464064: kasan_save_stack+0x1e/0x50 __kasan_kmalloc+0x81/0xa0 kmem_alloc+0xcd/0x2c0 [xfs] xlog_cil_ctx_alloc+0x17/0x1e0 [xfs] xlog_cil_push_work+0x141/0x13d0 [xfs] process_one_work+0x7f6/0x1380 worker_thread+0x59d/0x1040 kthread+0x3b0/0x490 ret_from_fork+0x1f/0x30 Freed by task 51: kasan_save_stack+0x1e/0x50 kasan_set_track+0x21/0x30 kasan_set_free_info+0x20/0x30 __kasan_slab_free+0xed/0x130 slab_free_freelist_hook+0x7f/0x160 kfree+0xde/0x340 xlog_cil_committed+0xbfd/0xfe0 [xfs] xlog_cil_process_committed+0x103/0x1c0 [xfs] xlog_state_do_callback+0x45d/0xbd0 [xfs] xlog_ioend_work+0x116/0x1c0 [xfs] process_one_work+0x7f6/0x1380 worker_thread+0x59d/0x1040 kthread+0x3b0/0x490 ret_from_fork+0x1f/0x30 Last potentially related work creation: kasan_save_stack+0x1e/0x50 __kasan_record_aux_stack+0xb7/0xc0 insert_work+0x48/0x2e0 __queue_work+0x4e7/0xda0 queue_work_on+0x69/0x80 xlog_cil_push_now.isra.0+0x16b/0x210 [xfs] xlog_cil_force_seq+0x1b7/0x850 [xfs] xfs_log_force_seq+0x1c7/0x670 [xfs] xfs_file_fsync+0x7c1/0xa60 [xfs] __x64_sys_fsync+0x52/0x80 do_syscall_64+0x35/0x80 entry_SYSCALL_64_after_hwframe+0x44/0xae The buggy address belongs to the object at ffff88804ea5f600 which belongs to the cache kmalloc-256 of size 256 The buggy address is located 8 bytes inside of 256-byte region [ffff88804ea5f600, ffff88804ea5f700) The buggy address belongs to the page: page:ffffea00013a9780 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff88804ea5ea00 pfn:0x4ea5e head:ffffea00013a9780 order:1 compound_mapcount:0 flags: 0x4fff80000010200(slab|head|node=1|zone=1|lastcpupid=0xfff) raw: 04fff80000010200 ffffea0001245908 ffffea00011bd388 ffff888004c42b40 raw: ffff88804ea5ea00 0000000000100009 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff88804ea5f500: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ffff88804ea5f580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc >ffff88804ea5f600: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff88804ea5f680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff88804ea5f700: fc 
fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ================================================================== Fixes: 4e919af7827a ("xfs: periodically relog deferred intent items") Signed-off-by: Darrick J. Wong Reviewed-by: Dave Chinner Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_log_cil.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/fs/xfs/xfs_log_cil.c b/fs/xfs/xfs_log_cil.c index 88730883bb70..fbe160d5e9b9 100644 --- a/fs/xfs/xfs_log_cil.c +++ b/fs/xfs/xfs_log_cil.c @@ -1179,9 +1179,9 @@ xlog_cil_force_seq( */ bool xfs_log_item_in_current_chkpt( - struct xfs_log_item *lip) + struct xfs_log_item *lip) { - struct xfs_cil_ctx *ctx = lip->li_mountp->m_log->l_cilp->xc_ctx; + struct xfs_cil *cil = lip->li_mountp->m_log->l_cilp; if (list_empty(&lip->li_cil)) return false; @@ -1191,7 +1191,7 @@ xfs_log_item_in_current_chkpt( * first checkpoint it is written to. Hence if it is different to the * current sequence, we're in a new checkpoint. */ - return lip->li_seq == ctx->sequence; + return lip->li_seq == READ_ONCE(cil->xc_current_sequence); } /* From eccacbcbfd709c201b2ab56309de1d4a51f36c43 Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Fri, 29 Jul 2022 18:16:04 +0200 Subject: [PATCH 1754/1842] xfs: fix log intent recovery ENOSPC shutdowns when inactivating inodes commit 81ed94751b1513fcc5978dcc06eb1f5b4e55a785 upstream. During regular operation, the xfs_inactive operations create transactions with zero block reservation because in general we're freeing space, not asking for more. The per-AG space reservations created at mount time enable us to handle expansions of the refcount btree without needing to reserve blocks to the transaction. Unfortunately, log recovery doesn't create the per-AG space reservations when intent items are being recovered. This isn't an issue for intent item recovery itself because they explicitly request blocks, but any inode inactivation that can happen during log recovery uses the same xfs_inactive paths as regular runtime. If a refcount btree expansion happens, the transaction will fail due to blk_res_used > blk_res, and we shut down the filesystem unnecessarily. Fix this problem by making per-AG reservations temporarily so that we can handle the inactivations, and releasing them at the end. This brings the recovery environment closer to the runtime environment. Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_mount.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c index 44b05e1d5d32..a2a5a0fd9233 100644 --- a/fs/xfs/xfs_mount.c +++ b/fs/xfs/xfs_mount.c @@ -968,9 +968,17 @@ xfs_mountfs( /* * Finish recovering the file system. This part needed to be delayed * until after the root and real-time bitmap inodes were consistently - * read in. + * read in. Temporarily create per-AG space reservations for metadata + * btree shape changes because space freeing transactions (for inode + * inactivation) require the per-AG reservation in lieu of reserving + * blocks. 
*/ + error = xfs_fs_reserve_ag_blocks(mp); + if (error && error == -ENOSPC) + xfs_warn(mp, + "ENOSPC reserving per-AG metadata pool, log recovery may fail."); error = xfs_log_mount_finish(mp); + xfs_fs_unreserve_ag_blocks(mp); if (error) { xfs_warn(mp, "log mount finish failed"); goto out_rtunmount; From d8f5bb0a09b7fbb8604c945d4a1444bcd769cf51 Mon Sep 17 00:00:00 2001 From: "Darrick J. Wong" Date: Fri, 29 Jul 2022 18:16:05 +0200 Subject: [PATCH 1755/1842] xfs: force the log offline when log intent item recovery fails commit 4e6b8270c820c8c57a73f869799a0af2b56eff3e upstream. If any part of log intent item recovery fails, we should shut down the log immediately to stop the log from writing a clean unmount record to disk, because the metadata is not consistent. The inability to cancel a dirty transaction catches most of these cases, but there are a few things that have slipped through the cracks, such as ENOSPC from a transaction allocation, or runtime errors that result in cancellation of a non-dirty transaction. This solves some weird behaviors reported by customers where a system goes down, the first mount fails, the second succeeds, but then the fs goes down later because of inconsistent metadata. Signed-off-by: Darrick J. Wong Reviewed-by: Christoph Hellwig Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_log.c | 3 +++ fs/xfs/xfs_log_recover.c | 5 ++++- 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index 05791456adbb..22d7d74231d4 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -765,6 +765,9 @@ xfs_log_mount_finish( if (readonly) mp->m_flags |= XFS_MOUNT_RDONLY; + /* Make sure the log is dead if we're returning failure. */ + ASSERT(!error || (mp->m_log->l_flags & XLOG_IO_ERROR)); + return error; } diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c index 87886b7f77da..69408782019e 100644 --- a/fs/xfs/xfs_log_recover.c +++ b/fs/xfs/xfs_log_recover.c @@ -2457,8 +2457,10 @@ xlog_finish_defer_ops( error = xfs_trans_alloc(mp, &resv, dfc->dfc_blkres, dfc->dfc_rtxres, XFS_TRANS_RESERVE, &tp); - if (error) + if (error) { + xfs_force_shutdown(mp, SHUTDOWN_LOG_IO_ERROR); return error; + } /* * Transfer to this new transaction all the dfops we captured @@ -3454,6 +3456,7 @@ xlog_recover_finish( * this) before we get around to xfs_log_mount_cancel. */ xlog_recover_cancel_intents(log); + xfs_force_shutdown(log->l_mp, SHUTDOWN_LOG_IO_ERROR); xfs_alert(log->l_mp, "Failed to recover intents"); return error; } From c85cbb0b21a1de7f8128f56adcc91ec04bb9b4f5 Mon Sep 17 00:00:00 2001 From: Brian Foster Date: Fri, 29 Jul 2022 18:16:06 +0200 Subject: [PATCH 1756/1842] xfs: hold buffer across unpin and potential shutdown processing commit 84d8949e770745b16a7e8a68dcb1d0f3687bdee9 upstream. The special processing used to simulate a buffer I/O failure on fs shutdown has a difficult to reproduce race that can result in a use after free of the associated buffer. Consider a buffer that has been committed to the on-disk log and thus is AIL resident. The buffer lands on the writeback delwri queue, but is subsequently locked, committed and pinned by another transaction before submitted for I/O. At this point, the buffer is stuck on the delwri queue as it cannot be submitted for I/O until it is unpinned. A log checkpoint I/O failure occurs sometime later, which aborts the bli. 
The unpin handler is called with the aborted log item, drops the bli reference count, the pin count, and falls into the I/O failure simulation path. The potential problem here is that once the pin count falls to zero in ->iop_unpin(), xfsaild is free to retry delwri submission of the buffer at any time, before the unpin handler even completes. If delwri queue submission wins the race to the buffer lock, it observes the shutdown state and simulates the I/O failure itself. This releases both the bli and delwri queue holds and frees the buffer while xfs_buf_item_unpin() sits on xfs_buf_lock() waiting to run through the same failure sequence. This problem is rare and requires many iterations of fstest generic/019 (which simulates disk I/O failures) to reproduce. To avoid this problem, grab a hold on the buffer before the log item is unpinned if the associated item has been aborted and will require a simulated I/O failure. The hold is already required for the simulated I/O failure, so the ordering simply guarantees the unpin handler access to the buffer before it is unpinned and thus processed by the AIL. This particular ordering is required so long as the AIL does not acquire a reference on the bli, which is the long term solution to this problem. Signed-off-by: Brian Foster Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_buf_item.c | 37 +++++++++++++++++++++---------------- 1 file changed, 21 insertions(+), 16 deletions(-) diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c index 5d6535370f87..452af57731ed 100644 --- a/fs/xfs/xfs_buf_item.c +++ b/fs/xfs/xfs_buf_item.c @@ -393,17 +393,8 @@ xfs_buf_item_pin( } /* - * This is called to unpin the buffer associated with the buf log - * item which was previously pinned with a call to xfs_buf_item_pin(). - * - * Also drop the reference to the buf item for the current transaction. - * If the XFS_BLI_STALE flag is set and we are the last reference, - * then free up the buf log item and unlock the buffer. - * - * If the remove flag is set we are called from uncommit in the - * forced-shutdown path. If that is true and the reference count on - * the log item is going to drop to zero we need to free the item's - * descriptor in the transaction. + * This is called to unpin the buffer associated with the buf log item which + * was previously pinned with a call to xfs_buf_item_pin(). */ STATIC void xfs_buf_item_unpin( @@ -420,12 +411,26 @@ xfs_buf_item_unpin( trace_xfs_buf_item_unpin(bip); + /* + * Drop the bli ref associated with the pin and grab the hold required + * for the I/O simulation failure in the abort case. We have to do this + * before the pin count drops because the AIL doesn't acquire a bli + * reference. Therefore if the refcount drops to zero, the bli could + * still be AIL resident and the buffer submitted for I/O (and freed on + * completion) at any point before we return. This can be removed once + * the AIL properly holds a reference on the bli. 
+ */ freed = atomic_dec_and_test(&bip->bli_refcount); - + if (freed && !stale && remove) + xfs_buf_hold(bp); if (atomic_dec_and_test(&bp->b_pin_count)) wake_up_all(&bp->b_waiters); - if (freed && stale) { + /* nothing to do but drop the pin count if the bli is active */ + if (!freed) + return; + + if (stale) { ASSERT(bip->bli_flags & XFS_BLI_STALE); ASSERT(xfs_buf_islocked(bp)); ASSERT(bp->b_flags & XBF_STALE); @@ -468,13 +473,13 @@ xfs_buf_item_unpin( ASSERT(bp->b_log_item == NULL); } xfs_buf_relse(bp); - } else if (freed && remove) { + } else if (remove) { /* * The buffer must be locked and held by the caller to simulate - * an async I/O failure. + * an async I/O failure. We acquired the hold for this case + * before the buffer was unpinned. */ xfs_buf_lock(bp); - xfs_buf_hold(bp); bp->b_flags |= XBF_ASYNC; xfs_buf_ioend_fail(bp); } From c1268acaa0dd9bfc43f5ad23d56db66f9f03eb7d Mon Sep 17 00:00:00 2001 From: Brian Foster Date: Fri, 29 Jul 2022 18:16:07 +0200 Subject: [PATCH 1757/1842] xfs: remove dead stale buf unpin handling code commit e53d3aa0b605c49d780e1b2fd0b49dba4154f32b upstream. This code goes back to a time when transaction commits wrote directly to iclogs. The associated log items were pinned, written to the log, and then "uncommitted" if some part of the log write had failed. This uncommit sequence called an ->iop_unpin_remove() handler that was eventually folded into ->iop_unpin() via the remove parameter. The log subsystem has since changed significantly in that transactions commit to the CIL instead of direct to iclogs, though log items must still be aborted in the event of an eventual log I/O error. However, the context for a log item abort is now asynchronous from transaction commit, which means the committing transaction has been freed by this point in time and the transaction uncommit sequence of events is no longer relevant. Further, since stale buffers remain locked at transaction commit through unpin, we can be certain that the buffer is not associated with any transaction when the unpin callback executes. Remove this unused hunk of code and replace it with an assertion that the buffer is disassociated from transaction context. Signed-off-by: Brian Foster Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_buf_item.c | 21 ++------------------- 1 file changed, 2 insertions(+), 19 deletions(-) diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c index 452af57731ed..a3d5ecccfc2c 100644 --- a/fs/xfs/xfs_buf_item.c +++ b/fs/xfs/xfs_buf_item.c @@ -435,28 +435,11 @@ xfs_buf_item_unpin( ASSERT(xfs_buf_islocked(bp)); ASSERT(bp->b_flags & XBF_STALE); ASSERT(bip->__bli_format.blf_flags & XFS_BLF_CANCEL); + ASSERT(list_empty(&lip->li_trans)); + ASSERT(!bp->b_transp); trace_xfs_buf_item_unpin_stale(bip); - if (remove) { - /* - * If we are in a transaction context, we have to - * remove the log item from the transaction as we are - * about to release our reference to the buffer. If we - * don't, the unlock that occurs later in - * xfs_trans_uncommit() will try to reference the - * buffer which we no longer have a hold on. - */ - if (!list_empty(&lip->li_trans)) - xfs_trans_del_item(lip); - - /* - * Since the transaction no longer refers to the buffer, - * the buffer should no longer refer to the transaction. - */ - bp->b_transp = NULL; - } - /* * If we get called here because of an IO error, we may or may * not have the item on the AIL. 
xfs_trans_ail_delete() will From e5f9d4e0f895942ceb098a18250b50203c976819 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Fri, 29 Jul 2022 18:16:08 +0200 Subject: [PATCH 1758/1842] xfs: logging the on disk inode LSN can make it go backwards commit 32baa63d82ee3f5ab3bd51bae6bf7d1c15aed8c7 upstream. When we log an inode, we format the "log inode" core and set an LSN in that inode core. We do that via xfs_inode_item_format_core(), which calls: xfs_inode_to_log_dinode(ip, dic, ip->i_itemp->ili_item.li_lsn); to format the log inode. It writes the LSN from the inode item into the log inode, and if recovery decides the inode item needs to be replayed, it recovers the log inode LSN field and writes it into the on disk inode LSN field. Now this might seem like a reasonable thing to do, but it is wrong on multiple levels. Firstly, if the item is not yet in the AIL, item->li_lsn is zero. i.e. the first time the inode it is logged and formatted, the LSN we write into the log inode will be zero. If we only log it once, recovery will run and can write this zero LSN into the inode. This means that the next time the inode is logged and log recovery runs, it will *always* replay changes to the inode regardless of whether the inode is newer on disk than the version in the log and that violates the entire purpose of recording the LSN in the inode at writeback time (i.e. to stop it going backwards in time on disk during recovery). Secondly, if we commit the CIL to the journal so the inode item moves to the AIL, and then relog the inode, the LSN that gets stamped into the log inode will be the LSN of the inode's current location in the AIL, not it's age on disk. And it's not the LSN that will be associated with the current change. That means when log recovery replays this inode item, the LSN that ends up on disk is the LSN for the previous changes in the log, not the current changes being replayed. IOWs, after recovery the LSN on disk is not in sync with the LSN of the modifications that were replayed into the inode. This, again, violates the recovery ordering semantics that on-disk writeback LSNs provide. Hence the inode LSN in the log dinode is -always- invalid. Thirdly, recovery actually has the LSN of the log transaction it is replaying right at hand - it uses it to determine if it should replay the inode by comparing it to the on-disk inode's LSN. But it doesn't use that LSN to stamp the LSN into the inode which will be written back when the transaction is fully replayed. It uses the one in the log dinode, which we know is always going to be incorrect. Looking back at the change history, the inode logging was broken by commit 93f958f9c41f ("xfs: cull unnecessary icdinode fields") way back in 2016 by a stupid idiot who thought he knew how this code worked. i.e. me. That commit replaced an in memory di_lsn field that was updated only at inode writeback time from the inode item.li_lsn value - and hence always contained the same LSN that appeared in the on-disk inode - with a read of the inode item LSN at inode format time. CLearly these are not the same thing. Before 93f958f9c41f, the log recovery behaviour was irrelevant, because the LSN in the log inode always matched the on-disk LSN at the time the inode was logged, hence recovery of the transaction would never make the on-disk LSN in the inode go backwards or get out of sync. A symptom of the problem is this, caught from a failure of generic/482. 
Before log recovery, the inode has been allocated but never used: xfs_db> inode 393388 xfs_db> p core.magic = 0x494e core.mode = 0 .... v3.crc = 0x99126961 (correct) v3.change_count = 0 v3.lsn = 0 v3.flags2 = 0 v3.cowextsize = 0 v3.crtime.sec = Thu Jan 1 10:00:00 1970 v3.crtime.nsec = 0 After log recovery: xfs_db> p core.magic = 0x494e core.mode = 020444 .... v3.crc = 0x23e68f23 (correct) v3.change_count = 2 v3.lsn = 0 v3.flags2 = 0 v3.cowextsize = 0 v3.crtime.sec = Thu Jul 22 17:03:03 2021 v3.crtime.nsec = 751000000 ... You can see that the LSN of the on-disk inode is 0, even though it clearly has been written to disk. I point out this inode, because the generic/482 failure occurred because several adjacent inodes in this specific inode cluster were not replayed correctly and still appeared to be zero on disk when all the other metadata (inobt, finobt, directories, etc) indicated they should be allocated and written back. The fix for this is two-fold. The first is that we need to either revert the LSN changes in 93f958f9c41f or stop logging the inode LSN altogether. If we do the former, log recovery does not need to change but we add 8 bytes of memory per inode to store what is largely a write-only inode field. If we do the latter, log recovery needs to stamp the on-disk inode in the same manner that inode writeback does. I prefer the latter, because we shouldn't really be trying to log and replay changes to the on disk LSN as the on-disk value is the canonical source of the on-disk version of the inode. It also matches the way we recover buffer items - we create a buf_log_item that carries the current recovery transaction LSN that gets stamped into the buffer by the write verifier when it gets written back when the transaction is fully recovered. However, this might break log recovery on older kernels even more, so I'm going to simply ignore the logged value in recovery and stamp the on-disk inode with the LSN of the transaction being recovered that will trigger writeback on transaction recovery completion. This will ensure that the on-disk inode LSN always reflects the LSN of the last change that was written to disk, regardless of whether it comes from log recovery or runtime writeback. Fixes: 93f958f9c41f ("xfs: cull unnecessary icdinode fields") Signed-off-by: Dave Chinner Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/libxfs/xfs_log_format.h | 11 +++++++++- fs/xfs/xfs_inode_item_recover.c | 39 ++++++++++++++++++++++++--------- 2 files changed, 39 insertions(+), 11 deletions(-) diff --git a/fs/xfs/libxfs/xfs_log_format.h b/fs/xfs/libxfs/xfs_log_format.h index 8bd00da6d2a4..2f46ef3800aa 100644 --- a/fs/xfs/libxfs/xfs_log_format.h +++ b/fs/xfs/libxfs/xfs_log_format.h @@ -414,7 +414,16 @@ struct xfs_log_dinode { /* start of the extended dinode, writable fields */ uint32_t di_crc; /* CRC of the inode */ uint64_t di_changecount; /* number of attribute changes */ - xfs_lsn_t di_lsn; /* flush sequence */ + + /* + * The LSN we write to this field during formatting is not a reflection + * of the current on-disk LSN. It should never be used for recovery + * sequencing, nor should it be recovered into the on-disk inode at all. + * See xlog_recover_inode_commit_pass2() and xfs_log_dinode_to_disk() + * for details. 
+ */ + xfs_lsn_t di_lsn; + uint64_t di_flags2; /* more random flags */ uint32_t di_cowextsize; /* basic cow extent size for file */ uint8_t di_pad2[12]; /* more padding for future expansion */ diff --git a/fs/xfs/xfs_inode_item_recover.c b/fs/xfs/xfs_inode_item_recover.c index cb44f7653f03..538724f9f85c 100644 --- a/fs/xfs/xfs_inode_item_recover.c +++ b/fs/xfs/xfs_inode_item_recover.c @@ -145,7 +145,8 @@ xfs_log_dinode_to_disk_ts( STATIC void xfs_log_dinode_to_disk( struct xfs_log_dinode *from, - struct xfs_dinode *to) + struct xfs_dinode *to, + xfs_lsn_t lsn) { to->di_magic = cpu_to_be16(from->di_magic); to->di_mode = cpu_to_be16(from->di_mode); @@ -182,7 +183,7 @@ xfs_log_dinode_to_disk( to->di_flags2 = cpu_to_be64(from->di_flags2); to->di_cowextsize = cpu_to_be32(from->di_cowextsize); to->di_ino = cpu_to_be64(from->di_ino); - to->di_lsn = cpu_to_be64(from->di_lsn); + to->di_lsn = cpu_to_be64(lsn); memcpy(to->di_pad2, from->di_pad2, sizeof(to->di_pad2)); uuid_copy(&to->di_uuid, &from->di_uuid); to->di_flushiter = 0; @@ -261,16 +262,25 @@ xlog_recover_inode_commit_pass2( } /* - * If the inode has an LSN in it, recover the inode only if it's less - * than the lsn of the transaction we are replaying. Note: we still - * need to replay an owner change even though the inode is more recent - * than the transaction as there is no guarantee that all the btree - * blocks are more recent than this transaction, too. + * If the inode has an LSN in it, recover the inode only if the on-disk + * inode's LSN is older than the lsn of the transaction we are + * replaying. We can have multiple checkpoints with the same start LSN, + * so the current LSN being equal to the on-disk LSN doesn't necessarily + * mean that the on-disk inode is more recent than the change being + * replayed. + * + * We must check the current_lsn against the on-disk inode + * here because the we can't trust the log dinode to contain a valid LSN + * (see comment below before replaying the log dinode for details). + * + * Note: we still need to replay an owner change even though the inode + * is more recent than the transaction as there is no guarantee that all + * the btree blocks are more recent than this transaction, too. */ if (dip->di_version >= 3) { xfs_lsn_t lsn = be64_to_cpu(dip->di_lsn); - if (lsn && lsn != -1 && XFS_LSN_CMP(lsn, current_lsn) >= 0) { + if (lsn && lsn != -1 && XFS_LSN_CMP(lsn, current_lsn) > 0) { trace_xfs_log_recover_inode_skip(log, in_f); error = 0; goto out_owner_change; @@ -368,8 +378,17 @@ xlog_recover_inode_commit_pass2( goto out_release; } - /* recover the log dinode inode into the on disk inode */ - xfs_log_dinode_to_disk(ldip, dip); + /* + * Recover the log dinode inode into the on disk inode. + * + * The LSN in the log dinode is garbage - it can be zero or reflect + * stale in-memory runtime state that isn't coherent with the changes + * logged in this transaction or the changes written to the on-disk + * inode. Hence we write the current lSN into the inode because that + * matches what xfs_iflush() would write inode the inode when flushing + * the changes in this transaction. + */ + xfs_log_dinode_to_disk(ldip, dip, current_lsn); fields = in_f->ilf_fields; if (fields & XFS_ILOG_DEV) From 14b494b7aaf2cd220d1b7ccd9af8a8fe71c8e527 Mon Sep 17 00:00:00 2001 From: Dave Chinner Date: Fri, 29 Jul 2022 18:16:09 +0200 Subject: [PATCH 1759/1842] xfs: Enforce attr3 buffer recovery order commit d8f4c2d0398fa1d92cacf854daf80d21a46bfefc upstream. >From the department of "WTAF? How did we miss that!?"... 
When we are recovering a buffer, the first thing we do is check the buffer magic number and extract the LSN from the buffer. If the LSN is older than the current LSN, we replay the modification to it. If the metadata on disk is newer than the transaction in the log, we skip it. This is a fundamental v5 filesystem metadata recovery behaviour. generic/482 failed with an attribute writeback failure during log recovery. The write verifier caught the corruption before it got written to disk, and the attr buffer dump looked like: XFS (dm-3): Metadata corruption detected at xfs_attr3_leaf_verify+0x275/0x2e0, xfs_attr3_leaf block 0x19be8 XFS (dm-3): Unmount and run xfs_repair XFS (dm-3): First 128 bytes of corrupted metadata buffer: 00000000: 00 00 00 00 00 00 00 00 3b ee 00 00 4d 2a 01 e1 ........;...M*.. 00000010: 00 00 00 00 00 01 9b e8 00 00 00 01 00 00 05 38 ...............8 ^^^^^^^^^^^^^^^^^^^^^^^ 00000020: df 39 5e 51 58 ac 44 b6 8d c5 e7 10 44 09 bc 17 .9^QX.D.....D... 00000030: 00 00 00 00 00 02 00 83 00 03 00 cc 0f 24 01 00 .............$.. 00000040: 00 68 0e bc 0f c8 00 10 00 00 00 00 00 00 00 00 .h.............. 00000050: 00 00 3c 31 0f 24 01 00 00 00 3c 32 0f 88 01 00 ..<1.$....<2.... 00000060: 00 00 3c 33 0f d8 01 00 00 00 00 00 00 00 00 00 ..<3............ 00000070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ ..... The highlighted bytes are the LSN that was replayed into the buffer: 0x100000538. This is cycle 1, block 0x538. Prior to replay, that block on disk looks like this: $ sudo xfs_db -c "fsb 0x417d" -c "type attr3" -c p /dev/mapper/thin-vol hdr.info.hdr.forw = 0 hdr.info.hdr.back = 0 hdr.info.hdr.magic = 0x3bee hdr.info.crc = 0xb5af0bc6 (correct) hdr.info.bno = 105448 hdr.info.lsn = 0x100000900 ^^^^^^^^^^^ hdr.info.uuid = df395e51-58ac-44b6-8dc5-e7104409bc17 hdr.info.owner = 131203 hdr.count = 2 hdr.usedbytes = 120 hdr.firstused = 3796 hdr.holes = 1 hdr.freemap[0-2] = [base,size] Note the LSN stamped into the buffer on disk: 1/0x900. The version on disk is much newer than the log transaction that was being replayed. That's a bug, and should -never- happen. So I immediately went to look at xlog_recover_get_buf_lsn() to check that we handled the LSN correctly. I was wondering if there was a similar "two commits with the same start LSN skips the second replay" problem with buffers. I didn't get that far, because I found a much more basic, rudimentary bug: xlog_recover_get_buf_lsn() doesn't recognise buffers with XFS_ATTR3_LEAF_MAGIC set in them!!! IOWs, attr3 leaf buffers fall through the magic number checks unrecognised, so trigger the "recover immediately" behaviour instead of undergoing an LSN check. IOWs, we incorrectly replay ATTR3 leaf buffers and that causes silent on disk corruption of inode attribute forks and potentially other things.... Git history shows this is *another* zero day bug, this time introduced in commit 50d5c8d8e938 ("xfs: check LSN ordering for v5 superblocks during recovery") which failed to handle the attr3 leaf buffers in recovery. And we've failed to handle them ever since... Signed-off-by: Dave Chinner Reviewed-by: Christoph Hellwig Reviewed-by: Darrick J. Wong Signed-off-by: Darrick J. Wong Signed-off-by: Amir Goldstein Acked-by: Darrick J. 
Wong Signed-off-by: Greg Kroah-Hartman --- fs/xfs/xfs_buf_item_recover.c | 1 + 1 file changed, 1 insertion(+) diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c index 1d649462d731..b374c9cee117 100644 --- a/fs/xfs/xfs_buf_item_recover.c +++ b/fs/xfs/xfs_buf_item_recover.c @@ -796,6 +796,7 @@ xlog_recover_get_buf_lsn( switch (magicda) { case XFS_DIR3_LEAF1_MAGIC: case XFS_DIR3_LEAFN_MAGIC: + case XFS_ATTR3_LEAF_MAGIC: case XFS_DA3_NODE_MAGIC: lsn = be64_to_cpu(((struct xfs_da3_blkinfo *)blk)->lsn); uuid = &((struct xfs_da3_blkinfo *)blk)->uuid; From 545fc3524ccc454d9f24273ee1c3aeadd7ab1421 Mon Sep 17 00:00:00 2001 From: Thadeu Lima de Souza Cascardo Date: Thu, 28 Jul 2022 09:26:02 -0300 Subject: [PATCH 1760/1842] x86/bugs: Do not enable IBPB at firmware entry when IBPB is not available commit 571c30b1a88465a1c85a6f7762609939b9085a15 upstream. Some cloud hypervisors do not provide IBPB on very recent CPU processors, including AMD processors affected by Retbleed. Using IBPB before firmware calls on such systems would cause a GPF at boot like the one below. Do not enable such calls when IBPB support is not present. EFI Variables Facility v0.08 2004-May-17 general protection fault, maybe for address 0x1: 0000 [#1] PREEMPT SMP NOPTI CPU: 0 PID: 24 Comm: kworker/u2:1 Not tainted 5.19.0-rc8+ #7 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Workqueue: efi_rts_wq efi_call_rts RIP: 0010:efi_call_rts Code: e8 37 33 58 ff 41 bf 48 00 00 00 49 89 c0 44 89 f9 48 83 c8 01 4c 89 c2 48 c1 ea 20 66 90 b9 49 00 00 00 b8 01 00 00 00 31 d2 <0f> 30 e8 7b 9f 5d ff e8 f6 f8 ff ff 4c 89 f1 4c 89 ea 4c 89 e6 48 RSP: 0018:ffffb373800d7e38 EFLAGS: 00010246 RAX: 0000000000000001 RBX: 0000000000000006 RCX: 0000000000000049 RDX: 0000000000000000 RSI: ffff94fbc19d8fe0 RDI: ffff94fbc1b2b300 RBP: ffffb373800d7e70 R08: 0000000000000000 R09: 0000000000000000 R10: 000000000000000b R11: 000000000000000b R12: ffffb3738001fd78 R13: ffff94fbc2fcfc00 R14: ffffb3738001fd80 R15: 0000000000000048 FS: 0000000000000000(0000) GS:ffff94fc3da00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: ffff94fc30201000 CR3: 000000006f610000 CR4: 00000000000406f0 Call Trace: ? __wake_up process_one_work worker_thread ? rescuer_thread kthread ? kthread_complete_and_exit ret_from_fork Modules linked in: Fixes: 28a99e95f55c ("x86/amd: Use IBPB for firmware calls") Reported-by: Dimitri John Ledkov Signed-off-by: Thadeu Lima de Souza Cascardo Signed-off-by: Borislav Petkov Cc: Link: https://lore.kernel.org/r/20220728122602.2500509-1-cascardo@canonical.com Signed-off-by: Greg Kroah-Hartman --- arch/x86/kernel/cpu/bugs.c | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 7896b67dda42..2e5762faf774 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1476,6 +1476,7 @@ static void __init spectre_v2_select_mitigation(void) * enable IBRS around firmware calls. */ if (boot_cpu_has_bug(X86_BUG_RETBLEED) && + boot_cpu_has(X86_FEATURE_IBPB) && (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) { From 6aad811b37eeeba902b14cc4ab698d2b37bb4fb9 Mon Sep 17 00:00:00 2001 From: Lorenz Bauer Date: Mon, 1 Aug 2022 15:29:14 +0800 Subject: [PATCH 1761/1842] bpf: Consolidate shared test timing code commit 607b9cc92bd7208338d714a22b8082fe83bcb177 upstream. Share the timing / signal interruption logic between different implementations of PROG_TEST_RUN. 
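
As a stand-alone illustration (not the kernel helper itself), the consolidated loop has roughly the shape below in plain user-space C; the helper name timer_continue() and the -1 error code standing in for -EINTR are purely illustrative:

#include <signal.h>
#include <stdbool.h>
#include <stdio.h>

static volatile sig_atomic_t got_signal;

static void on_signal(int sig)
{
	(void)sig;
	got_signal = 1;
}

static bool timer_continue(unsigned int *i, unsigned int repeat, int *err)
{
	if (++(*i) >= repeat) {	/* finished: report success first */
		*err = 0;
		return false;
	}
	if (got_signal) {	/* only an interruption mid-run aborts */
		*err = -1;	/* stands in for -EINTR */
		return false;
	}
	return true;
}

int main(void)
{
	unsigned int i = 0;
	int err = 0;

	signal(SIGINT, on_signal);
	do {
		/* run the program under test once */
	} while (timer_continue(&i, 1000000, &err));

	printf("err=%d after %u iteration(s)\n", err, i);
	return 0;
}

One continue() helper owns the exit logic for every PROG_TEST_RUN flavour, which is also where the behaviour change described next comes from.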
There is a change in behaviour as well. We check the loop exit condition before checking for pending signals. This resolves an edge case where a signal arrives during the last iteration. Instead of aborting with EINTR we return the successful result to user space. Signed-off-by: Lorenz Bauer Signed-off-by: Alexei Starovoitov Acked-by: Andrii Nakryiko Link: https://lore.kernel.org/bpf/20210303101816.36774-2-lmb@cloudflare.com [dtcccc: fix conflicts in bpf_test_run()] Signed-off-by: Tianchen Ding Signed-off-by: Greg Kroah-Hartman --- net/bpf/test_run.c | 140 +++++++++++++++++++++++++-------------------- 1 file changed, 78 insertions(+), 62 deletions(-) diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index eb684f31fd69..d2a4f04df1da 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -16,14 +16,78 @@ #define CREATE_TRACE_POINTS #include +struct bpf_test_timer { + enum { NO_PREEMPT, NO_MIGRATE } mode; + u32 i; + u64 time_start, time_spent; +}; + +static void bpf_test_timer_enter(struct bpf_test_timer *t) + __acquires(rcu) +{ + rcu_read_lock(); + if (t->mode == NO_PREEMPT) + preempt_disable(); + else + migrate_disable(); + + t->time_start = ktime_get_ns(); +} + +static void bpf_test_timer_leave(struct bpf_test_timer *t) + __releases(rcu) +{ + t->time_start = 0; + + if (t->mode == NO_PREEMPT) + preempt_enable(); + else + migrate_enable(); + rcu_read_unlock(); +} + +static bool bpf_test_timer_continue(struct bpf_test_timer *t, u32 repeat, int *err, u32 *duration) + __must_hold(rcu) +{ + t->i++; + if (t->i >= repeat) { + /* We're done. */ + t->time_spent += ktime_get_ns() - t->time_start; + do_div(t->time_spent, t->i); + *duration = t->time_spent > U32_MAX ? U32_MAX : (u32)t->time_spent; + *err = 0; + goto reset; + } + + if (signal_pending(current)) { + /* During iteration: we've been cancelled, abort. */ + *err = -EINTR; + goto reset; + } + + if (need_resched()) { + /* During iteration: we need to reschedule between runs. */ + t->time_spent += ktime_get_ns() - t->time_start; + bpf_test_timer_leave(t); + cond_resched(); + bpf_test_timer_enter(t); + } + + /* Do another round. */ + return true; + +reset: + t->i = 0; + return false; +} + static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, u32 *retval, u32 *time, bool xdp) { struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE] = { NULL }; + struct bpf_test_timer t = { NO_MIGRATE }; enum bpf_cgroup_storage_type stype; - u64 time_start, time_spent = 0; - int ret = 0; - u32 i; + int ret; for_each_cgroup_storage_type(stype) { storage[stype] = bpf_cgroup_storage_alloc(prog, stype); @@ -38,10 +102,8 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, if (!repeat) repeat = 1; - rcu_read_lock(); - migrate_disable(); - time_start = ktime_get_ns(); - for (i = 0; i < repeat; i++) { + bpf_test_timer_enter(&t); + do { ret = bpf_cgroup_storage_set(storage); if (ret) break; @@ -53,29 +115,8 @@ static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, bpf_cgroup_storage_unset(); - if (signal_pending(current)) { - ret = -EINTR; - break; - } - - if (need_resched()) { - time_spent += ktime_get_ns() - time_start; - migrate_enable(); - rcu_read_unlock(); - - cond_resched(); - - rcu_read_lock(); - migrate_disable(); - time_start = ktime_get_ns(); - } - } - time_spent += ktime_get_ns() - time_start; - migrate_enable(); - rcu_read_unlock(); - - do_div(time_spent, repeat); - *time = time_spent > U32_MAX ? 
U32_MAX : (u32)time_spent; + } while (bpf_test_timer_continue(&t, repeat, &ret, time)); + bpf_test_timer_leave(&t); for_each_cgroup_storage_type(stype) bpf_cgroup_storage_free(storage[stype]); @@ -688,18 +729,17 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, const union bpf_attr *kattr, union bpf_attr __user *uattr) { + struct bpf_test_timer t = { NO_PREEMPT }; u32 size = kattr->test.data_size_in; struct bpf_flow_dissector ctx = {}; u32 repeat = kattr->test.repeat; struct bpf_flow_keys *user_ctx; struct bpf_flow_keys flow_keys; - u64 time_start, time_spent = 0; const struct ethhdr *eth; unsigned int flags = 0; u32 retval, duration; void *data; int ret; - u32 i; if (prog->type != BPF_PROG_TYPE_FLOW_DISSECTOR) return -EINVAL; @@ -735,39 +775,15 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, ctx.data = data; ctx.data_end = (__u8 *)data + size; - rcu_read_lock(); - preempt_disable(); - time_start = ktime_get_ns(); - for (i = 0; i < repeat; i++) { + bpf_test_timer_enter(&t); + do { retval = bpf_flow_dissect(prog, &ctx, eth->h_proto, ETH_HLEN, size, flags); + } while (bpf_test_timer_continue(&t, repeat, &ret, &duration)); + bpf_test_timer_leave(&t); - if (signal_pending(current)) { - preempt_enable(); - rcu_read_unlock(); - - ret = -EINTR; - goto out; - } - - if (need_resched()) { - time_spent += ktime_get_ns() - time_start; - preempt_enable(); - rcu_read_unlock(); - - cond_resched(); - - rcu_read_lock(); - preempt_disable(); - time_start = ktime_get_ns(); - } - } - time_spent += ktime_get_ns() - time_start; - preempt_enable(); - rcu_read_unlock(); - - do_div(time_spent, repeat); - duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent; + if (ret < 0) + goto out; ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys), retval, duration); From 6d3fad2b44eb9d226a896d1c93909f0fd2e1b9ea Mon Sep 17 00:00:00 2001 From: Lorenz Bauer Date: Mon, 1 Aug 2022 15:29:15 +0800 Subject: [PATCH 1762/1842] bpf: Add PROG_TEST_RUN support for sk_lookup programs commit 7c32e8f8bc33a5f4b113a630857e46634e3e143b upstream. Allow to pass sk_lookup programs to PROG_TEST_RUN. User space provides the full bpf_sk_lookup struct as context. Since the context includes a socket pointer that can't be exposed to user space we define that PROG_TEST_RUN returns the cookie of the selected socket or zero in place of the socket pointer. We don't support testing programs that select a reuseport socket, since this would mean running another (unrelated) BPF program from the sk_lookup test handler. 
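
To make the user-visible contract concrete, here is a hypothetical user-space fragment (not part of the patch): it assumes a UAPI header that already carries this patch's cookie union member, a prog_fd loaded elsewhere (e.g. via libbpf), and does only minimal error handling. It feeds a bpf_sk_lookup context to BPF_PROG_TEST_RUN and reads the selected socket's cookie back, zero meaning no socket was selected:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <linux/bpf.h>

static int sk_lookup_test_run(int prog_fd)
{
	struct bpf_sk_lookup ctx = {
		.family      = AF_INET,
		.protocol    = IPPROTO_TCP,
		.remote_ip4  = htonl(INADDR_LOOPBACK),	/* network byte order */
		.remote_port = htons(60000),		/* low 16 bits used */
		.local_ip4   = htonl(INADDR_LOOPBACK),
		.local_port  = 7007,			/* host byte order */
	};
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.test.prog_fd      = prog_fd;
	attr.test.ctx_in       = (uintptr_t)&ctx;
	attr.test.ctx_size_in  = sizeof(ctx);
	attr.test.ctx_out      = (uintptr_t)&ctx;
	attr.test.ctx_size_out = sizeof(ctx);
	attr.test.repeat       = 1;

	if (syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr, sizeof(attr)) < 0)
		return -errno;

	/* A zero cookie means the program did not select a socket. */
	printf("retval=%u cookie=%llu\n", attr.test.retval,
	       (unsigned long long)ctx.cookie);
	return 0;
}

In practice a test harness would usually go through libbpf rather than the raw syscall, but the context-in/context-out flow is the same.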
Signed-off-by: Lorenz Bauer Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20210303101816.36774-3-lmb@cloudflare.com Signed-off-by: Tianchen Ding Signed-off-by: Greg Kroah-Hartman --- include/linux/bpf.h | 10 ++++ include/uapi/linux/bpf.h | 5 +- net/bpf/test_run.c | 105 +++++++++++++++++++++++++++++++++ net/core/filter.c | 1 + tools/include/uapi/linux/bpf.h | 5 +- 5 files changed, 124 insertions(+), 2 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index f21bc441e3fa..b010d45a1ecd 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1457,6 +1457,9 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, int bpf_prog_test_run_raw_tp(struct bpf_prog *prog, const union bpf_attr *kattr, union bpf_attr __user *uattr); +int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, + const union bpf_attr *kattr, + union bpf_attr __user *uattr); bool btf_ctx_access(int off, int size, enum bpf_access_type type, const struct bpf_prog *prog, struct bpf_insn_access_aux *info); @@ -1671,6 +1674,13 @@ static inline int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, return -ENOTSUPP; } +static inline int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, + const union bpf_attr *kattr, + union bpf_attr __user *uattr) +{ + return -ENOTSUPP; +} + static inline void bpf_map_put(struct bpf_map *map) { } diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 0f39fdcb2273..2a234023821e 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -5007,7 +5007,10 @@ struct bpf_pidns_info { /* User accessible data for SK_LOOKUP programs. Add new fields at the end. */ struct bpf_sk_lookup { - __bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */ + union { + __bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */ + __u64 cookie; /* Non-zero if socket was selected in PROG_TEST_RUN */ + }; __u32 family; /* Protocol family (AF_INET, AF_INET6) */ __u32 protocol; /* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */ diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index d2a4f04df1da..f8b231bbbe38 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -10,8 +10,10 @@ #include #include #include +#include #include #include +#include #define CREATE_TRACE_POINTS #include @@ -796,3 +798,106 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog, kfree(data); return ret; } + +int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr, + union bpf_attr __user *uattr) +{ + struct bpf_test_timer t = { NO_PREEMPT }; + struct bpf_prog_array *progs = NULL; + struct bpf_sk_lookup_kern ctx = {}; + u32 repeat = kattr->test.repeat; + struct bpf_sk_lookup *user_ctx; + u32 retval, duration; + int ret = -EINVAL; + + if (prog->type != BPF_PROG_TYPE_SK_LOOKUP) + return -EINVAL; + + if (kattr->test.flags || kattr->test.cpu) + return -EINVAL; + + if (kattr->test.data_in || kattr->test.data_size_in || kattr->test.data_out || + kattr->test.data_size_out) + return -EINVAL; + + if (!repeat) + repeat = 1; + + user_ctx = bpf_ctx_init(kattr, sizeof(*user_ctx)); + if (IS_ERR(user_ctx)) + return PTR_ERR(user_ctx); + + if (!user_ctx) + return -EINVAL; + + if (user_ctx->sk) + goto out; + + if (!range_is_zero(user_ctx, offsetofend(typeof(*user_ctx), local_port), sizeof(*user_ctx))) + goto out; + + if (user_ctx->local_port > U16_MAX || user_ctx->remote_port > U16_MAX) { + ret = -ERANGE; + goto out; + } + + ctx.family = (u16)user_ctx->family; + ctx.protocol = (u16)user_ctx->protocol; + ctx.dport = (u16)user_ctx->local_port; + 
ctx.sport = (__force __be16)user_ctx->remote_port; + + switch (ctx.family) { + case AF_INET: + ctx.v4.daddr = (__force __be32)user_ctx->local_ip4; + ctx.v4.saddr = (__force __be32)user_ctx->remote_ip4; + break; + +#if IS_ENABLED(CONFIG_IPV6) + case AF_INET6: + ctx.v6.daddr = (struct in6_addr *)user_ctx->local_ip6; + ctx.v6.saddr = (struct in6_addr *)user_ctx->remote_ip6; + break; +#endif + + default: + ret = -EAFNOSUPPORT; + goto out; + } + + progs = bpf_prog_array_alloc(1, GFP_KERNEL); + if (!progs) { + ret = -ENOMEM; + goto out; + } + + progs->items[0].prog = prog; + + bpf_test_timer_enter(&t); + do { + ctx.selected_sk = NULL; + retval = BPF_PROG_SK_LOOKUP_RUN_ARRAY(progs, ctx, BPF_PROG_RUN); + } while (bpf_test_timer_continue(&t, repeat, &ret, &duration)); + bpf_test_timer_leave(&t); + + if (ret < 0) + goto out; + + user_ctx->cookie = 0; + if (ctx.selected_sk) { + if (ctx.selected_sk->sk_reuseport && !ctx.no_reuseport) { + ret = -EOPNOTSUPP; + goto out; + } + + user_ctx->cookie = sock_gen_cookie(ctx.selected_sk); + } + + ret = bpf_test_finish(kattr, uattr, NULL, 0, retval, duration); + if (!ret) + ret = bpf_ctx_finish(kattr, uattr, user_ctx, sizeof(*user_ctx)); + +out: + bpf_prog_array_free(progs); + kfree(user_ctx); + return ret; +} diff --git a/net/core/filter.c b/net/core/filter.c index e2b491665775..815edf7bc439 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -10334,6 +10334,7 @@ static u32 sk_lookup_convert_ctx_access(enum bpf_access_type type, } const struct bpf_prog_ops sk_lookup_prog_ops = { + .test_run = bpf_prog_test_run_sk_lookup, }; const struct bpf_verifier_ops sk_lookup_verifier_ops = { diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index e440cd7f32a6..b9ee2ded381a 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -5006,7 +5006,10 @@ struct bpf_pidns_info { /* User accessible data for SK_LOOKUP programs. Add new fields at the end. */ struct bpf_sk_lookup { - __bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */ + union { + __bpf_md_ptr(struct bpf_sock *, sk); /* Selected socket */ + __u64 cookie; /* Non-zero if socket was selected in PROG_TEST_RUN */ + }; __u32 family; /* Protocol family (AF_INET, AF_INET6) */ __u32 protocol; /* IP protocol (IPPROTO_TCP, IPPROTO_UDP) */ From 4bfc9dc60873923ffa64ee77084bac55031a30a0 Mon Sep 17 00:00:00 2001 From: Lorenz Bauer Date: Mon, 1 Aug 2022 15:29:16 +0800 Subject: [PATCH 1763/1842] selftests: bpf: Don't run sk_lookup in verifier tests commit b4f894633fa14d7d46ba7676f950b90a401504bb upstream. sk_lookup doesn't allow setting data_in for bpf_prog_run. This doesn't play well with the verifier tests, since they always set a 64 byte input buffer. Allow not running verifier tests by setting bpf_test.runs to a negative value and don't run the ctx access case for sk_lookup. We have dedicated ctx access tests so skipping here doesn't reduce coverage. 
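
The sentinel is easiest to see in isolation. The toy program below is purely illustrative (not the selftest code) and models the widened runs field: a negative value means "load and verify only", zero keeps the old default of a single execution:

#include <stdio.h>

struct test_case {
	const char *name;
	int runs;	/* < 0: load/verify only, 0: default one run, > 0: explicit */
};

static void run_test(const struct test_case *t)
{
	int runs = t->runs;

	if (runs < 0) {
		/* e.g. sk_lookup: data_in is rejected, so never execute */
		printf("%-16s verifier only, execution skipped\n", t->name);
		return;
	}
	if (runs == 0)
		runs = 1;
	printf("%-16s executing %d run(s)\n", t->name, runs);
}

int main(void)
{
	const struct test_case cases[] = {
		{ "ctx_sk_lookup",   -1 },
		{ "some_other_test",  0 },
	};
	size_t i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		run_test(&cases[i]);
	return 0;
}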
Signed-off-by: Lorenz Bauer Signed-off-by: Alexei Starovoitov Link: https://lore.kernel.org/bpf/20210303101816.36774-6-lmb@cloudflare.com Signed-off-by: Tianchen Ding Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/bpf/test_verifier.c | 4 ++-- tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c | 1 + 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index a4c55fcb0e7b..0fb92d9a319b 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -100,7 +100,7 @@ struct bpf_test { enum bpf_prog_type prog_type; uint8_t flags; void (*fill_helper)(struct bpf_test *self); - uint8_t runs; + int runs; #define bpf_testdata_struct_t \ struct { \ uint32_t retval, retval_unpriv; \ @@ -1054,7 +1054,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv, run_errs = 0; run_successes = 0; - if (!alignment_prevented_execution && fd_prog >= 0) { + if (!alignment_prevented_execution && fd_prog >= 0 && test->runs >= 0) { uint32_t expected_val; int i; diff --git a/tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c b/tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c index 2ad5f974451c..fd3b62a084b9 100644 --- a/tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c +++ b/tools/testing/selftests/bpf/verifier/ctx_sk_lookup.c @@ -239,6 +239,7 @@ .result = ACCEPT, .prog_type = BPF_PROG_TYPE_SK_LOOKUP, .expected_attach_type = BPF_SK_LOOKUP, + .runs = -1, }, /* invalid 8-byte reads from a 4-byte fields in bpf_sk_lookup */ { From 4fd9cb57a3f5e611efde7772643134385de3a5a6 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Wed, 3 Aug 2022 12:00:52 +0200 Subject: [PATCH 1764/1842] Linux 5.10.135 Link: https://lore.kernel.org/r/20220801114133.641770326@linuxfoundation.org Tested-by: Jon Hunter Tested-by: Florian Fainelli Tested-by: Linux Kernel Functional Testing Tested-by: Shuah Khan Tested-by: Guenter Roeck Tested-by: Rudi Heitbaum Tested-by: Pavel Machek (CIP) Tested-by: Sudip Mukherjee Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 00dddc2ac804..5f4dbcb43307 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 134 +SUBLEVEL = 135 EXTRAVERSION = NAME = Dare mighty things From 45b69848a2fea11c03f3a54241416e36eb94e38c Mon Sep 17 00:00:00 2001 From: Ben Hutchings Date: Sat, 23 Jul 2022 17:22:47 +0200 Subject: [PATCH 1765/1842] x86/speculation: Make all RETbleed mitigations 64-bit only commit b648ab487f31bc4c38941bc770ea97fe394304bb upstream. The mitigations for RETBleed are currently ineffective on x86_32 since entry_32.S does not use the required macros. However, for an x86_32 target, the kconfig symbols for them are still enabled by default and /sys/devices/system/cpu/vulnerabilities/retbleed will wrongly report that mitigations are in place. Make all of these symbols depend on X86_64, and only enable RETHUNK by default on X86_64. 
Fixes: f43b9876e857 ("x86/retbleed: Add fine grained Kconfig knobs") Signed-off-by: Ben Hutchings Signed-off-by: Borislav Petkov Cc: Link: https://lore.kernel.org/r/YtwSR3NNsWp1ohfV@decadent.org.uk [bwh: Backported to 5.10/5.15/5.18: adjust context] Signed-off-by: Ben Hutchings Signed-off-by: Greg Kroah-Hartman --- arch/x86/Kconfig | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 16c045906b2a..159646da3c6b 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -2447,7 +2447,7 @@ config RETPOLINE config RETHUNK bool "Enable return-thunks" depends on RETPOLINE && CC_HAS_RETURN_THUNK - default y + default y if X86_64 help Compile the kernel with the return-thunks compiler option to guard against kernel-to-user data leaks by avoiding return speculation. @@ -2456,21 +2456,21 @@ config RETHUNK config CPU_UNRET_ENTRY bool "Enable UNRET on kernel entry" - depends on CPU_SUP_AMD && RETHUNK + depends on CPU_SUP_AMD && RETHUNK && X86_64 default y help Compile the kernel with support for the retbleed=unret mitigation. config CPU_IBPB_ENTRY bool "Enable IBPB on kernel entry" - depends on CPU_SUP_AMD + depends on CPU_SUP_AMD && X86_64 default y help Compile the kernel with support for the retbleed=ibpb mitigation. config CPU_IBRS_ENTRY bool "Enable IBRS on kernel entry" - depends on CPU_SUP_INTEL + depends on CPU_SUP_INTEL && X86_64 default y help Compile the kernel with support for the spectre_v2=ibrs mitigation. From 4f3b852336602ee37876494077efb3f23afd5ba3 Mon Sep 17 00:00:00 2001 From: Tetsuo Handa Date: Mon, 1 Aug 2022 18:59:07 +0300 Subject: [PATCH 1766/1842] ath9k_htc: fix NULL pointer dereference at ath9k_htc_rxep() commit b0ec7e55fce65f125bd1d7f02e2dc4de62abee34 upstream. syzbot is reporting lockdep warning followed by kernel panic at ath9k_htc_rxep() [1], for ath9k_htc_rxep() depends on ath9k_rx_init() being already completed. Since ath9k_htc_rxep() is set by ath9k_htc_connect_svc(WMI_BEACON_SVC) from ath9k_init_htc_services(), it is possible that ath9k_htc_rxep() is called via timer interrupt before ath9k_rx_init() from ath9k_init_device() is called. Since we can't call ath9k_init_device() before ath9k_init_htc_services(), let's hold ath9k_htc_rxep() no-op until ath9k_rx_init() completes. 
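
The ordering requirement can be sketched outside the driver. The snippet below is a user-space analogue only (not ath9k code), using C11 release/acquire in place of the driver's smp_wmb()/data_race() pair: the flag is published after the state it guards, and the consumer drops early events until it observes the flag:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct rx_state {
	int *buffers;			/* set up during init */
	size_t nbuffers;
	atomic_bool initialized;	/* published last */
};

static void rx_init(struct rx_state *rx, int *bufs, size_t n)
{
	rx->buffers = bufs;
	rx->nbuffers = n;
	/* Make the setup above visible before the flag (smp_wmb() analogue). */
	atomic_store_explicit(&rx->initialized, true, memory_order_release);
}

static bool rx_handle_event(struct rx_state *rx)
{
	/* An "interrupt" that fires before init finished is simply dropped. */
	if (!atomic_load_explicit(&rx->initialized, memory_order_acquire))
		return false;

	return rx->nbuffers > 0;	/* safe: init is fully visible here */
}

int main(void)
{
	static int bufs[4];
	struct rx_state rx = { 0 };

	printf("before init: handled=%d\n", rx_handle_event(&rx));
	rx_init(&rx, bufs, 4);
	printf("after init:  handled=%d\n", rx_handle_event(&rx));
	return 0;
}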
Link: https://syzkaller.appspot.com/bug?extid=4d2d56175b934b9a7bf9 [1] Reported-by: syzbot Signed-off-by: Tetsuo Handa Tested-by: syzbot Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/2b88f416-b2cb-7a18-d688-951e6dc3fe92@i-love.sakura.ne.jp Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- drivers/net/wireless/ath/ath9k/htc.h | 1 + drivers/net/wireless/ath/ath9k/htc_drv_txrx.c | 8 ++++++++ 2 files changed, 9 insertions(+) diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h index 0a1634238e67..4f71e962279a 100644 --- a/drivers/net/wireless/ath/ath9k/htc.h +++ b/drivers/net/wireless/ath/ath9k/htc.h @@ -281,6 +281,7 @@ struct ath9k_htc_rxbuf { struct ath9k_htc_rx { struct list_head rxbuf; spinlock_t rxbuflock; + bool initialized; }; #define ATH9K_HTC_TX_CLEANUP_INTERVAL 50 /* ms */ diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c index 30ddf333e04d..592034ea4b68 100644 --- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c +++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c @@ -1133,6 +1133,10 @@ void ath9k_htc_rxep(void *drv_priv, struct sk_buff *skb, struct ath9k_htc_rxbuf *rxbuf = NULL, *tmp_buf = NULL; unsigned long flags; + /* Check if ath9k_rx_init() completed. */ + if (!data_race(priv->rx.initialized)) + goto err; + spin_lock_irqsave(&priv->rx.rxbuflock, flags); list_for_each_entry(tmp_buf, &priv->rx.rxbuf, list) { if (!tmp_buf->in_process) { @@ -1188,6 +1192,10 @@ int ath9k_rx_init(struct ath9k_htc_priv *priv) list_add_tail(&rxbuf->list, &priv->rx.rxbuf); } + /* Allow ath9k_htc_rxep() to operate. */ + smp_wmb(); + priv->rx.initialized = true; + return 0; err: From 78c8397132dd4735ac6a7b5a651302f0b9f264ad Mon Sep 17 00:00:00 2001 From: Tetsuo Handa Date: Mon, 1 Aug 2022 18:59:08 +0300 Subject: [PATCH 1767/1842] ath9k_htc: fix NULL pointer dereference at ath9k_htc_tx_get_packet() commit 8b3046abc99eefe11438090bcc4ec3a3994b55d0 upstream. syzbot is reporting lockdep warning at ath9k_wmi_event_tasklet() followed by kernel panic at get_htc_epid_queue() from ath9k_htc_tx_get_packet() from ath9k_htc_txstatus() [1], for ath9k_wmi_event_tasklet(WMI_TXSTATUS_EVENTID) depends on spin_lock_init() from ath9k_init_priv() being already completed. Since ath9k_wmi_event_tasklet() is set by ath9k_init_wmi() from ath9k_htc_probe_device(), it is possible that ath9k_wmi_event_tasklet() is called via tasklet interrupt before spin_lock_init() from ath9k_init_priv() from ath9k_init_device() from ath9k_htc_probe_device() is called. Let's hold ath9k_wmi_event_tasklet(WMI_TXSTATUS_EVENTID) no-op until ath9k_tx_init() completes. 
Link: https://syzkaller.appspot.com/bug?extid=31d54c60c5b254d6f75b [1] Reported-by: syzbot Signed-off-by: Tetsuo Handa Tested-by: syzbot Signed-off-by: Kalle Valo Link: https://lore.kernel.org/r/77b76ac8-2bee-6444-d26c-8c30858b8daa@i-love.sakura.ne.jp Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- drivers/net/wireless/ath/ath9k/htc.h | 1 + drivers/net/wireless/ath/ath9k/htc_drv_txrx.c | 5 +++++ drivers/net/wireless/ath/ath9k/wmi.c | 4 ++++ 3 files changed, 10 insertions(+) diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h index 4f71e962279a..6b45e63fae4b 100644 --- a/drivers/net/wireless/ath/ath9k/htc.h +++ b/drivers/net/wireless/ath/ath9k/htc.h @@ -306,6 +306,7 @@ struct ath9k_htc_tx { DECLARE_BITMAP(tx_slot, MAX_TX_BUF_NUM); struct timer_list cleanup_timer; spinlock_t tx_lock; + bool initialized; }; struct ath9k_htc_tx_ctl { diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c index 592034ea4b68..43a743ec9d9e 100644 --- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c +++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c @@ -808,6 +808,11 @@ int ath9k_tx_init(struct ath9k_htc_priv *priv) skb_queue_head_init(&priv->tx.data_vi_queue); skb_queue_head_init(&priv->tx.data_vo_queue); skb_queue_head_init(&priv->tx.tx_failed); + + /* Allow ath9k_wmi_event_tasklet(WMI_TXSTATUS_EVENTID) to operate. */ + smp_wmb(); + priv->tx.initialized = true; + return 0; } diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c index fe29ad4b9023..f315c54bd3ac 100644 --- a/drivers/net/wireless/ath/ath9k/wmi.c +++ b/drivers/net/wireless/ath/ath9k/wmi.c @@ -169,6 +169,10 @@ void ath9k_wmi_event_tasklet(struct tasklet_struct *t) &wmi->drv_priv->fatal_work); break; case WMI_TXSTATUS_EVENTID: + /* Check if ath9k_tx_init() completed. */ + if (!data_race(priv->tx.initialized)) + break; + spin_lock_bh(&priv->tx.tx_lock); if (priv->tx.flags & ATH9K_HTC_OP_TX_DRAIN) { spin_unlock_bh(&priv->tx.tx_lock); From 042fb1c281f357d58308366b5e2ddd8e5f1ad384 Mon Sep 17 00:00:00 2001 From: Jakub Sitnicki Date: Mon, 1 Aug 2022 17:57:02 +0300 Subject: [PATCH 1768/1842] selftests/bpf: Extend verifier and bpf_sock tests for dst_port loads commit 8f50f16ff39dd4e2d43d1548ca66925652f8aff7 upstream. Add coverage to the verifier tests and tests for reading bpf_sock fields to ensure that 32-bit, 16-bit, and 8-bit loads from dst_port field are allowed only at intended offsets and produce expected values. While 16-bit and 8-bit access to dst_port field is straight-forward, 32-bit wide loads need be allowed and produce a zero-padded 16-bit value for backward compatibility. 
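
A plain user-space model of the new layout (illustrative only, not BPF or selftest code) shows why all three access widths line up: the 16-bit big-endian port is followed by 16 bits of zero padding, so the legacy 32-bit read remains legal and simply sees the padding:

#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Layout mirroring the UAPI change: __be16 dst_port + 16 bits of zero
 * padding, so an old 32-bit load keeps working. */
struct sock_view {
	uint16_t dst_port;	/* network byte order */
	uint16_t pad;		/* zero padding */
};

int main(void)
{
	struct sock_view v = { .dst_port = htons(0xcafe), .pad = 0 };
	uint32_t word;
	uint16_t half;
	uint8_t *byte = (uint8_t *)&v;

	memcpy(&word, &v, sizeof(word));
	memcpy(&half, &v.dst_port, sizeof(half));

	assert(word == htonl(0xcafe0000));	/* old 32-bit read, zero padded */
	assert(half == htons(0xcafe));		/* natural 16-bit read */
	assert(byte[0] == 0xca && byte[1] == 0xfe);
	puts("dst_port loads behave as expected");
	return 0;
}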
Signed-off-by: Jakub Sitnicki Link: https://lore.kernel.org/r/20220130115518.213259-3-jakub@cloudflare.com Signed-off-by: Alexei Starovoitov [OP: backport to 5.10: adjusted context in sock_fields.c] Signed-off-by: Ovidiu Panait Signed-off-by: Greg Kroah-Hartman --- tools/include/uapi/linux/bpf.h | 3 +- .../selftests/bpf/prog_tests/sock_fields.c | 60 +++++++++----- .../selftests/bpf/progs/test_sock_fields.c | 41 ++++++++++ tools/testing/selftests/bpf/verifier/sock.c | 81 ++++++++++++++++++- 4 files changed, 162 insertions(+), 23 deletions(-) diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index b9ee2ded381a..7943e748916d 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -4180,7 +4180,8 @@ struct bpf_sock { __u32 src_ip4; __u32 src_ip6[4]; __u32 src_port; /* host byte order */ - __u32 dst_port; /* network byte order */ + __be16 dst_port; /* network byte order */ + __u16 :16; /* zero padding */ __u32 dst_ip4; __u32 dst_ip6[4]; __u32 state; diff --git a/tools/testing/selftests/bpf/prog_tests/sock_fields.c b/tools/testing/selftests/bpf/prog_tests/sock_fields.c index af87118e748e..e8b5bf707cd4 100644 --- a/tools/testing/selftests/bpf/prog_tests/sock_fields.c +++ b/tools/testing/selftests/bpf/prog_tests/sock_fields.c @@ -1,9 +1,11 @@ // SPDX-License-Identifier: GPL-2.0 /* Copyright (c) 2019 Facebook */ +#define _GNU_SOURCE #include #include #include +#include #include #include #include @@ -21,6 +23,7 @@ enum bpf_linum_array_idx { EGRESS_LINUM_IDX, INGRESS_LINUM_IDX, + READ_SK_DST_PORT_LINUM_IDX, __NR_BPF_LINUM_ARRAY_IDX, }; @@ -43,8 +46,16 @@ static __u64 child_cg_id; static int linum_map_fd; static __u32 duration; -static __u32 egress_linum_idx = EGRESS_LINUM_IDX; -static __u32 ingress_linum_idx = INGRESS_LINUM_IDX; +static bool create_netns(void) +{ + if (!ASSERT_OK(unshare(CLONE_NEWNET), "create netns")) + return false; + + if (!ASSERT_OK(system("ip link set dev lo up"), "bring up lo")) + return false; + + return true; +} static void print_sk(const struct bpf_sock *sk, const char *prefix) { @@ -92,19 +103,24 @@ static void check_result(void) { struct bpf_tcp_sock srv_tp, cli_tp, listen_tp; struct bpf_sock srv_sk, cli_sk, listen_sk; - __u32 ingress_linum, egress_linum; + __u32 idx, ingress_linum, egress_linum, linum; int err; - err = bpf_map_lookup_elem(linum_map_fd, &egress_linum_idx, - &egress_linum); + idx = EGRESS_LINUM_IDX; + err = bpf_map_lookup_elem(linum_map_fd, &idx, &egress_linum); CHECK(err == -1, "bpf_map_lookup_elem(linum_map_fd)", "err:%d errno:%d\n", err, errno); - err = bpf_map_lookup_elem(linum_map_fd, &ingress_linum_idx, - &ingress_linum); + idx = INGRESS_LINUM_IDX; + err = bpf_map_lookup_elem(linum_map_fd, &idx, &ingress_linum); CHECK(err == -1, "bpf_map_lookup_elem(linum_map_fd)", "err:%d errno:%d\n", err, errno); + idx = READ_SK_DST_PORT_LINUM_IDX; + err = bpf_map_lookup_elem(linum_map_fd, &idx, &linum); + ASSERT_OK(err, "bpf_map_lookup_elem(linum_map_fd, READ_SK_DST_PORT_IDX)"); + ASSERT_EQ(linum, 0, "failure in read_sk_dst_port on line"); + memcpy(&srv_sk, &skel->bss->srv_sk, sizeof(srv_sk)); memcpy(&srv_tp, &skel->bss->srv_tp, sizeof(srv_tp)); memcpy(&cli_sk, &skel->bss->cli_sk, sizeof(cli_sk)); @@ -263,7 +279,7 @@ static void test(void) char buf[DATA_LEN]; /* Prepare listen_fd */ - listen_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0, 0); + listen_fd = start_server(AF_INET6, SOCK_STREAM, "::1", 0xcafe, 0); /* start_server() has logged the error details */ if (CHECK_FAIL(listen_fd == -1)) goto done; @@ 
-331,8 +347,12 @@ static void test(void) void test_sock_fields(void) { - struct bpf_link *egress_link = NULL, *ingress_link = NULL; int parent_cg_fd = -1, child_cg_fd = -1; + struct bpf_link *link; + + /* Use a dedicated netns to have a fixed listen port */ + if (!create_netns()) + return; /* Create a cgroup, get fd, and join it */ parent_cg_fd = test__join_cgroup(PARENT_CGROUP); @@ -353,17 +373,20 @@ void test_sock_fields(void) if (CHECK(!skel, "test_sock_fields__open_and_load", "failed\n")) goto done; - egress_link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields, - child_cg_fd); - if (CHECK(IS_ERR(egress_link), "attach_cgroup(egress)", "err:%ld\n", - PTR_ERR(egress_link))) + link = bpf_program__attach_cgroup(skel->progs.egress_read_sock_fields, child_cg_fd); + if (!ASSERT_OK_PTR(link, "attach_cgroup(egress_read_sock_fields)")) goto done; + skel->links.egress_read_sock_fields = link; - ingress_link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields, - child_cg_fd); - if (CHECK(IS_ERR(ingress_link), "attach_cgroup(ingress)", "err:%ld\n", - PTR_ERR(ingress_link))) + link = bpf_program__attach_cgroup(skel->progs.ingress_read_sock_fields, child_cg_fd); + if (!ASSERT_OK_PTR(link, "attach_cgroup(ingress_read_sock_fields)")) goto done; + skel->links.ingress_read_sock_fields = link; + + link = bpf_program__attach_cgroup(skel->progs.read_sk_dst_port, child_cg_fd); + if (!ASSERT_OK_PTR(link, "attach_cgroup(read_sk_dst_port")) + goto done; + skel->links.read_sk_dst_port = link; linum_map_fd = bpf_map__fd(skel->maps.linum_map); sk_pkt_out_cnt_fd = bpf_map__fd(skel->maps.sk_pkt_out_cnt); @@ -372,8 +395,7 @@ void test_sock_fields(void) test(); done: - bpf_link__destroy(egress_link); - bpf_link__destroy(ingress_link); + test_sock_fields__detach(skel); test_sock_fields__destroy(skel); if (child_cg_fd != -1) close(child_cg_fd); diff --git a/tools/testing/selftests/bpf/progs/test_sock_fields.c b/tools/testing/selftests/bpf/progs/test_sock_fields.c index 7967348b11af..3e2e3ee51cc9 100644 --- a/tools/testing/selftests/bpf/progs/test_sock_fields.c +++ b/tools/testing/selftests/bpf/progs/test_sock_fields.c @@ -12,6 +12,7 @@ enum bpf_linum_array_idx { EGRESS_LINUM_IDX, INGRESS_LINUM_IDX, + READ_SK_DST_PORT_LINUM_IDX, __NR_BPF_LINUM_ARRAY_IDX, }; @@ -250,4 +251,44 @@ int ingress_read_sock_fields(struct __sk_buff *skb) return CG_OK; } +static __noinline bool sk_dst_port__load_word(struct bpf_sock *sk) +{ + __u32 *word = (__u32 *)&sk->dst_port; + return word[0] == bpf_htonl(0xcafe0000); +} + +static __noinline bool sk_dst_port__load_half(struct bpf_sock *sk) +{ + __u16 *half = (__u16 *)&sk->dst_port; + return half[0] == bpf_htons(0xcafe); +} + +static __noinline bool sk_dst_port__load_byte(struct bpf_sock *sk) +{ + __u8 *byte = (__u8 *)&sk->dst_port; + return byte[0] == 0xca && byte[1] == 0xfe; +} + +SEC("cgroup_skb/egress") +int read_sk_dst_port(struct __sk_buff *skb) +{ + __u32 linum, linum_idx; + struct bpf_sock *sk; + + linum_idx = READ_SK_DST_PORT_LINUM_IDX; + + sk = skb->sk; + if (!sk) + RET_LOG(); + + if (!sk_dst_port__load_word(sk)) + RET_LOG(); + if (!sk_dst_port__load_half(sk)) + RET_LOG(); + if (!sk_dst_port__load_byte(sk)) + RET_LOG(); + + return CG_OK; +} + char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c index ce13ece08d51..8c224eac93df 100644 --- a/tools/testing/selftests/bpf/verifier/sock.c +++ b/tools/testing/selftests/bpf/verifier/sock.c @@ -121,7 +121,25 @@ .result = 
ACCEPT, }, { - "sk_fullsock(skb->sk): sk->dst_port [narrow load]", + "sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)", + .insns = { + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), + BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + .result = ACCEPT, +}, +{ + "sk_fullsock(skb->sk): sk->dst_port [half load]", .insns = { BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), @@ -139,7 +157,7 @@ .result = ACCEPT, }, { - "sk_fullsock(skb->sk): sk->dst_port [load 2nd byte]", + "sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)", .insns = { BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), @@ -149,7 +167,64 @@ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), - BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1), + BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + .result = REJECT, + .errstr = "invalid sock access", +}, +{ + "sk_fullsock(skb->sk): sk->dst_port [byte load]", + .insns = { + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), + BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port)), + BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 1), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + .result = ACCEPT, +}, +{ + "sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)", + .insns = { + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), + BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .prog_type = BPF_PROG_TYPE_CGROUP_SKB, + .result = REJECT, + .errstr = "invalid sock access", +}, +{ + "sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)", + .insns = { + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)), + BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_EMIT_CALL(BPF_FUNC_sk_fullsock), + BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2), + BPF_MOV64_IMM(BPF_REG_0, 0), + BPF_EXIT_INSN(), + BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, dst_port)), BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN(), }, From 1069087e2fb11f5fe61f68d83762cf01a25d8061 Mon Sep 17 00:00:00 2001 From: Jakub Sitnicki Date: Mon, 1 Aug 2022 17:57:03 +0300 Subject: [PATCH 1769/1842] selftests/bpf: Check dst_port only on the client socket commit 
2d2202ba858c112b03f84d546e260c61425831a1 upstream. cgroup_skb/egress programs which sock_fields test installs process packets flying in both directions, from the client to the server, and in reverse direction. Recently added dst_port check relies on the fact that destination port (remote peer port) of the socket which sends the packet is known ahead of time. This holds true only for the client socket, which connects to the known server port. Filter out any traffic that is not egressing from the client socket in the BPF program that tests reading the dst_port. Fixes: 8f50f16ff39d ("selftests/bpf: Extend verifier and bpf_sock tests for dst_port loads") Signed-off-by: Jakub Sitnicki Signed-off-by: Daniel Borkmann Acked-by: Martin KaFai Lau Link: https://lore.kernel.org/bpf/20220317113920.1068535-3-jakub@cloudflare.com Signed-off-by: Ovidiu Panait Signed-off-by: Greg Kroah-Hartman --- tools/testing/selftests/bpf/progs/test_sock_fields.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/tools/testing/selftests/bpf/progs/test_sock_fields.c b/tools/testing/selftests/bpf/progs/test_sock_fields.c index 3e2e3ee51cc9..43b31aa1fcf7 100644 --- a/tools/testing/selftests/bpf/progs/test_sock_fields.c +++ b/tools/testing/selftests/bpf/progs/test_sock_fields.c @@ -281,6 +281,10 @@ int read_sk_dst_port(struct __sk_buff *skb) if (!sk) RET_LOG(); + /* Ignore everything but the SYN from the client socket */ + if (sk->state != BPF_TCP_SYN_SENT) + return CG_OK; + if (!sk_dst_port__load_word(sk)) RET_LOG(); if (!sk_dst_port__load_half(sk)) From a01a4e9f5dc93335c716fa4023b1901956e8c904 Mon Sep 17 00:00:00 2001 From: George Kennedy Date: Thu, 16 Dec 2021 13:25:32 -0500 Subject: [PATCH 1770/1842] tun: avoid double free in tun_free_netdev commit 158b515f703e75e7d68289bf4d98c664e1d632df upstream. Avoid double free in tun_free_netdev() by moving the dev->tstats and tun->security allocs to a new ndo_init routine (tun_net_init()) that will be called by register_netdevice(). ndo_init is paired with the desctructor (tun_free_netdev()), so if there's an error in register_netdevice() the destructor will handle the frees. 
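
The ownership rule being relied on can be modelled in a few lines of plain C (a toy sketch with made-up names, not the tun driver): once the init hook has allocated, a failing register path runs the paired destructor exactly once, and the caller must not free again. The report below shows what happened before the change, when the caller's error path freed state the destructor had already released.

#include <stdio.h>
#include <stdlib.h>

struct dev {
	void *stats;
	void *security;
};

static int dev_init(struct dev *d)
{
	d->stats = calloc(1, 64);
	d->security = calloc(1, 32);
	if (!d->stats || !d->security) {
		free(d->stats);
		free(d->security);
		return -1;
	}
	return 0;
}

static void dev_destructor(struct dev *d)
{
	free(d->stats);
	free(d->security);
	d->stats = d->security = NULL;
}

/* register_dev() owns cleanup from the moment dev_init() has run. */
static int register_dev(struct dev *d, int fail)
{
	if (dev_init(d))
		return -1;
	if (fail) {
		dev_destructor(d);	/* single point of cleanup */
		return -1;
	}
	return 0;
}

int main(void)
{
	struct dev d = { 0 };

	if (register_dev(&d, 1))
		fprintf(stderr, "register failed, destructor already ran\n");
	/* The caller must NOT call dev_destructor() here: doing so is the
	 * double free the tun patch removes. */
	return 0;
}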
BUG: KASAN: double-free or invalid-free in selinux_tun_dev_free_security+0x1a/0x20 security/selinux/hooks.c:5605 CPU: 0 PID: 25750 Comm: syz-executor416 Not tainted 5.16.0-rc2-syzk #1 Hardware name: Red Hat KVM, BIOS Call Trace: __dump_stack lib/dump_stack.c:88 [inline] dump_stack_lvl+0x89/0xb5 lib/dump_stack.c:106 print_address_description.constprop.9+0x28/0x160 mm/kasan/report.c:247 kasan_report_invalid_free+0x55/0x80 mm/kasan/report.c:372 ____kasan_slab_free mm/kasan/common.c:346 [inline] __kasan_slab_free+0x107/0x120 mm/kasan/common.c:374 kasan_slab_free include/linux/kasan.h:235 [inline] slab_free_hook mm/slub.c:1723 [inline] slab_free_freelist_hook mm/slub.c:1749 [inline] slab_free mm/slub.c:3513 [inline] kfree+0xac/0x2d0 mm/slub.c:4561 selinux_tun_dev_free_security+0x1a/0x20 security/selinux/hooks.c:5605 security_tun_dev_free_security+0x4f/0x90 security/security.c:2342 tun_free_netdev+0xe6/0x150 drivers/net/tun.c:2215 netdev_run_todo+0x4df/0x840 net/core/dev.c:10627 rtnl_unlock+0x13/0x20 net/core/rtnetlink.c:112 __tun_chr_ioctl+0x80c/0x2870 drivers/net/tun.c:3302 tun_chr_ioctl+0x2f/0x40 drivers/net/tun.c:3311 vfs_ioctl fs/ioctl.c:51 [inline] __do_sys_ioctl fs/ioctl.c:874 [inline] __se_sys_ioctl fs/ioctl.c:860 [inline] __x64_sys_ioctl+0x19d/0x220 fs/ioctl.c:860 do_syscall_x64 arch/x86/entry/common.c:50 [inline] do_syscall_64+0x3a/0x80 arch/x86/entry/common.c:80 entry_SYSCALL_64_after_hwframe+0x44/0xae Reported-by: syzkaller Signed-off-by: George Kennedy Suggested-by: Jakub Kicinski Link: https://lore.kernel.org/r/1639679132-19884-1-git-send-email-george.kennedy@oracle.com Signed-off-by: Jakub Kicinski Signed-off-by: Fedor Pchelkin Signed-off-by: Greg Kroah-Hartman --- drivers/net/tun.c | 114 ++++++++++++++++++++++++---------------------- 1 file changed, 59 insertions(+), 55 deletions(-) diff --git a/drivers/net/tun.c b/drivers/net/tun.c index be9ff6a74ecc..a643b2f2f4de 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -220,6 +220,9 @@ struct tun_struct { struct tun_prog __rcu *steering_prog; struct tun_prog __rcu *filter_prog; struct ethtool_link_ksettings link_ksettings; + /* init args */ + struct file *file; + struct ifreq *ifr; }; struct veth { @@ -227,6 +230,9 @@ struct veth { __be16 h_vlan_TCI; }; +static void tun_flow_init(struct tun_struct *tun); +static void tun_flow_uninit(struct tun_struct *tun); + static int tun_napi_receive(struct napi_struct *napi, int budget) { struct tun_file *tfile = container_of(napi, struct tun_file, napi); @@ -975,6 +981,49 @@ static int check_filter(struct tap_filter *filter, const struct sk_buff *skb) static const struct ethtool_ops tun_ethtool_ops; +static int tun_net_init(struct net_device *dev) +{ + struct tun_struct *tun = netdev_priv(dev); + struct ifreq *ifr = tun->ifr; + int err; + + tun->pcpu_stats = netdev_alloc_pcpu_stats(struct tun_pcpu_stats); + if (!tun->pcpu_stats) + return -ENOMEM; + + spin_lock_init(&tun->lock); + + err = security_tun_dev_alloc_security(&tun->security); + if (err < 0) { + free_percpu(tun->pcpu_stats); + return err; + } + + tun_flow_init(tun); + + dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST | + TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_STAG_TX; + dev->features = dev->hw_features | NETIF_F_LLTX; + dev->vlan_features = dev->features & + ~(NETIF_F_HW_VLAN_CTAG_TX | + NETIF_F_HW_VLAN_STAG_TX); + + tun->flags = (tun->flags & ~TUN_FEATURES) | + (ifr->ifr_flags & TUN_FEATURES); + + INIT_LIST_HEAD(&tun->disabled); + err = tun_attach(tun, tun->file, false, ifr->ifr_flags & IFF_NAPI, + 
ifr->ifr_flags & IFF_NAPI_FRAGS, false); + if (err < 0) { + tun_flow_uninit(tun); + security_tun_dev_free_security(tun->security); + free_percpu(tun->pcpu_stats); + return err; + } + return 0; +} + /* Net device detach from fd. */ static void tun_net_uninit(struct net_device *dev) { @@ -1216,6 +1265,7 @@ static int tun_net_change_carrier(struct net_device *dev, bool new_carrier) } static const struct net_device_ops tun_netdev_ops = { + .ndo_init = tun_net_init, .ndo_uninit = tun_net_uninit, .ndo_open = tun_net_open, .ndo_stop = tun_net_close, @@ -1296,6 +1346,7 @@ static int tun_xdp_tx(struct net_device *dev, struct xdp_buff *xdp) } static const struct net_device_ops tap_netdev_ops = { + .ndo_init = tun_net_init, .ndo_uninit = tun_net_uninit, .ndo_open = tun_net_open, .ndo_stop = tun_net_close, @@ -1336,7 +1387,7 @@ static void tun_flow_uninit(struct tun_struct *tun) #define MAX_MTU 65535 /* Initialize net device. */ -static void tun_net_init(struct net_device *dev) +static void tun_net_initialize(struct net_device *dev) { struct tun_struct *tun = netdev_priv(dev); @@ -2268,10 +2319,6 @@ static void tun_free_netdev(struct net_device *dev) BUG_ON(!(list_empty(&tun->disabled))); free_percpu(tun->pcpu_stats); - /* We clear pcpu_stats so that tun_set_iff() can tell if - * tun_free_netdev() has been called from register_netdevice(). - */ - tun->pcpu_stats = NULL; tun_flow_uninit(tun); security_tun_dev_free_security(tun->security); @@ -2784,41 +2831,16 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr) tun->rx_batched = 0; RCU_INIT_POINTER(tun->steering_prog, NULL); - tun->pcpu_stats = netdev_alloc_pcpu_stats(struct tun_pcpu_stats); - if (!tun->pcpu_stats) { - err = -ENOMEM; - goto err_free_dev; - } + tun->ifr = ifr; + tun->file = file; - spin_lock_init(&tun->lock); - - err = security_tun_dev_alloc_security(&tun->security); - if (err < 0) - goto err_free_stat; - - tun_net_init(dev); - tun_flow_init(tun); - - dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST | - TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_STAG_TX; - dev->features = dev->hw_features | NETIF_F_LLTX; - dev->vlan_features = dev->features & - ~(NETIF_F_HW_VLAN_CTAG_TX | - NETIF_F_HW_VLAN_STAG_TX); - - tun->flags = (tun->flags & ~TUN_FEATURES) | - (ifr->ifr_flags & TUN_FEATURES); - - INIT_LIST_HEAD(&tun->disabled); - err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI, - ifr->ifr_flags & IFF_NAPI_FRAGS, false); - if (err < 0) - goto err_free_flow; + tun_net_initialize(dev); err = register_netdevice(tun->dev); - if (err < 0) - goto err_detach; + if (err < 0) { + free_netdev(dev); + return err; + } /* free_netdev() won't check refcnt, to aovid race * with dev_put() we need publish tun after registration. */ @@ -2835,24 +2857,6 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr) strcpy(ifr->ifr_name, tun->dev->name); return 0; - -err_detach: - tun_detach_all(dev); - /* We are here because register_netdevice() has failed. - * If register_netdevice() already called tun_free_netdev() - * while dealing with the error, tun->pcpu_stats has been cleared. 
- */ - if (!tun->pcpu_stats) - goto err_free_dev; - -err_free_flow: - tun_flow_uninit(tun); - security_tun_dev_free_security(tun->security); -err_free_stat: - free_percpu(tun->pcpu_stats); -err_free_dev: - free_netdev(dev); - return err; } static void tun_get_iff(struct tun_struct *tun, struct ifreq *ifr) From a2b472b152f9e407f013486dd1673e6c3b6f1fd5 Mon Sep 17 00:00:00 2001 From: Werner Sembach Date: Thu, 7 Jul 2022 20:09:52 +0200 Subject: [PATCH 1771/1842] ACPI: video: Force backlight native for some TongFang devices commit c752089f7cf5b5800c6ace4cdd1a8351ee78a598 upstream. The TongFang PF5PU1G, PF4NU1F, PF5NU1G, and PF5LUXG/TUXEDO BA15 Gen10, Pulse 14/15 Gen1, and Pulse 15 Gen2 have the same problem as the Clevo NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2: They have a working native and video interface. However the default detection mechanism first registers the video interface before unregistering it again and switching to the native interface during boot. This results in a dangling SBIOS request for backlight change for some reason, causing the backlight to switch to ~2% once per boot on the first power cord connect or disconnect event. Setting the native interface explicitly circumvents this buggy behaviour by avoiding the unregistering process. Signed-off-by: Werner Sembach Cc: All applicable Reviewed-by: Hans de Goede Signed-off-by: Rafael J. Wysocki Signed-off-by: Greg Kroah-Hartman --- drivers/acpi/video_detect.c | 51 ++++++++++++++++++++++++++++++++++++- 1 file changed, 50 insertions(+), 1 deletion(-) diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c index 7b9793cb55c5..3021254597f9 100644 --- a/drivers/acpi/video_detect.c +++ b/drivers/acpi/video_detect.c @@ -484,7 +484,56 @@ static const struct dmi_system_id video_detect_dmi_table[] = { DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"), }, }, - + /* + * The TongFang PF5PU1G, PF4NU1F, PF5NU1G, and PF5LUXG/TUXEDO BA15 Gen10, + * Pulse 14/15 Gen1, and Pulse 15 Gen2 have the same problem as the Clevo + * NL5xRU and NL5xNU/TUXEDO Aura 15 Gen1 and Gen2. See the description + * above. + */ + { + .callback = video_detect_force_native, + .ident = "TongFang PF5PU1G", + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "PF5PU1G"), + }, + }, + { + .callback = video_detect_force_native, + .ident = "TongFang PF4NU1F", + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "PF4NU1F"), + }, + }, + { + .callback = video_detect_force_native, + .ident = "TongFang PF4NU1F", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"), + DMI_MATCH(DMI_BOARD_NAME, "PULSE1401"), + }, + }, + { + .callback = video_detect_force_native, + .ident = "TongFang PF5NU1G", + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "PF5NU1G"), + }, + }, + { + .callback = video_detect_force_native, + .ident = "TongFang PF5NU1G", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"), + DMI_MATCH(DMI_BOARD_NAME, "PULSE1501"), + }, + }, + { + .callback = video_detect_force_native, + .ident = "TongFang PF5LUXG", + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "PF5LUXG"), + }, + }, /* * Desktops which falsely report a backlight and which our heuristics * for this do not catch. From 6ccff35588d22bcd7e797163434a796baa540a94 Mon Sep 17 00:00:00 2001 From: Werner Sembach Date: Thu, 7 Jul 2022 20:09:53 +0200 Subject: [PATCH 1772/1842] ACPI: video: Shortening quirk list by identifying Clevo by board_name only commit f0341e67b3782603737f7788e71bd3530012a4f4 upstream. 
Taking a recent change in the i8042 quirklist to this one: Clevo board_names are somewhat unique, and if not: The generic Board_-/Sys_Vendor string "Notebook" doesn't help much anyway. So identifying the devices just by the board_name helps keeping the list significantly shorter and might even hit more devices requiring the fix. Signed-off-by: Werner Sembach Fixes: c844d22fe0c0 ("ACPI: video: Force backlight native for Clevo NL5xRU and NL5xNU") Cc: All applicable Reviewed-by: Hans de Goede Signed-off-by: Rafael J. Wysocki Signed-off-by: Greg Kroah-Hartman --- drivers/acpi/video_detect.c | 34 ---------------------------------- 1 file changed, 34 deletions(-) diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c index 3021254597f9..e39d59ad6496 100644 --- a/drivers/acpi/video_detect.c +++ b/drivers/acpi/video_detect.c @@ -424,23 +424,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = { .callback = video_detect_force_native, .ident = "Clevo NL5xRU", .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"), - DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"), - }, - }, - { - .callback = video_detect_force_native, - .ident = "Clevo NL5xRU", - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"), - DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"), - }, - }, - { - .callback = video_detect_force_native, - .ident = "Clevo NL5xRU", - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "Notebook"), DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"), }, }, @@ -464,23 +447,6 @@ static const struct dmi_system_id video_detect_dmi_table[] = { .callback = video_detect_force_native, .ident = "Clevo NL5xNU", .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "TUXEDO"), - DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"), - }, - }, - { - .callback = video_detect_force_native, - .ident = "Clevo NL5xNU", - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"), - DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"), - }, - }, - { - .callback = video_detect_force_native, - .ident = "Clevo NL5xNU", - .matches = { - DMI_MATCH(DMI_SYS_VENDOR, "Notebook"), DMI_MATCH(DMI_BOARD_NAME, "NL5xNU"), }, }, From e2c63e1afdb30d71d7b96c5d776bfc9761bba666 Mon Sep 17 00:00:00 2001 From: Tony Luck Date: Wed, 22 Jun 2022 10:09:06 -0700 Subject: [PATCH 1773/1842] ACPI: APEI: Better fix to avoid spamming the console with old error logs commit c3481b6b75b4797657838f44028fd28226ab48e0 upstream. The fix in commit 3f8dec116210 ("ACPI/APEI: Limit printable size of BERT table data") does not work as intended on systems where the BIOS has a fixed size block of memory for the BERT table, relying on s/w to quit when it finds a record with estatus->block_status == 0. On these systems all errors are suppressed because the check: if (region_len < ACPI_BERT_PRINT_MAX_LEN) always fails. New scheme skips individual CPER records that are too large, and also limits the total number of records that will be printed to 5. Fixes: 3f8dec116210 ("ACPI/APEI: Limit printable size of BERT table data") Cc: All applicable Signed-off-by: Tony Luck Signed-off-by: Rafael J. 
Wysocki Signed-off-by: Greg Kroah-Hartman --- drivers/acpi/apei/bert.c | 31 +++++++++++++++++++++++-------- 1 file changed, 23 insertions(+), 8 deletions(-) diff --git a/drivers/acpi/apei/bert.c b/drivers/acpi/apei/bert.c index 598fd19b65fa..45973aa6e06d 100644 --- a/drivers/acpi/apei/bert.c +++ b/drivers/acpi/apei/bert.c @@ -29,16 +29,26 @@ #undef pr_fmt #define pr_fmt(fmt) "BERT: " fmt + +#define ACPI_BERT_PRINT_MAX_RECORDS 5 #define ACPI_BERT_PRINT_MAX_LEN 1024 static int bert_disable; +/* + * Print "all" the error records in the BERT table, but avoid huge spam to + * the console if the BIOS included oversize records, or too many records. + * Skipping some records here does not lose anything because the full + * data is available to user tools in: + * /sys/firmware/acpi/tables/data/BERT + */ static void __init bert_print_all(struct acpi_bert_region *region, unsigned int region_len) { struct acpi_hest_generic_status *estatus = (struct acpi_hest_generic_status *)region; int remain = region_len; + int printed = 0, skipped = 0; u32 estatus_len; while (remain >= sizeof(struct acpi_bert_region)) { @@ -46,24 +56,26 @@ static void __init bert_print_all(struct acpi_bert_region *region, if (remain < estatus_len) { pr_err(FW_BUG "Truncated status block (length: %u).\n", estatus_len); - return; + break; } /* No more error records. */ if (!estatus->block_status) - return; + break; if (cper_estatus_check(estatus)) { pr_err(FW_BUG "Invalid error record.\n"); - return; + break; } - pr_info_once("Error records from previous boot:\n"); - if (region_len < ACPI_BERT_PRINT_MAX_LEN) + if (estatus_len < ACPI_BERT_PRINT_MAX_LEN && + printed < ACPI_BERT_PRINT_MAX_RECORDS) { + pr_info_once("Error records from previous boot:\n"); cper_estatus_print(KERN_INFO HW_ERR, estatus); - else - pr_info_once("Max print length exceeded, table data is available at:\n" - "/sys/firmware/acpi/tables/data/BERT"); + printed++; + } else { + skipped++; + } /* * Because the boot error source is "one-time polled" type, @@ -75,6 +87,9 @@ static void __init bert_print_all(struct acpi_bert_region *region, estatus = (void *)estatus + estatus_len; remain -= estatus_len; } + + if (skipped) + pr_info(HW_ERR "Skipped %d error records\n", skipped); } static int __init setup_bert_disable(char *str) From 3c77292d52b341831cb09c24ca4112a1e4f9e91f Mon Sep 17 00:00:00 2001 From: GUO Zihua Date: Fri, 22 Jul 2022 14:31:57 +0800 Subject: [PATCH 1774/1842] crypto: arm64/poly1305 - fix a read out-of-bound commit 7ae19d422c7da84b5f13bc08b98bd737a08d3a53 upstream. 
A kasan error was reported during fuzzing: BUG: KASAN: slab-out-of-bounds in neon_poly1305_blocks.constprop.0+0x1b4/0x250 [poly1305_neon] Read of size 4 at addr ffff0010e293f010 by task syz-executor.5/1646715 CPU: 4 PID: 1646715 Comm: syz-executor.5 Kdump: loaded Not tainted 5.10.0.aarch64 #1 Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.59 01/31/2019 Call trace: dump_backtrace+0x0/0x394 show_stack+0x34/0x4c arch/arm64/kernel/stacktrace.c:196 __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0x158/0x1e4 lib/dump_stack.c:118 print_address_description.constprop.0+0x68/0x204 mm/kasan/report.c:387 __kasan_report+0xe0/0x140 mm/kasan/report.c:547 kasan_report+0x44/0xe0 mm/kasan/report.c:564 check_memory_region_inline mm/kasan/generic.c:187 [inline] __asan_load4+0x94/0xd0 mm/kasan/generic.c:252 neon_poly1305_blocks.constprop.0+0x1b4/0x250 [poly1305_neon] neon_poly1305_do_update+0x6c/0x15c [poly1305_neon] neon_poly1305_update+0x9c/0x1c4 [poly1305_neon] crypto_shash_update crypto/shash.c:131 [inline] shash_finup_unaligned+0x84/0x15c crypto/shash.c:179 crypto_shash_finup+0x8c/0x140 crypto/shash.c:193 shash_digest_unaligned+0xb8/0xe4 crypto/shash.c:201 crypto_shash_digest+0xa4/0xfc crypto/shash.c:217 crypto_shash_tfm_digest+0xb4/0x150 crypto/shash.c:229 essiv_skcipher_setkey+0x164/0x200 [essiv] crypto_skcipher_setkey+0xb0/0x160 crypto/skcipher.c:612 skcipher_setkey+0x3c/0x50 crypto/algif_skcipher.c:305 alg_setkey+0x114/0x2a0 crypto/af_alg.c:220 alg_setsockopt+0x19c/0x210 crypto/af_alg.c:253 __sys_setsockopt+0x190/0x2e0 net/socket.c:2123 __do_sys_setsockopt net/socket.c:2134 [inline] __se_sys_setsockopt net/socket.c:2131 [inline] __arm64_sys_setsockopt+0x78/0x94 net/socket.c:2131 __invoke_syscall arch/arm64/kernel/syscall.c:36 [inline] invoke_syscall+0x64/0x100 arch/arm64/kernel/syscall.c:48 el0_svc_common.constprop.0+0x220/0x230 arch/arm64/kernel/syscall.c:155 do_el0_svc+0xb4/0xd4 arch/arm64/kernel/syscall.c:217 el0_svc+0x24/0x3c arch/arm64/kernel/entry-common.c:353 el0_sync_handler+0x160/0x164 arch/arm64/kernel/entry-common.c:369 el0_sync+0x160/0x180 arch/arm64/kernel/entry.S:683 This error can be reproduced by the following code compiled as ko on a system with kasan enabled: #include #include #include #include char test_data[] = "\x00\x01\x02\x03\x04\x05\x06\x07" "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" "\x10\x11\x12\x13\x14\x15\x16\x17" "\x18\x19\x1a\x1b\x1c\x1d\x1e"; int init(void) { struct crypto_shash *tfm = NULL; char *data = NULL, *out = NULL; tfm = crypto_alloc_shash("poly1305", 0, 0); data = kmalloc(POLY1305_KEY_SIZE - 1, GFP_KERNEL); out = kmalloc(POLY1305_DIGEST_SIZE, GFP_KERNEL); memcpy(data, test_data, POLY1305_KEY_SIZE - 1); crypto_shash_tfm_digest(tfm, data, POLY1305_KEY_SIZE - 1, out); kfree(data); kfree(out); return 0; } void deinit(void) { } module_init(init) module_exit(deinit) MODULE_LICENSE("GPL"); The root cause of the bug sits in neon_poly1305_blocks. The logic neon_poly1305_blocks() performed is that if it was called with both s[] and r[] uninitialized, it will first try to initialize them with the data from the first "block" that it believed to be 32 bytes in length. First 16 bytes are used as the key and the next 16 bytes for s[]. This would lead to the aforementioned read out-of-bound. However, after calling poly1305_init_arch(), only 16 bytes were deducted from the input and s[] is initialized yet again with the following 16 bytes. The second initialization of s[] is certainly redundent which indicates that the first initialization should be for r[] only. 
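For illustration only -- using simplified stand-in types and helper names, not the kernel's own -- the difference between the two initialization paths can be sketched in plain C:

  #include <stdint.h>
  #include <string.h>

  #define BLOCK 16                              /* POLY1305_BLOCK_SIZE */

  struct demo_ctx {
          uint8_t r[16];                        /* key part, "r"   */
          uint8_t s[16];                        /* nonce part, "s" */
          int rset;
  };

  /* stand-in for poly1305_init_arm64(): derives r[] from one 16-byte block */
  static void init_r_only(struct demo_ctx *c, const uint8_t *blk)
  {
          memcpy(c->r, blk, BLOCK);
  }

  /* stand-in for poly1305_init_arch(): expects a full 32-byte key (r and s) */
  static void init_full_key(struct demo_ctx *c, const uint8_t *key)
  {
          memcpy(c->r, key, BLOCK);
          memcpy(c->s, key + BLOCK, BLOCK);     /* overruns a 16-byte buffer */
  }

  static void blocks(struct demo_ctx *c, const uint8_t *src, size_t len, int fixed)
  {
          if (!c->rset) {
                  if (fixed)
                          init_r_only(c, src);   /* reads the 16 bytes it has */
                  else
                          init_full_key(c, src); /* reads 32 bytes            */
                  src += BLOCK;
                  len -= BLOCK;
                  c->rset = 1;
          }
          /* s[] is filled from the following block either way, so the extra
           * 16-byte read in the unfixed path gains nothing. */
          (void)src;
          (void)len;
  }

On this path the caller may have only a single 16-byte block left at src, so the 32-byte variant is exactly the slab-out-of-bounds read KASAN flags above.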
This patch fixes the issue by calling poly1305_init_arm64() instead of poly1305_init_arch(). This is also the implementation for the same algorithm on arm platform. Fixes: f569ca164751 ("crypto: arm64/poly1305 - incorporate OpenSSL/CRYPTOGAMS NEON implementation") Cc: stable@vger.kernel.org Signed-off-by: GUO Zihua Reviewed-by: Eric Biggers Acked-by: Will Deacon Signed-off-by: Herbert Xu Signed-off-by: Greg Kroah-Hartman --- arch/arm64/crypto/poly1305-glue.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c index 01e22fe40823..9f4599014854 100644 --- a/arch/arm64/crypto/poly1305-glue.c +++ b/arch/arm64/crypto/poly1305-glue.c @@ -52,7 +52,7 @@ static void neon_poly1305_blocks(struct poly1305_desc_ctx *dctx, const u8 *src, { if (unlikely(!dctx->sset)) { if (!dctx->rset) { - poly1305_init_arch(dctx, src); + poly1305_init_arm64(&dctx->h, src); src += POLY1305_BLOCK_SIZE; len -= POLY1305_BLOCK_SIZE; dctx->rset = 1; From a56e1ccdb7bb455d0b23a7b4de0016153d513aae Mon Sep 17 00:00:00 2001 From: Dmitry Klochkov Date: Tue, 14 Jun 2022 15:11:41 +0300 Subject: [PATCH 1775/1842] tools/kvm_stat: fix display of error when multiple processes are found [ Upstream commit 933b5f9f98da29af646b51b36a0753692908ef64 ] Instead of printing an error message, kvm_stat script fails when we restrict statistics to a guest by its name and there are multiple guests with such name: # kvm_stat -g my_vm Traceback (most recent call last): File "/usr/bin/kvm_stat", line 1819, in main() File "/usr/bin/kvm_stat", line 1779, in main options = get_options() File "/usr/bin/kvm_stat", line 1718, in get_options options = argparser.parse_args() File "/usr/lib64/python3.10/argparse.py", line 1825, in parse_args args, argv = self.parse_known_args(args, namespace) File "/usr/lib64/python3.10/argparse.py", line 1858, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "/usr/lib64/python3.10/argparse.py", line 2067, in _parse_known_args start_index = consume_optional(start_index) File "/usr/lib64/python3.10/argparse.py", line 2007, in consume_optional take_action(action, args, option_string) File "/usr/lib64/python3.10/argparse.py", line 1935, in take_action action(self, namespace, argument_values, option_string) File "/usr/bin/kvm_stat", line 1649, in __call__ ' to specify the desired pid'.format(" ".join(pids))) TypeError: sequence item 0: expected str instance, int found To avoid this, it's needed to convert pids int values to strings before pass them to join(). Signed-off-by: Dmitry Klochkov Message-Id: <20220614121141.160689-1-kdmitry556@gmail.com> Signed-off-by: Paolo Bonzini Signed-off-by: Sasha Levin --- tools/kvm/kvm_stat/kvm_stat | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/tools/kvm/kvm_stat/kvm_stat b/tools/kvm/kvm_stat/kvm_stat index b0bf56c5f120..a1efcfbd8b9e 100755 --- a/tools/kvm/kvm_stat/kvm_stat +++ b/tools/kvm/kvm_stat/kvm_stat @@ -1646,7 +1646,8 @@ Press any other key to refresh statistics immediately. .format(values)) if len(pids) > 1: sys.exit('Error: Multiple processes found (pids: {}). 
Use "-p"' - ' to specify the desired pid'.format(" ".join(pids))) + ' to specify the desired pid' + .format(" ".join(map(str, pids)))) namespace.pid = pids[0] argparser = argparse.ArgumentParser(description=description_text, From 50763f0ac0706e63c0ba550adccfca25eb3d0667 Mon Sep 17 00:00:00 2001 From: Raghavendra Rao Ananta Date: Wed, 15 Jun 2022 18:57:06 +0000 Subject: [PATCH 1776/1842] selftests: KVM: Handle compiler optimizations in ucall [ Upstream commit 9e2f6498efbbc880d7caa7935839e682b64fe5a6 ] The selftests, when built with newer versions of clang, is found to have over optimized guests' ucall() function, and eliminating the stores for uc.cmd (perhaps due to no immediate readers). This resulted in the userspace side always reading a value of '0', and causing multiple test failures. As a result, prevent the compiler from optimizing the stores in ucall() with WRITE_ONCE(). Suggested-by: Ricardo Koller Suggested-by: Reiji Watanabe Signed-off-by: Raghavendra Rao Ananta Message-Id: <20220615185706.1099208-1-rananta@google.com> Reviewed-by: Andrew Jones Signed-off-by: Paolo Bonzini Signed-off-by: Sasha Levin --- tools/testing/selftests/kvm/lib/aarch64/ucall.c | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/tools/testing/selftests/kvm/lib/aarch64/ucall.c b/tools/testing/selftests/kvm/lib/aarch64/ucall.c index 2f37b90ee1a9..f600311fdc6a 100644 --- a/tools/testing/selftests/kvm/lib/aarch64/ucall.c +++ b/tools/testing/selftests/kvm/lib/aarch64/ucall.c @@ -73,20 +73,19 @@ void ucall_uninit(struct kvm_vm *vm) void ucall(uint64_t cmd, int nargs, ...) { - struct ucall uc = { - .cmd = cmd, - }; + struct ucall uc = {}; va_list va; int i; + WRITE_ONCE(uc.cmd, cmd); nargs = nargs <= UCALL_MAX_ARGS ? nargs : UCALL_MAX_ARGS; va_start(va, nargs); for (i = 0; i < nargs; ++i) - uc.args[i] = va_arg(va, uint64_t); + WRITE_ONCE(uc.args[i], va_arg(va, uint64_t)); va_end(va); - *ucall_exit_mmio_addr = (vm_vaddr_t)&uc; + WRITE_ONCE(*ucall_exit_mmio_addr, (vm_vaddr_t)&uc); } uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc) From 033a4455d9d6ccba0f8acbf297d3094b78ee409e Mon Sep 17 00:00:00 2001 From: Ahmad Fatoum Date: Tue, 24 May 2022 07:56:41 +0200 Subject: [PATCH 1777/1842] Bluetooth: hci_bcm: Add BCM4349B1 variant commit 4f17c2b6694d0c4098f33b07ee3a696976940aa5 upstream. The BCM4349B1, aka CYW/BCM89359, is a WiFi+BT chip and its Bluetooth portion can be controlled over serial. Two subversions are added for the chip, because ROM firmware reports 002.002.013 (at least for the chips I have here), while depending on patchram firmware revision, either 002.002.013 or 002.002.014 is reported. 
Signed-off-by: Ahmad Fatoum Reviewed-by: Linus Walleij Signed-off-by: Marcel Holtmann Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/btbcm.c | 2 ++ drivers/bluetooth/hci_bcm.c | 1 + 2 files changed, 3 insertions(+) diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c index 1b9743b7f2ef..d263eac784da 100644 --- a/drivers/bluetooth/btbcm.c +++ b/drivers/bluetooth/btbcm.c @@ -401,6 +401,8 @@ static const struct bcm_subver_table bcm_uart_subver_table[] = { { 0x6606, "BCM4345C5" }, /* 003.006.006 */ { 0x230f, "BCM4356A2" }, /* 001.003.015 */ { 0x220e, "BCM20702A1" }, /* 001.002.014 */ + { 0x420d, "BCM4349B1" }, /* 002.002.013 */ + { 0x420e, "BCM4349B1" }, /* 002.002.014 */ { 0x4217, "BCM4329B1" }, /* 002.002.023 */ { 0x6106, "BCM4359C0" }, /* 003.001.006 */ { 0x4106, "BCM4335A0" }, /* 002.001.006 */ diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c index 259a643377c2..574f84708859 100644 --- a/drivers/bluetooth/hci_bcm.c +++ b/drivers/bluetooth/hci_bcm.c @@ -1489,6 +1489,7 @@ static const struct of_device_id bcm_bluetooth_of_match[] = { { .compatible = "brcm,bcm4345c5" }, { .compatible = "brcm,bcm4330-bt" }, { .compatible = "brcm,bcm43438-bt", .data = &bcm43438_device_data }, + { .compatible = "brcm,bcm4349-bt", .data = &bcm43438_device_data }, { .compatible = "brcm,bcm43540-bt", .data = &bcm4354_device_data }, { .compatible = "brcm,bcm4335a0" }, { }, From 918ce738e28bb79d84dd208ff9b5dacd8f533058 Mon Sep 17 00:00:00 2001 From: Hakan Jansson Date: Thu, 30 Jun 2022 14:45:22 +0200 Subject: [PATCH 1778/1842] Bluetooth: hci_bcm: Add DT compatible for CYW55572 commit f8cad62002a7699fd05a23b558b980b5a77defe0 upstream. CYW55572 is a Wi-Fi + Bluetooth combo device from Infineon. Signed-off-by: Hakan Jansson Reviewed-by: Linus Walleij Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/hci_bcm.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c index 574f84708859..3f6e96a4e114 100644 --- a/drivers/bluetooth/hci_bcm.c +++ b/drivers/bluetooth/hci_bcm.c @@ -1492,6 +1492,7 @@ static const struct of_device_id bcm_bluetooth_of_match[] = { { .compatible = "brcm,bcm4349-bt", .data = &bcm43438_device_data }, { .compatible = "brcm,bcm43540-bt", .data = &bcm4354_device_data }, { .compatible = "brcm,bcm4335a0" }, + { .compatible = "infineon,cyw55572-bt" }, { }, }; MODULE_DEVICE_TABLE(of, bcm_bluetooth_of_match); From e81f95d03060090c4bd4d1d7cea15fb9542224ba Mon Sep 17 00:00:00 2001 From: Aaron Ma Date: Thu, 2 Jun 2022 17:28:22 +0800 Subject: [PATCH 1779/1842] Bluetooth: btusb: Add support of IMC Networks PID 0x3568 commit c69ecb0ea4c96b8b191cbaa0b420222a37867655 upstream. It is 13d3:3568 for MediaTek MT7922 USB Bluetooth chip. T: Bus=03 Lev=01 Prnt=01 Port=02 Cnt=01 Dev#= 2 Spd=480 MxCh= 0 D: Ver= 2.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=13d3 ProdID=3568 Rev=01.00 S: Manufacturer=MediaTek Inc. S: Product=Wireless_Device S: SerialNumber=... C: #Ifs= 3 Cfg#= 1 Atr=e0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms E: Ad=81(I) Atr=03(Int.) MxPS= 16 Ivl=125us E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms I: If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 0 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 0 Ivl=1ms I: If#= 2 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=(none) E: Ad=0a(O) Atr=03(Int.) MxPS= 64 Ivl=125us E: Ad=8a(I) Atr=03(Int.) 
MxPS= 64 Ivl=125us Signed-off-by: Aaron Ma Signed-off-by: Marcel Holtmann Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/btusb.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 538232b4c42a..d450da1211e0 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -416,6 +416,9 @@ static const struct usb_device_id blacklist_table[] = { { USB_DEVICE(0x0489, 0xe0d9), .driver_info = BTUSB_MEDIATEK | BTUSB_WIDEBAND_SPEECH | BTUSB_VALID_LE_STATES }, + { USB_DEVICE(0x13d3, 0x3568), .driver_info = BTUSB_MEDIATEK | + BTUSB_WIDEBAND_SPEECH | + BTUSB_VALID_LE_STATES }, /* Additional Realtek 8723AE Bluetooth devices */ { USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK }, From 1a2a2e34569cf85cad743ee8095d07c3cba5473b Mon Sep 17 00:00:00 2001 From: Hilda Wu Date: Thu, 14 Jul 2022 19:25:19 +0800 Subject: [PATCH 1780/1842] Bluetooth: btusb: Add Realtek RTL8852C support ID 0x04CA:0x4007 commit c379c96cc221767af9688a5d4758a78eea30883a upstream. Add the support ID(0x04CA, 0x4007) to usb_device_id table for Realtek RTL8852C. The device info from /sys/kernel/debug/usb/devices as below. T: Bus=03 Lev=01 Prnt=01 Port=02 Cnt=01 Dev#= 2 Spd=12 MxCh= 0 D: Ver= 1.00 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=04ca ProdID=4007 Rev= 0.00 S: Manufacturer=Realtek S: Product=Bluetooth Radio S: SerialNumber=00e04c000001 C:* #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=500mA I:* If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=81(I) Atr=03(Int.) MxPS= 16 Ivl=1ms E: Ad=02(O) Atr=02(Bulk) MxPS= 64 Ivl=0ms E: Ad=82(I) Atr=02(Bulk) MxPS= 64 Ivl=0ms I:* If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 0 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 0 Ivl=1ms I: If#= 1 Alt= 1 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 9 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 9 Ivl=1ms I: If#= 1 Alt= 2 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 17 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 17 Ivl=1ms I: If#= 1 Alt= 3 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 25 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 25 Ivl=1ms I: If#= 1 Alt= 4 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 33 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 33 Ivl=1ms I: If#= 1 Alt= 5 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 49 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 49 Ivl=1ms Signed-off-by: Hilda Wu Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/btusb.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index d450da1211e0..ed529bcac92a 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -399,6 +399,10 @@ static const struct usb_device_id blacklist_table[] = { { USB_DEVICE(0x0bda, 0xc822), .driver_info = BTUSB_REALTEK | BTUSB_WIDEBAND_SPEECH }, + /* Realtek 8852CE Bluetooth devices */ + { USB_DEVICE(0x04ca, 0x4007), .driver_info = BTUSB_REALTEK | + BTUSB_WIDEBAND_SPEECH }, + /* Realtek Bluetooth devices */ { USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01), .driver_info = BTUSB_REALTEK }, From 3a292cb18132cb7af3a146613f1c9a47ef6f8463 Mon Sep 17 00:00:00 2001 From: Hilda Wu Date: Thu, 14 Jul 2022 19:25:20 +0800 Subject: [PATCH 1781/1842] Bluetooth: btusb: Add Realtek RTL8852C support ID 0x04C5:0x1675 commit 
893fa8bc9952a36fb682ee12f0a994b5817a36d2 upstream. Add the support ID(0x04c5, 0x1675) to usb_device_id table for Realtek RTL8852C. The device info from /sys/kernel/debug/usb/devices as below. T: Bus=03 Lev=01 Prnt=01 Port=02 Cnt=01 Dev#= 2 Spd=12 MxCh= 0 D: Ver= 1.00 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=04c5 ProdID=1675 Rev= 0.00 S: Manufacturer=Realtek S: Product=Bluetooth Radio S: SerialNumber=00e04c000001 C:* #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=500mA I:* If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=81(I) Atr=03(Int.) MxPS= 16 Ivl=1ms E: Ad=02(O) Atr=02(Bulk) MxPS= 64 Ivl=0ms E: Ad=82(I) Atr=02(Bulk) MxPS= 64 Ivl=0ms I:* If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 0 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 0 Ivl=1ms I: If#= 1 Alt= 1 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 9 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 9 Ivl=1ms I: If#= 1 Alt= 2 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 17 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 17 Ivl=1ms I: If#= 1 Alt= 3 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 25 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 25 Ivl=1ms I: If#= 1 Alt= 4 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 33 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 33 Ivl=1ms I: If#= 1 Alt= 5 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 49 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 49 Ivl=1ms Signed-off-by: Hilda Wu Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/btusb.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index ed529bcac92a..263210a3b7b3 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -402,6 +402,8 @@ static const struct usb_device_id blacklist_table[] = { /* Realtek 8852CE Bluetooth devices */ { USB_DEVICE(0x04ca, 0x4007), .driver_info = BTUSB_REALTEK | BTUSB_WIDEBAND_SPEECH }, + { USB_DEVICE(0x04c5, 0x1675), .driver_info = BTUSB_REALTEK | + BTUSB_WIDEBAND_SPEECH }, /* Realtek Bluetooth devices */ { USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01), From 9c45bb363e26e86ebaf20f6d2009bedf19fc0d39 Mon Sep 17 00:00:00 2001 From: Hilda Wu Date: Thu, 14 Jul 2022 19:25:21 +0800 Subject: [PATCH 1782/1842] Bluetooth: btusb: Add Realtek RTL8852C support ID 0x0CB8:0xC558 commit 5b75ee37ebb73f58468d4cca172434324af203f1 upstream. Add the support ID(0x0CB8, 0xC558) to usb_device_id table for Realtek RTL8852C. The device info from /sys/kernel/debug/usb/devices as below. T: Bus=03 Lev=01 Prnt=01 Port=02 Cnt=01 Dev#= 2 Spd=12 MxCh= 0 D: Ver= 1.00 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=0cb8 ProdID=c558 Rev= 0.00 S: Manufacturer=Realtek S: Product=Bluetooth Radio S: SerialNumber=00e04c000001 C:* #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=500mA I:* If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=81(I) Atr=03(Int.) 
MxPS= 16 Ivl=1ms E: Ad=02(O) Atr=02(Bulk) MxPS= 64 Ivl=0ms E: Ad=82(I) Atr=02(Bulk) MxPS= 64 Ivl=0ms I:* If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 0 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 0 Ivl=1ms I: If#= 1 Alt= 1 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 9 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 9 Ivl=1ms I: If#= 1 Alt= 2 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 17 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 17 Ivl=1ms I: If#= 1 Alt= 3 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 25 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 25 Ivl=1ms I: If#= 1 Alt= 4 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 33 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 33 Ivl=1ms I: If#= 1 Alt= 5 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 49 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 49 Ivl=1ms Signed-off-by: Hilda Wu Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/btusb.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 263210a3b7b3..0e9e88c82520 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -404,6 +404,8 @@ static const struct usb_device_id blacklist_table[] = { BTUSB_WIDEBAND_SPEECH }, { USB_DEVICE(0x04c5, 0x1675), .driver_info = BTUSB_REALTEK | BTUSB_WIDEBAND_SPEECH }, + { USB_DEVICE(0x0cb8, 0xc558), .driver_info = BTUSB_REALTEK | + BTUSB_WIDEBAND_SPEECH }, /* Realtek Bluetooth devices */ { USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01), From 40e2e7f1bf0301d1ed7437b10d9e1c92cb51bf81 Mon Sep 17 00:00:00 2001 From: Hilda Wu Date: Thu, 14 Jul 2022 19:25:22 +0800 Subject: [PATCH 1783/1842] Bluetooth: btusb: Add Realtek RTL8852C support ID 0x13D3:0x3587 commit 8f0054dd29373cd877db87751c143610561d549d upstream. Add the support ID(0x13D3, 0x3587) to usb_device_id table for Realtek RTL8852C. The device info from /sys/kernel/debug/usb/devices as below. T: Bus=03 Lev=01 Prnt=01 Port=02 Cnt=01 Dev#= 2 Spd=12 MxCh= 0 D: Ver= 1.00 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=13d3 ProdID=3587 Rev= 0.00 S: Manufacturer=Realtek S: Product=Bluetooth Radio S: SerialNumber=00e04c000001 C:* #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=500mA I:* If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=81(I) Atr=03(Int.) 
MxPS= 16 Ivl=1ms E: Ad=02(O) Atr=02(Bulk) MxPS= 64 Ivl=0ms E: Ad=82(I) Atr=02(Bulk) MxPS= 64 Ivl=0ms I:* If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 0 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 0 Ivl=1ms I: If#= 1 Alt= 1 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 9 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 9 Ivl=1ms I: If#= 1 Alt= 2 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 17 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 17 Ivl=1ms I: If#= 1 Alt= 3 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 25 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 25 Ivl=1ms I: If#= 1 Alt= 4 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 33 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 33 Ivl=1ms I: If#= 1 Alt= 5 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 49 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 49 Ivl=1ms Signed-off-by: Hilda Wu Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/btusb.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 0e9e88c82520..ff27697a0daa 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -406,6 +406,8 @@ static const struct usb_device_id blacklist_table[] = { BTUSB_WIDEBAND_SPEECH }, { USB_DEVICE(0x0cb8, 0xc558), .driver_info = BTUSB_REALTEK | BTUSB_WIDEBAND_SPEECH }, + { USB_DEVICE(0x13d3, 0x3587), .driver_info = BTUSB_REALTEK | + BTUSB_WIDEBAND_SPEECH }, /* Realtek Bluetooth devices */ { USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01), From 75742ffc3630203e95844c72c7144f507e2a557d Mon Sep 17 00:00:00 2001 From: Hilda Wu Date: Thu, 14 Jul 2022 19:25:23 +0800 Subject: [PATCH 1784/1842] Bluetooth: btusb: Add Realtek RTL8852C support ID 0x13D3:0x3586 commit 6ad353dfc8ee3230a5e123c21da50f1b64cc4b39 upstream. Add the support ID(0x13D3, 0x3586) to usb_device_id table for Realtek RTL8852C. The device info from /sys/kernel/debug/usb/devices as below. T: Bus=03 Lev=01 Prnt=01 Port=02 Cnt=01 Dev#= 2 Spd=12 MxCh= 0 D: Ver= 1.00 Cls=e0(wlcon) Sub=01 Prot=01 MxPS=64 #Cfgs= 1 P: Vendor=13d3 ProdID=3586 Rev= 0.00 S: Manufacturer=Realtek S: Product=Bluetooth Radio S: SerialNumber=00e04c000001 C:* #Ifs= 2 Cfg#= 1 Atr=e0 MxPwr=500mA I:* If#= 0 Alt= 0 #EPs= 3 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=81(I) Atr=03(Int.) 
MxPS= 16 Ivl=1ms E: Ad=02(O) Atr=02(Bulk) MxPS= 64 Ivl=0ms E: Ad=82(I) Atr=02(Bulk) MxPS= 64 Ivl=0ms I:* If#= 1 Alt= 0 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 0 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 0 Ivl=1ms I: If#= 1 Alt= 1 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 9 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 9 Ivl=1ms I: If#= 1 Alt= 2 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 17 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 17 Ivl=1ms I: If#= 1 Alt= 3 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 25 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 25 Ivl=1ms I: If#= 1 Alt= 4 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 33 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 33 Ivl=1ms I: If#= 1 Alt= 5 #EPs= 2 Cls=e0(wlcon) Sub=01 Prot=01 Driver=btusb E: Ad=03(O) Atr=01(Isoc) MxPS= 49 Ivl=1ms E: Ad=83(I) Atr=01(Isoc) MxPS= 49 Ivl=1ms Signed-off-by: Hilda Wu Signed-off-by: Luiz Augusto von Dentz Signed-off-by: Greg Kroah-Hartman --- drivers/bluetooth/btusb.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index ff27697a0daa..a699e6166aef 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -408,6 +408,8 @@ static const struct usb_device_id blacklist_table[] = { BTUSB_WIDEBAND_SPEECH }, { USB_DEVICE(0x13d3, 0x3587), .driver_info = BTUSB_REALTEK | BTUSB_WIDEBAND_SPEECH }, + { USB_DEVICE(0x13d3, 0x3586), .driver_info = BTUSB_REALTEK | + BTUSB_WIDEBAND_SPEECH }, /* Realtek Bluetooth devices */ { USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01), From e5b556a7b2711a39e3aa13aeff26560c17417b8b Mon Sep 17 00:00:00 2001 From: Ning Qiang Date: Wed, 13 Jul 2022 23:37:34 +0800 Subject: [PATCH 1785/1842] macintosh/adb: fix oob read in do_adb_query() function commit fd97e4ad6d3b0c9fce3bca8ea8e6969d9ce7423b upstream. In do_adb_query() function of drivers/macintosh/adb.c, req->data is copied form userland. The parameter "req->data[2]" is missing check, the array size of adb_handler[] is 16, so adb_handler[req->data[2]].original_address and adb_handler[req->data[2]].handler_id will lead to oob read. Cc: stable Signed-off-by: Ning Qiang Reviewed-by: Kees Cook Reviewed-by: Greg Kroah-Hartman Acked-by: Benjamin Herrenschmidt Signed-off-by: Michael Ellerman Link: https://lore.kernel.org/r/20220713153734.2248-1-sohu0106@126.com Signed-off-by: Greg Kroah-Hartman --- drivers/macintosh/adb.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/macintosh/adb.c b/drivers/macintosh/adb.c index 73b396189039..afb0942ccc29 100644 --- a/drivers/macintosh/adb.c +++ b/drivers/macintosh/adb.c @@ -647,7 +647,7 @@ do_adb_query(struct adb_request *req) switch(req->data[1]) { case ADB_QUERY_GETDEVINFO: - if (req->nbytes < 3) + if (req->nbytes < 3 || req->data[2] >= 16) break; mutex_lock(&adb_handler_mutex); req->reply[0] = adb_handler[req->data[2]].original_address; From 509c2c9fe75ea7493eebbb6bb2f711f37530ae19 Mon Sep 17 00:00:00 2001 From: Daniel Sneddon Date: Tue, 2 Aug 2022 15:47:01 -0700 Subject: [PATCH 1786/1842] x86/speculation: Add RSB VM Exit protections commit 2b1299322016731d56807aa49254a5ea3080b6b3 upstream. tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as documented for RET instructions after VM exits. Mitigate it with a new one-entry RSB stuffing mechanism and a new LFENCE. 
== Background == Indirect Branch Restricted Speculation (IBRS) was designed to help mitigate Branch Target Injection and Speculative Store Bypass, i.e. Spectre, attacks. IBRS prevents software run in less privileged modes from affecting branch prediction in more privileged modes. IBRS requires the MSR to be written on every privilege level change. To overcome some of the performance issues of IBRS, Enhanced IBRS was introduced. eIBRS is an "always on" IBRS, in other words, just turn it on once instead of writing the MSR on every privilege level change. When eIBRS is enabled, more privileged modes should be protected from less privileged modes, including protecting VMMs from guests. == Problem == Here's a simplification of how guests are run on Linux' KVM: void run_kvm_guest(void) { // Prepare to run guest VMRESUME(); // Clean up after guest runs } The execution flow for that would look something like this to the processor: 1. Host-side: call run_kvm_guest() 2. Host-side: VMRESUME 3. Guest runs, does "CALL guest_function" 4. VM exit, host runs again 5. Host might make some "cleanup" function calls 6. Host-side: RET from run_kvm_guest() Now, when back on the host, there are a couple of possible scenarios of post-guest activity the host needs to do before executing host code: * on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not touched and Linux has to do a 32-entry stuffing. * on eIBRS hardware, VM exit with IBRS enabled, or restoring the host IBRS=1 shortly after VM exit, has a documented side effect of flushing the RSB except in this PBRSB situation where the software needs to stuff the last RSB entry "by hand". IOW, with eIBRS supported, host RET instructions should no longer be influenced by guest behavior after the host retires a single CALL instruction. However, if the RET instructions are "unbalanced" with CALLs after a VM exit as is the RET in #6, it might speculatively use the address for the instruction after the CALL in #3 as an RSB prediction. This is a problem since the (untrusted) guest controls this address. Balanced CALL/RET instruction pairs such as in step #5 are not affected. == Solution == The PBRSB issue affects a wide variety of Intel processors which support eIBRS. But not all of them need mitigation. Today, X86_FEATURE_RSB_VMEXIT triggers an RSB filling sequence that mitigates PBRSB. Systems setting RSB_VMEXIT need no further mitigation - i.e., eIBRS systems which enable legacy IBRS explicitly. However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RSB_VMEXIT and most of them need a new mitigation. Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE which triggers a lighter-weight PBRSB mitigation versus RSB_VMEXIT. The lighter-weight mitigation performs a CALL instruction which is immediately followed by a speculative execution barrier (INT3). This steers speculative execution to the barrier -- just like a retpoline -- which ensures that speculation can never reach an unbalanced RET. Then, ensure this CALL is retired before continuing execution with an LFENCE. In other words, the window of exposure is opened at VM exit where RET behavior is troublesome. While the window is open, force RSB predictions sampling for RET targets to a dead end at the INT3. Close the window with the LFENCE. There is a subset of eIBRS systems which are not vulnerable to PBRSB. Add these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB. Future systems that aren't vulnerable will set ARCH_CAP_PBRSB_NO. 
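Annotated for readability, the one-entry stuffing sequence this patch adds (the ISSUE_UNBALANCED_RET_GUARD macro in the nospec-branch.h hunk below) is just:

  .macro ISSUE_UNBALANCED_RET_GUARD
	ANNOTATE_INTRA_FUNCTION_CALL
	call .Lunbalanced_ret_guard_\@		/* plant one benign RSB entry */
	int3					/* speculation dead end for any RET predicted from that entry */
  .Lunbalanced_ret_guard_\@:
	add $(BITS_PER_LONG/8), %_ASM_SP	/* drop the pushed return address */
	lfence					/* guarantee the CALL has retired */
  .endm

A single entry suffices because, with eIBRS, host RETs stop being influenced by guest state once one CALL has retired; the LFENCE provides that guarantee before the first unbalanced RET runs.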
[ bp: Massage, incorporate review comments from Andy Cooper. ] Signed-off-by: Daniel Sneddon Co-developed-by: Pawan Gupta Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Greg Kroah-Hartman --- Documentation/admin-guide/hw-vuln/spectre.rst | 8 ++ arch/x86/include/asm/cpufeatures.h | 2 + arch/x86/include/asm/msr-index.h | 4 + arch/x86/include/asm/nospec-branch.h | 17 +++- arch/x86/kernel/cpu/bugs.c | 86 ++++++++++++++----- arch/x86/kernel/cpu/common.c | 12 ++- arch/x86/kvm/vmx/vmenter.S | 8 +- tools/arch/x86/include/asm/cpufeatures.h | 1 + tools/arch/x86/include/asm/msr-index.h | 4 + 9 files changed, 113 insertions(+), 29 deletions(-) diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst index 6bd97cd50d62..7e061ed449aa 100644 --- a/Documentation/admin-guide/hw-vuln/spectre.rst +++ b/Documentation/admin-guide/hw-vuln/spectre.rst @@ -422,6 +422,14 @@ The possible values in this file are: 'RSB filling' Protection of RSB on context switch enabled ============= =========================================== + - EIBRS Post-barrier Return Stack Buffer (PBRSB) protection status: + + =========================== ======================================================= + 'PBRSB-eIBRS: SW sequence' CPU is affected and protection of RSB on VMEXIT enabled + 'PBRSB-eIBRS: Vulnerable' CPU is vulnerable + 'PBRSB-eIBRS: Not affected' CPU is not affected by PBRSB + =========================== ======================================================= + Full mitigation might require a microcode update from the CPU vendor. When the necessary microcode is not available, the kernel will report vulnerability. diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 2a51ee2f5a0f..37ba0cdf99aa 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -299,6 +299,7 @@ #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */ #define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */ #define X86_FEATURE_USE_IBPB_FW (11*32+16) /* "" Use IBPB during runtime firmware calls */ +#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM exit when EIBRS is enabled */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ @@ -429,5 +430,6 @@ #define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */ #define X86_BUG_MMIO_STALE_DATA X86_BUG(25) /* CPU is affected by Processor MMIO Stale Data vulnerabilities */ #define X86_BUG_RETBLEED X86_BUG(26) /* CPU is affected by RETBleed */ +#define X86_BUG_EIBRS_PBRSB X86_BUG(27) /* EIBRS is vulnerable to Post Barrier RSB Predictions */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 407de670cd60..144dc164b759 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -148,6 +148,10 @@ * are restricted to targets in * kernel. */ +#define ARCH_CAP_PBRSB_NO BIT(24) /* + * Not susceptible to Post-Barrier + * Return Stack Buffer Predictions. 
+ */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index c3e8e50633ea..8ff940058a41 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -118,13 +118,28 @@ #endif .endm +.macro ISSUE_UNBALANCED_RET_GUARD + ANNOTATE_INTRA_FUNCTION_CALL + call .Lunbalanced_ret_guard_\@ + int3 +.Lunbalanced_ret_guard_\@: + add $(BITS_PER_LONG/8), %_ASM_SP + lfence +.endm + /* * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP * monstrosity above, manually. */ -.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req +.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req ftr2 +.ifb \ftr2 ALTERNATIVE "jmp .Lskip_rsb_\@", "", \ftr +.else + ALTERNATIVE_2 "jmp .Lskip_rsb_\@", "", \ftr, "jmp .Lunbalanced_\@", \ftr2 +.endif __FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP) +.Lunbalanced_\@: + ISSUE_UNBALANCED_RET_GUARD .Lskip_rsb_\@: .endm diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 2e5762faf774..859a3f59526c 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -1291,6 +1291,53 @@ static void __init spec_ctrl_disable_kernel_rrsba(void) } } +static void __init spectre_v2_determine_rsb_fill_type_at_vmexit(enum spectre_v2_mitigation mode) +{ + /* + * Similar to context switches, there are two types of RSB attacks + * after VM exit: + * + * 1) RSB underflow + * + * 2) Poisoned RSB entry + * + * When retpoline is enabled, both are mitigated by filling/clearing + * the RSB. + * + * When IBRS is enabled, while #1 would be mitigated by the IBRS branch + * prediction isolation protections, RSB still needs to be cleared + * because of #2. Note that SMEP provides no protection here, unlike + * user-space-poisoned RSB entries. + * + * eIBRS should protect against RSB poisoning, but if the EIBRS_PBRSB + * bug is present then a LITE version of RSB protection is required, + * just a single call needs to retire before a RET is executed. + */ + switch (mode) { + case SPECTRE_V2_NONE: + return; + + case SPECTRE_V2_EIBRS_LFENCE: + case SPECTRE_V2_EIBRS: + if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) { + setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT_LITE); + pr_info("Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT\n"); + } + return; + + case SPECTRE_V2_EIBRS_RETPOLINE: + case SPECTRE_V2_RETPOLINE: + case SPECTRE_V2_LFENCE: + case SPECTRE_V2_IBRS: + setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT); + pr_info("Spectre v2 / SpectreRSB : Filling RSB on VMEXIT\n"); + return; + } + + pr_warn_once("Unknown Spectre v2 mode, disabling RSB mitigation at VM exit"); + dump_stack(); +} + static void __init spectre_v2_select_mitigation(void) { enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline(); @@ -1441,28 +1488,7 @@ static void __init spectre_v2_select_mitigation(void) setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW); pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n"); - /* - * Similar to context switches, there are two types of RSB attacks - * after vmexit: - * - * 1) RSB underflow - * - * 2) Poisoned RSB entry - * - * When retpoline is enabled, both are mitigated by filling/clearing - * the RSB. - * - * When IBRS is enabled, while #1 would be mitigated by the IBRS branch - * prediction isolation protections, RSB still needs to be cleared - * because of #2. Note that SMEP provides no protection here, unlike - * user-space-poisoned RSB entries. 
- * - * eIBRS, on the other hand, has RSB-poisoning protections, so it - * doesn't need RSB clearing after vmexit. - */ - if (boot_cpu_has(X86_FEATURE_RETPOLINE) || - boot_cpu_has(X86_FEATURE_KERNEL_IBRS)) - setup_force_cpu_cap(X86_FEATURE_RSB_VMEXIT); + spectre_v2_determine_rsb_fill_type_at_vmexit(mode); /* * Retpoline protects the kernel, but doesn't protect firmware. IBRS @@ -2215,6 +2241,19 @@ static char *ibpb_state(void) return ""; } +static char *pbrsb_eibrs_state(void) +{ + if (boot_cpu_has_bug(X86_BUG_EIBRS_PBRSB)) { + if (boot_cpu_has(X86_FEATURE_RSB_VMEXIT_LITE) || + boot_cpu_has(X86_FEATURE_RSB_VMEXIT)) + return ", PBRSB-eIBRS: SW sequence"; + else + return ", PBRSB-eIBRS: Vulnerable"; + } else { + return ", PBRSB-eIBRS: Not affected"; + } +} + static ssize_t spectre_v2_show_state(char *buf) { if (spectre_v2_enabled == SPECTRE_V2_LFENCE) @@ -2227,12 +2266,13 @@ static ssize_t spectre_v2_show_state(char *buf) spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE) return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n"); - return sprintf(buf, "%s%s%s%s%s%s\n", + return sprintf(buf, "%s%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], ibpb_state(), boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "", stibp_state(), boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "", + pbrsb_eibrs_state(), spectre_v2_module_string()); } diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 901352bd3b42..9fc91482e85e 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -1024,6 +1024,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c) #define NO_SWAPGS BIT(6) #define NO_ITLB_MULTIHIT BIT(7) #define NO_SPECTRE_V2 BIT(8) +#define NO_EIBRS_PBRSB BIT(9) #define VULNWL(vendor, family, model, whitelist) \ X86_MATCH_VENDOR_FAM_MODEL(vendor, family, model, whitelist) @@ -1064,7 +1065,7 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT), VULNWL_INTEL(ATOM_GOLDMONT_D, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT), - VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB), /* * Technically, swapgs isn't serializing on AMD (despite it previously @@ -1074,7 +1075,9 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { * good enough for our purposes. */ - VULNWL_INTEL(ATOM_TREMONT_D, NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_TREMONT, NO_EIBRS_PBRSB), + VULNWL_INTEL(ATOM_TREMONT_L, NO_EIBRS_PBRSB), + VULNWL_INTEL(ATOM_TREMONT_D, NO_ITLB_MULTIHIT | NO_EIBRS_PBRSB), /* AMD Family 0xf - 0x12 */ VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT), @@ -1252,6 +1255,11 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) setup_force_cpu_bug(X86_BUG_RETBLEED); } + if (cpu_has(c, X86_FEATURE_IBRS_ENHANCED) && + !cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) && + !(ia32_cap & ARCH_CAP_PBRSB_NO)) + setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB); + if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN)) return; diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S index 857fa0fc49fa..982138bebb70 100644 --- a/arch/x86/kvm/vmx/vmenter.S +++ b/arch/x86/kvm/vmx/vmenter.S @@ -197,11 +197,13 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL) * entries and (in some cases) RSB underflow. 
* * eIBRS has its own protection against poisoned RSB, so it doesn't - * need the RSB filling sequence. But it does need to be enabled - * before the first unbalanced RET. + * need the RSB filling sequence. But it does need to be enabled, and a + * single call to retire, before the first unbalanced RET. */ - FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT + FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\ + X86_FEATURE_RSB_VMEXIT_LITE + pop %_ASM_ARG2 /* @flags */ pop %_ASM_ARG1 /* @vmx */ diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h index 54ba20492ad1..ec53f52a06a5 100644 --- a/tools/arch/x86/include/asm/cpufeatures.h +++ b/tools/arch/x86/include/asm/cpufeatures.h @@ -296,6 +296,7 @@ #define X86_FEATURE_RETPOLINE_LFENCE (11*32+13) /* "" Use LFENCE for Spectre variant 2 */ #define X86_FEATURE_RETHUNK (11*32+14) /* "" Use REturn THUNK */ #define X86_FEATURE_UNRET (11*32+15) /* "" AMD BTB untrain return */ +#define X86_FEATURE_RSB_VMEXIT_LITE (11*32+17) /* "" Fill RSB on VM-Exit when EIBRS is enabled */ /* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */ #define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */ diff --git a/tools/arch/x86/include/asm/msr-index.h b/tools/arch/x86/include/asm/msr-index.h index 407de670cd60..144dc164b759 100644 --- a/tools/arch/x86/include/asm/msr-index.h +++ b/tools/arch/x86/include/asm/msr-index.h @@ -148,6 +148,10 @@ * are restricted to targets in * kernel. */ +#define ARCH_CAP_PBRSB_NO BIT(24) /* + * Not susceptible to Post-Barrier + * Return Stack Buffer Predictions. + */ #define MSR_IA32_FLUSH_CMD 0x0000010b #define L1D_FLUSH BIT(0) /* From 1bea03b44ea2267988cce064f5887b01d421b28c Mon Sep 17 00:00:00 2001 From: Pawan Gupta Date: Tue, 2 Aug 2022 15:47:02 -0700 Subject: [PATCH 1787/1842] x86/speculation: Add LFENCE to RSB fill sequence commit ba6e31af2be96c4d0536f2152ed6f7b6c11bca47 upstream. RSB fill sequence does not have any protection for miss-prediction of conditional branch at the end of the sequence. CPU can speculatively execute code immediately after the sequence, while RSB filling hasn't completed yet. #define __FILL_RETURN_BUFFER(reg, nr, sp) \ mov $(nr/2), reg; \ 771: \ ANNOTATE_INTRA_FUNCTION_CALL; \ call 772f; \ 773: /* speculation trap */ \ UNWIND_HINT_EMPTY; \ pause; \ lfence; \ jmp 773b; \ 772: \ ANNOTATE_INTRA_FUNCTION_CALL; \ call 774f; \ 775: /* speculation trap */ \ UNWIND_HINT_EMPTY; \ pause; \ lfence; \ jmp 775b; \ 774: \ add $(BITS_PER_LONG/8) * 2, sp; \ dec reg; \ jnz 771b; <----- CPU can miss-predict here. Before RSB is filled, RETs that come in program order after this macro can be executed speculatively, making them vulnerable to RSB-based attacks. Mitigate it by adding an LFENCE after the conditional branch to prevent speculation while RSB is being filled. 
Suggested-by: Andrew Cooper Signed-off-by: Pawan Gupta Signed-off-by: Borislav Petkov Signed-off-by: Daniel Sneddon Signed-off-by: Greg Kroah-Hartman --- arch/x86/include/asm/nospec-branch.h | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index 8ff940058a41..0acd99329923 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -60,7 +60,9 @@ 774: \ add $(BITS_PER_LONG/8) * 2, sp; \ dec reg; \ - jnz 771b; + jnz 771b; \ + /* barrier for jnz misprediction */ \ + lfence; #ifdef __ASSEMBLY__ From 6eae1503ddf94b4c3581092d566b17ed12d80f20 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Thu, 11 Aug 2022 13:06:47 +0200 Subject: [PATCH 1788/1842] Linux 5.10.136 Link: https://lore.kernel.org/r/20220809175512.853274191@linuxfoundation.org Tested-by: Florian Fainelli Tested-by: Linux Kernel Functional Testing Tested-by: Pavel Machek (CIP) Tested-by: Rudi Heitbaum Tested-by: Salvatore Bonaccorso Tested-by: Sudip Mukherjee Tested-by: Guenter Roeck Tested-by: Jon Hunter Tested-by: Shuah Khan Signed-off-by: Greg Kroah-Hartman --- Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 5f4dbcb43307..1730698124c7 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 10 -SUBLEVEL = 135 +SUBLEVEL = 136 EXTRAVERSION = NAME = Dare mighty things From 7351343bc8d38d25d082f7eb157fa965e8eec3ee Mon Sep 17 00:00:00 2001 From: Jiyoung Jeong Date: Wed, 7 Sep 2022 07:36:35 +0900 Subject: [PATCH 1789/1842] ANDROID: GKI: Update symbol list for Exynos SoC Leaf changes summary: 3 artifacts changed Changed leaf types summary: 0 leaf type changed Removed/Changed/Added functions summary: 0 Removed, 0 Changed, 3 Added functions Removed/Changed/Added variables summary: 0 Removed, 0 Changed, 0 Added variable 3 Added functions: [A] 'function void __page_frag_cache_drain(page*, unsigned int)' [A] 'function void* page_frag_alloc(page_frag_cache*, unsigned int, gfp_t)' [A] 'function void page_frag_free(void*)' Bug: 245485515 Signed-off-by: Jiyoung Jeong Change-Id: Idd4f42a38c3b36d7e59d5cbc59cfd996891e2530 --- android/abi_gki_aarch64.xml | 486 ++++++++++++++++++--------------- android/abi_gki_aarch64_exynos | 64 +++++ 2 files changed, 325 insertions(+), 225 deletions(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index 098fda915835..8584a30036bd 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -209,6 +209,7 @@ + @@ -3873,6 +3874,8 @@ + + @@ -26819,6 +26822,7 @@ + @@ -97536,6 +97540,23 @@ + + + + + + + + + + + + + + + + + @@ -115298,11 +115319,11 @@ - - - - - + + + + + @@ -115932,9 +115953,9 @@ - - - + + + @@ -115987,9 +116008,9 @@ - - - + + + @@ -116500,12 +116521,17 @@ + + + + + - - + + @@ -117600,10 +117626,10 @@ - - - - + + + + @@ -117658,17 +117684,17 @@ - - - - + + + + - - - - - + + + + + @@ -117955,27 +117981,27 @@ - + + + + + + - + + + - - - - - - - - - - - + + + + @@ -118085,12 +118111,12 @@ - - - - - - + + + + + + @@ -118165,17 +118191,17 @@ - - - - + + + + - - - - - + + + + + @@ -118283,12 +118309,12 @@ - - - - - - + + + + + + @@ -118307,18 +118333,18 @@ - - - - + + + + - - - - - - + + + + + + @@ -118328,10 +118354,10 @@ - - - - + + + + @@ -118476,10 +118502,10 @@ - - - - + + + + @@ -118556,10 +118582,10 @@ - - - - + + + + @@ -118570,11 +118596,11 @@ - - - - - + + + + + @@ -118637,33 +118663,33 @@ - - - - + + + + - - - - - - - 
- - - - - - + - - - + + + + + + + + + + + + + + + @@ -118854,9 +118880,9 @@ - - - + + + @@ -118929,15 +118955,15 @@ - - - - + + + + - - - + + + @@ -118958,15 +118984,15 @@ - - - + + + - - - - + + + + @@ -119126,10 +119152,10 @@ - - - - + + + + @@ -119183,9 +119209,9 @@ - - - + + + @@ -119692,7 +119718,7 @@ - + @@ -119700,8 +119726,8 @@ - - + + @@ -119744,10 +119770,10 @@ - - - - + + + + @@ -119765,7 +119791,7 @@ - + @@ -119778,8 +119804,8 @@ - - + + @@ -119797,15 +119823,15 @@ - + - + - + - + @@ -119830,7 +119856,7 @@ - + @@ -119844,9 +119870,9 @@ - + - + @@ -119859,11 +119885,11 @@ - - - - - + + + + + @@ -119904,7 +119930,7 @@ - + @@ -119917,14 +119943,14 @@ - - + + - - + + @@ -119951,7 +119977,7 @@ - + @@ -119960,7 +119986,7 @@ - + @@ -120527,9 +120553,9 @@ - - - + + + @@ -120653,9 +120679,9 @@ - - - + + + @@ -129807,14 +129833,14 @@ - - - + + + - - - + + + @@ -130762,8 +130788,8 @@ - - + + @@ -133303,9 +133329,9 @@ - - - + + + @@ -133400,10 +133426,10 @@ - - - - + + + + @@ -134079,8 +134105,8 @@ - - + + @@ -135190,16 +135216,16 @@ - - + + - - + + - - + + @@ -136557,10 +136583,10 @@ - - - - + + + + @@ -136569,6 +136595,16 @@ + + + + + + + + + + @@ -139034,8 +139070,8 @@ - - + + @@ -139666,10 +139702,10 @@ - - - - + + + + @@ -140988,16 +141024,16 @@ - - - - + + + + - - - - + + + + @@ -141524,21 +141560,21 @@ - + - - + + - - - - + + + + @@ -144478,11 +144514,11 @@ - - - - - + + + + + @@ -148143,14 +148179,14 @@ - - - + + + - - - + + + diff --git a/android/abi_gki_aarch64_exynos b/android/abi_gki_aarch64_exynos index 777a6434355e..0b37ab51884c 100644 --- a/android/abi_gki_aarch64_exynos +++ b/android/abi_gki_aarch64_exynos @@ -144,6 +144,7 @@ clk_set_rate clk_unprepare clockevents_config_and_register + clocks_calc_mult_shift __clocksource_register_scale __close_fd cma_alloc @@ -198,6 +199,8 @@ cpufreq_table_index_unsorted cpufreq_this_cpu_can_update cpufreq_unregister_notifier + cpu_hotplug_disable + cpu_hotplug_enable __cpuhp_remove_state __cpuhp_setup_state __cpuhp_setup_state_cpuslocked @@ -235,6 +238,7 @@ crypto_shash_update crypto_unregister_alg crypto_unregister_scomp + csum_ipv6_magic csum_partial csum_tcpudp_nofold _ctype @@ -385,6 +389,7 @@ dev_pm_opp_add dev_pm_opp_disable dev_pm_opp_find_freq_ceil + dev_pm_opp_find_freq_ceil_by_volt dev_pm_opp_find_freq_exact dev_pm_opp_find_freq_floor dev_pm_opp_get_freq @@ -428,6 +433,7 @@ dma_buf_export dma_buf_fd dma_buf_get + dma_buf_get_flags dma_buf_map_attachment dma_buf_mmap dma_buf_move_notify @@ -509,6 +515,7 @@ drm_atomic_add_affected_connectors drm_atomic_add_affected_planes drm_atomic_commit + drm_atomic_get_connector_state drm_atomic_get_crtc_state drm_atomic_get_plane_state drm_atomic_get_private_obj_state @@ -552,6 +559,7 @@ drm_atomic_set_fb_for_plane drm_atomic_set_mode_for_crtc drm_atomic_state_alloc + drm_atomic_state_clear __drm_atomic_state_free drm_bridge_add drm_bridge_attach @@ -584,6 +592,7 @@ drm_crtc_vblank_off drm_crtc_vblank_on drm_crtc_vblank_put + drm_crtc_wait_one_vblank drm_cvt_mode __drm_dbg __drm_debug @@ -745,6 +754,7 @@ drm_vma_node_allow drm_vma_node_is_allowed drm_vma_node_revoke + drm_wait_one_vblank drm_writeback_connector_init drm_writeback_queue_job drm_writeback_signal_completion @@ -824,6 +834,7 @@ __get_free_pages get_net_ns_by_fd get_net_ns_by_pid + get_options get_random_bytes get_random_u32 __get_task_comm @@ -966,6 +977,7 @@ input_set_capability input_unregister_device input_unregister_handle + int_pow int_sqrt iomem_resource iommu_alloc_resv_region @@ -975,6 +987,7 @@ iommu_device_sysfs_remove 
iommu_device_unlink iommu_device_unregister + iommu_dma_enable_best_fit_algo iommu_dma_reserve_iova iommu_domain_alloc iommu_fwspec_add_ids @@ -1009,6 +1022,7 @@ irq_domain_set_info irq_domain_xlate_onetwocell irq_domain_xlate_twocell + irq_do_set_affinity irq_find_mapping irq_get_irqchip_state irq_get_irq_data @@ -1038,6 +1052,7 @@ kasan_flag_enabled kasprintf kernel_kobj + kernfs_path_from_node kern_mount kern_unmount key_create_or_update @@ -1074,6 +1089,8 @@ kobject_uevent kobject_uevent_env krealloc + kset_create_and_add + kset_unregister kstat kstrdup kstrndup @@ -1091,6 +1108,7 @@ kstrtoull_from_user ksys_sync_helper kthread_bind + kthread_bind_mask kthread_cancel_delayed_work_sync kthread_cancel_work_sync kthread_create_on_node @@ -1120,6 +1138,8 @@ kvfree kvfree_call_rcu kvmalloc_node + led_classdev_register_ext + led_classdev_unregister __list_add_valid __list_del_entry_valid list_sort @@ -1144,6 +1164,7 @@ __memcpy_toio memdup_user memmove + memory_read_from_buffer memparse memremap memset @@ -1159,6 +1180,7 @@ mii_nway_restart mipi_dsi_attach mipi_dsi_compression_mode + mipi_dsi_dcs_get_display_brightness mipi_dsi_dcs_read mipi_dsi_dcs_set_column_address mipi_dsi_dcs_set_display_brightness @@ -1316,6 +1338,7 @@ of_property_read_string_helper of_property_read_u32_index of_property_read_u64 + of_property_read_u64_index of_property_read_variable_u16_array of_property_read_variable_u32_array of_property_read_variable_u8_array @@ -1331,6 +1354,9 @@ oops_in_progress orderly_poweroff page_endio + page_frag_alloc + __page_frag_cache_drain + page_frag_free page_mapping __page_pinner_migration_failed panic @@ -1378,9 +1404,11 @@ pci_read_config_dword pci_read_config_word __pci_register_driver + pci_release_regions pci_release_resource pci_rescan_bus pci_resize_resource + pci_restore_msi_state pci_restore_state pci_save_state pci_set_master @@ -1388,6 +1416,7 @@ pci_store_saved_state pci_unmap_rom pci_unregister_driver + pci_wake_from_d3 pci_write_config_dword pci_write_config_word PDE_DATA @@ -1418,6 +1447,7 @@ pinctrl_utils_free_map pin_get_name pin_user_pages + pin_user_pages_fast pin_user_pages_remote platform_bus_type platform_device_add @@ -1506,6 +1536,8 @@ pwm_set_chip_data queue_delayed_work_on queue_work_on + radix_tree_delete + radix_tree_lookup radix_tree_tagged ___ratelimit raw_notifier_call_chain @@ -1647,11 +1679,13 @@ sched_set_normal sched_setscheduler sched_setscheduler_nocheck + sched_uclamp_used schedule schedule_timeout schedule_timeout_interruptible scnprintf scsi_block_when_processing_errors + scsi_dma_unmap scsi_eh_ready_devs __scsi_execute scsi_print_sense_hdr @@ -1712,6 +1746,7 @@ skb_realloc_headroom skb_trim smp_call_function + smp_call_function_any smp_call_function_many smp_call_function_single smp_call_on_cpu @@ -1783,6 +1818,7 @@ snd_soc_info_volsw_range snd_soc_info_volsw_sx snd_soc_info_xr_sx + snd_soc_lookup_component snd_soc_new_compress snd_soc_of_get_dai_link_codecs snd_soc_of_get_dai_name @@ -1832,6 +1868,7 @@ __stack_chk_guard static_key_slow_dec static_key_slow_inc + stop_machine stop_one_cpu_nowait stpcpy strcasecmp @@ -1903,7 +1940,15 @@ tasklet_init tasklet_kill __tasklet_schedule + tasklet_setup task_rq_lock + tcp_register_congestion_control + tcp_reno_cong_avoid + tcp_reno_ssthresh + tcp_reno_undo_cwnd + tcp_slow_start + tcp_unregister_congestion_control + thermal_cdev_update thermal_cooling_device_unregister thermal_of_cooling_device_register thermal_zone_device_disable @@ -1925,11 +1970,13 @@ trace_event_reg trace_handle_return 
__traceiter_android_rvh_can_migrate_task + __traceiter_android_rvh_cpu_cgroup_attach __traceiter_android_rvh_cpu_cgroup_can_attach __traceiter_android_rvh_dequeue_task __traceiter_android_rvh_enqueue_task __traceiter_android_rvh_find_lowest_rq __traceiter_android_rvh_find_new_ilb + __traceiter_android_rvh_flush_task __traceiter_android_rvh_gic_v3_set_affinity __traceiter_android_rvh_post_init_entity_util_avg __traceiter_android_rvh_replace_next_task_fair @@ -1937,18 +1984,24 @@ __traceiter_android_rvh_sched_newidle_balance __traceiter_android_rvh_sched_nohz_balancer_kick __traceiter_android_rvh_sched_rebalance_domains + __traceiter_android_rvh_schedule __traceiter_android_rvh_select_fallback_rq __traceiter_android_rvh_select_task_rq_fair __traceiter_android_rvh_select_task_rq_rt + __traceiter_android_rvh_wake_up_new_task + __traceiter_android_vh_cgroup_attach __traceiter_android_vh_cpu_idle_enter __traceiter_android_vh_cpu_idle_exit __traceiter_android_vh_do_wake_up_sync __traceiter_android_vh_ipi_stop + __traceiter_android_vh_is_fpsimd_save __traceiter_android_vh_scheduler_tick __traceiter_android_vh_set_wake_flags __traceiter_android_vh_show_mem + __traceiter_android_vh_ufs_check_int_errors __traceiter_android_vh_ufs_compl_command __traceiter_android_vh_ufs_prepare_command + __traceiter_android_vh_ufs_send_tm_command __traceiter_cpu_idle __traceiter_device_pm_callback_end __traceiter_device_pm_callback_start @@ -1976,11 +2029,13 @@ __traceiter_workqueue_execute_start trace_output_call __tracepoint_android_rvh_can_migrate_task + __tracepoint_android_rvh_cpu_cgroup_attach __tracepoint_android_rvh_cpu_cgroup_can_attach __tracepoint_android_rvh_dequeue_task __tracepoint_android_rvh_enqueue_task __tracepoint_android_rvh_find_lowest_rq __tracepoint_android_rvh_find_new_ilb + __tracepoint_android_rvh_flush_task __tracepoint_android_rvh_gic_v3_set_affinity __tracepoint_android_rvh_post_init_entity_util_avg __tracepoint_android_rvh_replace_next_task_fair @@ -1988,18 +2043,24 @@ __tracepoint_android_rvh_sched_newidle_balance __tracepoint_android_rvh_sched_nohz_balancer_kick __tracepoint_android_rvh_sched_rebalance_domains + __tracepoint_android_rvh_schedule __tracepoint_android_rvh_select_fallback_rq __tracepoint_android_rvh_select_task_rq_fair __tracepoint_android_rvh_select_task_rq_rt + __tracepoint_android_rvh_wake_up_new_task + __tracepoint_android_vh_cgroup_attach __tracepoint_android_vh_cpu_idle_enter __tracepoint_android_vh_cpu_idle_exit __tracepoint_android_vh_do_wake_up_sync __tracepoint_android_vh_ipi_stop + __tracepoint_android_vh_is_fpsimd_save __tracepoint_android_vh_scheduler_tick __tracepoint_android_vh_set_wake_flags __tracepoint_android_vh_show_mem + __tracepoint_android_vh_ufs_check_int_errors __tracepoint_android_vh_ufs_compl_command __tracepoint_android_vh_ufs_prepare_command + __tracepoint_android_vh_ufs_send_tm_command __tracepoint_cpu_idle __tracepoint_device_pm_callback_end __tracepoint_device_pm_callback_start @@ -2129,6 +2190,7 @@ unregister_shrinker up update_devfreq + update_rq_clock up_read up_write usb_add_function @@ -2301,6 +2363,7 @@ wakeup_source_add wakeup_source_destroy wakeup_source_register + wakeup_source_remove wakeup_source_unregister __wake_up_sync __wake_up_sync_key @@ -2322,6 +2385,7 @@ xhci_gen_setup xhci_get_endpoint_index xhci_get_ep_ctx + xhci_get_slot_ctx xhci_init_driver xhci_initialize_ring_info xhci_link_segments From 8aaba3c5a1d2445f690c87f0b73ae15f7ffb14d2 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Tue, 19 Jul 2022 11:09:01 -0700 
Subject: [PATCH 1790/1842] BACKPORT: watchqueue: make sure to serialize 'wqueue->defunct' properly commit 353f7988dd8413c47718f7ca79c030b6fb62cfe5 upstream. When the pipe is closed, we mark the associated watchqueue defunct by calling watch_queue_clear(). However, while that is protected by the watchqueue lock, new watchqueue entries aren't actually added under that lock at all: they use the pipe->rd_wait.lock instead, and looking up that pipe happens without any locking. The watchqueue code uses the RCU read-side section to make sure that the wqueue entry itself hasn't disappeared, but that does not protect the pipe_info in any way. So make sure to actually hold the wqueue lock when posting watch events, properly serializing against the pipe being torn down. Bug: 235277737 Reported-by: Noam Rathaus Cc: Greg KH Cc: David Howells Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: I42b0d56021be1d8950c3642ae0acc5cdccadb394 --- kernel/watch_queue.c | 53 +++++++++++++++++++++++++++++++------------- 1 file changed, 37 insertions(+), 16 deletions(-) diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c index 249ed3259144..dcf1e676797d 100644 --- a/kernel/watch_queue.c +++ b/kernel/watch_queue.c @@ -34,6 +34,27 @@ MODULE_LICENSE("GPL"); #define WATCH_QUEUE_NOTE_SIZE 128 #define WATCH_QUEUE_NOTES_PER_PAGE (PAGE_SIZE / WATCH_QUEUE_NOTE_SIZE) +/* + * This must be called under the RCU read-lock, which makes + * sure that the wqueue still exists. It can then take the lock, + * and check that the wqueue hasn't been destroyed, which in + * turn makes sure that the notification pipe still exists. + */ +static inline bool lock_wqueue(struct watch_queue *wqueue) +{ + spin_lock_bh(&wqueue->lock); + if (unlikely(wqueue->defunct)) { + spin_unlock_bh(&wqueue->lock); + return false; + } + return true; +} + +static inline void unlock_wqueue(struct watch_queue *wqueue) +{ + spin_unlock_bh(&wqueue->lock); +} + static void watch_queue_pipe_buf_release(struct pipe_inode_info *pipe, struct pipe_buffer *buf) { @@ -69,6 +90,10 @@ static const struct pipe_buf_operations watch_queue_pipe_buf_ops = { /* * Post a notification to a watch queue. + * + * Must be called with the RCU lock for reading, and the + * watch_queue lock held, which guarantees that the pipe + * hasn't been released. 
*/ static bool post_one_notification(struct watch_queue *wqueue, struct watch_notification *n) @@ -85,9 +110,6 @@ static bool post_one_notification(struct watch_queue *wqueue, spin_lock_irq(&pipe->rd_wait.lock); - if (wqueue->defunct) - goto out; - mask = pipe->ring_size - 1; head = pipe->head; tail = pipe->tail; @@ -203,7 +225,10 @@ void __post_watch_notification(struct watch_list *wlist, if (security_post_notification(watch->cred, cred, n) < 0) continue; - post_one_notification(wqueue, n); + if (lock_wqueue(wqueue)) { + post_one_notification(wqueue, n); + unlock_wqueue(wqueue);; + } } rcu_read_unlock(); @@ -465,11 +490,12 @@ int add_watch_to_object(struct watch *watch, struct watch_list *wlist) return -EAGAIN; } - spin_lock_bh(&wqueue->lock); - kref_get(&wqueue->usage); - kref_get(&watch->usage); - hlist_add_head(&watch->queue_node, &wqueue->watches); - spin_unlock_bh(&wqueue->lock); + if (lock_wqueue(wqueue)) { + kref_get(&wqueue->usage); + kref_get(&watch->usage); + hlist_add_head(&watch->queue_node, &wqueue->watches); + unlock_wqueue(wqueue); + } hlist_add_head(&watch->list_node, &wlist->watchers); return 0; @@ -523,20 +549,15 @@ int remove_watch_from_object(struct watch_list *wlist, struct watch_queue *wq, wqueue = rcu_dereference(watch->queue); - /* We don't need the watch list lock for the next bit as RCU is - * protecting *wqueue from deallocation. - */ - if (wqueue) { + if (lock_wqueue(wqueue)) { post_one_notification(wqueue, &n.watch); - spin_lock_bh(&wqueue->lock); - if (!hlist_unhashed(&watch->queue_node)) { hlist_del_init_rcu(&watch->queue_node); put_watch(watch); } - spin_unlock_bh(&wqueue->lock); + unlock_wqueue(wqueue); } if (wlist->release_watch) { From cee231f83ba2858cee9bf37c06d07cc4cecc6624 Mon Sep 17 00:00:00 2001 From: Peifeng Li Date: Sat, 10 Sep 2022 10:16:31 +0800 Subject: [PATCH 1791/1842] ANDROID: GKI: add symbols in android/abi_gki_aarch64_oplus - __traceiter_android_rvh_check_preempt_tick - __tracepoint_android_rvh_check_preempt_tick Bug: 241191475 Signed-off-by: Peifeng Li Change-Id: Iafc8f210047aa82f56fc90e45678c25d80d4e548 --- android/abi_gki_aarch64_oplus | 2 ++ 1 file changed, 2 insertions(+) diff --git a/android/abi_gki_aarch64_oplus b/android/abi_gki_aarch64_oplus index a8d958385dfc..fc17ce17fc3a 100644 --- a/android/abi_gki_aarch64_oplus +++ b/android/abi_gki_aarch64_oplus @@ -2684,6 +2684,7 @@ __traceiter_android_rvh_build_perf_domains __traceiter_android_rvh_can_migrate_task __traceiter_android_rvh_check_preempt_wakeup + __traceiter_android_rvh_check_preempt_tick __traceiter_android_rvh_cpu_cgroup_attach __traceiter_android_rvh_cpu_cgroup_online __traceiter_android_rvh_cpu_overutilized @@ -2923,6 +2924,7 @@ __tracepoint_android_rvh_build_perf_domains __tracepoint_android_rvh_can_migrate_task __tracepoint_android_rvh_check_preempt_wakeup + __tracepoint_android_rvh_check_preempt_tick __tracepoint_android_rvh_cpu_cgroup_attach __tracepoint_android_rvh_cpu_cgroup_online __tracepoint_android_rvh_cpu_overutilized From d7586fa2096466951b12adf157d1305310eb5bb7 Mon Sep 17 00:00:00 2001 From: Sean Christopherson Date: Fri, 11 Mar 2022 03:27:41 +0000 Subject: [PATCH 1792/1842] BACKPORT: KVM: x86: avoid calling x86 emulator without a decoded instruction commit fee060cd52d69c114b62d1a2948ea9648b5131f9 upstream. Whenever x86_decode_emulated_instruction() detects a breakpoint, it returns the value that kvm_vcpu_check_breakpoint() writes into its pass-by-reference second argument. 
Unfortunately this is completely bogus because the expected outcome of x86_decode_emulated_instruction is an EMULATION_* value. Then, if kvm_vcpu_check_breakpoint() does "*r = 0" (corresponding to a KVM_EXIT_DEBUG userspace exit), it is misunderstood as EMULATION_OK and x86_emulate_instruction() is called without having decoded the instruction. This causes various havoc from running with a stale emulation context. The fix is to move the call to kvm_vcpu_check_breakpoint() where it was before commit 4aa2691dcbd3 ("KVM: x86: Factor out x86 instruction emulation with decoding") introduced x86_decode_emulated_instruction(). The other caller of the function does not need breakpoint checks, because it is invoked as part of a vmexit and the processor has already checked those before executing the instruction that #GP'd. This fixes CVE-2022-1852. Bug: 235183128 Reported-by: Qiuhao Li Reported-by: Gaoning Pan Reported-by: Yongkang Jia Fixes: 4aa2691dcbd3 ("KVM: x86: Factor out x86 instruction emulation with decoding") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson Message-Id: <20220311032801.3467418-2-seanjc@google.com> [Rewrote commit message according to Qiuhao's report, since a patch already existed to fix the bug. - Paolo] Signed-off-by: Paolo Bonzini Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: I3acbb7fc23566c4108f15960c420384af52c2703 --- arch/x86/kvm/x86.c | 31 +++++++++++++++++++------------ 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 4588f73bf59a..1a7c3405773e 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7295,7 +7295,7 @@ int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu) } EXPORT_SYMBOL_GPL(kvm_skip_emulated_instruction); -static bool kvm_vcpu_check_breakpoint(struct kvm_vcpu *vcpu, int *r) +static bool kvm_vcpu_check_code_breakpoint(struct kvm_vcpu *vcpu, int *r) { if (unlikely(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) && (vcpu->arch.guest_debug_dr7 & DR7_BP_EN_MASK)) { @@ -7364,25 +7364,23 @@ static bool is_vmware_backdoor_opcode(struct x86_emulate_ctxt *ctxt) } /* - * Decode to be emulated instruction. Return EMULATION_OK if success. + * Decode an instruction for emulation. The caller is responsible for handling + * code breakpoints. Note, manually detecting code breakpoints is unnecessary + * (and wrong) when emulating on an intercepted fault-like exception[*], as + * code breakpoints have higher priority and thus have already been done by + * hardware. + * + * [*] Except #MC, which is higher priority, but KVM should never emulate in + * response to a machine check. */ int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type, void *insn, int insn_len) { - int r = EMULATION_OK; struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt; + int r; init_emulate_ctxt(vcpu); - /* - * We will reenter on the same instruction since we do not set - * complete_userspace_io. This does not handle watchpoints yet, - * those would be handled in the emulate_ops. 
- */ - if (!(emulation_type & EMULTYPE_SKIP) && - kvm_vcpu_check_breakpoint(vcpu, &r)) - return r; - ctxt->ud = emulation_type & EMULTYPE_TRAP_UD; r = x86_decode_insn(ctxt, insn, insn_len); @@ -7417,6 +7415,15 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, if (!(emulation_type & EMULTYPE_NO_DECODE)) { kvm_clear_exception_queue(vcpu); + /* + * Return immediately if RIP hits a code breakpoint, such #DBs + * are fault-like and are higher priority than any faults on + * the code fetch itself. + */ + if (!(emulation_type & EMULTYPE_SKIP) && + kvm_vcpu_check_code_breakpoint(vcpu, &r)) + return r; + r = x86_decode_emulated_instruction(vcpu, emulation_type, insn, insn_len); if (r != EMULATION_OK) { From 2bd9e6cddc51da4da03cfe65ef239919c019c2f1 Mon Sep 17 00:00:00 2001 From: David Howells Date: Thu, 26 May 2022 07:34:52 +0100 Subject: [PATCH 1793/1842] BACKPORT: pipe: Fix missing lock in pipe_resize_ring() commit 189b0ddc245139af81198d1a3637cac74f96e13a upstream. pipe_resize_ring() needs to take the pipe->rd_wait.lock spinlock to prevent post_one_notification() from trying to insert into the ring whilst the ring is being replaced. The occupancy check must be done after the lock is taken, and the lock must be taken after the new ring is allocated. The bug can lead to an oops looking something like: BUG: KASAN: use-after-free in post_one_notification.isra.0+0x62e/0x840 Read of size 4 at addr ffff88801cc72a70 by task poc/27196 ... Call Trace: post_one_notification.isra.0+0x62e/0x840 __post_watch_notification+0x3b7/0x650 key_create_or_update+0xb8b/0xd20 __do_sys_add_key+0x175/0x340 __x64_sys_add_key+0xbe/0x140 do_syscall_64+0x5c/0xc0 entry_SYSCALL_64_after_hwframe+0x44/0xae Reported by Selim Enes Karaduman @Enesdex working with Trend Micro Zero Day Initiative. Bug: 244395411 Fixes: c73be61cede5 ("pipe: Add general notification queue support") Reported-by: zdi-disclosures@trendmicro.com # ZDI-CAN-17291 Signed-off-by: David Howells Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: I129164eb9dba557d5a4370f4eca124b9916774a6 --- fs/pipe.c | 31 ++++++++++++++++++------------- 1 file changed, 18 insertions(+), 13 deletions(-) diff --git a/fs/pipe.c b/fs/pipe.c index ea77cf5b519f..ea680351ea8d 100644 --- a/fs/pipe.c +++ b/fs/pipe.c @@ -1245,30 +1245,33 @@ unsigned int round_pipe_size(unsigned long size) /* * Resize the pipe ring to a number of slots. + * + * Note the pipe can be reduced in capacity, but only if the current + * occupancy doesn't exceed nr_slots; if it does, EBUSY will be + * returned instead. */ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots) { struct pipe_buffer *bufs; unsigned int head, tail, mask, n; - /* - * We can shrink the pipe, if arg is greater than the ring occupancy. - * Since we don't expect a lot of shrink+grow operations, just free and - * allocate again like we would do for growing. If the pipe currently - * contains more buffers than arg, then return busy. 
- */ - mask = pipe->ring_size - 1; - head = pipe->head; - tail = pipe->tail; - n = pipe_occupancy(pipe->head, pipe->tail); - if (nr_slots < n) - return -EBUSY; - bufs = kcalloc(nr_slots, sizeof(*bufs), GFP_KERNEL_ACCOUNT | __GFP_NOWARN); if (unlikely(!bufs)) return -ENOMEM; + spin_lock_irq(&pipe->rd_wait.lock); + mask = pipe->ring_size - 1; + head = pipe->head; + tail = pipe->tail; + + n = pipe_occupancy(head, tail); + if (nr_slots < n) { + spin_unlock_irq(&pipe->rd_wait.lock); + kfree(bufs); + return -EBUSY; + } + /* * The pipe array wraps around, so just start the new one at zero * and adjust the indices. @@ -1300,6 +1303,8 @@ int pipe_resize_ring(struct pipe_inode_info *pipe, unsigned int nr_slots) pipe->tail = tail; pipe->head = head; + spin_unlock_irq(&pipe->rd_wait.lock); + /* This might have made more room for writers */ wake_up_interruptible(&pipe->wr_wait); return 0; From c762f435c0e0c62bbfae478a55c89826f6a312cb Mon Sep 17 00:00:00 2001 From: Sarthak Kukreti Date: Tue, 31 May 2022 15:56:40 -0400 Subject: [PATCH 1794/1842] BACKPORT: dm verity: set DM_TARGET_IMMUTABLE feature flag commit 4caae58406f8ceb741603eee460d79bacca9b1b5 upstream. The device-mapper framework provides a mechanism to mark targets as immutable (and hence fail table reloads that try to change the target type). Add the DM_TARGET_IMMUTABLE flag to the dm-verity target's feature flags to prevent switching the verity target with a different target type. Bug: 234475629 Fixes: a4ffc152198e ("dm: add verity target") Cc: stable@vger.kernel.org Signed-off-by: Sarthak Kukreti Reviewed-by: Kees Cook Signed-off-by: Mike Snitzer Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: Iaeec7fa3be98a646062439e4551f84242dacfb45 --- drivers/md/dm-verity-target.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 9d5c6dd5b756..2c355b58d078 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -1252,6 +1252,7 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv) static struct target_type verity_target = { .name = "verity", .version = {1, 7, 0}, + .features = DM_TARGET_IMMUTABLE, .module = THIS_MODULE, .ctr = verity_ctr, .dtr = verity_dtr, From f50f24e781738c8e5aa9f285d8726202f33107d6 Mon Sep 17 00:00:00 2001 From: Peifeng Li Date: Tue, 13 Sep 2022 19:07:41 +0800 Subject: [PATCH 1795/1842] ANDROID: vendor_hooks: Add hooks for lookaround Add hooks for support lookaround in memory reclamation. 
- android_vh_test_clear_look_around_ref - android_vh_check_page_look_around_ref - android_vh_look_around_migrate_page - android_vh_look_around Bug: 241079328 Signed-off-by: Peifeng Li Change-Id: I9a606ae71d2f1303df3b02403b30bc8fdc9d06dd --- include/trace/hooks/mm.h | 11 +++++++++++ include/trace/hooks/vmscan.h | 3 +++ mm/migrate.c | 2 ++ mm/page_alloc.c | 2 ++ mm/rmap.c | 1 + mm/vmscan.c | 4 ++++ 6 files changed, 23 insertions(+) diff --git a/include/trace/hooks/mm.h b/include/trace/hooks/mm.h index 19a04b1aa4ff..dff62c101ba5 100644 --- a/include/trace/hooks/mm.h +++ b/include/trace/hooks/mm.h @@ -44,6 +44,7 @@ struct readahead_control; #endif /* __GENKSYMS__ */ struct cma; struct swap_slots_cache; +struct page_vma_mapped_walk; DECLARE_RESTRICTED_HOOK(android_rvh_set_skip_swapcache_flags, TP_PROTO(gfp_t *flags), @@ -268,6 +269,16 @@ DECLARE_HOOK(android_vh_alloc_pages_failure_bypass, TP_PROTO(gfp_t gfp_mask, int order, int alloc_flags, int migratetype, struct page **page), TP_ARGS(gfp_mask, order, alloc_flags, migratetype, page)); +DECLARE_HOOK(android_vh_test_clear_look_around_ref, + TP_PROTO(struct page *page), + TP_ARGS(page)); +DECLARE_HOOK(android_vh_look_around_migrate_page, + TP_PROTO(struct page *old_page, struct page *new_page), + TP_ARGS(old_page, new_page)); +DECLARE_HOOK(android_vh_look_around, + TP_PROTO(struct page_vma_mapped_walk *pvmw, struct page *page, + struct vm_area_struct *vma, int *referenced), + TP_ARGS(pvmw, page, vma, referenced)); /* macro versions of hooks are no longer required */ #endif /* _TRACE_HOOK_MM_H */ diff --git a/include/trace/hooks/vmscan.h b/include/trace/hooks/vmscan.h index ab54c20fef62..4bdb18c36ac6 100644 --- a/include/trace/hooks/vmscan.h +++ b/include/trace/hooks/vmscan.h @@ -50,6 +50,9 @@ DECLARE_HOOK(android_vh_inactive_is_low, DECLARE_HOOK(android_vh_snapshot_refaults, TP_PROTO(struct lruvec *target_lruvec), TP_ARGS(target_lruvec)); +DECLARE_HOOK(android_vh_check_page_look_around_ref, + TP_PROTO(struct page *page, int *skip), + TP_ARGS(page, skip)); #endif /* _TRACE_HOOK_VMSCAN_H */ /* This part must be outside protection */ #include diff --git a/mm/migrate.c b/mm/migrate.c index c4d9931d11c4..415dbbfae71b 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -56,6 +56,7 @@ #include #undef CREATE_TRACE_POINTS #include +#include #include "internal.h" @@ -606,6 +607,7 @@ void migrate_page_states(struct page *newpage, struct page *page) SetPageChecked(newpage); if (PageMappedToDisk(page)) SetPageMappedToDisk(newpage); + trace_android_vh_look_around_migrate_page(page, newpage); /* Move dirty on pages not done by migrate_page_move_mapping() */ if (PageDirty(page)) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 4696696a9b19..a493ea72dcda 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -72,6 +72,7 @@ #include #include #include +#include #include #include @@ -2403,6 +2404,7 @@ static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags set_page_pfmemalloc(page); else clear_page_pfmemalloc(page); + trace_android_vh_test_clear_look_around_ref(page); } /* diff --git a/mm/rmap.c b/mm/rmap.c index 2e62c1aa5139..d48141f90360 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -789,6 +789,7 @@ static bool page_referenced_one(struct page *page, struct vm_area_struct *vma, } if (pvmw.pte) { + trace_android_vh_look_around(&pvmw, page, vma, &referenced); if (ptep_clear_flush_young_notify(vma, address, pvmw.pte)) { /* diff --git a/mm/vmscan.c b/mm/vmscan.c index 2979893b9cf5..701595410992 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1022,12 
+1022,16 @@ static enum page_references page_check_references(struct page *page, unsigned long vm_flags; bool should_protect = false; bool trylock_fail = false; + int ret = 0; trace_android_vh_page_should_be_protected(page, &should_protect); if (unlikely(should_protect)) return PAGEREF_ACTIVATE; trace_android_vh_page_trylock_set(page); + trace_android_vh_check_page_look_around_ref(page, &ret); + if (ret) + return ret; referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup, &vm_flags); referenced_page = TestClearPageReferenced(page); From 9252f4d58ba8ae8633fed25b4ee2da401aa69086 Mon Sep 17 00:00:00 2001 From: liang zhang Date: Wed, 14 Sep 2022 15:31:32 +0800 Subject: [PATCH 1796/1842] ANDROID: transsion: Update the ABI xml and symbol list Leaf changes summary: 2 artifacts changed Changed leaf types summary: 0 leaf type changed Removed/Changed/Added functions summary: 0 Removed, 0 Changed, 1 Added functions Removed/Changed/Added variables summary: 0 Removed, 0 Changed, 1 Added variables 1 Added functions: [A] 'function int __traceiter_android_vh_ra_tuning_max_page(struct readahead_control *, unsigned long *)' 1 Added variables: [A] 'tracepoint __tracepoint_android_vh_ra_tuning_max_page' Bug: 246685233 Change-Id: I9f53bf1e2188f4626c92c2acf90f60a2c20ef3ca Signed-off-by: liang zhang --- android/abi_gki_aarch64.xml | 1129 +++++++++++++++++------------ android/abi_gki_aarch64_transsion | 4 +- 2 files changed, 665 insertions(+), 468 deletions(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index 8584a30036bd..a1f2fe1b47b2 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -552,6 +552,7 @@ + @@ -6473,6 +6474,7 @@ + @@ -15127,6 +15129,89 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -21192,6 +21277,14 @@ + + + + + + + + @@ -40483,6 +40576,17 @@ + + + + + + + + + + + @@ -42282,6 +42386,7 @@ + @@ -54647,6 +54752,7 @@ + @@ -62539,7 +62645,65 @@ - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -67262,6 +67426,11 @@ + + + + + @@ -67746,7 +67915,14 @@ - + + + + + + + + @@ -83000,6 +83176,7 @@ + @@ -83536,6 +83713,14 @@ + + + + + + + + @@ -92848,6 +93033,7 @@ + @@ -95093,6 +95279,7 @@ + @@ -115084,6 +115271,7 @@ + @@ -115319,11 +115507,11 @@ - - - - - + + + + + @@ -115953,9 +116141,9 @@ - - - + + + @@ -116008,9 +116196,9 @@ - - - + + + @@ -116521,9 +116709,9 @@ - - - + + + @@ -117469,9 +117657,9 @@ - - - + + + @@ -117480,14 +117668,14 @@ - - - + + + - - - + + + @@ -117626,10 +117814,10 @@ - - - - + + + + @@ -117639,11 +117827,11 @@ - - - - - + + + + + @@ -117670,31 +117858,31 @@ - - - - - + + + + + - - - - - + + + + + - - - - + + + + - - - - - + + + + + @@ -117954,25 +118142,25 @@ - - - - - - - - + + + + + + + + - - - + + + - - - - + + + + @@ -117981,27 +118169,27 @@ - - - + + + - - - + + + - - - - - + + + + + - - - - + + + + @@ -118061,11 +118249,11 @@ - - - - - + + + + + @@ -118075,12 +118263,12 @@ - - - - - - + + + + + + @@ -118090,9 +118278,9 @@ - - - + + + @@ -118101,22 +118289,22 @@ - - - - - - - - + + + + + + + + - - - - - - + + + + + + @@ -118149,10 +118337,10 @@ - - - - + + + + @@ -118161,22 +118349,22 @@ - - - + + + - - - - + + + + - - - - - + + + + + @@ -118191,17 +118379,17 @@ - - - - + + + + - - - - - + + + + + @@ -118302,19 +118490,19 @@ - - - - - + + + + + - - - - - - + + + + + + @@ -118333,10 +118521,10 @@ - - - - + + + + @@ 
[android/abi_gki_aarch64.xml diff continues here: only the bare +/- hunk markers survived extraction; the added and removed XML symbol entries themselves are not recoverable and are omitted.]
diff --git a/android/abi_gki_aarch64_transsion b/android/abi_gki_aarch64_transsion index 648b5a445e8b..c38d29451ee4 100644 --- a/android/abi_gki_aarch64_transsion +++ b/android/abi_gki_aarch64_transsion @@ -34,6 +34,7 @@ __traceiter_android_vh_alloc_si __traceiter_android_vh_free_pages __traceiter_android_vh_set_shmem_page_flag + __traceiter_android_vh_ra_tuning_max_page __tracepoint_android_vh_handle_pte_fault_end __tracepoint_android_vh_cow_user_page __tracepoint_android_vh_swapin_add_anon_rmap @@ -57,4 +58,5 @@ __tracepoint_android_vh_si_swapinfo __tracepoint_android_vh_alloc_si __tracepoint_android_vh_free_pages - __tracepoint_android_vh_set_shmem_page_flag \ No newline at end of file + __tracepoint_android_vh_set_shmem_page_flag + __tracepoint_android_vh_ra_tuning_max_page From feedd14d1450f63ff39eda2b4c1482283e489c31 Mon Sep 17 00:00:00
2001 From: liang zhang Date: Wed, 14 Sep 2022 10:17:41 +0000 Subject: [PATCH 1797/1842] Revert "Revert "ANDROID: add for tuning readahead size"" This reverts commit 98e5fb34d1137987cb2551d79082dc4c794795d4. Reason for revert: Bug: 246685233 Change-Id: Ic18a59bd77040fe58cc1e09678a707d3802f2bb4 Signed-off-by: liang zhang --- drivers/android/vendor_hooks.c | 1 + include/trace/hooks/mm.h | 3 +++ mm/readahead.c | 2 ++ 3 files changed, 6 insertions(+) diff --git a/drivers/android/vendor_hooks.c b/drivers/android/vendor_hooks.c index 7a1ffec55caf..ccc2c869d5d0 100644 --- a/drivers/android/vendor_hooks.c +++ b/drivers/android/vendor_hooks.c @@ -428,6 +428,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_has_work_ilocked); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_read_done); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_handle_tlb_conf); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_shrink_node_memcgs); +EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ra_tuning_max_page); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_handle_pte_fault_end); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cow_user_page); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_swapin_add_anon_rmap); diff --git a/include/trace/hooks/mm.h b/include/trace/hooks/mm.h index dff62c101ba5..b1c0f440f8ff 100644 --- a/include/trace/hooks/mm.h +++ b/include/trace/hooks/mm.h @@ -190,6 +190,9 @@ DECLARE_HOOK(android_vh_pcplist_add_cma_pages_bypass, DECLARE_HOOK(android_vh_subpage_dma_contig_alloc, TP_PROTO(bool *allow_subpage_alloc, struct device *dev, size_t *size), TP_ARGS(allow_subpage_alloc, dev, size)); +DECLARE_HOOK(android_vh_ra_tuning_max_page, + TP_PROTO(struct readahead_control *ractl, unsigned long *max_page), + TP_ARGS(ractl, max_page)); DECLARE_HOOK(android_vh_handle_pte_fault_end, TP_PROTO(struct vm_fault *vmf, unsigned long highest_memmap_pfn), TP_ARGS(vmf, highest_memmap_pfn)); diff --git a/mm/readahead.c b/mm/readahead.c index a6bfa987a04a..a95364c99487 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -459,6 +459,8 @@ static void ondemand_readahead(struct readahead_control *ractl, if (req_size > max_pages && bdi->io_pages > max_pages) max_pages = min(req_size, bdi->io_pages); + trace_android_vh_ra_tuning_max_page(ractl, &max_pages); + /* * start of file */ From db2516ff46e29d44068514a666c41aa3959e9bcb Mon Sep 17 00:00:00 2001 From: Peifeng Li Date: Thu, 15 Sep 2022 17:47:49 +0800 Subject: [PATCH 1798/1842] ANDROID: vendor_hooks: Add hooks for lookaround Add hooks for support lookaround in memory reclamation. 
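For context, a minimal sketch (not part of this series) of how a vendor module consumes one of these hooks once the tracepoint symbols below are exported. The handler body, module boilerplate and names are assumptions made for illustration; only the register_trace_android_vh_*() helpers generated by DECLARE_HOOK() and the hook prototype added in the previous patch are taken from the tree:

	/* Illustrative vendor-hook consumer; all names here are made up. */
	#include <linux/module.h>
	#include <linux/mm.h>
	#include <trace/hooks/mm.h>

	/* Probe signature is: void *data followed by the hook's TP_PROTO args. */
	static void vendor_clear_look_around_ref(void *data, struct page *page)
	{
		/* vendor-side bookkeeping for the look-around reclaim pass */
	}

	static int __init vendor_lookaround_init(void)
	{
		return register_trace_android_vh_test_clear_look_around_ref(
				vendor_clear_look_around_ref, NULL);
	}

	static void __exit vendor_lookaround_exit(void)
	{
		unregister_trace_android_vh_test_clear_look_around_ref(
				vendor_clear_look_around_ref, NULL);
	}

	module_init(vendor_lookaround_init);
	module_exit(vendor_lookaround_exit);
	MODULE_LICENSE("GPL");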
add drivers/android/vendor_hooks.c for export tracepoint symbol Bug: 241079328 Signed-off-by: Peifeng Li Change-Id: Ia6e9fa0ae5708e88fa498c63cf63aad7c55e5f98 --- drivers/android/vendor_hooks.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/android/vendor_hooks.c b/drivers/android/vendor_hooks.c index ccc2c869d5d0..9224b7bab3bd 100644 --- a/drivers/android/vendor_hooks.c +++ b/drivers/android/vendor_hooks.c @@ -456,3 +456,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_shmem_page_flag); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_sched_pelt_multiplier); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_pages_reclaim_bypass); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_pages_failure_bypass); +EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_page_look_around_ref); +EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around); +EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around_migrate_page); +EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_test_clear_look_around_ref); From d915364e92446d1b9dbd51fa274f1698d9906cba Mon Sep 17 00:00:00 2001 From: Peifeng Li Date: Fri, 16 Sep 2022 09:45:43 +0800 Subject: [PATCH 1799/1842] ANDROID: GKI: Update symbols to symbol list Update symbols to symbol list externed by oem modules. Leaf changes summary: 8 artifacts changed Changed leaf types summary: 0 leaf type changed Removed/Changed/Added functions summary: 0 Removed, 0 Changed, 4 Added functions Removed/Changed/Added variables summary: 0 Removed, 0 Changed, 4 Added variables 4 Added functions: [A] 'function int __traceiter_android_vh_check_page_look_around_ref(void*, page*, int*)' [A] 'function int __traceiter_android_vh_look_around(void*, page_vma_mapped_walk*, page*, vm_area_struct*, int*)' [A] 'function int __traceiter_android_vh_look_around_migrate_page(void*, page*, page*)' [A] 'function int __traceiter_android_vh_test_clear_look_around_ref(void*, page*)' 4 Added variables: [A] 'tracepoint __tracepoint_android_vh_check_page_look_around_ref' [A] 'tracepoint __tracepoint_android_vh_look_around' [A] 'tracepoint __tracepoint_android_vh_look_around_migrate_page' [A] 'tracepoint __tracepoint_android_vh_test_clear_look_around_ref' Bug: 193384408 Signed-off-by: Peifeng Li Change-Id: I81225e1a5ab6d1495983ac1df1d43e2dbdfc0600 --- android/abi_gki_aarch64.xml | 510 ++++++++++++++-------------------- android/abi_gki_aarch64_oplus | 9 + 2 files changed, 210 insertions(+), 309 deletions(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index a1f2fe1b47b2..c70a270b833a 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -436,6 +436,7 @@ + @@ -517,6 +518,8 @@ + + @@ -612,6 +615,7 @@ + @@ -6351,6 +6355,7 @@ + @@ -6436,6 +6441,8 @@ + + @@ -6543,6 +6550,7 @@ + @@ -15129,89 +15137,6 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -21277,14 +21202,6 @@ - - - - - - - - @@ -40576,17 +40493,6 @@ - - - - - - - - - - - @@ -42386,7 +42292,6 @@ - @@ -44169,7 +44074,23 @@ - + + + + + + + + + + + + + + + + + @@ -48460,6 +48381,7 @@ + @@ -53343,6 +53265,13 @@ + + + + + + + @@ -54752,7 +54681,6 @@ - @@ -62645,65 +62573,7 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + @@ -65976,6 +65846,13 @@ + + + + + + + @@ -67426,11 +67303,6 @@ - - - - - @@ -67915,14 +67787,7 @@ - - - - - - - - + @@ -83176,7 +83041,6 @@ - @@ -83713,14 +83577,6 @@ - - - - - - - - @@ -90525,6 +90381,7 @@ + @@ -93033,7 +92890,6 
@@ - @@ -94310,6 +94166,7 @@ + @@ -95279,7 +95136,6 @@ - @@ -98322,6 +98178,7 @@ + @@ -102196,6 +102053,12 @@ + + + + + + @@ -110432,6 +110295,7 @@ + @@ -115271,7 +115135,6 @@ - @@ -117814,10 +117677,10 @@ - - - - + + + + @@ -117872,17 +117735,17 @@ - - - - + + + + - - - - - + + + + + @@ -118118,6 +117981,12 @@ + + + + + + @@ -118169,27 +118038,27 @@ - - - - - - + - + - - - - - - + + + + + + + + + + + @@ -118299,12 +118168,12 @@ - - - - - - + + + + + + @@ -118379,17 +118248,17 @@ - - - - + + + + - - - - - + + + + + @@ -118497,12 +118366,12 @@ - - - - - - + + + + + + @@ -118521,10 +118390,10 @@ - - - - + + + + @@ -118542,10 +118411,10 @@ - - - - + + + + @@ -118636,6 +118505,20 @@ + + + + + + + + + + + + + + @@ -118690,10 +118573,10 @@ - - - - + + + + @@ -118770,10 +118653,10 @@ - - - - + + + + @@ -118784,11 +118667,11 @@ - - - - - + + + + + @@ -118851,10 +118734,10 @@ - - - - + + + + @@ -118881,9 +118764,9 @@ - - - + + + @@ -119074,9 +118957,9 @@ - - - + + + @@ -119149,10 +119032,10 @@ - - - - + + + + @@ -119178,15 +119061,15 @@ - - - + + + - - - - + + + + @@ -119201,6 +119084,11 @@ + + + + + @@ -119346,10 +119234,10 @@ - - - - + + + + @@ -119403,9 +119291,9 @@ - - - + + + @@ -119912,7 +119800,7 @@ - + @@ -119920,8 +119808,8 @@ - - + + @@ -119956,6 +119844,7 @@ + @@ -119964,10 +119853,10 @@ - - - - + + + + @@ -119985,7 +119874,7 @@ - + @@ -119998,8 +119887,8 @@ - - + + @@ -120017,15 +119906,15 @@ - + - + - + @@ -120041,6 +119930,8 @@ + + @@ -120050,7 +119941,7 @@ - + @@ -120064,9 +119955,9 @@ - + - + @@ -120079,12 +119970,12 @@ - + - + @@ -120125,7 +120016,7 @@ - + @@ -120138,16 +120029,17 @@ - + - - + + + @@ -120172,7 +120064,7 @@ - + @@ -120181,7 +120073,7 @@ - + diff --git a/android/abi_gki_aarch64_oplus b/android/abi_gki_aarch64_oplus index fc17ce17fc3a..a117542bfc3a 100644 --- a/android/abi_gki_aarch64_oplus +++ b/android/abi_gki_aarch64_oplus @@ -2784,6 +2784,7 @@ __traceiter_android_vh_cpu_idle_enter __traceiter_android_vh_cpu_idle_exit __traceiter_android_vh_cpu_up + __traceiter_android_vh_check_page_look_around_ref __traceiter_android_vh_do_futex __traceiter_android_vh_do_send_sig_info __traceiter_android_vh_drain_all_pages_bypass @@ -2817,6 +2818,8 @@ __traceiter_android_vh_killed_process __traceiter_android_vh_kmalloc_slab __traceiter_android_vh_logbuf + __traceiter_android_vh_look_around + __traceiter_android_vh_look_around_migrate_page __traceiter_android_vh_mem_cgroup_alloc __traceiter_android_vh_mem_cgroup_css_offline __traceiter_android_vh_mem_cgroup_css_online @@ -2918,6 +2921,7 @@ __traceiter_suspend_resume __traceiter_task_newtask __traceiter_task_rename + __traceiter_android_vh_test_clear_look_around_ref __traceiter_xhci_urb_giveback __tracepoint_android_rvh_account_irq __tracepoint_android_rvh_after_enqueue_task @@ -3024,6 +3028,7 @@ __tracepoint_android_vh_cpu_idle_enter __tracepoint_android_vh_cpu_idle_exit __tracepoint_android_vh_cpu_up + __tracepoint_android_vh_check_page_look_around_ref __tracepoint_android_vh_do_futex __tracepoint_android_vh_do_send_sig_info __tracepoint_android_vh_drain_all_pages_bypass @@ -3057,6 +3062,8 @@ __tracepoint_android_vh_killed_process __tracepoint_android_vh_kmalloc_slab __tracepoint_android_vh_logbuf + __tracepoint_android_vh_look_around + __tracepoint_android_vh_look_around_migrate_page __tracepoint_android_vh_mem_cgroup_alloc __tracepoint_android_vh_mem_cgroup_css_offline __tracepoint_android_vh_mem_cgroup_css_online @@ -3130,6 +3137,7 @@ __tracepoint_android_vh_tune_inactive_ratio __tracepoint_android_vh_tune_scan_type 
__tracepoint_android_vh_tune_swappiness + __tracepoint_android_vh_test_clear_look_around_ref __tracepoint_android_vh_ufs_compl_command __tracepoint_android_vh_ufs_send_command __tracepoint_android_vh_ufs_send_tm_command @@ -3624,3 +3632,4 @@ xhci_ring_cmd_db xhci_ring_free xhci_trb_virt_to_dma + zero_pfn From bcf6dddd9746bc5ea3a4af2d9dee6977f3d2d318 Mon Sep 17 00:00:00 2001 From: Tadeusz Struk Date: Tue, 13 Sep 2022 10:56:54 -0700 Subject: [PATCH 1800/1842] ANDROID: incfs: Add check for ATTR_KILL_SUID and ATTR_MODE in incfs_setattr Add an explicite check for ATTR_KILL_SUID and ATTR_MODE in incfs_setattr. Both of these attributes can not be set at the same time, otherwise notify_change() function will check it and invoke BUG(), crashing the system. Bug: 243394930 Signed-off-by: Tadeusz Struk Change-Id: I91080d68efbd62f1441e20a5c02feef3d1b06e4e --- fs/incfs/vfs.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/fs/incfs/vfs.c b/fs/incfs/vfs.c index 776640451f6f..342998f24090 100644 --- a/fs/incfs/vfs.c +++ b/fs/incfs/vfs.c @@ -1592,6 +1592,10 @@ static int incfs_setattr(struct dentry *dentry, struct iattr *ia) if (ia->ia_valid & ATTR_SIZE) return -EINVAL; + if ((ia->ia_valid & (ATTR_KILL_SUID|ATTR_KILL_SGID)) && + (ia->ia_valid & ATTR_MODE)) + return -EINVAL; + if (!di) return -EINVAL; backing_dentry = di->backing_path.dentry; From 06b301069fde6f28b458bb5b95e7863a5387710c Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Tue, 20 Sep 2022 16:57:30 +0200 Subject: [PATCH 1801/1842] ANDROID: remove unused xhci_get_endpoint_address export In commit 731d2da95e41 ("ANDROID: usb: host: export xhci symbols for ring management"), many xhci symbols were exported, but xhci_get_endpoint_address was never actually used by any external modules, so remove the export as it is unneeded. Bug: 183761108 Bug: 203756332 Cc: Daehwan Jung Fixes: 731d2da95e41 ("ANDROID: usb: host: export xhci symbols for ring management") Signed-off-by: Greg Kroah-Hartman Change-Id: I08aab7192297c832f5a9dd559a016e6ff1140b86 --- drivers/usb/host/xhci.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index 07017eb4727a..f17d9dce4593 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -1334,7 +1334,6 @@ unsigned int xhci_get_endpoint_address(unsigned int ep_index) unsigned int direction = ep_index % 2 ? USB_DIR_OUT : USB_DIR_IN; return direction | number; } -EXPORT_SYMBOL_GPL(xhci_get_endpoint_address); /* Find the flag for this endpoint (for use in the control context). Use the * endpoint index to create a bitmask. The slot context is bit 0, endpoint 0 is From 2fc96f32ee7bbd8437e9ab002c0f51174187ef39 Mon Sep 17 00:00:00 2001 From: Weichao Guo Date: Wed, 7 Sep 2022 10:38:48 +0800 Subject: [PATCH 1802/1842] FROMLIST: f2fs: let FI_OPU_WRITE override FADVISE_COLD_BIT Cold files may be fragmented due to SSR, defragment is needed as sequential reads are dominant scenarios of these files. FI_OPU_WRITE should override FADVISE_COLD_BIT to avoid defragment fails. 
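Restated as a minimal sketch of the intended policy (the helper name below is invented for illustration; file_is_cold(), is_inode_flag_set() and FI_OPU_WRITE are the real f2fs symbols used in the diff that follows):

	/*
	 * Cold files normally prefer in-place updates (IPU) so they do not
	 * fragment further, but an explicit out-of-place request -- e.g. the
	 * defragment path, which sets FI_OPU_WRITE -- has to take precedence,
	 * otherwise defragmentation of cold files can never make progress.
	 */
	static bool cold_file_wants_inplace_update(struct inode *inode)
	{
		if (is_inode_flag_set(inode, FI_OPU_WRITE))
			return false;	/* honour the OPU request */
		return file_is_cold(inode);
	}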
Bug: 246903585 Signed-off-by: Weichao Guo Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Signed-off-by: Weichao Guo Link: https://lore.kernel.org/all/YxlTQ3H+PPKcvpyc@google.com/T/ Change-Id: I52ab86a15ec275772c5356bfc985803bbdde4408 --- fs/f2fs/data.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index b0ad5f156619..3bac45d7a94c 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -2524,7 +2524,7 @@ bool f2fs_should_update_inplace(struct inode *inode, struct f2fs_io_info *fio) return true; /* if this is cold file, we should overwrite to avoid fragmentation */ - if (file_is_cold(inode)) + if (file_is_cold(inode) && !is_inode_flag_set(inode, FI_OPU_WRITE)) return true; return check_inplace_update_policy(inode, fio); From 9072e986bd823530949f0100a2032f42e89cdd8d Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Tue, 20 Sep 2022 17:02:42 +0200 Subject: [PATCH 1803/1842] Revert "ANDROID: Export functions to be used with dma_map_ops in modules" The symbols exported by this commit were never used by external modules, so just remove them as the exports are not needed (and cause merge problems at times.) Bug: 151050914 Bug: 203756332 Cc: Suren Baghdasaryan Signed-off-by: Greg Kroah-Hartman Change-Id: I926ad3cc732ec1db97fc4711962bc3902105dd25 --- kernel/dma/direct.c | 3 --- kernel/dma/ops_helpers.c | 2 -- 2 files changed, 5 deletions(-) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 8ca84610d4d4..73a03f8628be 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -43,7 +43,6 @@ u64 dma_direct_get_required_mask(struct device *dev) return (1ULL << (fls64(max_dma) - 1)) * 2 - 1; } -EXPORT_SYMBOL_GPL(dma_direct_get_required_mask); static gfp_t dma_direct_optimal_gfp_mask(struct device *dev, u64 dma_mask, u64 *phys_limit) @@ -316,7 +315,6 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size, dma_free_contiguous(dev, page, size); return NULL; } -EXPORT_SYMBOL_GPL(dma_direct_alloc); void dma_direct_free_pages(struct device *dev, size_t size, struct page *page, dma_addr_t dma_addr, @@ -335,7 +333,6 @@ void dma_direct_free_pages(struct device *dev, size_t size, dma_free_contiguous(dev, page, size); } -EXPORT_SYMBOL_GPL(dma_direct_free); #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \ defined(CONFIG_SWIOTLB) diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c index e28e1e17eaf5..af4a6ef48ce0 100644 --- a/kernel/dma/ops_helpers.c +++ b/kernel/dma/ops_helpers.c @@ -27,7 +27,6 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0); return ret; } -EXPORT_SYMBOL_GPL(dma_common_get_sgtable); /* * Create userspace mapping for the DMA-coherent memory. @@ -58,7 +57,6 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, return -ENXIO; #endif /* CONFIG_MMU */ } -EXPORT_SYMBOL_GPL(dma_common_mmap); struct page *dma_common_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp) From cc51dcbc60c4492d68e3b075ff4d8bd61729dae4 Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Wed, 21 Sep 2022 16:35:26 +0200 Subject: [PATCH 1804/1842] Revert "ANDROID: vendor_hooks:vendor hook for __alloc_pages_slowpath." This reverts commit dec2f52d08d2d5b36fffafd489457ce4ac1c530e. The hooks android_vh_alloc_pages_reclaim_bypass and android_vh_alloc_pages_failure_bypass are not used by any vendor, so remove it to help with merge issues with future LTS releases. 
If this is needed by any real user, it can easily be reverted to add it back and then the symbol should be added to the abi list at the same time to prevent it from being removed again later. Bug: 203756332 Bug: 243629905 Cc: xiaofeng Signed-off-by: Greg Kroah-Hartman Change-Id: Id313f6971e0b5437fcfc1ed3f8d4c56706217133 --- drivers/android/vendor_hooks.c | 2 -- include/trace/hooks/mm.h | 8 -------- mm/page_alloc.c | 11 ----------- 3 files changed, 21 deletions(-) diff --git a/drivers/android/vendor_hooks.c b/drivers/android/vendor_hooks.c index 9224b7bab3bd..ea1b1cf1468e 100644 --- a/drivers/android/vendor_hooks.c +++ b/drivers/android/vendor_hooks.c @@ -454,8 +454,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_si); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_pages); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_shmem_page_flag); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_sched_pelt_multiplier); -EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_pages_reclaim_bypass); -EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_pages_failure_bypass); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_page_look_around_ref); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around_migrate_page); diff --git a/include/trace/hooks/mm.h b/include/trace/hooks/mm.h index b1c0f440f8ff..37ce86164bc8 100644 --- a/include/trace/hooks/mm.h +++ b/include/trace/hooks/mm.h @@ -264,14 +264,6 @@ DECLARE_HOOK(android_vh_set_shmem_page_flag, DECLARE_HOOK(android_vh_remove_vmalloc_stack, TP_PROTO(struct vm_struct *vm), TP_ARGS(vm)); -DECLARE_HOOK(android_vh_alloc_pages_reclaim_bypass, - TP_PROTO(gfp_t gfp_mask, int order, int alloc_flags, - int migratetype, struct page **page), - TP_ARGS(gfp_mask, order, alloc_flags, migratetype, page)); -DECLARE_HOOK(android_vh_alloc_pages_failure_bypass, - TP_PROTO(gfp_t gfp_mask, int order, int alloc_flags, - int migratetype, struct page **page), - TP_ARGS(gfp_mask, order, alloc_flags, migratetype, page)); DECLARE_HOOK(android_vh_test_clear_look_around_ref, TP_PROTO(struct page *page), TP_ARGS(page)); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index a493ea72dcda..9bb27db5be15 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4926,12 +4926,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order, if (current->flags & PF_MEMALLOC) goto nopage; - trace_android_vh_alloc_pages_reclaim_bypass(gfp_mask, order, - alloc_flags, ac->migratetype, &page); - - if (page) - goto got_pg; - /* Try direct reclaim and then allocating */ page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac, &did_some_progress); @@ -5039,11 +5033,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order, goto retry; } fail: - trace_android_vh_alloc_pages_failure_bypass(gfp_mask, order, - alloc_flags, ac->migratetype, &page); - if (page) - goto got_pg; - warn_alloc(gfp_mask, ac->nodemask, "page allocation failure: order:%u", order); got_pg: From 72b1f9fd160533f90001244e2bcd853d418f2a9e Mon Sep 17 00:00:00 2001 From: Greg Kroah-Hartman Date: Wed, 21 Sep 2022 18:10:03 +0200 Subject: [PATCH 1805/1842] Revert "ANDROID: arm64: debug-monitors: export break hook APIs" This reverts commit 210d9157b60b708fbb1bb094e70e367eec19c1e9. Well, most of that commit, all except the register_kernel_break_hook export, as that was actually being used. All of the other symbol exports were not being used at all, so they did not need to be exported. 
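For reference, a rough sketch of the kind of module-side user that keeps register_kernel_break_hook() exported. The immediate value, handler and module names are assumptions; the break_hook API is as in this 5.10 tree. Note that after this change only the register side remains exported, so the pattern below only suits code that is never unloaded:

	#include <linux/module.h>
	#include <linux/printk.h>
	#include <asm/debug-monitors.h>
	#include <asm/ptrace.h>

	static int demo_brk_handler(struct pt_regs *regs, unsigned int esr)
	{
		pr_info("demo BRK hit at %pS\n", (void *)instruction_pointer(regs));
		/* step over the 4-byte BRK so execution can continue */
		instruction_pointer_set(regs, instruction_pointer(regs) + 4);
		return DBG_HOOK_HANDLED;
	}

	static struct break_hook demo_brk_hook = {
		.fn  = demo_brk_handler,
		.imm = 0x48,	/* arbitrary immediate, assumed otherwise unused */
	};

	static int __init demo_brk_init(void)
	{
		register_kernel_break_hook(&demo_brk_hook);
		return 0;
	}
	module_init(demo_brk_init);
	MODULE_LICENSE("GPL");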
Bug: 169899018 Bug: 157965270 Cc: Jonglin Lee Signed-off-by: Greg Kroah-Hartman Change-Id: Ib00df5822901ed81f4ec5147e63e37589eeee793 --- arch/arm64/kernel/debug-monitors.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c index 852abc57c2ce..d7f904cd005b 100644 --- a/arch/arm64/kernel/debug-monitors.c +++ b/arch/arm64/kernel/debug-monitors.c @@ -283,13 +283,11 @@ void register_user_break_hook(struct break_hook *hook) { register_debug_hook(&hook->node, &user_break_hook); } -EXPORT_SYMBOL_GPL(register_user_break_hook); void unregister_user_break_hook(struct break_hook *hook) { unregister_debug_hook(&hook->node); } -EXPORT_SYMBOL_GPL(unregister_user_break_hook); void register_kernel_break_hook(struct break_hook *hook) { @@ -301,7 +299,6 @@ void unregister_kernel_break_hook(struct break_hook *hook) { unregister_debug_hook(&hook->node); } -EXPORT_SYMBOL_GPL(unregister_kernel_break_hook); static int call_break_hook(struct pt_regs *regs, unsigned int esr) { From 2c625a20c068978580a89fca1c3c7909ecddf5df Mon Sep 17 00:00:00 2001 From: Udipto Goswami Date: Tue, 20 Sep 2022 10:10:18 +0530 Subject: [PATCH 1806/1842] ANDROID: ABI: Add extcon_get_property_capability symbol Leaf changes summary: 1 artifact changed Changed leaf types summary: 0 leaf type changed Removed/Changed/Added functions summary: 0 Removed, 0 Changed, 1 Added function Removed/Changed/Added variables summary: 0 Removed, 0 Changed, 0 Added variable 1 Added function: [A] 'function int extcon_get_property_capability(extcon_dev*, unsigned int, unsigned int)' Bug: 247757521 Change-Id: Ic62fe3dfb3b1f88bbe4196c43dd32e1dbbebf92d Signed-off-by: Udipto Goswami --- android/abi_gki_aarch64.xml | 100 ++++++++++++++++++++++++++++++++++- android/abi_gki_aarch64_qcom | 1 + 2 files changed, 100 insertions(+), 1 deletion(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index c70a270b833a..b45d6bcb2096 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -2495,6 +2495,7 @@ + @@ -15137,6 +15138,89 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -67787,7 +67871,14 @@ - + + + + + + + + @@ -92890,6 +92981,7 @@ + @@ -129591,6 +129683,12 @@ + + + + + + diff --git a/android/abi_gki_aarch64_qcom b/android/abi_gki_aarch64_qcom index a73748d1b9e4..7941c74c90a7 100644 --- a/android/abi_gki_aarch64_qcom +++ b/android/abi_gki_aarch64_qcom @@ -881,6 +881,7 @@ extcon_get_edev_name extcon_get_extcon_dev extcon_get_property + extcon_get_property_capability extcon_get_state extcon_register_notifier extcon_set_state_sync From 5c5b7a4da67ca427d09d4225d798d7b83f6498f8 Mon Sep 17 00:00:00 2001 From: Peifeng Li Date: Thu, 22 Sep 2022 11:36:30 +0800 Subject: [PATCH 1807/1842] ANDROID: vendor_hook: rename the the name of hooks Renamed trace_android_vh_record_percpu_rwsem_lock_starttime to trace_android_vh_record_pcpu_rwsem_starttime. Because the orignal name is too long, which results to the compile-err of .ko that uses the symbol: ERROR: modpost: too long symbol "__tracepoint_android_vh_record_percpu_rwsem_lock_starttime" There is not any users of the the orignal hooks so that it is safe to rename it. 
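(For the record, the limit being hit is almost certainly the MODVERSIONS one: modpost stores each exported symbol name in a MODULE_NAME_LEN buffer of 56 bytes and rejects names that do not fit with exactly the "too long symbol" error quoted above. "__tracepoint_android_vh_record_percpu_rwsem_lock_starttime" is 58 characters, so it cannot fit; the renamed "__tracepoint_android_vh_record_pcpu_rwsem_starttime" is 51 and does.)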
Bug: 241191475 Signed-off-by: Peifeng Li Change-Id: Ie246a933414db5e9e28a65a4c280fae3a1cbefe3 --- drivers/android/vendor_hooks.c | 2 +- include/linux/percpu-rwsem.h | 8 ++++---- include/trace/hooks/dtask.h | 2 +- kernel/locking/percpu-rwsem.c | 14 +++++++------- 4 files changed, 13 insertions(+), 13 deletions(-) diff --git a/drivers/android/vendor_hooks.c b/drivers/android/vendor_hooks.c index ea1b1cf1468e..0790c299d2be 100644 --- a/drivers/android/vendor_hooks.c +++ b/drivers/android/vendor_hooks.c @@ -267,7 +267,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_revert_creds); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_mutex_lock_starttime); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_rtmutex_lock_starttime); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_rwsem_lock_starttime); -EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_percpu_rwsem_lock_starttime); +EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_record_pcpu_rwsem_starttime); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_memory_x); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_memory_nx); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_memory_ro); diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h index 2f33e0a6e1a9..92d172cfce06 100644 --- a/include/linux/percpu-rwsem.h +++ b/include/linux/percpu-rwsem.h @@ -9,7 +9,7 @@ #include #include -void _trace_android_vh_record_percpu_rwsem_lock_starttime( +void _trace_android_vh_record_pcpu_rwsem_starttime( struct task_struct *tsk, unsigned long settime); struct percpu_rw_semaphore { @@ -76,7 +76,7 @@ static inline void percpu_down_read(struct percpu_rw_semaphore *sem) * bleeding the critical section out. */ preempt_enable(); - _trace_android_vh_record_percpu_rwsem_lock_starttime(current, jiffies); + _trace_android_vh_record_pcpu_rwsem_starttime(current, jiffies); } static inline bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem) @@ -98,7 +98,7 @@ static inline bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem) */ if (ret) { - _trace_android_vh_record_percpu_rwsem_lock_starttime(current, jiffies); + _trace_android_vh_record_pcpu_rwsem_starttime(current, jiffies); rwsem_acquire_read(&sem->dep_map, 0, 1, _RET_IP_); } @@ -107,7 +107,7 @@ static inline bool percpu_down_read_trylock(struct percpu_rw_semaphore *sem) static inline void percpu_up_read(struct percpu_rw_semaphore *sem) { - _trace_android_vh_record_percpu_rwsem_lock_starttime(current, 0); + _trace_android_vh_record_pcpu_rwsem_starttime(current, 0); rwsem_release(&sem->dep_map, _RET_IP_); preempt_disable(); diff --git a/include/trace/hooks/dtask.h b/include/trace/hooks/dtask.h index 208edf8ac265..956e8421755c 100644 --- a/include/trace/hooks/dtask.h +++ b/include/trace/hooks/dtask.h @@ -77,7 +77,7 @@ DECLARE_HOOK(android_vh_record_rtmutex_lock_starttime, DECLARE_HOOK(android_vh_record_rwsem_lock_starttime, TP_PROTO(struct task_struct *tsk, unsigned long settime_jiffies), TP_ARGS(tsk, settime_jiffies)); -DECLARE_HOOK(android_vh_record_percpu_rwsem_lock_starttime, +DECLARE_HOOK(android_vh_record_pcpu_rwsem_starttime, TP_PROTO(struct task_struct *tsk, unsigned long settime_jiffies), TP_ARGS(tsk, settime_jiffies)); diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c index 4a9e7aafd7f4..c8a474aa1b3b 100644 --- a/kernel/locking/percpu-rwsem.c +++ b/kernel/locking/percpu-rwsem.c @@ -13,17 +13,17 @@ #include /* - * trace_android_vh_record_percpu_rwsem_lock_starttime is called in + * trace_android_vh_record_pcpu_rwsem_starttime is called in * include/linux/percpu-rwsem.h by including 
include/hooks/dtask.h, which * will result to build-err. So we create - * func:_trace_android_vh_record_percpu_rwsem_lock_starttime for percpu-rwsem.h to call. + * func:_trace_android_vh_record_pcpu_rwsem_starttime for percpu-rwsem.h to call. */ -void _trace_android_vh_record_percpu_rwsem_lock_starttime(struct task_struct *tsk, +void _trace_android_vh_record_pcpu_rwsem_starttime(struct task_struct *tsk, unsigned long settime) { - trace_android_vh_record_percpu_rwsem_lock_starttime(tsk, settime); + trace_android_vh_record_pcpu_rwsem_starttime(tsk, settime); } -EXPORT_SYMBOL_GPL(_trace_android_vh_record_percpu_rwsem_lock_starttime); +EXPORT_SYMBOL_GPL(_trace_android_vh_record_pcpu_rwsem_starttime); int __percpu_init_rwsem(struct percpu_rw_semaphore *sem, const char *name, struct lock_class_key *key) @@ -252,13 +252,13 @@ void percpu_down_write(struct percpu_rw_semaphore *sem) /* Wait for all active readers to complete. */ rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE); - trace_android_vh_record_percpu_rwsem_lock_starttime(current, jiffies); + trace_android_vh_record_pcpu_rwsem_starttime(current, jiffies); } EXPORT_SYMBOL_GPL(percpu_down_write); void percpu_up_write(struct percpu_rw_semaphore *sem) { - trace_android_vh_record_percpu_rwsem_lock_starttime(current, 0); + trace_android_vh_record_pcpu_rwsem_starttime(current, 0); rwsem_release(&sem->dep_map, _RET_IP_); /* From ecf5583fc791e5ce03d90f3ca81b829b5a637f17 Mon Sep 17 00:00:00 2001 From: Peifeng Li Date: Thu, 22 Sep 2022 12:01:53 +0800 Subject: [PATCH 1808/1842] ANDROID: GKI: Update symbols to symbol list Update symbols to symbol list externed by oem modules. Leaf changes summary: 4 artifacts changed Changed leaf types summary: 0 leaf type changed Removed/Changed/Added functions summary: 1 Removed, 0 Changed, 1 Added function Removed/Changed/Added variables summary: 1 Removed, 0 Changed, 1 Added variable 1 Removed function: [D] 'function int __traceiter_android_vh_record_percpu_rwsem_lock_starttime(void*, task_struct*, unsigned long int)' 1 Added function: [A] 'function int __traceiter_android_vh_record_pcpu_rwsem_starttime(void*, task_struct*, unsigned long int)' 1 Removed variable: [D] 'tracepoint __tracepoint_android_vh_record_percpu_rwsem_lock_starttime' 1 Added variable: [A] 'tracepoint __tracepoint_android_vh_record_pcpu_rwsem_starttime' Bug: 193384408 Signed-off-by: Peifeng Li Change-Id: Ie70a216e9815198f82b00c7a960123737e0b45de --- android/abi_gki_aarch64.xml | 99 +++++++++++------------------------ android/abi_gki_aarch64_oplus | 4 +- 2 files changed, 32 insertions(+), 71 deletions(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index b45d6bcb2096..52fedee14385 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -557,7 +557,7 @@ - + @@ -6484,7 +6484,7 @@ - + @@ -44158,23 +44158,7 @@ - - - - - - - - - - - - - - - - - + @@ -48465,7 +48449,6 @@ - @@ -53349,13 +53332,6 @@ - - - - - - - @@ -65930,13 +65906,6 @@ - - - - - - - @@ -90472,7 +90441,6 @@ - @@ -102145,12 +102113,6 @@ - - - - - - @@ -110387,7 +110349,6 @@ - @@ -118838,7 +118799,7 @@ - + @@ -120064,7 +120025,7 @@ - + @@ -148493,15 +148454,15 @@ - - - - + + + + - - - + + + @@ -148525,19 +148486,19 @@ - - - + + + - - - - + + + + @@ -148550,9 +148511,9 @@ - - - + + + @@ -148574,9 +148535,9 @@ - - - + + + @@ -148599,9 +148560,9 @@ - - - + + + @@ -148651,8 +148612,8 @@ - - + + diff --git a/android/abi_gki_aarch64_oplus b/android/abi_gki_aarch64_oplus index a117542bfc3a..92b08a855be8 100644 --- 
a/android/abi_gki_aarch64_oplus +++ b/android/abi_gki_aarch64_oplus @@ -2853,7 +2853,7 @@ __traceiter_android_vh_record_mutex_lock_starttime __traceiter_android_vh_record_rtmutex_lock_starttime __traceiter_android_vh_record_rwsem_lock_starttime - __traceiter_android_vh_record_percpu_rwsem_lock_starttime + __traceiter_android_vh_record_pcpu_rwsem_starttime __traceiter_android_vh_rmqueue __traceiter_android_vh_rwsem_init __traceiter_android_vh_rwsem_mark_wake_readers @@ -3097,7 +3097,7 @@ __tracepoint_android_vh_record_mutex_lock_starttime __tracepoint_android_vh_record_rtmutex_lock_starttime __tracepoint_android_vh_record_rwsem_lock_starttime - __tracepoint_android_vh_record_percpu_rwsem_lock_starttime + __tracepoint_android_vh_record_pcpu_rwsem_starttime __tracepoint_android_vh_rmqueue __tracepoint_android_vh_rwsem_init __tracepoint_android_vh_rwsem_mark_wake_readers From b71060e6eb512d7ec56f00f0597d083a00c55c80 Mon Sep 17 00:00:00 2001 From: Hongyu Jin Date: Fri, 1 Apr 2022 19:55:27 +0800 Subject: [PATCH 1809/1842] BACKPORT: erofs: fix use-after-free of on-stack io[] The root cause is the race as follows: Thread #1 Thread #2(irq ctx) z_erofs_runqueue() struct z_erofs_decompressqueue io_A[]; submit bio A z_erofs_decompress_kickoff(,,1) z_erofs_decompressqueue_endio(bio A) z_erofs_decompress_kickoff(,,-1) spin_lock_irqsave() atomic_add_return() io_wait_event() -> pending_bios is already 0 [end of function] wake_up_locked(io_A[]) // crash Referenced backtrace in kernel 5.4: [ 10.129422] Unable to handle kernel paging request at virtual address eb0454a4 [ 10.364157] CPU: 0 PID: 709 Comm: getprop Tainted: G WC O 5.4.147-ab09225 #1 [ 11.556325] [] (__wake_up_common) from [] (__wake_up_locked+0x40/0x48) [ 11.565487] [] (__wake_up_locked) from [] (z_erofs_vle_unzip_kickoff+0x6c/0xc0) [ 11.575438] [] (z_erofs_vle_unzip_kickoff) from [] (z_erofs_vle_read_endio+0x16c/0x17c) [ 11.586082] [] (z_erofs_vle_read_endio) from [] (clone_endio+0xb4/0x1d0) [ 11.595428] [] (clone_endio) from [] (blk_update_request+0x150/0x4dc) [ 11.604516] [] (blk_update_request) from [] (mmc_blk_cqe_complete_rq+0x144/0x15c) [ 11.614640] [] (mmc_blk_cqe_complete_rq) from [] (blk_done_softirq+0xb0/0xcc) [ 11.624419] [] (blk_done_softirq) from [] (__do_softirq+0x184/0x56c) [ 11.633419] [] (__do_softirq) from [] (irq_exit+0xd4/0x138) [ 11.641640] [] (irq_exit) from [] (__handle_domain_irq+0x94/0xd0) [ 11.650381] [] (__handle_domain_irq) from [] (gic_handle_irq+0x50/0xd4) [ 11.659641] [] (gic_handle_irq) from [] (__irq_svc+0x70/0xb0) Bug: 246657836 Change-Id: Ieebf1c5abb48723538d05a5e65b5179a382dab3f (cherry picked from commit 60b30050116c0351b90154044345c1b53ae1f323) [Hongyu: Resolved minor conflict in fs/erofs/zdata.c ] Signed-off-by: Hongyu Jin Reviewed-by: Gao Xiang Reviewed-by: Chao Yu Link: https://lore.kernel.org/r/20220401115527.4935-1-hongyu.jin.cn@gmail.com Signed-off-by: Gao Xiang --- fs/erofs/zdata.c | 12 ++++-------- fs/erofs/zdata.h | 2 +- 2 files changed, 5 insertions(+), 9 deletions(-) diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c index 7fc79c47798a..7596db389201 100644 --- a/fs/erofs/zdata.c +++ b/fs/erofs/zdata.c @@ -789,12 +789,9 @@ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io, /* wake up the caller thread for sync decompression */ if (sync) { - unsigned long flags; - - spin_lock_irqsave(&io->u.wait.lock, flags); if (!atomic_add_return(bios, &io->pending_bios)) - wake_up_locked(&io->u.wait); - spin_unlock_irqrestore(&io->u.wait.lock, flags); + complete(&io->u.done); + return; } @@ 
-1214,7 +1211,7 @@ jobqueue_init(struct super_block *sb, } else { fg_out: q = fgq; - init_waitqueue_head(&fgq->u.wait); + init_completion(&fgq->u.done); atomic_set(&fgq->pending_bios, 0); } q->sb = sb; @@ -1377,8 +1374,7 @@ static void z_erofs_runqueue(struct super_block *sb, return; /* wait until all bios are completed */ - io_wait_event(io[JQ_SUBMIT].u.wait, - !atomic_read(&io[JQ_SUBMIT].pending_bios)); + wait_for_completion_io(&io[JQ_SUBMIT].u.done); /* handle synchronous decompress queue in the caller context */ z_erofs_decompress_queue(&io[JQ_SUBMIT], pagepool); diff --git a/fs/erofs/zdata.h b/fs/erofs/zdata.h index 942ee69dff6a..60bf396164ec 100644 --- a/fs/erofs/zdata.h +++ b/fs/erofs/zdata.h @@ -90,7 +90,7 @@ struct z_erofs_decompressqueue { z_erofs_next_pcluster_t head; union { - wait_queue_head_t wait; + struct completion done; struct work_struct work; } u; }; From b9ac329a8390d5446611fb4b9b2bdd5a7cd7c0ca Mon Sep 17 00:00:00 2001 From: Todd Kjos Date: Wed, 21 Sep 2022 21:36:04 +0000 Subject: [PATCH 1810/1842] ANDROID: force struct selinux_state to be defined in KMI struct selinux_state is defined in security/selinux/include/security.h, however libabigail is not finding its definition based on the instantiation of the hooks, so force it to be defined by defining a dummy exported symbol. Since blk_mq_alloc_data is defined in a subsystem-private header, create a new vendor_hooks.c file in security/selinux to define the dummy symbol. Bug: 233047575 Signed-off-by: Todd Kjos Change-Id: Ia505c76db2eed339b3815073f847b500535cc954 --- android/abi_gki_aarch64_type_visibility | 4 ++++ build.config.gki.aarch64 | 1 + drivers/android/vendor_hooks.c | 2 -- security/selinux/Makefile | 2 ++ security/selinux/vendor_hooks.c | 22 ++++++++++++++++++++++ 5 files changed, 29 insertions(+), 2 deletions(-) create mode 100644 android/abi_gki_aarch64_type_visibility create mode 100644 security/selinux/vendor_hooks.c diff --git a/android/abi_gki_aarch64_type_visibility b/android/abi_gki_aarch64_type_visibility new file mode 100644 index 000000000000..88a3c8813346 --- /dev/null +++ b/android/abi_gki_aarch64_type_visibility @@ -0,0 +1,4 @@ +[abi_symbol_list] + +# for type visibility + GKI_struct_selinux_state diff --git a/build.config.gki.aarch64 b/build.config.gki.aarch64 index 9e70748400a1..70f439aacafe 100644 --- a/build.config.gki.aarch64 +++ b/build.config.gki.aarch64 @@ -10,6 +10,7 @@ ABI_DEFINITION=android/abi_gki_aarch64.xml TIDY_ABI=1 KMI_SYMBOL_LIST=android/abi_gki_aarch64 ADDITIONAL_KMI_SYMBOL_LISTS=" +android/abi_gki_aarch64_type_visibility android/abi_gki_aarch64_core android/abi_gki_aarch64_db845c android/abi_gki_aarch64_exynos diff --git a/drivers/android/vendor_hooks.c b/drivers/android/vendor_hooks.c index 0790c299d2be..1ea002e1f230 100644 --- a/drivers/android/vendor_hooks.c +++ b/drivers/android/vendor_hooks.c @@ -53,7 +53,6 @@ #include #include #include -#include #include #include #include @@ -336,7 +335,6 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_dequeue_task_fair); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_sched_stat_runtime_rt); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_prepare_update_load_avg_se); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_finish_update_load_avg_se); -EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_selinux_is_initialized); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tune_inactive_ratio); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_update_topology_flags_workfn); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_of_i2c_get_board_info); diff --git a/security/selinux/Makefile b/security/selinux/Makefile 
index 4d8e0e8adf0b..8e8102b558b4 100644 --- a/security/selinux/Makefile +++ b/security/selinux/Makefile @@ -10,6 +10,8 @@ selinux-y := avc.o hooks.o selinuxfs.o netlink.o nlmsgtab.o netif.o \ ss/ebitmap.o ss/hashtab.o ss/symtab.o ss/sidtab.o ss/avtab.o \ ss/policydb.o ss/services.o ss/conditional.o ss/mls.o ss/context.o +selinux-$(CONFIG_ANDROID_VENDOR_HOOKS) += vendor_hooks.o + selinux-$(CONFIG_SECURITY_NETWORK_XFRM) += xfrm.o selinux-$(CONFIG_NETLABEL) += netlabel.o diff --git a/security/selinux/vendor_hooks.c b/security/selinux/vendor_hooks.c new file mode 100644 index 000000000000..3802e8233289 --- /dev/null +++ b/security/selinux/vendor_hooks.c @@ -0,0 +1,22 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* vendor_hook.c + * + * Copyright 2022 Google LLC + */ + +#ifndef __GENKSYMS__ +#include "security.h" +#endif + +#define CREATE_TRACE_POINTS +#include +#include +#include + +EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_selinux_is_initialized); + +/* + * For type visibility + */ +struct selinux_state *GKI_struct_selinux_state; +EXPORT_SYMBOL_GPL(GKI_struct_selinux_state); From c6f7a0ebd8fc4ba7ded80a9cb69dd1015e1075ee Mon Sep 17 00:00:00 2001 From: Todd Kjos Date: Wed, 21 Sep 2022 21:48:21 +0000 Subject: [PATCH 1811/1842] ANDROID: make sure all types for hooks are defined in KMI There are 2 remaining types directly referenced by vendor hooks that were not fully-defined in the KMI: struct gic_chip_data : defined in include/linux/irqchip/arm-gic-v3.h struct swap_slots_cache : defined include/linux/swap_slots.h libabigail is not finding definitions based on the instantiation of the hooks, so force them to be defined by defining dummy exported symbols. Update XML with the now visible definitions Bug: 233047575 Signed-off-by: Todd Kjos Change-Id: I521b2a596e1d7361d0f44a87ffe330186896b9f8 --- android/abi_gki_aarch64.xml | 609 +++++++++++------------- android/abi_gki_aarch64_type_visibility | 2 + drivers/android/vendor_hooks.c | 13 + 3 files changed, 287 insertions(+), 337 deletions(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index 52fedee14385..5fe18ac581fc 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -6198,6 +6198,9 @@ + + + @@ -7344,17 +7347,7 @@ - - - - - - - - - - - + @@ -9471,14 +9464,6 @@ - - - - - - - - @@ -10765,11 +10750,6 @@ - - - - - @@ -15138,89 +15118,6 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - @@ -15329,11 +15226,6 @@ - - - - - @@ -15757,7 +15649,6 @@ - @@ -17043,14 +16934,6 @@ - - - - - - - - @@ -18598,7 +18481,6 @@ - @@ -18913,7 +18795,6 @@ - @@ -19766,6 +19647,29 @@ + + + + + + + + + + + + + + + + + + + + + + + @@ -21274,10 +21178,6 @@ - - - - @@ -21769,11 +21669,6 @@ - - - - - @@ -24738,14 +24633,6 @@ - - - - - - - - @@ -25018,10 +24905,6 @@ - - - - @@ -25299,11 +25182,6 @@ - - - - - @@ -25553,6 +25431,7 @@ + @@ -25963,7 +25842,6 @@ - @@ -31960,7 +31838,6 @@ - @@ -32340,7 +32217,6 @@ - @@ -32645,7 +32521,41 @@ - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -35157,10 +35067,14 @@ + + + + @@ -40420,26 +40334,6 @@ - - - - - - - - - - - - - - - - - - - - @@ -43997,6 +43891,7 @@ + @@ -44156,9 +44051,59 @@ - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - + + + + + + + + + + + + + + + + + @@ -45519,6 +45464,7 @@ + @@ -48449,6 +48395,7 @@ + @@ -50722,7 +50669,6 @@ - @@ -53189,7 +53135,6 @@ - @@ -53332,6 +53277,13 @@ + + + + + + + @@ -53954,6 
+53906,7 @@ + @@ -54119,17 +54072,6 @@ - - - - - - - - - - - @@ -56656,6 +56598,7 @@ + @@ -60176,6 +60119,7 @@ + @@ -61725,11 +61669,6 @@ - - - - - @@ -61804,20 +61743,6 @@ - - - - - - - - - - - - - - @@ -65906,6 +65831,13 @@ + + + + + + + @@ -67840,14 +67772,7 @@ - - - - - - - - + @@ -70848,7 +70773,6 @@ - @@ -72710,12 +72634,6 @@ - - - - - - @@ -73432,6 +73350,7 @@ + @@ -73843,7 +73762,32 @@ - + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -78497,6 +78441,7 @@ + @@ -78881,17 +78826,6 @@ - - - - - - - - - - - @@ -79368,6 +79302,7 @@ + @@ -83637,6 +83572,38 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -85385,26 +85352,6 @@ - - - - - - - - - - - - - - - - - - - - @@ -90350,7 +90297,6 @@ - @@ -90441,6 +90387,7 @@ + @@ -91046,6 +90993,7 @@ + @@ -92526,6 +92474,7 @@ + @@ -92949,7 +92898,6 @@ - @@ -94454,6 +94402,7 @@ + @@ -94910,6 +94859,7 @@ + @@ -101851,26 +101801,6 @@ - - - - - - - - - - - - - - - - - - - - @@ -102113,6 +102043,12 @@ + + + + + + @@ -103128,7 +103064,6 @@ - @@ -105040,6 +104975,7 @@ + @@ -109026,11 +108962,6 @@ - - - - - @@ -110349,6 +110280,7 @@ + @@ -115395,6 +115327,9 @@ + + + @@ -115423,11 +115358,11 @@ - - - - - + + + + + @@ -116057,9 +115992,9 @@ - - - + + + @@ -116112,9 +116047,9 @@ - - - + + + @@ -116625,9 +116560,9 @@ - - - + + + @@ -118558,18 +118493,18 @@ - - - - - - + + + + + + - - - - + + + + @@ -119137,9 +119072,9 @@ - - - + + + @@ -119983,8 +119918,8 @@ - - + + @@ -120092,7 +120027,7 @@ - + @@ -120693,9 +120628,9 @@ - - - + + + @@ -120819,9 +120754,9 @@ - - - + + + @@ -129979,14 +129914,14 @@ - - - + + + - - - + + + @@ -130934,8 +130869,8 @@ - - + + @@ -136741,14 +136676,14 @@ - - - - + + + + - - + + @@ -139321,8 +139256,8 @@ - - + + @@ -141706,11 +141641,11 @@ - + - - + + diff --git a/android/abi_gki_aarch64_type_visibility b/android/abi_gki_aarch64_type_visibility index 88a3c8813346..705bf5574222 100644 --- a/android/abi_gki_aarch64_type_visibility +++ b/android/abi_gki_aarch64_type_visibility @@ -2,3 +2,5 @@ # for type visibility GKI_struct_selinux_state + GKI_struct_gic_chip_data + GKI_struct_swap_slots_cache diff --git a/drivers/android/vendor_hooks.c b/drivers/android/vendor_hooks.c index 1ea002e1f230..c9b54b2fa509 100644 --- a/drivers/android/vendor_hooks.c +++ b/drivers/android/vendor_hooks.c @@ -456,3 +456,16 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_check_page_look_around_ref); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_look_around_migrate_page); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_test_clear_look_around_ref); + +/* + * For type visibility + */ +#ifdef CONFIG_ARM64 +#include +const struct gic_chip_data *GKI_struct_gic_chip_data; +EXPORT_SYMBOL_GPL(GKI_struct_gic_chip_data); +#endif + +#include +const struct swap_slots_cache *GKI_struct_swap_slots_cache; +EXPORT_SYMBOL_GPL(GKI_struct_swap_slots_cache); From 4135365b5d883c52f7aab9762d7a2aa6b5ec1945 Mon Sep 17 00:00:00 2001 From: Wei Liu Date: Fri, 23 Sep 2022 15:38:06 +0800 Subject: [PATCH 1812/1842] ANDROID: GKI: Update symbols to symbol list Update symbols to symbol list externed by oppo network group. 
Leaf changes summary: 4 artifacts changed Changed leaf types summary: 0 leaf type changed Removed/Changed/Added functions summary: 0 Removed, 0 Changed, 0 Added function Removed/Changed/Added variables summary: 0 Removed, 0 Changed, 4 Added variables 4 Added variables: [A] 'tracepoint __tracepoint_net_dev_queue' [A] 'tracepoint __tracepoint_net_dev_xmit' [A] 'tracepoint __tracepoint_netif_receive_skb' [A] 'tracepoint __tracepoint_netif_rx' Bug: 193384408 Signed-off-by: Wei Liu Change-Id: I11fce7b80fd50f6c01b488f2a660f80179485d93 --- android/abi_gki_aarch64.xml | 8 ++++++++ android/abi_gki_aarch64_oplus | 4 ++++ net/core/net-traces.c | 6 ++++++ 3 files changed, 18 insertions(+) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index 5fe18ac581fc..dee2bfb4bd31 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -6627,6 +6627,10 @@ + + + + @@ -120100,6 +120104,10 @@ + + + + diff --git a/android/abi_gki_aarch64_oplus b/android/abi_gki_aarch64_oplus index 92b08a855be8..384bdbfa612b 100644 --- a/android/abi_gki_aarch64_oplus +++ b/android/abi_gki_aarch64_oplus @@ -3150,6 +3150,10 @@ __tracepoint_ipi_entry __tracepoint_ipi_raise __tracepoint_irq_handler_entry + __tracepoint_net_dev_queue + __tracepoint_net_dev_xmit + __tracepoint_netif_receive_skb + __tracepoint_netif_rx __tracepoint_pelt_se_tp tracepoint_probe_register tracepoint_probe_register_prio diff --git a/net/core/net-traces.c b/net/core/net-traces.c index 465362a9b55d..ffeb3a682859 100644 --- a/net/core/net-traces.c +++ b/net/core/net-traces.c @@ -58,3 +58,9 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kfree_skb); EXPORT_TRACEPOINT_SYMBOL_GPL(napi_poll); EXPORT_TRACEPOINT_SYMBOL_GPL(tcp_send_reset); + +EXPORT_TRACEPOINT_SYMBOL_GPL(net_dev_queue); +EXPORT_TRACEPOINT_SYMBOL_GPL(net_dev_xmit); +EXPORT_TRACEPOINT_SYMBOL_GPL(netif_receive_skb); +EXPORT_TRACEPOINT_SYMBOL_GPL(netif_rx); + From 2d8afda40e31b23d2981fe273d32b01b9c87b6d9 Mon Sep 17 00:00:00 2001 From: Krishna Kurapati Date: Sat, 27 Aug 2022 08:45:10 +0530 Subject: [PATCH 1813/1842] UPSTREAM: usb: gadget: mass_storage: Fix cdrom data transfers on MAC-OS During cdrom emulation, the response to read_toc command must contain the cdrom address as the number of sectors (2048 byte sized blocks) represented either as an absolute value (when MSF bit is '0') or in terms of PMin/PSec/PFrame (when MSF bit is set to '1'). Incase of cdrom, the fsg_lun_open call sets the sector size to 2048 bytes. When MAC OS sends a read_toc request with MSF set to '1', the store_cdrom_address assumes that the address being provided is the LUN size represented in 512 byte sized blocks instead of 2048. It tries to modify the address further to convert it to 2048 byte sized blocks and store it in MSF format. This results in data transfer failures as the cdrom address being provided in the read_toc response is incorrect. 
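As a rough worked example of the mismatch (sizes are illustrative): a 100 MiB backing file is reported by fsg_lun_open() as 51200 sectors of 2048 bytes, and shifting that count right by two as if it were 512-byte blocks places every MSF address at a quarter of its real position. A minimal sketch of the conventional LBA-to-MSF conversion the fix relies on, written as a standalone helper for clarity (the helper name is made up; the byte layout follows the usual reserved/M/S/F order):

	/* addr is a count of 2048-byte CD-ROM sectors, 75 frames per second */
	static void lba_to_msf(u8 *dest, u32 addr)
	{
		addr += 2 * 75;		/* lead-in occupies 2 seconds */
		dest[3] = addr % 75;	/* frames */
		addr /= 75;
		dest[2] = addr % 60;	/* seconds */
		addr /= 60;
		dest[1] = addr;		/* minutes */
		dest[0] = 0;		/* reserved */
	}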
Fixes: 3f565a363cee ("usb: gadget: storage: adapt logic block size to bound block devices") Cc: stable@vger.kernel.org Acked-by: Alan Stern Signed-off-by: Krishna Kurapati Link: https://lore.kernel.org/r/1661570110-19127-1-git-send-email-quic_kriskura@quicinc.com Signed-off-by: Greg Kroah-Hartman (cherry picked from commit 9d4dc16ec71bd6368548e9743223e449b4377fc7) Bug: 245221519 Change-Id: I50687e4dfc2b26d35adce50e51b54d28fb85967e Signed-off-by: Krishna Kurapati --- drivers/usb/gadget/function/storage_common.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/drivers/usb/gadget/function/storage_common.c b/drivers/usb/gadget/function/storage_common.c index 2451e45ada6e..87bfe87daa86 100644 --- a/drivers/usb/gadget/function/storage_common.c +++ b/drivers/usb/gadget/function/storage_common.c @@ -294,8 +294,10 @@ EXPORT_SYMBOL_GPL(fsg_lun_fsync_sub); void store_cdrom_address(u8 *dest, int msf, u32 addr) { if (msf) { - /* Convert to Minutes-Seconds-Frames */ - addr >>= 2; /* Convert to 2048-byte frames */ + /* + * Convert to Minutes-Seconds-Frames. + * Sector size is already set to 2048 bytes. + */ addr += 2*75; /* Lead-in occupies 2 seconds */ dest[3] = addr % 75; /* Frames */ addr /= 75; From 6d04d8ce9083ea681202f4dc679f8dc4975b6f69 Mon Sep 17 00:00:00 2001 From: Pavankumar Kondeti Date: Mon, 26 Sep 2022 18:36:10 +0530 Subject: [PATCH 1814/1842] ANDROID: vendor_hooks: Allow shared pages reclaim via MADV_PAGEOUT Add a hook in madvise_cold_or_pageout_pte_range() to allow vendor modules to influence the shared pages reclaim. Bug: 242678506 Change-Id: I269a385b59f7291c2e96478674bb3d05f94584cb Signed-off-by: Pavankumar Kondeti --- drivers/android/vendor_hooks.c | 1 + include/trace/hooks/mm.h | 3 +++ mm/madvise.c | 4 +++- 3 files changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/android/vendor_hooks.c b/drivers/android/vendor_hooks.c index c9b54b2fa509..ff62cc7c921b 100644 --- a/drivers/android/vendor_hooks.c +++ b/drivers/android/vendor_hooks.c @@ -441,6 +441,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_drain_slots_cache_cpu); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_swap_slot_cache); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_swap_slot); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_get_swap_page); +EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_madvise_cold_or_pageout); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_page_isolated_for_reclaim); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_inactive_is_low); EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_snapshot_refaults); diff --git a/include/trace/hooks/mm.h b/include/trace/hooks/mm.h index 37ce86164bc8..358a89380982 100644 --- a/include/trace/hooks/mm.h +++ b/include/trace/hooks/mm.h @@ -237,6 +237,9 @@ DECLARE_HOOK(android_vh_get_swap_page, TP_PROTO(struct page *page, swp_entry_t *entry, struct swap_slots_cache *cache, bool *found), TP_ARGS(page, entry, cache, found)); +DECLARE_HOOK(android_vh_madvise_cold_or_pageout, + TP_PROTO(struct vm_area_struct *vma, bool *allow_shared), + TP_ARGS(vma, allow_shared)); DECLARE_HOOK(android_vh_page_isolated_for_reclaim, TP_PROTO(struct mm_struct *mm, struct page *page), TP_ARGS(mm, page)); diff --git a/mm/madvise.c b/mm/madvise.c index db54a747d6f3..2758648a60f6 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -319,10 +319,12 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, spinlock_t *ptl; struct page *page = NULL; LIST_HEAD(page_list); + bool allow_shared = false; if (fatal_signal_pending(current)) return -EINTR; + trace_android_vh_madvise_cold_or_pageout(vma, &allow_shared); #ifdef 
CONFIG_TRANSPARENT_HUGEPAGE if (pmd_trans_huge(*pmd)) { pmd_t orig_pmd; @@ -438,7 +440,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, } /* Do not interfere with other mappings of this page */ - if (page_mapcount(page) != 1) + if (!allow_shared && page_mapcount(page) != 1) continue; VM_BUG_ON_PAGE(PageTransCompound(page), page); From d195c9f2bbc28f560b04a37b82f9893a771e4ade Mon Sep 17 00:00:00 2001 From: Todd Kjos Date: Wed, 28 Sep 2022 16:59:24 +0000 Subject: [PATCH 1815/1842] ANDROID: force struct page_vma_mapped_walk to be defined in KMI A vendor hook was recently defined that references struct page_vma_mapped_walk, but it is only forward-declared and therefore not fully defined in the KMI. Add inclusion of linux/rmap.h to vender_hooks.c to add the full definition. Bug: 233047575 Signed-off-by: Todd Kjos Change-Id: I3bbaca92a70e4464e370e987ae4154de19c4fee2 --- android/abi_gki_aarch64.xml | 167 ++++++++++++++++++++------------- drivers/android/vendor_hooks.c | 4 + 2 files changed, 108 insertions(+), 63 deletions(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index 9df93bbe19c7..b69cd3d6656d 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -83107,6 +83107,9 @@ + + + @@ -92067,6 +92070,9 @@ + + + @@ -98314,7 +98320,29 @@ - + + + + + + + + + + + + + + + + + + + + + + + @@ -105303,7 +105331,20 @@ - + + + + + + + + + + + + + + @@ -115459,9 +115500,9 @@ - + - + @@ -117797,10 +117838,10 @@ - - - - + + + + @@ -117855,10 +117896,10 @@ - - - - + + + + @@ -118368,10 +118409,10 @@ - - - - + + + + @@ -118531,10 +118572,10 @@ - - - - + + + + @@ -118625,18 +118666,18 @@ - - - - - - + + + + + + - - - - + + + + @@ -118773,10 +118814,10 @@ - - - - + + + + @@ -118884,9 +118925,9 @@ - - - + + + @@ -119077,9 +119118,9 @@ - - - + + + @@ -119152,10 +119193,10 @@ - - - - + + + + @@ -119204,9 +119245,9 @@ - - - + + + @@ -119354,10 +119395,10 @@ - - - - + + + + @@ -119920,7 +119961,7 @@ - + @@ -119928,7 +119969,7 @@ - + @@ -120007,7 +120048,7 @@ - + @@ -120034,7 +120075,7 @@ - + @@ -120050,8 +120091,8 @@ - - + + @@ -120075,7 +120116,7 @@ - + @@ -120095,7 +120136,7 @@ - + @@ -120136,7 +120177,7 @@ - + @@ -120149,7 +120190,7 @@ - + @@ -120159,7 +120200,7 @@ - + @@ -120184,7 +120225,7 @@ - + diff --git a/drivers/android/vendor_hooks.c b/drivers/android/vendor_hooks.c index ff62cc7c921b..d4352dfeb34d 100644 --- a/drivers/android/vendor_hooks.c +++ b/drivers/android/vendor_hooks.c @@ -6,6 +6,10 @@ * Copyright 2020 Google LLC */ +#ifndef __GENKSYMS__ +#include +#endif + #define CREATE_TRACE_POINTS #include #include From 5545801f5c8307aabe2c7153ca35dc85879f13a1 Mon Sep 17 00:00:00 2001 From: Pavankumar Kondeti Date: Wed, 28 Sep 2022 10:08:15 +0530 Subject: [PATCH 1816/1842] ANDROID: abi_gki_aarch64_qcom: Add android_vh_madvise_cold_or_pageout Add android_vh_madvise_cold_or_pageout symbol so that vendor modules can influence the shared pages reclaim behavior. 
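As a hypothetical sketch of a consumer (probe name and policy are assumptions, not part of either patch), a vendor module that wants MADV_PAGEOUT to reclaim shared pages registers a probe on the hook above and flips the allow_shared flag it is handed:

	static void allow_shared_pageout(void *unused, struct vm_area_struct *vma,
					 bool *allow_shared)
	{
		/* example policy: only relax the mapcount check for file-backed VMAs */
		if (vma->vm_file)
			*allow_shared = true;
	}

	/* registration follows the usual vendor-hook pattern, e.g.:
	 * register_trace_android_vh_madvise_cold_or_pageout(allow_shared_pageout, NULL);
	 */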
Leaf changes summary: 1 artifact changed Changed leaf types summary: 0 leaf type changed Removed/Changed/Added functions summary: 0 Removed, 0 Changed, 0 Added function Removed/Changed/Added variables summary: 0 Removed, 0 Changed, 1 Added variable 1 Added variable: [A] 'tracepoint __tracepoint_android_vh_madvise_cold_or_pageout' Bug: 242678506 Change-Id: I6180578876858543eb3b71da45b6f75d40dfc008 Signed-off-by: Pavankumar Kondeti --- android/abi_gki_aarch64.xml | 2 ++ android/abi_gki_aarch64_qcom | 1 + 2 files changed, 3 insertions(+) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index b69cd3d6656d..203534259b30 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -6448,6 +6448,7 @@ + @@ -120093,6 +120094,7 @@ + diff --git a/android/abi_gki_aarch64_qcom b/android/abi_gki_aarch64_qcom index 7941c74c90a7..dc16110241bc 100644 --- a/android/abi_gki_aarch64_qcom +++ b/android/abi_gki_aarch64_qcom @@ -2581,6 +2581,7 @@ __traceiter_android_vh_jiffies_update __traceiter_android_vh_logbuf __traceiter_android_vh_logbuf_pr_cont + __tracepoint_android_vh_madvise_cold_or_pageout __traceiter_android_vh_printk_hotplug __traceiter_android_vh_rproc_recovery __traceiter_android_vh_rproc_recovery_set From 9b080edfbde3c26f4780b5bae8a401b38ca0d852 Mon Sep 17 00:00:00 2001 From: Jeongik Cha Date: Mon, 4 Jul 2022 17:43:54 +0900 Subject: [PATCH 1817/1842] UPSTREAM: wifi: mac80211_hwsim: fix race condition in pending packet commit 4ee186fa7e40ae06ebbfbad77e249e3746e14114 upstream. A pending packet uses a cookie as an unique key, but it can be duplicated because it didn't use atomic operators. And also, a pending packet can be null in hwsim_tx_info_frame_received_nl due to race condition with mac80211_hwsim_stop. For this, * Use an atomic type and operator for a cookie * Add a lock around the loop for pending packets Signed-off-by: Jeongik Cha Link: https://lore.kernel.org/r/20220704084354.3556326-1-jeongik@google.com Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman (cherry picked from commit eb8fc4277b628ac81db806c130a500dd48a9e524) Signed-off-by: Carlos Llamas Bug: 236994625 Change-Id: Ic6613c8869a51b5de303e40406f023af689b9d64 --- drivers/net/wireless/mac80211_hwsim.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c index e963ebb1156a..10a3fdb23a1f 100644 --- a/drivers/net/wireless/mac80211_hwsim.c +++ b/drivers/net/wireless/mac80211_hwsim.c @@ -597,7 +597,7 @@ struct mac80211_hwsim_data { bool ps_poll_pending; struct dentry *debugfs; - uintptr_t pending_cookie; + atomic64_t pending_cookie; struct sk_buff_head pending; /* packets pending */ /* * Only radios in the same group can communicate together (the @@ -1204,7 +1204,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw, int i; struct hwsim_tx_rate tx_attempts[IEEE80211_TX_MAX_RATES]; struct hwsim_tx_rate_flag tx_attempts_flags[IEEE80211_TX_MAX_RATES]; - uintptr_t cookie; + u64 cookie; if (data->ps != PS_DISABLED) hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PM); @@ -1273,8 +1273,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw, goto nla_put_failure; /* We create a cookie to identify this skb */ - data->pending_cookie++; - cookie = data->pending_cookie; + cookie = (u64)atomic64_inc_return(&data->pending_cookie); info->rate_driver_data[0] = (void *)cookie; if (nla_put_u64_64bit(skb, HWSIM_ATTR_COOKIE, cookie, HWSIM_ATTR_PAD)) goto nla_put_failure; @@ 
-3514,6 +3513,7 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2, const u8 *src; unsigned int hwsim_flags; int i; + unsigned long flags; bool found = false; if (!info->attrs[HWSIM_ATTR_ADDR_TRANSMITTER] || @@ -3541,18 +3541,20 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2, } /* look for the skb matching the cookie passed back from user */ + spin_lock_irqsave(&data2->pending.lock, flags); skb_queue_walk_safe(&data2->pending, skb, tmp) { u64 skb_cookie; txi = IEEE80211_SKB_CB(skb); - skb_cookie = (u64)(uintptr_t)txi->rate_driver_data[0]; + skb_cookie = (u64)txi->rate_driver_data[0]; if (skb_cookie == ret_skb_cookie) { - skb_unlink(skb, &data2->pending); + __skb_unlink(skb, &data2->pending); found = true; break; } } + spin_unlock_irqrestore(&data2->pending.lock, flags); /* not found */ if (!found) From 08cb67eb33769f0bcda5457b54b5e9e1c0845c3a Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Mon, 11 Jul 2022 13:14:24 +0200 Subject: [PATCH 1818/1842] UPSTREAM: wifi: mac80211_hwsim: add back erroneously removed cast commit 58b6259d820d63c2adf1c7541b54cce5a2ae6073 upstream. The robots report that we're now casting to a differently sized integer, which is correct, and the previous patch had erroneously removed it. Reported-by: kernel test robot Fixes: 4ee186fa7e40 ("wifi: mac80211_hwsim: fix race condition in pending packet") Signed-off-by: Johannes Berg Cc: Jeongik Cha Signed-off-by: Greg Kroah-Hartman (cherry picked from commit d400222f49599423862010f0c7f6fee142be72d7) Signed-off-by: Carlos Llamas Bug: 236994625 Change-Id: I4b5cfa77c47d4d03b46600f0b543e27340c228c0 --- drivers/net/wireless/mac80211_hwsim.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c index 10a3fdb23a1f..88f47d528bd6 100644 --- a/drivers/net/wireless/mac80211_hwsim.c +++ b/drivers/net/wireless/mac80211_hwsim.c @@ -3546,7 +3546,7 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2, u64 skb_cookie; txi = IEEE80211_SKB_CB(skb); - skb_cookie = (u64)txi->rate_driver_data[0]; + skb_cookie = (u64)(uintptr_t)txi->rate_driver_data[0]; if (skb_cookie == ret_skb_cookie) { __skb_unlink(skb, &data2->pending); From f18e68a2344c04d9efe4c42faa08ad0edb38f693 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Wed, 13 Jul 2022 21:16:45 +0200 Subject: [PATCH 1819/1842] UPSTREAM: wifi: mac80211_hwsim: use 32-bit skb cookie commit cc5250cdb43d444061412df7fae72d2b4acbdf97 upstream. We won't really have enough skbs to need a 64-bit cookie, and on 32-bit platforms storing the 64-bit cookie into the void *rate_driver_data doesn't work anyway. Switch back to using just a 32-bit cookie and uintptr_t for the type to avoid compiler warnings about all this. 
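The root of the warning is that rate_driver_data[] stores void pointers, so a 64-bit cookie cannot round-trip through it on a 32-bit build, while a 32-bit counter cast through uintptr_t always can. A short illustrative fragment (not driver code) of the pattern the patch settles on:

	static atomic_t cookie_ctr;	/* 32-bit counter */
	void *slot;

	uintptr_t cookie = atomic_inc_return(&cookie_ctr);
	slot = (void *)cookie;			/* fits in a pointer on 32- and 64-bit */
	WARN_ON((uintptr_t)slot != cookie);	/* round-trips without truncation */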
Fixes: 4ee186fa7e40 ("wifi: mac80211_hwsim: fix race condition in pending packet") Signed-off-by: Johannes Berg Cc: Jeongik Cha Signed-off-by: Greg Kroah-Hartman (cherry picked from commit 6dece5ad6e1e7d8c2bacfae606dc6f18a18c51e0) Signed-off-by: Carlos Llamas Bug: 236994625 Change-Id: I81b075297ec2248f706aebc914cd5e2783665bbc --- drivers/net/wireless/mac80211_hwsim.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c index 88f47d528bd6..2d1f7fb18381 100644 --- a/drivers/net/wireless/mac80211_hwsim.c +++ b/drivers/net/wireless/mac80211_hwsim.c @@ -597,7 +597,7 @@ struct mac80211_hwsim_data { bool ps_poll_pending; struct dentry *debugfs; - atomic64_t pending_cookie; + atomic_t pending_cookie; struct sk_buff_head pending; /* packets pending */ /* * Only radios in the same group can communicate together (the @@ -1204,7 +1204,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw, int i; struct hwsim_tx_rate tx_attempts[IEEE80211_TX_MAX_RATES]; struct hwsim_tx_rate_flag tx_attempts_flags[IEEE80211_TX_MAX_RATES]; - u64 cookie; + uintptr_t cookie; if (data->ps != PS_DISABLED) hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_PM); @@ -1273,7 +1273,7 @@ static void mac80211_hwsim_tx_frame_nl(struct ieee80211_hw *hw, goto nla_put_failure; /* We create a cookie to identify this skb */ - cookie = (u64)atomic64_inc_return(&data->pending_cookie); + cookie = atomic_inc_return(&data->pending_cookie); info->rate_driver_data[0] = (void *)cookie; if (nla_put_u64_64bit(skb, HWSIM_ATTR_COOKIE, cookie, HWSIM_ATTR_PAD)) goto nla_put_failure; @@ -3543,10 +3543,10 @@ static int hwsim_tx_info_frame_received_nl(struct sk_buff *skb_2, /* look for the skb matching the cookie passed back from user */ spin_lock_irqsave(&data2->pending.lock, flags); skb_queue_walk_safe(&data2->pending, skb, tmp) { - u64 skb_cookie; + uintptr_t skb_cookie; txi = IEEE80211_SKB_CB(skb); - skb_cookie = (u64)(uintptr_t)txi->rate_driver_data[0]; + skb_cookie = (uintptr_t)txi->rate_driver_data[0]; if (skb_cookie == ret_skb_cookie) { __skb_unlink(skb, &data2->pending); From 24220df8028a5ae16b20aa5dc175186b14252ccb Mon Sep 17 00:00:00 2001 From: Chao Yu Date: Wed, 28 Sep 2022 23:38:53 +0800 Subject: [PATCH 1820/1842] FROMGIT: f2fs: support recording stop_checkpoint reason into super_block This patch supports to record stop_checkpoint error into f2fs_super_block.s_stop_reason[]. 
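Condensed, the new f2fs_handle_stop() added further down amounts to a saturating 8-bit counter per reason, persisted by rewriting the superblock under sb_lock:

	f2fs_down_write(&sbi->sb_lock);
	if (raw_super->s_stop_reason[reason] < 0xff)	/* (1 << BITS_PER_BYTE) - 1 */
		raw_super->s_stop_reason[reason]++;
	err = f2fs_commit_super(sbi, false);		/* flush the raw super to disk */
	f2fs_up_write(&sbi->sb_lock);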
Bug: 247456379 Bug: 246094874 (cherry picked from commit 93523dddd98b9838896277fa9cad238a72214f02 https: //git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git dev) Signed-off-by: Chao Yu Signed-off-by: Jaegeuk Kim Change-Id: I3e5fb355a7a7413b1e4bb4937791491ca73e6853 --- fs/f2fs/checkpoint.c | 10 +++++++--- fs/f2fs/data.c | 6 ++++-- fs/f2fs/f2fs.h | 4 +++- fs/f2fs/file.c | 11 ++++++----- fs/f2fs/gc.c | 6 ++++-- fs/f2fs/inode.c | 3 ++- fs/f2fs/segment.c | 7 +++++-- fs/f2fs/super.c | 20 ++++++++++++++++++++ include/linux/f2fs_fs.h | 17 ++++++++++++++++- 9 files changed, 67 insertions(+), 17 deletions(-) diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c index 6054f13e5c89..de2c454d6940 100644 --- a/fs/f2fs/checkpoint.c +++ b/fs/f2fs/checkpoint.c @@ -25,12 +25,16 @@ static struct kmem_cache *ino_entry_slab; struct kmem_cache *f2fs_inode_entry_slab; -void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io) +void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io, + unsigned char reason) { f2fs_build_fault_attr(sbi, 0, 0); set_ckpt_flags(sbi, CP_ERROR_FLAG); - if (!end_io) + if (!end_io) { f2fs_flush_merged_writes(sbi); + + f2fs_handle_stop(sbi, reason); + } } /* @@ -120,7 +124,7 @@ struct page *f2fs_get_meta_page_retry(struct f2fs_sb_info *sbi, pgoff_t index) if (PTR_ERR(page) == -EIO && ++count <= DEFAULT_RETRY_IO_COUNT) goto retry; - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_META_PAGE); } return page; } diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 3bac45d7a94c..867b2b72ec67 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -311,7 +311,8 @@ static void f2fs_write_end_io(struct bio *bio) mempool_free(page, sbi->write_io_dummy); if (unlikely(bio->bi_status)) - f2fs_stop_checkpoint(sbi, true); + f2fs_stop_checkpoint(sbi, true, + STOP_CP_REASON_WRITE_FAIL); continue; } @@ -327,7 +328,8 @@ static void f2fs_write_end_io(struct bio *bio) if (unlikely(bio->bi_status)) { mapping_set_error(page->mapping, -EIO); if (type == F2FS_WB_CP_DATA) - f2fs_stop_checkpoint(sbi, true); + f2fs_stop_checkpoint(sbi, true, + STOP_CP_REASON_WRITE_FAIL); } f2fs_bug_on(sbi, page->mapping == NODE_MAPPING(sbi) && diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h index 56b67a0d18d5..19d7d1f4c4fa 100644 --- a/fs/f2fs/f2fs.h +++ b/fs/f2fs/f2fs.h @@ -3482,6 +3482,7 @@ int f2fs_enable_quota_files(struct f2fs_sb_info *sbi, bool rdonly); int f2fs_quota_sync(struct super_block *sb, int type); loff_t max_file_blocks(struct inode *inode); void f2fs_quota_off_umount(struct super_block *sb); +void f2fs_handle_stop(struct f2fs_sb_info *sbi, unsigned char reason); int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover); int f2fs_sync_fs(struct super_block *sb, int sync); int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi); @@ -3631,7 +3632,8 @@ unsigned int f2fs_usable_blks_in_seg(struct f2fs_sb_info *sbi, /* * checkpoint.c */ -void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io); +void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io, + unsigned char reason); void f2fs_flush_ckpt_thread(struct f2fs_sb_info *sbi); struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index); struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index); diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index 478b65efbf31..aa67c0c7caf2 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -2256,7 +2256,8 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg) if (ret) { if (ret == -EROFS) { ret = 0; - 
f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, + STOP_CP_REASON_SHUTDOWN); set_sbi_flag(sbi, SBI_IS_SHUTDOWN); trace_f2fs_shutdown(sbi, in, ret); } @@ -2269,7 +2270,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg) ret = freeze_bdev(sb->s_bdev); if (ret) goto out; - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_SHUTDOWN); set_sbi_flag(sbi, SBI_IS_SHUTDOWN); thaw_bdev(sb->s_bdev); break; @@ -2278,16 +2279,16 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg) ret = f2fs_sync_fs(sb, 1); if (ret) goto out; - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_SHUTDOWN); set_sbi_flag(sbi, SBI_IS_SHUTDOWN); break; case F2FS_GOING_DOWN_NOSYNC: - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_SHUTDOWN); set_sbi_flag(sbi, SBI_IS_SHUTDOWN); break; case F2FS_GOING_DOWN_METAFLUSH: f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_META_IO); - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_SHUTDOWN); set_sbi_flag(sbi, SBI_IS_SHUTDOWN); break; case F2FS_GOING_DOWN_NEED_FSCK: diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index 78e847010d2e..3bc3af6421ab 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -68,7 +68,8 @@ static int gc_thread_func(void *data) if (time_to_inject(sbi, FAULT_CHECKPOINT)) { f2fs_show_injection_info(sbi, FAULT_CHECKPOINT); - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, + STOP_CP_REASON_FAULT_INJECT); } if (!sb_start_write_trylock(sbi->sb)) { @@ -1633,7 +1634,8 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, f2fs_err(sbi, "Inconsistent segment (%u) type [%d, %d] in SSA and SIT", segno, type, GET_SUM_TYPE((&sum->footer))); set_sbi_flag(sbi, SBI_NEED_FSCK); - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, + STOP_CP_REASON_CORRUPTED_SUMMARY); goto skip; } diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index d25853c98d4a..29bf3e215f52 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -685,7 +685,8 @@ void f2fs_update_inode_page(struct inode *inode) cond_resched(); goto retry; } else if (err != -ENOENT) { - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, + STOP_CP_REASON_UPDATE_INODE); } return; } diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 2d29eb15a550..2ff9c9903e35 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -499,7 +499,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need) { if (time_to_inject(sbi, FAULT_CHECKPOINT)) { f2fs_show_injection_info(sbi, FAULT_CHECKPOINT); - f2fs_stop_checkpoint(sbi, false); + f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_FAULT_INJECT); } /* balance_fs_bg is able to be pending */ @@ -782,8 +782,11 @@ int f2fs_flush_device_cache(struct f2fs_sb_info *sbi) if (!f2fs_test_bit(i, (char *)&sbi->dirty_device)) continue; ret = __submit_flush_wait(sbi, FDEV(i).bdev); - if (ret) + if (ret) { + f2fs_stop_checkpoint(sbi, false, + STOP_CP_REASON_FLUSH_FAIL); break; + } spin_lock(&sbi->dev_lock); f2fs_clear_bit(i, (char *)&sbi->dirty_device); diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index cb049996e53b..89d90d366754 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -3642,6 +3642,26 @@ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover) return err; } +void f2fs_handle_stop(struct f2fs_sb_info *sbi, unsigned char reason) +{ + struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi); + int err; + + f2fs_bug_on(sbi, reason >= MAX_STOP_REASON); + + 
f2fs_down_write(&sbi->sb_lock); + + if (raw_super->s_stop_reason[reason] < ((1 << BITS_PER_BYTE) - 1)) + raw_super->s_stop_reason[reason]++; + + err = f2fs_commit_super(sbi, false); + if (err) + f2fs_err(sbi, "f2fs_commit_super fails to record reason:%u err:%d", + reason, err); + + f2fs_up_write(&sbi->sb_lock); +} + static int f2fs_scan_devices(struct f2fs_sb_info *sbi) { struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi); diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h index d445150c5350..5dd1e52b8997 100644 --- a/include/linux/f2fs_fs.h +++ b/include/linux/f2fs_fs.h @@ -73,6 +73,20 @@ struct f2fs_device { __le32 total_segments; } __packed; +/* reason of stop_checkpoint */ +enum stop_cp_reason { + STOP_CP_REASON_SHUTDOWN, + STOP_CP_REASON_FAULT_INJECT, + STOP_CP_REASON_META_PAGE, + STOP_CP_REASON_WRITE_FAIL, + STOP_CP_REASON_CORRUPTED_SUMMARY, + STOP_CP_REASON_UPDATE_INODE, + STOP_CP_REASON_FLUSH_FAIL, + STOP_CP_REASON_MAX, +}; + +#define MAX_STOP_REASON 32 + struct f2fs_super_block { __le32 magic; /* Magic Number */ __le16 major_ver; /* Major Version */ @@ -116,7 +130,8 @@ struct f2fs_super_block { __u8 hot_ext_count; /* # of hot file extension */ __le16 s_encoding; /* Filename charset encoding */ __le16 s_encoding_flags; /* Filename charset encoding flags */ - __u8 reserved[306]; /* valid reserved region */ + __u8 s_stop_reason[MAX_STOP_REASON]; /* stop checkpoint reason */ + __u8 reserved[274]; /* valid reserved region */ __le32 crc; /* checksum of superblock */ } __packed; From b046e2dca512d1b5c5121e8f55bdf4f0ae660e93 Mon Sep 17 00:00:00 2001 From: Todd Kjos Date: Thu, 6 Oct 2022 18:00:02 +0000 Subject: [PATCH 1821/1842] ANDROID: Fix kenelci build-break for !CONFIG_PERF_EVENTS Kernelci builds were broken if !CONFIG_PERF_EVENTS since 830f0202d737 ("ANDROID: cpu/hotplug: avoid breaking Android ABI by fusing cpuhp steps") causes perf_event_init_cpu(cpu) to be reduced to "NULL(cpu)": kernel/cpu.c:1868:21: error: called object type 'void *' is not a function or function pointer Fixes: 830f0202d737 ("ANDROID: cpu/hotplug: avoid breaking Android ABI by fusing cpuhp steps") Signed-off-by: Todd Kjos Change-Id: Ifc7351f74470c87018770395af4b4f6096f0d73f --- kernel/cpu.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/kernel/cpu.c b/kernel/cpu.c index 90d09bafecf6..fc15c01d61b8 100644 --- a/kernel/cpu.c +++ b/kernel/cpu.c @@ -1865,7 +1865,9 @@ int __boot_cpu_id; /* Horrific hacks because we can't add more to cpuhp_hp_states. */ static int random_and_perf_prepare_fusion(unsigned int cpu) { +#ifdef CONFIG_PERF_EVENTS perf_event_init_cpu(cpu); +#endif random_prepare_cpu(cpu); return 0; } From 95f23ced41762d64b6afb579d54e5f98d5bf5c4b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Stephan=20M=C3=BCller?= Date: Mon, 20 Dec 2021 07:21:53 +0100 Subject: [PATCH 1822/1842] UPSTREAM: crypto: jitter - add oversampling of noise source The output n bits can receive more than n bits of min entropy, of course, but the fixed output of the conditioning function can only asymptotically approach the output size bits of min entropy, not attain that bound. Random maps will tend to have output collisions, which reduces the creditable output entropy (that is what SP 800-90B Section 3.1.5.1.2 attempts to bound). 
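Concretely, with the 64-bit LFSR conditioner the jent_gen_entropy() loop bound becomes (DATA_SIZE_BITS + JENT_ENTROPY_SAFETY_FACTOR) * osr in FIPS mode, which is where the halved throughput and the "one fourth" figure quoted below come from; a compressed view of the arithmetic:

	unsigned int safety = fips_enabled ? JENT_ENTROPY_SAFETY_FACTOR : 0;	/* 64 */
	unsigned int rounds = (DATA_SIZE_BITS + safety) * ec->osr;
	/* 64-bit LFSR block,   osr = 1: (64 + 64) / 64   -> 2.00x the non-FIPS cost */
	/* 256-bit conditioner, osr = 1: (256 + 64) / 256 -> 1.25x the non-FIPS cost */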
The value "64" is justified in Appendix A.4 of the current 90C draft, and aligns with NIST's in "epsilon" definition in this document, which is that a string can be considered "full entropy" if you can bound the min entropy in each bit of output to at least 1-epsilon, where epsilon is required to be <= 2^(-32). Note, this patch causes the Jitter RNG to cut its performance in half in FIPS mode because the conditioning function of the LFSR produces 64 bits of entropy in one block. The oversampling requires that additionally 64 bits of entropy are sampled from the noise source. If the conditioner is changed, such as using SHA-256, the impact of the oversampling is only one fourth, because for the 256 bit block of the conditioner, only 64 additional bits from the noise source must be sampled. This patch is derived from the user space jitterentropy-library. Signed-off-by: Stephan Mueller Reviewed-by: Simo Sorce Signed-off-by: Herbert Xu Bug: 188620248 (cherry picked from commit 908dffaf88a248e542bdae3ca174f27b8f4ccf37) Change-Id: I7ae1fe58c1b5ea5f206a8f3675f0c20e255a97ec Signed-off-by: Eric Biggers --- crypto/jitterentropy.c | 23 +++++++++++++++++++++-- 1 file changed, 21 insertions(+), 2 deletions(-) diff --git a/crypto/jitterentropy.c b/crypto/jitterentropy.c index 37c4c308339e..423c55d0e165 100644 --- a/crypto/jitterentropy.c +++ b/crypto/jitterentropy.c @@ -117,6 +117,22 @@ struct rand_data { #define JENT_EHEALTH 9 /* Health test failed during initialization */ #define JENT_ERCT 10 /* RCT failed during initialization */ +/* + * The output n bits can receive more than n bits of min entropy, of course, + * but the fixed output of the conditioning function can only asymptotically + * approach the output size bits of min entropy, not attain that bound. Random + * maps will tend to have output collisions, which reduces the creditable + * output entropy (that is what SP 800-90B Section 3.1.5.1.2 attempts to bound). + * + * The value "64" is justified in Appendix A.4 of the current 90C draft, + * and aligns with NIST's in "epsilon" definition in this document, which is + * that a string can be considered "full entropy" if you can bound the min + * entropy in each bit of output to at least 1-epsilon, where epsilon is + * required to be <= 2^(-32). + */ +#define JENT_ENTROPY_SAFETY_FACTOR 64 + +#include #include "jitterentropy.h" /*************************************************************************** @@ -546,7 +562,10 @@ static int jent_measure_jitter(struct rand_data *ec) */ static void jent_gen_entropy(struct rand_data *ec) { - unsigned int k = 0; + unsigned int k = 0, safety_factor = 0; + + if (fips_enabled) + safety_factor = JENT_ENTROPY_SAFETY_FACTOR; /* priming of the ->prev_time value */ jent_measure_jitter(ec); @@ -560,7 +579,7 @@ static void jent_gen_entropy(struct rand_data *ec) * We multiply the loop value with ->osr to obtain the * oversampling rate requested by the caller */ - if (++k >= (DATA_SIZE_BITS * ec->osr)) + if (++k >= ((DATA_SIZE_BITS + safety_factor) * ec->osr)) break; } } From 9613bc53b51dd60b85b84b7c75cdfb7d11f2222e Mon Sep 17 00:00:00 2001 From: Todd Kjos Date: Mon, 10 Oct 2022 10:21:23 -0700 Subject: [PATCH 1823/1842] Revert "firmware_loader: use kernel credentials when reading firmware" This reverts commit 5a73581116362340087fad4df7e4530c4a819821. Introduces incompatible behavior in android12-5.10 which broke partner devices. Discussion this topic is in b/222166126. The new behavior should be fine for android13 and later kernels. 
The patch was merged into android12-5.10 with the latest LTS update. Bug: 247895237 Bug: 248989172 Signed-off-by: Todd Kjos Change-Id: Id8b617b5c91fc080c3b3cbe7ba55cd231bef5cfd --- drivers/base/firmware_loader/main.c | 17 ----------------- 1 file changed, 17 deletions(-) diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c index a4dd500bc141..1372f40d0371 100644 --- a/drivers/base/firmware_loader/main.c +++ b/drivers/base/firmware_loader/main.c @@ -793,8 +793,6 @@ _request_firmware(const struct firmware **firmware_p, const char *name, size_t offset, u32 opt_flags) { struct firmware *fw = NULL; - struct cred *kern_cred = NULL; - const struct cred *old_cred; bool nondirect = false; int ret; @@ -811,18 +809,6 @@ _request_firmware(const struct firmware **firmware_p, const char *name, if (ret <= 0) /* error or already assigned */ goto out; - /* - * We are about to try to access the firmware file. Because we may have been - * called by a driver when serving an unrelated request from userland, we use - * the kernel credentials to read the file. - */ - kern_cred = prepare_kernel_cred(NULL); - if (!kern_cred) { - ret = -ENOMEM; - goto out; - } - old_cred = override_creds(kern_cred); - ret = fw_get_filesystem_firmware(device, fw->priv, "", NULL); /* Only full reads can support decompression, platform, and sysfs. */ @@ -848,9 +834,6 @@ _request_firmware(const struct firmware **firmware_p, const char *name, } else ret = assign_fw(fw, device); - revert_creds(old_cred); - put_cred(kern_cred); - out: if (ret < 0) { fw_abort_batch_reqs(fw); From 10af956b38d074de701399b41175c09024df26ee Mon Sep 17 00:00:00 2001 From: Todd Kjos Date: Mon, 10 Oct 2022 10:21:23 -0700 Subject: [PATCH 1824/1842] Revert "firmware_loader: use kernel credentials when reading firmware" This reverts commit 5a73581116362340087fad4df7e4530c4a819821. Introduces incompatible behavior in android12-5.10 which broke partner devices. Discussion this topic is in b/222166126. The new behavior should be fine for android13 and later kernels. The patch was merged into android12-5.10 with the latest LTS update. Bug: 247895237 Bug: 248989172 Signed-off-by: Todd Kjos Change-Id: Id8b617b5c91fc080c3b3cbe7ba55cd231bef5cfd --- drivers/base/firmware_loader/main.c | 17 ----------------- 1 file changed, 17 deletions(-) diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c index a4dd500bc141..1372f40d0371 100644 --- a/drivers/base/firmware_loader/main.c +++ b/drivers/base/firmware_loader/main.c @@ -793,8 +793,6 @@ _request_firmware(const struct firmware **firmware_p, const char *name, size_t offset, u32 opt_flags) { struct firmware *fw = NULL; - struct cred *kern_cred = NULL; - const struct cred *old_cred; bool nondirect = false; int ret; @@ -811,18 +809,6 @@ _request_firmware(const struct firmware **firmware_p, const char *name, if (ret <= 0) /* error or already assigned */ goto out; - /* - * We are about to try to access the firmware file. Because we may have been - * called by a driver when serving an unrelated request from userland, we use - * the kernel credentials to read the file. - */ - kern_cred = prepare_kernel_cred(NULL); - if (!kern_cred) { - ret = -ENOMEM; - goto out; - } - old_cred = override_creds(kern_cred); - ret = fw_get_filesystem_firmware(device, fw->priv, "", NULL); /* Only full reads can support decompression, platform, and sysfs. 
*/ @@ -848,9 +834,6 @@ _request_firmware(const struct firmware **firmware_p, const char *name, } else ret = assign_fw(fw, device); - revert_creds(old_cred); - put_cred(kern_cred); - out: if (ret < 0) { fw_abort_batch_reqs(fw); From c35cda5280bad7bfc021382ddb9f30f0ab38ff54 Mon Sep 17 00:00:00 2001 From: Minchan Kim Date: Thu, 19 May 2022 14:08:54 -0700 Subject: [PATCH 1825/1842] BACKPORT: mm: don't be stuck to rmap lock on reclaim path The rmap locks(i_mmap_rwsem and anon_vma->root->rwsem) could be contended under memory pressure if processes keep working on their vmas(e.g., fork, mmap, munmap). It makes reclaim path stuck. In our real workload traces, we see kswapd is waiting the lock for 300ms+(worst case, a sec) and it makes other processes entering direct reclaim, which were also stuck on the lock. This patch makes lru aging path try_lock mode like shink_page_list so the reclaim context will keep working with next lru pages without being stuck. if it found the rmap lock contended, it rotates the page back to head of lru in both active/inactive lrus to make them consistent behavior, which is basic starting point rather than adding more heristic. Since this patch introduces a new "contended" field as out-param along with try_lock in-param in rmap_walk_control, it's not immutable any longer if the try_lock is set so remove const keywords on rmap related functions. Since rmap walking is already expensive operation, I doubt the const would help sizable benefit( And we didn't have it until 5.17). In a heavy app workload in Android, trace shows following statistics. It almost removes rmap lock contention from reclaim path. Martin Liu reported: Before: max_dur(ms) min_dur(ms) max-min(dur)ms avg_dur(ms) sum_dur(ms) count blocked_function 1632 0 1631 151.542173 31672 209 page_lock_anon_vma_read 601 0 601 145.544681 28817 198 rmap_walk_file After: max_dur(ms) min_dur(ms) max-min(dur)ms avg_dur(ms) sum_dur(ms) count blocked_function NaN NaN NaN NaN NaN 0.0 NaN 0 0 0 0.127645 1 12 rmap_walk_file [minchan@kernel.org: add comment, per Matthew] Link: https://lkml.kernel.org/r/YnNqeB5tUf6LZ57b@google.com Link: https://lkml.kernel.org/r/20220510215423.164547-1-minchan@kernel.org Signed-off-by: Minchan Kim Acked-by: Johannes Weiner Cc: Suren Baghdasaryan Cc: Michal Hocko Cc: John Dias Cc: Tim Murray Cc: Matthew Wilcox Cc: Vladimir Davydov Cc: Martin Liu Cc: Minchan Kim Cc: Matthew Wilcox Signed-off-by: Andrew Morton Conflicts: folio->page (cherry picked from commit 6d4675e601357834dadd2ba1d803f6484596015c) Bug: 239681156 Bug: 252333201 Signed-off-by: Minchan Kim Change-Id: I0c63e0291120c8a1b5f2d83b8a7b210cb56c27a2 Signed-off-by: chenxin --- include/linux/fs.h | 5 +++++ include/linux/rmap.h | 24 ++++++++++++++++++------ mm/huge_memory.c | 2 +- mm/ksm.c | 8 +++++++- mm/memory-failure.c | 2 +- mm/page_idle.c | 6 +++--- mm/rmap.c | 43 +++++++++++++++++++++++++++++++++++++------ mm/vmscan.c | 7 ++++++- 8 files changed, 78 insertions(+), 19 deletions(-) diff --git a/include/linux/fs.h b/include/linux/fs.h index 4019e6fa3b95..8cf258fc9162 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -515,6 +515,11 @@ static inline void i_mmap_unlock_write(struct address_space *mapping) up_write(&mapping->i_mmap_rwsem); } +static inline int i_mmap_trylock_read(struct address_space *mapping) +{ + return down_read_trylock(&mapping->i_mmap_rwsem); +} + static inline void i_mmap_lock_read(struct address_space *mapping) { down_read(&mapping->i_mmap_rwsem); diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 
7dee138fbf0f..0a4d49ca8ccf 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -134,6 +134,11 @@ static inline void anon_vma_lock_read(struct anon_vma *anon_vma) down_read(&anon_vma->root->rwsem); } +static inline int anon_vma_trylock_read(struct anon_vma *anon_vma) +{ + return down_read_trylock(&anon_vma->root->rwsem); +} + static inline void anon_vma_unlock_read(struct anon_vma *anon_vma) { up_read(&anon_vma->root->rwsem); @@ -261,17 +266,14 @@ void try_to_munlock(struct page *); void remove_migration_ptes(struct page *old, struct page *new, bool locked); -/* - * Called by memory-failure.c to kill processes. - */ -struct anon_vma *page_lock_anon_vma_read(struct page *page); -void page_unlock_anon_vma_read(struct anon_vma *anon_vma); int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma); /* * rmap_walk_control: To control rmap traversing for specific needs * * arg: passed to rmap_one() and invalid_vma() + * try_lock: bail out if the rmap lock is contended + * contended: indicate the rmap traversal bailed out due to lock contention * rmap_one: executed on each vma where page is mapped * done: for checking traversing termination condition * anon_lock: for getting anon_lock by optimized way rather than default @@ -279,6 +281,8 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma); */ struct rmap_walk_control { void *arg; + bool try_lock; + bool contended; /* * Return false if page table scanning in rmap_walk should be stopped. * Otherwise, return true. @@ -286,13 +290,21 @@ struct rmap_walk_control { bool (*rmap_one)(struct page *page, struct vm_area_struct *vma, unsigned long addr, void *arg); int (*done)(struct page *page); - struct anon_vma *(*anon_lock)(struct page *page); + struct anon_vma *(*anon_lock)(struct page *page, + struct rmap_walk_control *rwc); bool (*invalid_vma)(struct vm_area_struct *vma, void *arg); }; void rmap_walk(struct page *page, struct rmap_walk_control *rwc); void rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc); +/* + * Called by memory-failure.c to kill processes. 
+ */ +struct anon_vma *page_lock_anon_vma_read(struct page *page, + struct rmap_walk_control *rwc); +void page_unlock_anon_vma_read(struct anon_vma *anon_vma); + #else /* !CONFIG_MMU */ #define anon_vma_init() do {} while (0) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index c24716571a93..d10c9c3118a8 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1481,7 +1481,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd) */ get_page(page); spin_unlock(vmf->ptl); - anon_vma = page_lock_anon_vma_read(page); + anon_vma = page_lock_anon_vma_read(page, NULL); /* Confirm the PMD did not change while page_table_lock was released */ spin_lock(vmf->ptl); diff --git a/mm/ksm.c b/mm/ksm.c index e2464c04ede2..2695ddbeb47e 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -2626,7 +2626,13 @@ void rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc) struct vm_area_struct *vma; cond_resched(); - anon_vma_lock_read(anon_vma); + if (!anon_vma_trylock_read(anon_vma)) { + if (rwc->try_lock) { + rwc->contended = true; + return; + } + anon_vma_lock_read(anon_vma); + } anon_vma_interval_tree_foreach(vmac, &anon_vma->rb_root, 0, ULONG_MAX) { unsigned long addr; diff --git a/mm/memory-failure.c b/mm/memory-failure.c index aef267c6a724..4bd73d6dc18a 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -477,7 +477,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill, struct anon_vma *av; pgoff_t pgoff; - av = page_lock_anon_vma_read(page); + av = page_lock_anon_vma_read(page, NULL); if (av == NULL) /* Not actually mapped anymore */ return; diff --git a/mm/page_idle.c b/mm/page_idle.c index 144fb4ed961d..b5613232e881 100644 --- a/mm/page_idle.c +++ b/mm/page_idle.c @@ -92,10 +92,10 @@ static bool page_idle_clear_pte_refs_one(struct page *page, static void page_idle_clear_pte_refs(struct page *page) { /* - * Since rwc.arg is unused, rwc is effectively immutable, so we - * can make it static const to save some cycles and stack. + * Since rwc.try_lock is unused, rwc is effectively immutable, so we + * can make it static to save some cycles and stack. */ - static const struct rmap_walk_control rwc = { + static struct rmap_walk_control rwc = { .rmap_one = page_idle_clear_pte_refs_one, .anon_lock = page_lock_anon_vma_read, }; diff --git a/mm/rmap.c b/mm/rmap.c index d48141f90360..033b04704b59 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -518,9 +518,11 @@ struct anon_vma *page_get_anon_vma(struct page *page) * * Its a little more complex as it tries to keep the fast path to a single * atomic op -- the trylock. If we fail the trylock, we fall back to getting a - * reference like with page_get_anon_vma() and then block on the mutex. + * reference like with page_get_anon_vma() and then block on the mutex + * on !rwc->try_lock case. 
*/ -struct anon_vma *page_lock_anon_vma_read(struct page *page) +struct anon_vma *page_lock_anon_vma_read(struct page *page, + struct rmap_walk_control *rwc) { struct anon_vma *anon_vma = NULL; struct anon_vma *root_anon_vma; @@ -553,6 +555,13 @@ struct anon_vma *page_lock_anon_vma_read(struct page *page) anon_vma = NULL; goto out; } + + if (rwc && rwc->try_lock) { + anon_vma = NULL; + rwc->contended = true; + goto out; + } + /* trylock failed, we got to sleep */ if (!atomic_inc_not_zero(&anon_vma->refcount)) { anon_vma = NULL; @@ -850,8 +859,10 @@ static bool invalid_page_referenced_vma(struct vm_area_struct *vma, void *arg) * @memcg: target memory cgroup * @vm_flags: collect encountered vma->vm_flags who actually referenced the page * - * Quick test_and_clear_referenced for all mappings to a page, - * returns the number of ptes which referenced the page. + * Quick test_and_clear_referenced for all mappings of a page, + * + * Return: The number of mappings which referenced the page. Return -1 if + * the function bailed out due to rmap lock contention. */ int page_referenced(struct page *page, int is_locked, @@ -867,6 +878,7 @@ int page_referenced(struct page *page, .rmap_one = page_referenced_one, .arg = (void *)&pra, .anon_lock = page_lock_anon_vma_read, + .try_lock = true, }; *vm_flags = 0; @@ -897,7 +909,7 @@ int page_referenced(struct page *page, if (we_locked) unlock_page(page); - return pra.referenced; + return rwc.contended ? -1 : pra.referenced; } static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma, @@ -1898,7 +1910,7 @@ static struct anon_vma *rmap_walk_anon_lock(struct page *page, struct anon_vma *anon_vma; if (rwc->anon_lock) - return rwc->anon_lock(page); + return rwc->anon_lock(page, rwc); /* * Note: remove_migration_ptes() cannot use page_lock_anon_vma_read() @@ -1910,7 +1922,17 @@ static struct anon_vma *rmap_walk_anon_lock(struct page *page, if (!anon_vma) return NULL; + if (anon_vma_trylock_read(anon_vma)) + goto out; + + if (rwc->try_lock) { + anon_vma = NULL; + rwc->contended = true; + goto out; + } + anon_vma_lock_read(anon_vma); +out: return anon_vma; } @@ -2009,9 +2031,18 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc, if (!got_lock) return; } else { + if (i_mmap_trylock_read(mapping)) + goto lookup; + + if (rwc->try_lock) { + rwc->contended = true; + return; + } + i_mmap_lock_read(mapping); } } +lookup: vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff_start, pgoff_end) { unsigned long address = vma_address(page, vma); diff --git a/mm/vmscan.c b/mm/vmscan.c index 701595410992..ea87d599199e 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1045,6 +1045,10 @@ static enum page_references page_check_references(struct page *page, if (vm_flags & VM_LOCKED) return PAGEREF_RECLAIM; + /* rmap lock contention: rotate */ + if (referenced_ptes == -1) + return PAGEREF_KEEP; + if (referenced_ptes) { /* * All mapped pages start out with page table @@ -2119,8 +2123,9 @@ static void shrink_active_list(unsigned long nr_to_scan, if (bypass) goto skip_page_referenced; trace_android_vh_page_trylock_set(page); + /* Referenced or rmap lock contention: rotate */ if (page_referenced(page, 0, sc->target_mem_cgroup, - &vm_flags)) { + &vm_flags) != 0) { /* * Identify referenced, file-backed active pages and * give them one more trip around the active list. 
So From 3b39e91301386b04e12060b5c57d1fc1810d40c9 Mon Sep 17 00:00:00 2001 From: Lee Jones Date: Fri, 8 Jul 2022 08:40:09 +0100 Subject: [PATCH 1826/1842] BACKPORT: HID: steam: Prevent NULL pointer dereference in steam_{recv,send}_report commit cd11d1a6114bd4bc6450ae59f6e110ec47362126 upstream. It is possible for a malicious device to forgo submitting a Feature Report. The HID Steam driver presently makes no prevision for this and de-references the 'struct hid_report' pointer obtained from the HID devices without first checking its validity. Let's change that. Bug: 223455965 Cc: Jiri Kosina Cc: Benjamin Tissoires Cc: linux-input@vger.kernel.org Fixes: c164d6abf3841 ("HID: add driver for Valve Steam Controller") Signed-off-by: Lee Jones Signed-off-by: Jiri Kosina Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: Ica12507b87309a7c46b4cab6fcfe4499cd96f45d --- drivers/hid/hid-steam.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c index a3b151b29bd7..fc616db4231b 100644 --- a/drivers/hid/hid-steam.c +++ b/drivers/hid/hid-steam.c @@ -134,6 +134,11 @@ static int steam_recv_report(struct steam_device *steam, int ret; r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0]; + if (!r) { + hid_err(steam->hdev, "No HID_FEATURE_REPORT submitted - nothing to read\n"); + return -EINVAL; + } + if (hid_report_len(r) < 64) return -EINVAL; @@ -165,6 +170,11 @@ static int steam_send_report(struct steam_device *steam, int ret; r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0]; + if (!r) { + hid_err(steam->hdev, "No HID_FEATURE_REPORT submitted - nothing to read\n"); + return -EINVAL; + } + if (hid_report_len(r) < 64) return -EINVAL; From a3835ce695a4a7751e2f4a0128a4c30cb15463d0 Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Mon, 3 May 2021 13:49:17 -0400 Subject: [PATCH 1827/1842] UPSTREAM: psi: Fix psi state corruption when schedule() races with cgroup move 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups") introduced a race condition that corrupts internal psi state. This manifests as kernel warnings, sometimes followed by bogusly high IO pressure: psi: task underflow! cpu=1 t=2 tasks=[0 0 0 0] clear=c set=0 (schedule() decreasing RUNNING and ONCPU, both of which are 0) psi: incosistent task state! task=2412744:systemd cpu=17 psi_flags=e clear=3 set=0 (cgroup_move_task() clearing MEMSTALL and IOWAIT, but task is MEMSTALL | RUNNING | ONCPU) What the offending commit does is batch the two psi callbacks in schedule() to reduce the number of cgroup tree updates. When prev is deactivated and removed from the runqueue, nothing is done in psi at first; when the task switch completes, TSK_RUNNING and TSK_IOWAIT are updated along with TSK_ONCPU. However, the deactivation and the task switch inside schedule() aren't atomic: pick_next_task() may drop the rq lock for load balancing. When this happens, cgroup_move_task() can run after the task has been physically dequeued, but the psi updates are still pending. Since it looks at the task's scheduler state, it doesn't move everything to the new cgroup that the task switch that follows is about to clear from it. cgroup_move_task() will leak the TSK_RUNNING count in the old cgroup, and psi_sched_switch() will underflow it in the new cgroup. A similar thing can happen for iowait. TSK_IOWAIT is usually set when a p->in_iowait task is dequeued, but again this update is deferred to the switch. 
cgroup_move_task() can see an unqueued p->in_iowait task and move a non-existent TSK_IOWAIT. This results in the inconsistent task state warning, as well as a counter underflow that will result in permanent IO ghost pressure being reported. Fix this bug by making cgroup_move_task() use task->psi_flags instead of looking at the potentially mismatching scheduler state. [ We used the scheduler state historically in order to not rely on task->psi_flags for anything but debugging. But that ship has sailed anyway, and this is simpler and more robust. We previously already batched TSK_ONCPU clearing with the TSK_RUNNING update inside the deactivation call from schedule(). But that ordering was safe and didn't result in TSK_ONCPU corruption: unlike most places in the scheduler, cgroup_move_task() only checked task_current() and handled TSK_ONCPU if the task was still queued. ] bug: b/253347377 Fixes: 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups") Signed-off-by: Johannes Weiner Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20210503174917.38579-1-hannes@cmpxchg.org (cherry picked from commit d583d360a620e6229422b3455d0be082b8255f5e) Change-Id: Id0a292058d4bffb716d8e1496f72139e8d435410 --- kernel/sched/psi.c | 36 ++++++++++++++++++++++++++---------- 1 file changed, 26 insertions(+), 10 deletions(-) diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c index f46ac0f39777..e14917cd2a1d 100644 --- a/kernel/sched/psi.c +++ b/kernel/sched/psi.c @@ -1020,7 +1020,7 @@ void psi_cgroup_free(struct cgroup *cgroup) */ void cgroup_move_task(struct task_struct *task, struct css_set *to) { - unsigned int task_flags = 0; + unsigned int task_flags; struct rq_flags rf; struct rq *rq; @@ -1035,15 +1035,31 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to) rq = task_rq_lock(task, &rf); - if (task_on_rq_queued(task)) { - task_flags = TSK_RUNNING; - if (task_current(rq, task)) - task_flags |= TSK_ONCPU; - } else if (task->in_iowait) - task_flags = TSK_IOWAIT; - - if (task->in_memstall) - task_flags |= TSK_MEMSTALL; + /* + * We may race with schedule() dropping the rq lock between + * deactivating prev and switching to next. Because the psi + * updates from the deactivation are deferred to the switch + * callback to save cgroup tree updates, the task's scheduling + * state here is not coherent with its psi state: + * + * schedule() cgroup_move_task() + * rq_lock() + * deactivate_task() + * p->on_rq = 0 + * psi_dequeue() // defers TSK_RUNNING & TSK_IOWAIT updates + * pick_next_task() + * rq_unlock() + * rq_lock() + * psi_task_change() // old cgroup + * task->cgroups = to + * psi_task_change() // new cgroup + * rq_unlock() + * rq_lock() + * psi_sched_switch() // does deferred updates in new cgroup + * + * Don't rely on the scheduling state. Use psi_flags instead. 
+ */ + task_flags = task->psi_flags; if (task_flags) psi_task_change(task, task_flags, 0); From f9a66cbe70914bcf03184e9aa787ef1d6cd3d8fd Mon Sep 17 00:00:00 2001 From: Seiya Wang Date: Thu, 13 Oct 2022 16:51:03 +0800 Subject: [PATCH 1828/1842] ANDROID: GKI: Update symbol list for mtk AIoT projects 2 Added functions: [A] 'function int usb_unlink_urb(struct urb*)' [A] 'function int usb_clear_halt(struct usb_device*, int)' Bug: 253387971 Signed-off-by: Seiya Wang Change-Id: If9b4d6fc9e272b07c62a8c2492e695fac957fac4 --- android/abi_gki_aarch64.xml | 917 ++++++++++++++++++------------------ android/abi_gki_aarch64_mtk | 2 + 2 files changed, 466 insertions(+), 453 deletions(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index 203534259b30..f4d56bb1813c 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -5661,6 +5661,7 @@ + @@ -5788,6 +5789,7 @@ + @@ -16288,18 +16290,18 @@ - + - + - + - + - + @@ -20099,75 +20101,75 @@ - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -21385,7 +21387,7 @@ - + @@ -22108,7 +22110,7 @@ - + @@ -25066,66 +25068,66 @@ - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -30726,12 +30728,12 @@ - + - + - + @@ -44253,81 +44255,81 @@ - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -46366,21 +46368,21 @@ - + - + - + - + - + - + @@ -48023,12 +48025,12 @@ - + - + - + @@ -49631,24 +49633,24 @@ - + - + - + - + - + - + - + @@ -51386,24 +51388,24 @@ - + - + - + - + - + - + - + @@ -56801,114 +56803,114 @@ - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -60930,96 +60932,96 @@ - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -63367,12 +63369,12 @@ - + - + - + @@ -67668,36 +67670,36 @@ - + - + - + - + - + - + - + - + - + - + - + @@ -80047,24 +80049,24 @@ - + - + - + - + - + - + - + @@ -80753,81 +80755,81 @@ - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -83941,213 +83943,213 @@ - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -84788,7 +84790,7 @@ - + @@ -92179,12 +92181,12 @@ - + - + - + @@ -94503,21 +94505,21 @@ - + - + - + - + - + - + @@ -95991,12 +95993,12 @@ - + - + - + @@ -98321,27 +98323,27 @@ - + - + - + - + - + - + - + - + @@ -108367,15 +108369,15 @@ - + - + - + - + @@ -113404,18 +113406,18 @@ - + - + - + - + - + @@ -113473,15 +113475,15 @@ - + - + - + - + @@ -114442,174 +114444,174 @@ - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + @@ -115119,15 +115121,15 @@ - + - + - + - + @@ -115826,49 +115828,49 @@ - - - - + + + + - - - + + + - - - - - - - + + + + + + + - - - - - - - + + + + + + + - - - - + + + + - - - - + + + + @@ -116716,7 +116718,7 @@ - + @@ -123122,8 +123124,8 @@ - - + + @@ -129996,10 +129998,10 @@ - - - - + + + + @@ -136879,8 +136881,8 @@ - - + + @@ -139330,8 +139332,8 @@ - - + + @@ -139931,8 +139933,8 @@ - - + + @@ -139998,34 
+140000,34 @@ - - - - + + + + - - - - + + + + - - - - - - + + + + + + - - - - - - - - + + + + + + + + @@ -144777,11 +144779,11 @@ - - - - - + + + + + @@ -145994,6 +145996,11 @@ + + + + + @@ -146601,6 +146608,10 @@ + + + + diff --git a/android/abi_gki_aarch64_mtk b/android/abi_gki_aarch64_mtk index ee8da67b460e..1a3cc8ff7d30 100644 --- a/android/abi_gki_aarch64_mtk +++ b/android/abi_gki_aarch64_mtk @@ -3216,6 +3216,7 @@ update_devfreq usb_add_phy_dev usb_assign_descriptors + usb_clear_halt usb_copy_descriptors usb_ep_alloc_request usb_ep_autoconfig @@ -3239,6 +3240,7 @@ usb_phy_set_charger_current usb_remove_phy usb_role_switch_set_role + usb_unlink_urb v4l2_async_notifier_add_subdev v4l2_async_notifier_cleanup v4l2_async_subdev_notifier_register From 9687724b763521c0846b1f5be78ce46df48c5c3a Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Mon, 3 May 2021 13:49:17 -0400 Subject: [PATCH 1829/1842] UPSTREAM: psi: Fix psi state corruption when schedule() races with cgroup move 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups") introduced a race condition that corrupts internal psi state. This manifests as kernel warnings, sometimes followed by bogusly high IO pressure: psi: task underflow! cpu=1 t=2 tasks=[0 0 0 0] clear=c set=0 (schedule() decreasing RUNNING and ONCPU, both of which are 0) psi: incosistent task state! task=2412744:systemd cpu=17 psi_flags=e clear=3 set=0 (cgroup_move_task() clearing MEMSTALL and IOWAIT, but task is MEMSTALL | RUNNING | ONCPU) What the offending commit does is batch the two psi callbacks in schedule() to reduce the number of cgroup tree updates. When prev is deactivated and removed from the runqueue, nothing is done in psi at first; when the task switch completes, TSK_RUNNING and TSK_IOWAIT are updated along with TSK_ONCPU. However, the deactivation and the task switch inside schedule() aren't atomic: pick_next_task() may drop the rq lock for load balancing. When this happens, cgroup_move_task() can run after the task has been physically dequeued, but the psi updates are still pending. Since it looks at the task's scheduler state, it doesn't move everything to the new cgroup that the task switch that follows is about to clear from it. cgroup_move_task() will leak the TSK_RUNNING count in the old cgroup, and psi_sched_switch() will underflow it in the new cgroup. A similar thing can happen for iowait. TSK_IOWAIT is usually set when a p->in_iowait task is dequeued, but again this update is deferred to the switch. cgroup_move_task() can see an unqueued p->in_iowait task and move a non-existent TSK_IOWAIT. This results in the inconsistent task state warning, as well as a counter underflow that will result in permanent IO ghost pressure being reported. Fix this bug by making cgroup_move_task() use task->psi_flags instead of looking at the potentially mismatching scheduler state. [ We used the scheduler state historically in order to not rely on task->psi_flags for anything but debugging. But that ship has sailed anyway, and this is simpler and more robust. We previously already batched TSK_ONCPU clearing with the TSK_RUNNING update inside the deactivation call from schedule(). But that ordering was safe and didn't result in TSK_ONCPU corruption: unlike most places in the scheduler, cgroup_move_task() only checked task_current() and handled TSK_ONCPU if the task was still queued. 
] Bug: 253347377 Bug: 253518869 Fixes: 4117cebf1a9f ("psi: Optimize task switch inside shared cgroups") Signed-off-by: Johannes Weiner Signed-off-by: Peter Zijlstra (Intel) Link: https://lkml.kernel.org/r/20210503174917.38579-1-hannes@cmpxchg.org (cherry picked from commit d583d360a620e6229422b3455d0be082b8255f5e) Change-Id: Id0a292058d4bffb716d8e1496f72139e8d435410 --- kernel/sched/psi.c | 36 ++++++++++++++++++++++++++---------- 1 file changed, 26 insertions(+), 10 deletions(-) diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c index f46ac0f39777..e14917cd2a1d 100644 --- a/kernel/sched/psi.c +++ b/kernel/sched/psi.c @@ -1020,7 +1020,7 @@ void psi_cgroup_free(struct cgroup *cgroup) */ void cgroup_move_task(struct task_struct *task, struct css_set *to) { - unsigned int task_flags = 0; + unsigned int task_flags; struct rq_flags rf; struct rq *rq; @@ -1035,15 +1035,31 @@ void cgroup_move_task(struct task_struct *task, struct css_set *to) rq = task_rq_lock(task, &rf); - if (task_on_rq_queued(task)) { - task_flags = TSK_RUNNING; - if (task_current(rq, task)) - task_flags |= TSK_ONCPU; - } else if (task->in_iowait) - task_flags = TSK_IOWAIT; - - if (task->in_memstall) - task_flags |= TSK_MEMSTALL; + /* + * We may race with schedule() dropping the rq lock between + * deactivating prev and switching to next. Because the psi + * updates from the deactivation are deferred to the switch + * callback to save cgroup tree updates, the task's scheduling + * state here is not coherent with its psi state: + * + * schedule() cgroup_move_task() + * rq_lock() + * deactivate_task() + * p->on_rq = 0 + * psi_dequeue() // defers TSK_RUNNING & TSK_IOWAIT updates + * pick_next_task() + * rq_unlock() + * rq_lock() + * psi_task_change() // old cgroup + * task->cgroups = to + * psi_task_change() // new cgroup + * rq_unlock() + * rq_lock() + * psi_sched_switch() // does deferred updates in new cgroup + * + * Don't rely on the scheduling state. Use psi_flags instead. + */ + task_flags = task->psi_flags; if (task_flags) psi_task_change(task, task_flags, 0); From 39157f2ec27ae724942b6a14d2fa862613b7c007 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Wed, 28 Sep 2022 21:56:15 +0200 Subject: [PATCH 1830/1842] UPSTREAM: wifi: cfg80211: fix u8 overflow in cfg80211_update_notlisted_nontrans() commit aebe9f4639b13a1f4e9a6b42cdd2e38c617b442d upstream. In the copy code of the elements, we do the following calculation to reach the end of the MBSSID element: /* copy the IEs after MBSSID */ cpy_len = mbssid[1] + 2; This looks fine, however, cpy_len is a u8, the same as mbssid[1], so the addition of two can overflow. In this case the subsequent memcpy() will overflow the allocated buffer, since it copies 256 bytes too much due to the way the allocation and memcpy() sizes are calculated. Fix this by using size_t for the cpy_len variable. This fixes CVE-2022-41674. 
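The arithmetic behind the overflow can be reproduced in a few lines of standalone C. This is an illustration only, not the cfg80211 code; the element header below is made up: with an 8-bit cpy_len, a maximal 255-byte MBSSID element makes the "skip this element" value wrap around.

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* hypothetical MBSSID element header: element ID, element length */
          uint8_t mbssid[2] = { 71, 255 };

          uint8_t u8_len   = mbssid[1] + 2;  /* 257 truncated to 1 */
          size_t  wide_len = mbssid[1] + 2;  /* 257, as intended   */

          printf("u8 cpy_len:     %u\n", (unsigned)u8_len);
          printf("size_t cpy_len: %zu\n", wide_len);
          return 0;
  }

With the wrapped value the code skips a single byte instead of the whole 257-byte element, which matches the 256-byte over-copy described above.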
Bug: 253641805 Bug: 256773215 Reported-by: Soenke Huster Tested-by: Soenke Huster Fixes: 0b8fb8235be8 ("cfg80211: Parsing of Multiple BSSID information in scanning") Reviewed-by: Kees Cook Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: I70d3a1188609751797cbabe905028d92d1700f17 --- net/wireless/scan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/net/wireless/scan.c b/net/wireless/scan.c index 6dc9b7e22b71..ccbaaa7bb4f7 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -2228,7 +2228,7 @@ cfg80211_update_notlisted_nontrans(struct wiphy *wiphy, size_t new_ie_len; struct cfg80211_bss_ies *new_ies; const struct cfg80211_bss_ies *old; - u8 cpy_len; + size_t cpy_len; lockdep_assert_held(&wiphy_to_rdev(wiphy)->bss_lock); From 2fd63a0312db9eecf8bc8a797f464116a29c0ff6 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Wed, 28 Sep 2022 22:01:37 +0200 Subject: [PATCH 1831/1842] UPSTREAM: wifi: cfg80211/mac80211: reject bad MBSSID elements commit 8f033d2becc24aa6bfd2a5c104407963560caabc upstream. Per spec, the maximum value for the MaxBSSID ('n') indicator is 8, and the minimum is 1 since a multiple BSSID set with just one BSSID doesn't make sense (the # of BSSIDs is limited by 2^n). Limit this in the parsing in both cfg80211 and mac80211, rejecting any elements with an invalid value. This fixes potentially bad shifts in the processing of these inside the cfg80211_gen_new_bssid() function later. I found this during the investigation of CVE-2022-41674 fixed by the previous patch. Bug: 253641805 Bug: 256773215 Fixes: 0b8fb8235be8 ("cfg80211: Parsing of Multiple BSSID information in scanning") Fixes: 78ac51f81532 ("mac80211: support multi-bssid") Reviewed-by: Kees Cook Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: I7aa0b1a425fcf3a7797e83afa8ad6dd68b283b48 --- net/mac80211/util.c | 2 ++ net/wireless/scan.c | 2 ++ 2 files changed, 4 insertions(+) diff --git a/net/mac80211/util.c b/net/mac80211/util.c index a1f129292ad8..11d5686893c6 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -1409,6 +1409,8 @@ static size_t ieee802_11_find_bssid_profile(const u8 *start, size_t len, for_each_element_id(elem, WLAN_EID_MULTIPLE_BSSID, start, len) { if (elem->datalen < 2) continue; + if (elem->data[0] < 1 || elem->data[0] > 8) + continue; for_each_element(sub, elem->data + 1, elem->datalen - 1) { u8 new_bssid[ETH_ALEN]; diff --git a/net/wireless/scan.c b/net/wireless/scan.c index ccbaaa7bb4f7..a5dc69e833ea 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -2093,6 +2093,8 @@ static void cfg80211_parse_mbssid_data(struct wiphy *wiphy, for_each_element_id(elem, WLAN_EID_MULTIPLE_BSSID, ie, ielen) { if (elem->datalen < 4) continue; + if (elem->data[0] < 1 || (int)elem->data[0] > 8) + continue; for_each_element(sub, elem->data + 1, elem->datalen - 1) { u8 profile_len; From 32f866aa2e408f02ce658848298f1d2ea9b5e310 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Thu, 29 Sep 2022 21:50:44 +0200 Subject: [PATCH 1832/1842] UPSTREAM: wifi: cfg80211: ensure length byte is present before access commit 567e14e39e8f8c6997a1378bc3be615afca86063 upstream. When iterating the elements here, ensure the length byte is present before checking it to see if the entire element will fit into the buffer. Longer term, we should rewrite this code using the type-safe element iteration macros that check all of this. 
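A minimal standalone walker shows the shape of the check being added (an assumed, simplified TLV layout, not the cfg80211 code itself): the length byte at buf[off + 1] may only be read after confirming it lies inside the buffer, and only then can the whole-element check use it.

  #include <stdio.h>
  #include <stdint.h>
  #include <stddef.h>

  /* Elements are one ID byte, one length byte, then 'length' payload bytes. */
  static void walk_elements(const uint8_t *buf, size_t len)
  {
          size_t off = 0;

          /* first clause: the ID and length bytes are present;
           * second clause: the whole element fits in the buffer */
          while (off + 2 <= len && off + 2 + buf[off + 1] <= len) {
                  printf("element id=%u len=%u\n",
                         (unsigned)buf[off], (unsigned)buf[off + 1]);
                  off += 2 + buf[off + 1];
          }
  }

  int main(void)
  {
          /* a well-formed element followed by a truncated one */
          const uint8_t ies[] = { 0x00, 0x03, 'f', 'o', 'o', 0xdd, 0x07 };

          walk_elements(ies, sizeof(ies));
          return 0;
  }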
Bug: 254180332 Bug: 256773215 Fixes: 0b8fb8235be8 ("cfg80211: Parsing of Multiple BSSID information in scanning") Reported-by: Soenke Huster Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: I6ece37c57ca56462566adbcac6def6b35dc5b799 --- net/wireless/scan.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/net/wireless/scan.c b/net/wireless/scan.c index a5dc69e833ea..1b3ae8ce1113 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -304,7 +304,8 @@ static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen, tmp_old = cfg80211_find_ie(WLAN_EID_SSID, ie, ielen); tmp_old = (tmp_old) ? tmp_old + tmp_old[1] + 2 : ie; - while (tmp_old + tmp_old[1] + 2 - ie <= ielen) { + while (tmp_old + 2 - ie <= ielen && + tmp_old + tmp_old[1] + 2 - ie <= ielen) { if (tmp_old[0] == 0) { tmp_old++; continue; @@ -364,7 +365,8 @@ static size_t cfg80211_gen_new_ie(const u8 *ie, size_t ielen, * copied to new ie, skip ssid, capability, bssid-index ie */ tmp_new = sub_copy; - while (tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) { + while (tmp_new + 2 - sub_copy <= subie_len && + tmp_new + tmp_new[1] + 2 - sub_copy <= subie_len) { if (!(tmp_new[0] == WLAN_EID_NON_TX_BSSID_CAP || tmp_new[0] == WLAN_EID_SSID)) { memcpy(pos, tmp_new, tmp_new[1] + 2); From 71e1fbbb62a1072f8e57359f067e40af9d37205d Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Fri, 30 Sep 2022 23:44:23 +0200 Subject: [PATCH 1833/1842] UPSTREAM: wifi: cfg80211: fix BSS refcounting bugs MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 0b7808818cb9df6680f98996b8e9a439fa7bcc2f upstream. There are multiple refcounting bugs related to multi-BSSID: - In bss_ref_get(), if the BSS has a hidden_beacon_bss, then the bss pointer is overwritten before checking for the transmitted BSS, which is clearly wrong. Fix this by using the bss_from_pub() macro. - In cfg80211_bss_update() we copy the transmitted_bss pointer from tmp into new, but then if we release new, we'll unref it erroneously. We already set the pointer and ref it, but need to NULL it since it was copied from the tmp data. - In cfg80211_inform_single_bss_data(), if adding to the non- transmitted list fails, we unlink the BSS and yet still we return it, but this results in returning an entry without a reference. We shouldn't return it anyway if it was broken enough to not get added there. This fixes CVE-2022-42720. 
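The bss_ref_get() part of the fix is easiest to see with the internal/public split mocked up in userspace. The structures below are hypothetical, heavily simplified stand-ins; the point is that the extra references are taken through the container-of helper instead of overwriting the local pointer, so an entry with both a hidden_beacon_bss and a transmitted_bss gets both companions referenced.

  #include <stdio.h>
  #include <stddef.h>

  struct bss_pub {
          struct bss_pub *hidden_beacon_bss;
          struct bss_pub *transmitted_bss;
  };

  struct internal_bss {
          int refcount;
          struct bss_pub pub;
  };

  #define bss_from_pub(p) \
          ((struct internal_bss *)((char *)(p) - offsetof(struct internal_bss, pub)))

  static void bss_ref_get(struct internal_bss *bss)
  {
          bss->refcount++;
          if (bss->pub.hidden_beacon_bss)
                  bss_from_pub(bss->pub.hidden_beacon_bss)->refcount++;
          if (bss->pub.transmitted_bss)
                  bss_from_pub(bss->pub.transmitted_bss)->refcount++;
  }

  int main(void)
  {
          struct internal_bss hidden = { 0 }, tx = { 0 }, bss = { 0 };

          bss.pub.hidden_beacon_bss = &hidden.pub;
          bss.pub.transmitted_bss = &tx.pub;
          bss_ref_get(&bss);
          /* all three entries now hold a reference: 1 1 1 */
          printf("%d %d %d\n", bss.refcount, hidden.refcount, tx.refcount);
          return 0;
  }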
Bug: 253642015 Bug: 256773215 Reported-by: Sönke Huster Tested-by: Sönke Huster Fixes: a3584f56de1c ("cfg80211: Properly track transmitting and non-transmitting BSS") Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: I408bf72ca59b6ffbe2aba460f3e9326bf1c94eec --- net/wireless/scan.c | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/net/wireless/scan.c b/net/wireless/scan.c index 1b3ae8ce1113..d7b92cb2b610 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -143,18 +143,12 @@ static inline void bss_ref_get(struct cfg80211_registered_device *rdev, lockdep_assert_held(&rdev->bss_lock); bss->refcount++; - if (bss->pub.hidden_beacon_bss) { - bss = container_of(bss->pub.hidden_beacon_bss, - struct cfg80211_internal_bss, - pub); - bss->refcount++; - } - if (bss->pub.transmitted_bss) { - bss = container_of(bss->pub.transmitted_bss, - struct cfg80211_internal_bss, - pub); - bss->refcount++; - } + + if (bss->pub.hidden_beacon_bss) + bss_from_pub(bss->pub.hidden_beacon_bss)->refcount++; + + if (bss->pub.transmitted_bss) + bss_from_pub(bss->pub.transmitted_bss)->refcount++; } static inline void bss_ref_put(struct cfg80211_registered_device *rdev, @@ -1736,6 +1730,8 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev, new->refcount = 1; INIT_LIST_HEAD(&new->hidden_list); INIT_LIST_HEAD(&new->pub.nontrans_list); + /* we'll set this later if it was non-NULL */ + new->pub.transmitted_bss = NULL; if (rcu_access_pointer(tmp->pub.proberesp_ies)) { hidden = rb_find_bss(rdev, tmp, BSS_CMP_HIDE_ZLEN); @@ -1973,10 +1969,15 @@ cfg80211_inform_single_bss_data(struct wiphy *wiphy, spin_lock_bh(&rdev->bss_lock); if (cfg80211_add_nontrans_list(non_tx_data->tx_bss, &res->pub)) { - if (__cfg80211_unlink_bss(rdev, res)) + if (__cfg80211_unlink_bss(rdev, res)) { rdev->bss_generation++; + res = NULL; + } } spin_unlock_bh(&rdev->bss_lock); + + if (!res) + return NULL; } trace_cfg80211_return_bss(&res->pub); From 68544e6856d5a22b0ff629ed523b44eaefc4591e Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Sat, 1 Oct 2022 00:01:44 +0200 Subject: [PATCH 1834/1842] UPSTREAM: wifi: cfg80211: avoid nontransmitted BSS list corruption MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit bcca852027e5878aec911a347407ecc88d6fff7f upstream. If a non-transmitted BSS shares enough information (both SSID and BSSID!) with another non-transmitted BSS of a different AP, then we can find and update it, and then try to add it to the non-transmitted BSS list. We do a search for it on the transmitted BSS, but if it's not there (but belongs to another transmitted BSS), the list gets corrupted. Since this is an erroneous situation, simply fail the list insertion in this case and free the non-transmitted BSS. This fixes CVE-2022-42721. 
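A compact way to picture the fix, as a standalone sketch with a minimal circular list rather than the kernel's list_head implementation: a node whose links do not point back to itself is already on some list, so the add helper refuses it instead of splicing it into a second list and corrupting both.

  #include <stdio.h>

  struct list_node {
          struct list_node *prev, *next;
  };

  static void list_init(struct list_node *n)        { n->prev = n->next = n; }
  static int  list_empty(const struct list_node *n) { return n->next == n; }

  static void list_add_tail(struct list_node *n, struct list_node *head)
  {
          n->prev = head->prev;
          n->next = head;
          head->prev->next = n;
          head->prev = n;
  }

  /* hypothetical equivalent of the guarded cfg80211_add_nontrans_list() */
  static int add_nontrans(struct list_node *node, struct list_node *head)
  {
          if (!list_empty(node))
                  return -1;      /* already linked somewhere else: bail */
          list_add_tail(node, head);
          return 0;
  }

  int main(void)
  {
          struct list_node list_a, list_b, node;

          list_init(&list_a);
          list_init(&list_b);
          list_init(&node);

          printf("first add:  %d\n", add_nontrans(&node, &list_a));  /* 0  */
          printf("second add: %d\n", add_nontrans(&node, &list_b));  /* -1 */
          return 0;
  }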
Bug: 253642088 Bug: 256773215 Reported-by: Sönke Huster Tested-by: Sönke Huster Fixes: 0b8fb8235be8 ("cfg80211: Parsing of Multiple BSSID information in scanning") Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: If83261f8b711f5ad0ce922abea2c35fedbc36c39 --- net/wireless/scan.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/net/wireless/scan.c b/net/wireless/scan.c index d7b92cb2b610..d1e338b85a82 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -425,6 +425,15 @@ cfg80211_add_nontrans_list(struct cfg80211_bss *trans_bss, rcu_read_unlock(); + /* + * This is a bit weird - it's not on the list, but already on another + * one! The only way that could happen is if there's some BSSID/SSID + * shared by multiple APs in their multi-BSSID profiles, potentially + * with hidden SSID mixed in ... ignore it. + */ + if (!list_empty(&nontrans_bss->nontrans_list)) + return -EINVAL; + /* add to the list */ list_add_tail(&nontrans_bss->nontrans_list, &trans_bss->nontrans_list); return 0; From 4f154da834bc1023bd3d3a2cf15391b020f96335 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Wed, 5 Oct 2022 15:10:09 +0200 Subject: [PATCH 1835/1842] UPSTREAM: wifi: mac80211_hwsim: avoid mac80211 warning on bad rate MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit 1833b6f46d7e2830251a063935ab464256defe22 upstream. If the tool on the other side (e.g. wmediumd) gets confused about the rate, we hit a warning in mac80211. Silence that by effectively duplicating the check here and dropping the frame silently (in mac80211 it's dropped with the warning). Bug: 254180332 Bug: 256773215 Reported-by: Sönke Huster Tested-by: Sönke Huster Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: Ieb3a258b998aca815efc5d09492ce66e461b5b88 --- drivers/net/wireless/mac80211_hwsim.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c index 2d1f7fb18381..07935e9671dd 100644 --- a/drivers/net/wireless/mac80211_hwsim.c +++ b/drivers/net/wireless/mac80211_hwsim.c @@ -3681,6 +3681,8 @@ static int hwsim_cloned_frame_received_nl(struct sk_buff *skb_2, rx_status.band = channel->band; rx_status.rate_idx = nla_get_u32(info->attrs[HWSIM_ATTR_RX_RATE]); + if (rx_status.rate_idx >= data2->hw->wiphy->bands[rx_status.band]->n_bitrates) + goto out; rx_status.signal = nla_get_u32(info->attrs[HWSIM_ATTR_SIGNAL]); hdr = (void *)skb->data; From a08a4fdd8c064c33109c5a287e50b045b09cf3a0 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Wed, 5 Oct 2022 21:24:10 +0200 Subject: [PATCH 1836/1842] UPSTREAM: wifi: mac80211: fix crash in beacon protection for P2P-device MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit b2d03cabe2b2e150ff5a381731ea0355459be09f upstream. If beacon protection is active but the beacon cannot be decrypted or is otherwise malformed, we call the cfg80211 API to report this to userspace, but that uses a netdev pointer, which isn't present for P2P-Device. Fix this to call it only conditionally to ensure cfg80211 won't crash in the case of P2P-Device. This fixes CVE-2022-42722. 
Bug: 253642089 Bug: 256773215 Reported-by: Sönke Huster Fixes: 9eaf183af741 ("mac80211: Report beacon protection failures to user space") Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: Ie3336b950136e26debbe835f97ad450d03f6baad --- net/mac80211/rx.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c index e991abb45f68..97a63b940482 100644 --- a/net/mac80211/rx.c +++ b/net/mac80211/rx.c @@ -1975,10 +1975,11 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx) if (mmie_keyidx < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS || mmie_keyidx >= NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS + - NUM_DEFAULT_BEACON_KEYS) { - cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev, - skb->data, - skb->len); + NUM_DEFAULT_BEACON_KEYS) { + if (rx->sdata->dev) + cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev, + skb->data, + skb->len); return RX_DROP_MONITOR; /* unexpected BIP keyidx */ } @@ -2126,7 +2127,8 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx) /* either the frame has been decrypted or will be dropped */ status->flag |= RX_FLAG_DECRYPTED; - if (unlikely(ieee80211_is_beacon(fc) && result == RX_DROP_UNUSABLE)) + if (unlikely(ieee80211_is_beacon(fc) && result == RX_DROP_UNUSABLE && + rx->sdata->dev)) cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev, skb->data, skb->len); From 97e742c53f0de5499f6a0156b22837c830fddde9 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Wed, 5 Oct 2022 23:11:43 +0200 Subject: [PATCH 1837/1842] UPSTREAM: wifi: cfg80211: update hidden BSSes to avoid WARN_ON MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit c90b93b5b782891ebfda49d4e5da36632fefd5d1 upstream. When updating beacon elements in a non-transmitted BSS, also update the hidden sub-entries to the same beacon elements, so that a future update through other paths won't trigger a WARN_ON(). The warning is triggered because the beacon elements in the hidden BSSes that are children of the BSS should always be the same as in the parent. 
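A rough sketch of the helper's job, with the structures reduced to the bare minimum (hypothetical types, not the cfg80211 ones): every hidden child is expected to still reference the parent's old beacon IEs, and all of them are switched to the new IEs in one pass so later updates through other paths find them consistent.

  #include <stdio.h>
  #include <stddef.h>

  struct hidden_bss {
          const char *beacon_ies;
          struct hidden_bss *next;
  };

  struct known_bss {
          const char *beacon_ies;
          struct hidden_bss *hidden_list;
  };

  static void update_hidden_bsses(struct known_bss *known,
                                  const char *new_ies, const char *old_ies)
  {
          struct hidden_bss *bss;

          for (bss = known->hidden_list; bss; bss = bss->next) {
                  if (bss->beacon_ies != old_ies)
                          fprintf(stderr, "warn: hidden entry out of sync\n");
                  bss->beacon_ies = new_ies;
          }
  }

  int main(void)
  {
          const char *old_ies = "old beacon elements";
          const char *new_ies = "new beacon elements";
          struct hidden_bss h2 = { old_ies, NULL };
          struct hidden_bss h1 = { old_ies, &h2 };
          struct known_bss known = { old_ies, &h1 };

          update_hidden_bsses(&known, new_ies, old_ies);
          known.beacon_ies = new_ies;
          printf("%s / %s / %s\n", known.beacon_ies, h1.beacon_ies, h2.beacon_ies);
          return 0;
  }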
Bug: 254180332 Bug: 256773215 Reported-by: Sönke Huster Tested-by: Sönke Huster Fixes: 0b8fb8235be8 ("cfg80211: Parsing of Multiple BSSID information in scanning") Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: Iea4669ba97b926dfa67e9592b3a263d3f18508e5 --- net/wireless/scan.c | 31 ++++++++++++++++++++----------- 1 file changed, 20 insertions(+), 11 deletions(-) diff --git a/net/wireless/scan.c b/net/wireless/scan.c index d1e338b85a82..22d169923261 100644 --- a/net/wireless/scan.c +++ b/net/wireless/scan.c @@ -1602,6 +1602,23 @@ struct cfg80211_non_tx_bss { u8 bssid_index; }; +static void cfg80211_update_hidden_bsses(struct cfg80211_internal_bss *known, + const struct cfg80211_bss_ies *new_ies, + const struct cfg80211_bss_ies *old_ies) +{ + struct cfg80211_internal_bss *bss; + + /* Assign beacon IEs to all sub entries */ + list_for_each_entry(bss, &known->hidden_list, hidden_list) { + const struct cfg80211_bss_ies *ies; + + ies = rcu_access_pointer(bss->pub.beacon_ies); + WARN_ON(ies != old_ies); + + rcu_assign_pointer(bss->pub.beacon_ies, new_ies); + } +} + static bool cfg80211_update_known_bss(struct cfg80211_registered_device *rdev, struct cfg80211_internal_bss *known, @@ -1625,7 +1642,6 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev, kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head); } else if (rcu_access_pointer(new->pub.beacon_ies)) { const struct cfg80211_bss_ies *old; - struct cfg80211_internal_bss *bss; if (known->pub.hidden_beacon_bss && !list_empty(&known->hidden_list)) { @@ -1653,16 +1669,7 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev, if (old == rcu_access_pointer(known->pub.ies)) rcu_assign_pointer(known->pub.ies, new->pub.beacon_ies); - /* Assign beacon IEs to all sub entries */ - list_for_each_entry(bss, &known->hidden_list, hidden_list) { - const struct cfg80211_bss_ies *ies; - - ies = rcu_access_pointer(bss->pub.beacon_ies); - WARN_ON(ies != old); - - rcu_assign_pointer(bss->pub.beacon_ies, - new->pub.beacon_ies); - } + cfg80211_update_hidden_bsses(known, new->pub.beacon_ies, old); if (old) kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head); @@ -2309,6 +2316,8 @@ cfg80211_update_notlisted_nontrans(struct wiphy *wiphy, } else { old = rcu_access_pointer(nontrans_bss->beacon_ies); rcu_assign_pointer(nontrans_bss->beacon_ies, new_ies); + cfg80211_update_hidden_bsses(bss_from_pub(nontrans_bss), + new_ies, old); rcu_assign_pointer(nontrans_bss->ies, new_ies); if (old) kfree_rcu((struct cfg80211_bss_ies *)old, rcu_head); From 3b8b7caece09a24a10ecc06c7b689ec89c0a6755 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Fri, 14 Oct 2022 18:41:48 +0200 Subject: [PATCH 1838/1842] UPSTREAM: mac80211: mlme: find auth challenge directly There's no need to parse all elements etc. just to find the authentication challenge - use cfg80211_find_elem() instead. This also allows us to remove WLAN_EID_CHALLENGE handling from the element parsing entirely. 
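For context on the new send path, here is the element layout the helper returns, mocked up standalone (the byte values are made up): an element is one ID byte and one length byte followed by datalen payload bytes, so the number of bytes handed to ieee80211_send_auth() is challenge->datalen + sizeof(*challenge), replacing the old challenge_len + 2.

  #include <stdio.h>
  #include <stdint.h>

  /* mirrors the id/length/payload layout of an information element */
  struct element {
          uint8_t id;
          uint8_t datalen;
          uint8_t data[];
  };

  int main(void)
  {
          /* hypothetical challenge element: ID 16, four payload bytes */
          static const uint8_t ies[] = { 16, 4, 0xde, 0xad, 0xbe, 0xef };
          const struct element *challenge = (const struct element *)ies;

          /* header (2 bytes) + payload */
          printf("bytes to send: %zu\n",
                 challenge->datalen + sizeof(*challenge));
          return 0;
  }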
Bug: 254180332 Bug: 256773215 Link: https://lore.kernel.org/r/20210920154009.45f9b3a15722.Ice3159ffad03a007d6154cbf1fb3a8c48489e86f@changeid Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones (cherry picked from commit 66dacdbc2e830e1187bf0f1171ca257d816ab7e3) Change-Id: Ife49cbad96bb43064449d93b8f8ada9db24be540 --- net/mac80211/ieee80211_i.h | 2 -- net/mac80211/mlme.c | 11 ++++++----- net/mac80211/util.c | 4 ---- 3 files changed, 6 insertions(+), 11 deletions(-) diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index bcc94cc1b620..e11f6d3fbd2b 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -1485,7 +1485,6 @@ struct ieee802_11_elems { const u8 *supp_rates; const u8 *ds_params; const struct ieee80211_tim_ie *tim; - const u8 *challenge; const u8 *rsn; const u8 *rsnx; const u8 *erp_info; @@ -1538,7 +1537,6 @@ struct ieee802_11_elems { u8 ssid_len; u8 supp_rates_len; u8 tim_len; - u8 challenge_len; u8 rsn_len; u8 rsnx_len; u8 ext_supp_rates_len; diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 3988403064ab..9e70cb86b420 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -2899,14 +2899,14 @@ static void ieee80211_auth_challenge(struct ieee80211_sub_if_data *sdata, { struct ieee80211_local *local = sdata->local; struct ieee80211_mgd_auth_data *auth_data = sdata->u.mgd.auth_data; + const struct element *challenge; u8 *pos; - struct ieee802_11_elems elems; u32 tx_flags = 0; pos = mgmt->u.auth.variable; - ieee802_11_parse_elems(pos, len - (pos - (u8 *)mgmt), false, &elems, - mgmt->bssid, auth_data->bss->bssid); - if (!elems.challenge) + challenge = cfg80211_find_elem(WLAN_EID_CHALLENGE, pos, + len - (pos - (u8 *)mgmt)); + if (!challenge) return; auth_data->expected_transaction = 4; drv_mgd_prepare_tx(sdata->local, sdata, 0); @@ -2914,7 +2914,8 @@ static void ieee80211_auth_challenge(struct ieee80211_sub_if_data *sdata, tx_flags = IEEE80211_TX_CTL_REQ_TX_STATUS | IEEE80211_TX_INTFL_MLME_CONN_TX; ieee80211_send_auth(sdata, 3, auth_data->algorithm, 0, - elems.challenge - 2, elems.challenge_len + 2, + (void *)challenge, + challenge->datalen + sizeof(*challenge), auth_data->bss->bssid, auth_data->bss->bssid, auth_data->key, auth_data->key_len, auth_data->key_idx, tx_flags); diff --git a/net/mac80211/util.c b/net/mac80211/util.c index 11d5686893c6..e6b5d164a0ee 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -1124,10 +1124,6 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action, } else elem_parse_failed = true; break; - case WLAN_EID_CHALLENGE: - elems->challenge = pos; - elems->challenge_len = elen; - break; case WLAN_EID_VENDOR_SPECIFIC: if (elen >= 4 && pos[0] == 0x00 && pos[1] == 0x50 && pos[2] == 0xf2) { From 254348ee7b7bcb673453f7859cd735ed15178a04 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Fri, 14 Oct 2022 18:41:49 +0200 Subject: [PATCH 1839/1842] UPSTREAM: wifi: mac80211: don't parse mbssid in assoc response This is simply not valid and simplifies the next commit. I'll make a separate patch for this in the current main tree as well. 
Bug: 254180332 Bug: 256773215 Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones (cherry picked from commit 353b5c8d4bea712774ccc631782ed8cc3630528a) Change-Id: Ie554c036923c94b125035141a3bffafc129a5aa6 --- net/mac80211/mlme.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 9e70cb86b420..0163b835f608 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -3300,7 +3300,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, } capab_info = le16_to_cpu(mgmt->u.assoc_resp.capab_info); ieee802_11_parse_elems(pos, len - (pos - (u8 *)mgmt), false, elems, - mgmt->bssid, assoc_data->bss->bssid); + mgmt->bssid, NULL); if (elems->aid_resp) aid = le16_to_cpu(elems->aid_resp->aid); @@ -3708,7 +3708,7 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata, return; ieee802_11_parse_elems(pos, len - (pos - (u8 *)mgmt), false, &elems, - mgmt->bssid, assoc_data->bss->bssid); + mgmt->bssid, NULL); if (status_code == WLAN_STATUS_ASSOC_REJECTED_TEMPORARILY && elems.timeout_int && From 3335f95077ab2dc88450db41f674c827a22639e7 Mon Sep 17 00:00:00 2001 From: Johannes Berg Date: Fri, 14 Oct 2022 18:41:50 +0200 Subject: [PATCH 1840/1842] UPSTREAM: wifi: mac80211: fix MBSSID parsing use-after-free Commit ff05d4b45dd89b922578dac497dcabf57cf771c6 upstream. This is a different version of the commit, changed to store the non-transmitted profile in the elems, and freeing it in the few places where it's relevant, since that is only the case when the last argument for parsing (the non-tx BSSID) is non-NULL. When we parse a multi-BSSID element, we might point some element pointers into the allocated nontransmitted_profile. However, we free this before returning, causing UAF when the relevant pointers in the parsed elements are accessed. Fix this by not allocating the scratch buffer separately but as part of the returned structure instead, that way, there are no lifetime issues with it. The scratch buffer introduction as part of the returned data here is taken from MLO feature work done by Ilan. This fixes CVE-2022-42719. 
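The ownership change is the whole point of this backport, so a tiny userspace analogue may help (names and fields below are invented for illustration): the parse step keeps the scratch allocation alive inside the returned structure, because other element pointers may point into it, and only the caller frees it once it is done with the parsed data.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* hypothetical parse result: some pointers may point into the
   * scratch buffer, so the buffer must live as long as the struct */
  struct parsed_elems {
          const char *ssid;        /* may point into nontx_profile */
          char *nontx_profile;     /* owned here, freed by the caller */
  };

  static void parse_elems(struct parsed_elems *elems, const char *profile)
  {
          elems->nontx_profile = strdup(profile);
          elems->ssid = elems->nontx_profile + strlen("ssid=");
          /* freeing nontx_profile here, as the old code effectively did,
           * would leave elems->ssid dangling for the caller */
  }

  int main(void)
  {
          struct parsed_elems elems = { 0 };

          parse_elems(&elems, "ssid=backup-ap");
          printf("ssid: %s\n", elems.ssid);
          free(elems.nontx_profile);   /* caller frees once it is done */
          return 0;
  }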
Bug: 253642087 Bug: 256773215 Fixes: 5023b14cf4df ("mac80211: support profile split between elements") Co-developed-by: Ilan Peer Signed-off-by: Ilan Peer Reviewed-by: Kees Cook Signed-off-by: Johannes Berg Signed-off-by: Greg Kroah-Hartman Signed-off-by: Lee Jones Change-Id: I68b07f5850a7ef363d631043d01f58a08aea9274 --- net/mac80211/ieee80211_i.h | 2 ++ net/mac80211/mlme.c | 6 +++++- net/mac80211/scan.c | 2 ++ net/mac80211/util.c | 7 ++++++- 4 files changed, 15 insertions(+), 2 deletions(-) diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index e11f6d3fbd2b..63499db5c63d 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -1551,6 +1551,8 @@ struct ieee802_11_elems { u8 country_elem_len; u8 bssid_index_len; + void *nontx_profile; + /* whether a parse error occurred while retrieving these elements */ bool parse_error; }; diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 0163b835f608..c52b8eb7fb8a 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -3394,6 +3394,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata, sdata_info(sdata, "AP bug: VHT operation missing from AssocResp\n"); } + kfree(bss_elems.nontx_profile); } /* @@ -4045,6 +4046,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, ifmgd->assoc_data->timeout = jiffies; ifmgd->assoc_data->timeout_started = true; run_again(sdata, ifmgd->assoc_data->timeout); + kfree(elems.nontx_profile); return; } @@ -4222,7 +4224,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, ieee80211_report_disconnect(sdata, deauth_buf, sizeof(deauth_buf), true, WLAN_REASON_DEAUTH_LEAVING); - return; + goto free; } if (sta && elems.opmode_notif) @@ -4237,6 +4239,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, elems.cisco_dtpc_elem); ieee80211_bss_info_change_notify(sdata, changed); +free: + kfree(elems.nontx_profile); } void ieee80211_sta_rx_queued_ext(struct ieee80211_sub_if_data *sdata, diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c index 887f945bb12d..edd84f367880 100644 --- a/net/mac80211/scan.c +++ b/net/mac80211/scan.c @@ -227,6 +227,8 @@ ieee80211_bss_info_update(struct ieee80211_local *local, rx_status, beacon); } + kfree(elems.nontx_profile); + return bss; } diff --git a/net/mac80211/util.c b/net/mac80211/util.c index e6b5d164a0ee..7fa6efa8b83c 100644 --- a/net/mac80211/util.c +++ b/net/mac80211/util.c @@ -1483,6 +1483,11 @@ u32 ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action, cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, nontransmitted_profile, nontransmitted_profile_len); + if (!nontransmitted_profile_len) { + nontransmitted_profile_len = 0; + kfree(nontransmitted_profile); + nontransmitted_profile = NULL; + } } crc = _ieee802_11_parse_elems_crc(start, len, action, elems, filter, @@ -1512,7 +1517,7 @@ u32 ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action, offsetofend(struct ieee80211_bssid_index, dtim_count)) elems->dtim_count = elems->bssid_index->dtim_count; - kfree(nontransmitted_profile); + elems->nontx_profile = nontransmitted_profile; return crc; } From aff80ec64dc03305bbac81b608d8aa1920de2412 Mon Sep 17 00:00:00 2001 From: Carlos Llamas Date: Fri, 4 Nov 2022 17:54:49 +0000 Subject: [PATCH 1841/1842] FROMLIST: binder: fix UAF of alloc->vma in race with munmap() In commit 720c24192404 ("ANDROID: binder: change down_write to down_read") binder assumed the mmap read lock is sufficient to protect alloc->vma inside 
binder_update_page_range(). This used to be accurate until commit dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap"), which now downgrades the mmap_lock after detaching the vma from the rbtree in munmap(). Then it proceeds to teardown and free the vma with only the read lock held. This means that accesses to alloc->vma in binder_update_page_range() now will race with vm_area_free() in munmap() and can cause a UAF as shown in the following KASAN trace: ================================================================== BUG: KASAN: use-after-free in vm_insert_page+0x7c/0x1f0 Read of size 8 at addr ffff16204ad00600 by task server/558 CPU: 3 PID: 558 Comm: server Not tainted 5.10.150-00001-gdc8dcf942daa #1 Hardware name: linux,dummy-virt (DT) Call trace: dump_backtrace+0x0/0x2a0 show_stack+0x18/0x2c dump_stack+0xf8/0x164 print_address_description.constprop.0+0x9c/0x538 kasan_report+0x120/0x200 __asan_load8+0xa0/0xc4 vm_insert_page+0x7c/0x1f0 binder_update_page_range+0x278/0x50c binder_alloc_new_buf+0x3f0/0xba0 binder_transaction+0x64c/0x3040 binder_thread_write+0x924/0x2020 binder_ioctl+0x1610/0x2e5c __arm64_sys_ioctl+0xd4/0x120 el0_svc_common.constprop.0+0xac/0x270 do_el0_svc+0x38/0xa0 el0_svc+0x1c/0x2c el0_sync_handler+0xe8/0x114 el0_sync+0x180/0x1c0 Allocated by task 559: kasan_save_stack+0x38/0x6c __kasan_kmalloc.constprop.0+0xe4/0xf0 kasan_slab_alloc+0x18/0x2c kmem_cache_alloc+0x1b0/0x2d0 vm_area_alloc+0x28/0x94 mmap_region+0x378/0x920 do_mmap+0x3f0/0x600 vm_mmap_pgoff+0x150/0x17c ksys_mmap_pgoff+0x284/0x2dc __arm64_sys_mmap+0x84/0xa4 el0_svc_common.constprop.0+0xac/0x270 do_el0_svc+0x38/0xa0 el0_svc+0x1c/0x2c el0_sync_handler+0xe8/0x114 el0_sync+0x180/0x1c0 Freed by task 560: kasan_save_stack+0x38/0x6c kasan_set_track+0x28/0x40 kasan_set_free_info+0x24/0x4c __kasan_slab_free+0x100/0x164 kasan_slab_free+0x14/0x20 kmem_cache_free+0xc4/0x34c vm_area_free+0x1c/0x2c remove_vma+0x7c/0x94 __do_munmap+0x358/0x710 __vm_munmap+0xbc/0x130 __arm64_sys_munmap+0x4c/0x64 el0_svc_common.constprop.0+0xac/0x270 do_el0_svc+0x38/0xa0 el0_svc+0x1c/0x2c el0_sync_handler+0xe8/0x114 el0_sync+0x180/0x1c0 [...] ================================================================== To prevent the race above, revert back to taking the mmap write lock inside binder_update_page_range(). One might expect an increase of mmap lock contention. However, binder already serializes these calls via top level alloc->mutex. Also, there was no performance impact shown when running the binder benchmark tests. Note this patch is specific to stable branches 5.4 and 5.10. Since in newer kernel releases binder no longer caches a pointer to the vma. Instead, it has been refactored to use vma_lookup() which avoids the issue described here. This switch was introduced in commit a43cfc87caaf ("android: binder: stop saving a pointer to the VMA"). 
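To make the locking change concrete, here is a minimal userspace analogue with a pthread rwlock standing in for mmap_lock (a sketch only; the real code operates on struct vm_area_struct via the mmap_read/write_lock helpers): taking the lock exclusively in the page-range update means it can no longer overlap the munmap() teardown, which after the downgrade holds the lock only for reading.

  #include <pthread.h>
  #include <stdio.h>

  static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;
  static int *cached_vma;                 /* stands in for alloc->vma */

  /* update path: was pthread_rwlock_rdlock() before the fix */
  static int update_page_range(void)
  {
          int ret = -1;

          pthread_rwlock_wrlock(&map_lock);
          if (cached_vma)
                  ret = *cached_vma;      /* safe: the writer excludes readers,
                                           * including a read-locked teardown */
          pthread_rwlock_unlock(&map_lock);
          return ret;
  }

  int main(void)
  {
          int vma = 42;

          cached_vma = &vma;
          printf("update sees %d\n", update_page_range());
          return 0;
  }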
Bug: 254837884 Link: https://lore.kernel.org/all/20221104175450.306810-1-cmllamas@google.com/ Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap") Reported-by: Jann Horn Cc: # 5.10.x Cc: Minchan Kim Cc: Yang Shi Cc: Liam Howlett Signed-off-by: Carlos Llamas Change-Id: Ieabadbfa30f99812da9c226cf1ddd5e60f62c607 (cherry picked from commit b684150a44187376ae2ea8369c400140ffc3ea67) --- drivers/android/binder_alloc.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index d30267e08536..447342a878ff 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -213,7 +213,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, mm = alloc->vma_vm_mm; if (mm) { - mmap_read_lock(mm); + mmap_write_lock(mm); vma = alloc->vma; } @@ -271,7 +271,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, trace_binder_alloc_page_end(alloc, index); } if (mm) { - mmap_read_unlock(mm); + mmap_write_unlock(mm); mmput(mm); } return 0; @@ -304,7 +304,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate, } err_no_vma: if (mm) { - mmap_read_unlock(mm); + mmap_write_unlock(mm); mmput(mm); } return vma ? -ENOMEM : -ESRCH; From 9171bbdc05b9daca374bf27449298dcf1fc21ecb Mon Sep 17 00:00:00 2001 From: "Sivasri Kumar, Vanka" Date: Thu, 10 Nov 2022 00:03:05 +0530 Subject: [PATCH 1842/1842] ANDROID: abi_gki_aarch64_qcom: Add wait_on_page_bit In commit fae05b2314b1 ("zsmalloc: fix races between asynchronous zspage free and page migration"), wait_on_page_bit symbol was required to fix the build of that target platform. Functions changes summary: 0 Removed, 0 Changed , 1 Added functions Variables changes summary: 0 Removed, 0 Changed , 0 Added variables 1 Added function: [A] 'function void wait_on_page_bit(page*, int)' Bug: 258412729 Change-Id: Ic392d6789788e1e2a46f95726fb0a0cce05896e1 Signed-off-by: Sivasri Kumar, Vanka Signed-off-by: Bibek Kumar Patro --- android/abi_gki_aarch64.xml | 6 +++--- android/abi_gki_aarch64_qcom | 3 ++- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/android/abi_gki_aarch64.xml b/android/abi_gki_aarch64.xml index f4d56bb1813c..c96e97fd4457 100644 --- a/android/abi_gki_aarch64.xml +++ b/android/abi_gki_aarch64.xml @@ -148078,9 +148078,9 @@ - - - + + + diff --git a/android/abi_gki_aarch64_qcom b/android/abi_gki_aarch64_qcom index dc16110241bc..7cc6ace46b16 100644 --- a/android/abi_gki_aarch64_qcom +++ b/android/abi_gki_aarch64_qcom @@ -2581,7 +2581,6 @@ __traceiter_android_vh_jiffies_update __traceiter_android_vh_logbuf __traceiter_android_vh_logbuf_pr_cont - __tracepoint_android_vh_madvise_cold_or_pageout __traceiter_android_vh_printk_hotplug __traceiter_android_vh_rproc_recovery __traceiter_android_vh_rproc_recovery_set @@ -2703,6 +2702,7 @@ __tracepoint_android_vh_jiffies_update __tracepoint_android_vh_logbuf __tracepoint_android_vh_logbuf_pr_cont + __tracepoint_android_vh_madvise_cold_or_pageout __tracepoint_android_vh_oom_check_panic __tracepoint_android_vh_printk_hotplug __tracepoint_android_vh_process_killed @@ -3026,6 +3026,7 @@ wait_for_completion_interruptible_timeout wait_for_completion_killable wait_for_completion_timeout + wait_on_page_bit __wait_rcu_gp wait_woken __wake_up