Merge keystone/android12-5.10-keystone-qcom-release.136+ (897411a) into msm-5.10

* refs/heads/tmp-897411a:
  ANDROID: dma-buf: don't re-purpose kobject as work_struct
  ANDROID: dma-buf: Fix build breakage with !CONFIG_DMABUF_SYSFS_STATS
  ANDROID: usb: gadget: uvc: remove duplicate code in unbind
  FROMGIT: mm/madvise: fix madvise_pageout for private file mappings
  ANDROID: arm64: mm: perform clean & invalidation in __dma_map_area
  BACKPORT: mm/page_alloc: always initialize memory map for the holes
  ANDROID: dma-buf: Add vendor hook for deferred dmabuf sysfs stats release
  ANDROID: dm-user: Remove bio recount in I/O path
  ANDROID: abi_gki_aarch64_qcom: Add wait_on_page_bit
  UPSTREAM: drm/meson: Fix overflow implicit truncation warnings
  UPSTREAM: irqchip/tegra: Fix overflow implicit truncation warnings
  UPSTREAM: video: fbdev: pxa3xx-gcu: Fix integer overflow in pxa3xx_gcu_write
  UPSTREAM: irqchip/gic-v4: Wait for GICR_VPENDBASER.Dirty to clear before descheduling
  UPSTREAM: mm: kfence: fix missing objcg housekeeping for SLAB
  UPSTREAM: clk: Fix clk_hw_get_clk() when dev is NULL
  UPSTREAM: arm64: kasan: fix include error in MTE functions
  UPSTREAM: arm64: prevent instrumentation of bp hardening callbacks
  UPSTREAM: PM: domains: Fix sleep-in-atomic bug caused by genpd_debug_remove()
  UPSTREAM: mm: fix use-after-free bug when mm->mmap is reused after being freed
  BACKPORT: vsprintf: Fix %pK with kptr_restrict == 0
  UPSTREAM: net: preserve skb_end_offset() in skb_unclone_keeptruesize()
  BACKPORT: net: add skb_set_end_offset() helper
  UPSTREAM: arm64: Correct wrong label in macro __init_el2_gicv3
  UPSTREAM: KVM: arm64: Stop handle_exit() from handling HVC twice when an SError occurs
  UPSTREAM: KVM: arm64: Avoid consuming a stale esr value when SError occur
  BACKPORT: arm64: Enable Cortex-A510 erratum 2051678 by default
  UPSTREAM: usb: typec: tcpm: Do not disconnect when receiving VSAFE0V
  UPSTREAM: usb: typec: tcpci: don't touch CC line if it's Vconn source
  UPSTREAM: dt-bindings: memory: mtk-smi: Correct minItems to 2 for the gals clocks
  BACKPORT: dt-bindings: memory: mtk-smi: No need mediatek,larb-id for mt8167
  BACKPORT: dt-bindings: memory: mtk-smi: Rename clock to clocks
  UPSTREAM: KVM: arm64: Use shadow SPSR_EL1 when injecting exceptions on !VHE
  UPSTREAM: block: fix async_depth sysfs interface for mq-deadline
  UPSTREAM: dma-buf: cma_heap: Fix mutex locking section
  UPSTREAM: scsi: ufs: ufs-mediatek: Fix error checking in ufs_mtk_init_va09_pwr_ctrl()
  UPSTREAM: f2fs: include non-compressed blocks in compr_written_block
  UPSTREAM: kasan: fix Kconfig check of CC_HAS_WORKING_NOSANITIZE_ADDRESS
  UPSTREAM: dma-buf: DMABUF_SYSFS_STATS should depend on DMA_SHARED_BUFFER
  UPSTREAM: mmflags.h: add missing __GFP_ZEROTAGS and __GFP_SKIP_KASAN_POISON names
  BACKPORT: scsi: ufs: Optimize serialization of setup_xfer_req() calls
  UPSTREAM: Kbuild: lto: fix module versionings mismatch in GNU make 3.X
  UPSTREAM: clk: versatile: Depend on HAS_IOMEM
  BACKPORT: arm64: meson: select COMMON_CLK
  UPSTREAM: kbuild: do not include include/config/auto.conf from adjust_autoksyms.sh
  UPSTREAM: inet: fully convert sk->sk_rx_dst to RCU rules
  ANDROID: Update symbol list for mtk
  FROMLIST: binder: fix UAF of alloc->vma in race with munmap()
  ANDROID: GKI: Update symbol list for mtk tablet projects
  UPSTREAM: af_key: Do not call xfrm_probe_algs in parallel
  UPSTREAM: mm: Fix TLB flush for not-first PFNMAP mappings in unmap_region()
  UPSTREAM: mm: Force TLB flush for PFNMAP mappings before unlink_file_vma()
  FROMGIT: f2fs: let's avoid to get cp_rwsem twice by f2fs_evict_inode by d_invalidate
  ANDROID: abi_gki_aarch64_qcom: whitelist some vm symbols
  ANDROID: vendor_hook: skip trace_android_vh_page_trylock_set when ignore_references is true
  BACKPORT: ANDROID: dma-buf: Move sysfs work out of DMA-BUF export path
  UPSTREAM: wifi: mac80211: fix MBSSID parsing use-after-free
  UPSTREAM: wifi: mac80211: don't parse mbssid in assoc response
  UPSTREAM: mac80211: mlme: find auth challenge directly
  UPSTREAM: wifi: cfg80211: update hidden BSSes to avoid WARN_ON
  UPSTREAM: wifi: mac80211: fix crash in beacon protection for P2P-device
  UPSTREAM: wifi: mac80211_hwsim: avoid mac80211 warning on bad rate
  UPSTREAM: wifi: cfg80211: avoid nontransmitted BSS list corruption
  UPSTREAM: wifi: cfg80211: fix BSS refcounting bugs
  UPSTREAM: wifi: cfg80211: ensure length byte is present before access
  UPSTREAM: wifi: cfg80211/mac80211: reject bad MBSSID elements
  UPSTREAM: wifi: cfg80211: fix u8 overflow in cfg80211_update_notlisted_nontrans()
  ANDROID: GKI: Update symbols to symbol list
  ANDROID: sched: add restricted hooks to replace the former hooks
  ANDROID: GKI: Add symbol snd_pcm_stop_xrun
  ANDROID: ABI: update allowed list for galaxy
  ANDROID: GKI: Update symbols to symbol list
  UPSTREAM: dma-buf: ensure unique directory name for dmabuf stats
  UPSTREAM: dma-buf: call dma_buf_stats_setup after dmabuf is in valid list

 Conflicts:
	Documentation/devicetree/bindings
	Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
	Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml

Change-Id: If8181c06536ceb430cde0826f557298b4ab648e7
Signed-off-by: Srinivasarao Pathipati <quic_c_spathi@quicinc.com>
commit f0e9702eed by Srinivasarao Pathipati, 2022-12-21 20:59:04 +05:30
68 files changed, 816 insertions(+), 396 deletions(-)


@@ -3420,8 +3420,7 @@
difficult since unequal pointers can no longer be
compared. However, if this command-line option is
specified, then all normal pointers will have their true
-value printed. Pointers printed via %pK may still be
-hashed. This option should only be specified when
+value printed. This option should only be specified when
debugging the kernel. Please do not use on production
kernels.
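
This doc change pairs with the vsprintf %pK fix later in this merge. A minimal sketch of the resulting behavior, assuming a kernel booted with no_hash_pointers and kptr_restrict == 0 (the demo function is hypothetical, not from the patch):

#include <linux/printk.h>

/* Sketch only: with no_hash_pointers on the command line and
 * kptr_restrict == 0, both specifiers print the true address;
 * before the vsprintf fix below, %pK stayed hashed in this setup. */
static void pointer_print_demo(const void *ptr)
{
	pr_info("%%p  -> %p\n", ptr);
	pr_info("%%pK -> %pK\n", ptr);
}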


@@ -1,4 +1,4 @@
-android12-5.10-2022-10_r5
+android12-5.10-2022-11_r4
-8970630c07cda8d0a0604d0ad81fc67f29f2420a
+2b4f804f72e8d8157532cf32a679268a7a965ae6

File diff suppressed because it is too large.


@@ -1901,6 +1901,8 @@
find_vpid
finish_wait
firmware_request_nowarn
fixed_phy_register
fixed_phy_unregister
fixed_size_llseek
flow_keys_basic_dissector
flush_dcache_page
@@ -2327,6 +2329,7 @@
irq_create_mapping_affinity
irq_create_of_mapping
irq_dispose_mapping
irq_domain_add_simple
irq_domain_alloc_irqs_parent
irq_domain_create_hierarchy
irq_domain_free_irqs_common
@@ -3023,6 +3026,7 @@
phy_ethtool_get_link_ksettings
phy_ethtool_nway_reset
phy_ethtool_set_link_ksettings
phy_ethtool_set_wol
phy_exit
phy_find_first
phy_get_pause
@@ -3034,9 +3038,12 @@
phy_power_off
phy_power_on
phy_print_status
phy_register_fixup_for_uid
phy_save_page
phy_set_mode_ext
phy_start
phy_stop
phy_unregister_fixup_for_uid
pick_highest_pushable_task
pid_nr_ns
pid_task
@@ -4066,6 +4073,7 @@
ttm_tt_populate
ttm_tt_set_placement_caching
ttm_unmap_and_unpopulate_pages
tty_encode_baud_rate
tty_flip_buffer_push
tty_insert_flip_string_fixed_flag
tty_kref_put
@@ -4214,8 +4222,10 @@
usb_asmedia_modifyflowcontrol
usb_assign_descriptors
usb_autopm_get_interface
usb_autopm_get_interface_async
usb_autopm_get_interface_no_resume
usb_autopm_put_interface
usb_autopm_put_interface_async
usb_bulk_msg
usb_calc_bus_time
usb_choose_configuration
@@ -4293,6 +4303,7 @@
usb_ifnum_to_if
usb_initialize_gadget
usb_interface_id
usb_interrupt_msg
usb_kill_urb
usb_match_id
usb_match_one_id


@@ -412,6 +412,7 @@
devm_kasprintf
devm_kfree
devm_kmalloc
devm_led_classdev_flash_register_ext
devm_led_classdev_register_ext
devm_led_classdev_unregister
devm_mbox_controller_register
@@ -1149,7 +1150,9 @@
kvmalloc_node
led_classdev_flash_register_ext
led_classdev_flash_unregister
led_colors
led_get_flash_fault
led_set_brightness
led_set_brightness_sync
led_set_flash_brightness
led_set_flash_timeout
@@ -1159,6 +1162,7 @@
led_update_brightness
led_update_flash_brightness
linear_range_get_max_value
linear_range_get_selector_high
linear_range_get_value
__list_add_valid
__list_del_entry_valid
@@ -1704,6 +1708,7 @@
regulator_enable
regulator_enable_regmap
regulator_get
regulator_get_bypass_regmap
regulator_get_current_limit_regmap
regulator_get_mode
regulator_get_optional
@@ -1721,6 +1726,7 @@
regulator_notifier_call_chain
regulator_put
regulator_set_active_discharge_regmap
regulator_set_bypass_regmap
regulator_set_current_limit
regulator_set_current_limit_regmap
regulator_set_load
@@ -2973,6 +2979,7 @@
platform_find_device_by_driver
pm_wq
power_supply_is_system_supplied
power_supply_register_no_ws
power_supply_unreg_notifier
prepare_to_wait
printk_deferred


@@ -408,6 +408,7 @@
dev_fwnode
__dev_get_by_index
dev_get_by_index
dev_get_by_index_rcu
dev_get_by_name
dev_get_regmap
dev_get_stats
@@ -1658,7 +1659,9 @@
net_ratelimit
nf_ct_attach
nf_ct_delete
nf_register_net_hook
nf_register_net_hooks
nf_unregister_net_hook
nf_unregister_net_hooks
nla_find
nla_memcpy
@@ -2286,6 +2289,7 @@
rtc_update_irq
rtc_valid_tm
rtnl_is_locked
__rtnl_link_unregister
rtnl_lock
rtnl_unlock
runqueues


@@ -129,6 +129,7 @@
cgroup_path_ns
cgroup_taskset_first
cgroup_taskset_next
check_move_unevictable_pages
__check_object_size
check_preempt_curr
check_zeroed_user
@@ -1659,6 +1660,7 @@
page_endio
page_mapping
__page_pinner_migration_failed
__pagevec_release
panic
panic_notifier_list
panic_timeout


@@ -2626,6 +2626,7 @@
snd_pcm_fill_iec958_consumer
snd_pcm_fill_iec958_consumer_hw_params
snd_pcm_hw_constraint_eld
snd_pcm_stop_xrun
# required by snd-soc-rk817.ko
snd_soc_component_exit_regmap


@@ -10,6 +10,7 @@
nr_swap_pages
plist_requeue
plist_del
__traceiter_android_rvh_handle_pte_fault_end
__traceiter_android_vh_handle_pte_fault_end
__traceiter_android_vh_cow_user_page
__traceiter_android_vh_swapin_add_anon_rmap
@@ -20,9 +21,13 @@
__traceiter_android_vh_count_pswpout
__traceiter_android_vh_count_swpout_vm_event
__traceiter_android_vh_swap_slot_cache_active
__traceiter_android_rvh_drain_slots_cache_cpu
__traceiter_android_vh_drain_slots_cache_cpu
__traceiter_android_rvh_alloc_swap_slot_cache
__traceiter_android_vh_alloc_swap_slot_cache
__traceiter_android_rvh_free_swap_slot
__traceiter_android_vh_free_swap_slot
__traceiter_android_rvh_get_swap_page
__traceiter_android_vh_get_swap_page
__traceiter_android_vh_page_isolated_for_reclaim
__traceiter_android_vh_inactive_is_low
@@ -31,10 +36,12 @@
__traceiter_android_vh_unuse_swap_page
__traceiter_android_vh_init_swap_info_struct
__traceiter_android_vh_si_swapinfo
__traceiter_android_rvh_alloc_si
__traceiter_android_vh_alloc_si
__traceiter_android_vh_free_pages
__traceiter_android_vh_set_shmem_page_flag
__traceiter_android_vh_ra_tuning_max_page
__tracepoint_android_rvh_handle_pte_fault_end
__tracepoint_android_vh_handle_pte_fault_end
__tracepoint_android_vh_cow_user_page
__tracepoint_android_vh_swapin_add_anon_rmap
@@ -45,9 +52,13 @@
__tracepoint_android_vh_count_pswpout
__tracepoint_android_vh_count_swpout_vm_event
__tracepoint_android_vh_swap_slot_cache_active
__tracepoint_android_rvh_drain_slots_cache_cpu
__tracepoint_android_vh_drain_slots_cache_cpu
__tracepoint_android_rvh_alloc_swap_slot_cache
__tracepoint_android_vh_alloc_swap_slot_cache
__tracepoint_android_rvh_free_swap_slot
__tracepoint_android_vh_free_swap_slot
__tracepoint_android_rvh_get_swap_page
__tracepoint_android_vh_get_swap_page
__tracepoint_android_vh_page_isolated_for_reclaim
__tracepoint_android_vh_inactive_is_low
@@ -56,6 +67,7 @@
__tracepoint_android_vh_unuse_swap_page
__tracepoint_android_vh_init_swap_info_struct
__tracepoint_android_vh_si_swapinfo
__tracepoint_android_rvh_alloc_si
__tracepoint_android_vh_alloc_si
__tracepoint_android_vh_free_pages
__tracepoint_android_vh_set_shmem_page_flag


@@ -671,6 +671,7 @@ config ARM64_ERRATUM_1508412
config ARM64_ERRATUM_2051678
bool "Cortex-A510: 2051678: disable Hardware Update of the page table's dirty bit"
default y
help
This options adds the workaround for ARM Cortex-A510 erratum ARM64_ERRATUM_2051678.
Affected Coretex-A510 might not respect the ordering rules for


@@ -152,6 +152,7 @@ config ARCH_MEDIATEK
config ARCH_MESON
bool "Amlogic Platforms"
select COMMON_CLK
help
This enables support for the arm64 based Amlogic SoCs
such as the s905, S905X/D, S912, A113X/D or S905X/D2


@@ -107,7 +107,7 @@
msr_s SYS_ICC_SRE_EL2, x0
isb // Make sure SRE is now set
mrs_s x0, SYS_ICC_SRE_EL2 // Read SRE back,
-tbz x0, #0, 1f // and check that it sticks
+tbz x0, #0, .Lskip_gicv3_\@ // and check that it sticks
msr_s SYS_ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults
.Lskip_gicv3_\@:
.endm


@@ -5,6 +5,7 @@
#ifndef __ASM_MTE_KASAN_H
#define __ASM_MTE_KASAN_H
#include <asm/compiler.h>
#include <asm/mte-def.h>
#ifndef __ASSEMBLY__


@@ -67,7 +67,8 @@ struct bp_hardening_data {
DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
-static inline void arm64_apply_bp_hardening(void)
+/* Called during entry so must be __always_inline */
+static __always_inline void arm64_apply_bp_hardening(void)
{
struct bp_hardening_data *d;


@@ -233,17 +233,20 @@ static void install_bp_hardening_cb(bp_hardening_cb_t fn)
__this_cpu_write(bp_hardening_data.slot, HYP_VECTOR_SPECTRE_DIRECT);
}
-static void call_smc_arch_workaround_1(void)
+/* Called during entry so must be noinstr */
+static noinstr void call_smc_arch_workaround_1(void)
{
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}
-static void call_hvc_arch_workaround_1(void)
+/* Called during entry so must be noinstr */
+static noinstr void call_hvc_arch_workaround_1(void)
{
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}
-static void qcom_link_stack_sanitisation(void)
+/* Called during entry so must be noinstr */
+static noinstr void qcom_link_stack_sanitisation(void)
{
u64 tmp;


@@ -240,6 +240,14 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
{
struct kvm_run *run = vcpu->run;
if (ARM_SERROR_PENDING(exception_index)) {
/*
* The SError is handled by handle_exit_early(). If the guest
* survives it will re-execute the original instruction.
*/
return 1;
}
exception_index = ARM_EXCEPTION_CODE(exception_index);
switch (exception_index) {


@@ -38,7 +38,10 @@ static inline void __vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg)
static void __vcpu_write_spsr(struct kvm_vcpu *vcpu, u64 val)
{
-write_sysreg_el1(val, SYS_SPSR);
+if (has_vhe())
+write_sysreg_el1(val, SYS_SPSR);
+else
+__vcpu_sys_reg(vcpu, SPSR_EL1) = val;
}
static void __vcpu_write_spsr_abt(struct kvm_vcpu *vcpu, u64 val)


@@ -423,7 +423,8 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
-if (ARM_SERROR_PENDING(*exit_code)) {
+if (ARM_SERROR_PENDING(*exit_code) &&
+ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) {
u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
/*


@@ -162,7 +162,8 @@ extern int page_is_ram(unsigned long pfn);
# define ARCH_PFN_OFFSET (PAGE_OFFSET >> PAGE_SHIFT)
# else /* CONFIG_MMU */
# define ARCH_PFN_OFFSET (memory_start >> PAGE_SHIFT)
-# define pfn_valid(pfn) ((pfn) < (max_mapnr + ARCH_PFN_OFFSET))
+# define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && \
+(pfn) < (max_mapnr + ARCH_PFN_OFFSET))
# endif /* CONFIG_MMU */
# endif /* __ASSEMBLY__ */
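
A standalone sketch of the lower bound the new pfn_valid() check adds, with hypothetical values (memory_start = 0x80000000, 4 KiB pages; everything here is illustrative, not kernel code):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define ARCH_PFN_OFFSET	(0x80000000UL >> PAGE_SHIFT)	/* 0x80000 */
static const unsigned long max_mapnr = 0x1000;		/* pages of RAM */

static bool pfn_valid_old(unsigned long pfn)
{
	return pfn < (max_mapnr + ARCH_PFN_OFFSET);
}

static bool pfn_valid_new(unsigned long pfn)
{
	return pfn >= ARCH_PFN_OFFSET && pfn < (max_mapnr + ARCH_PFN_OFFSET);
}

int main(void)
{
	/* pfn 0 lies below the start of RAM: only the old check accepts it. */
	printf("pfn 0: old=%d new=%d\n", pfn_valid_old(0), pfn_valid_new(0));
	return 0;
}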


@@ -866,7 +866,7 @@ SHOW_JIFFIES(deadline_write_expire_show, dd->fifo_expire[DD_WRITE]);
SHOW_JIFFIES(deadline_aging_expire_show, dd->aging_expire);
SHOW_INT(deadline_writes_starved_show, dd->writes_starved);
SHOW_INT(deadline_front_merges_show, dd->front_merges);
-SHOW_INT(deadline_async_depth_show, dd->front_merges);
+SHOW_INT(deadline_async_depth_show, dd->async_depth);
SHOW_INT(deadline_fifo_batch_show, dd->fifo_batch);
#undef SHOW_INT
#undef SHOW_JIFFIES
@@ -896,7 +896,7 @@ STORE_JIFFIES(deadline_write_expire_store, &dd->fifo_expire[DD_WRITE], 0, INT_MA
STORE_JIFFIES(deadline_aging_expire_store, &dd->aging_expire, 0, INT_MAX);
STORE_INT(deadline_writes_starved_store, &dd->writes_starved, INT_MIN, INT_MAX);
STORE_INT(deadline_front_merges_store, &dd->front_merges, 0, 1);
-STORE_INT(deadline_async_depth_store, &dd->front_merges, 1, INT_MAX);
+STORE_INT(deadline_async_depth_store, &dd->async_depth, 1, INT_MAX);
STORE_INT(deadline_fifo_batch_store, &dd->fifo_batch, 0, INT_MAX);
#undef STORE_FUNCTION
#undef STORE_INT


@@ -7,6 +7,7 @@
*/
#ifndef __GENKSYMS__
#include <linux/dma-buf.h>
#include <linux/rmap.h>
#endif
@@ -432,6 +433,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_binder_read_done);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_handle_tlb_conf);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_shrink_node_memcgs);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ra_tuning_max_page);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_handle_pte_fault_end);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_handle_pte_fault_end);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cow_user_page);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_swapin_add_anon_rmap);
@@ -442,9 +444,13 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_count_pswpin);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_count_pswpout);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_count_swpout_vm_event);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_swap_slot_cache_active);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_drain_slots_cache_cpu);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_drain_slots_cache_cpu);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_alloc_swap_slot_cache);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_swap_slot_cache);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_free_swap_slot);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_swap_slot);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_get_swap_page);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_get_swap_page);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_madvise_cold_or_pageout);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_page_isolated_for_reclaim);
@@ -454,6 +460,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_account_swap_pages);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_unuse_swap_page);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_init_swap_info_struct);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_si_swapinfo);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_alloc_si);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_alloc_si);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_free_pages);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_shmem_page_flag);


@@ -2052,9 +2052,9 @@ static int genpd_remove(struct generic_pm_domain *genpd)
kfree(link);
}
-genpd_debug_remove(genpd);
list_del(&genpd->gpd_list_node);
genpd_unlock(genpd);
+genpd_debug_remove(genpd);
cancel_work_sync(&genpd->power_off_work);
if (genpd_is_cpu_domain(genpd))
free_cpumask_var(genpd->cpus);


@@ -3828,8 +3828,9 @@ struct clk *clk_hw_create_clk(struct device *dev, struct clk_hw *hw,
struct clk *clk_hw_get_clk(struct clk_hw *hw, const char *con_id)
{
struct device *dev = hw->core->dev;
+const char *name = dev ? dev_name(dev) : NULL;
-return clk_hw_create_clk(dev, hw, dev_name(dev), con_id);
+return clk_hw_create_clk(dev, hw, name, con_id);
}
EXPORT_SYMBOL(clk_hw_get_clk);


@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
menu "Clock driver for ARM Reference designs"
depends on HAS_IOMEM
config ICST
bool "Clock driver for ARM Reference designs ICST"


@@ -67,7 +67,7 @@ menuconfig DMABUF_HEAPS
menuconfig DMABUF_SYSFS_STATS
bool "DMA-BUF sysfs statistics"
-select DMA_SHARED_BUFFER
+depends on DMA_SHARED_BUFFER
help
Choose this option to enable DMA-BUF sysfs statistics
in location /sys/kernel/dmabuf/buffers.


@@ -142,15 +142,21 @@ void dma_buf_uninit_sysfs_statistics(void)
kset_unregister(dma_buf_stats_kset);
}
+struct dma_buf_create_sysfs_entry {
+struct dma_buf *dmabuf;
+struct work_struct work;
+};
+union dma_buf_create_sysfs_work_entry {
+struct dma_buf_create_sysfs_entry create_entry;
+struct dma_buf_sysfs_entry sysfs_entry;
+};
static void sysfs_add_workfn(struct work_struct *work)
{
-/* The ABI would have to change for this to be false, but let's be paranoid. */
-_Static_assert(sizeof(struct kobject) >= sizeof(struct work_struct),
-"kobject is smaller than work_struct");
-struct dma_buf_sysfs_entry *sysfs_entry =
-container_of((struct kobject *)work, struct dma_buf_sysfs_entry, kobj);
-struct dma_buf *dmabuf = sysfs_entry->dmabuf;
+struct dma_buf_create_sysfs_entry *create_entry =
+container_of(work, struct dma_buf_create_sysfs_entry, work);
+struct dma_buf *dmabuf = create_entry->dmabuf;
/*
* A dmabuf is ref-counted via its file member. If this handler holds the only
@@ -161,6 +167,7 @@ static void sysfs_add_workfn(struct work_struct *work)
* is released, and that can't happen until the end of this function.
*/
if (file_count(dmabuf->file) > 1) {
+dmabuf->sysfs_entry->dmabuf = dmabuf;
/*
* kobject_init_and_add expects kobject to be zero-filled, but we have populated it
* to trigger this work function.
@@ -185,8 +192,8 @@
int dma_buf_stats_setup(struct dma_buf *dmabuf)
{
-struct dma_buf_sysfs_entry *sysfs_entry;
-struct work_struct *work;
+struct dma_buf_create_sysfs_entry *create_entry;
+union dma_buf_create_sysfs_work_entry *work_entry;
if (!dmabuf || !dmabuf->file)
return -EINVAL;
@@ -196,21 +203,18 @@ int dma_buf_stats_setup(struct dma_buf *dmabuf)
return -EINVAL;
}
-sysfs_entry = kmalloc(sizeof(struct dma_buf_sysfs_entry), GFP_KERNEL);
-if (!sysfs_entry)
+work_entry = kmalloc(sizeof(union dma_buf_create_sysfs_work_entry), GFP_KERNEL);
+if (!work_entry)
return -ENOMEM;
-sysfs_entry->dmabuf = dmabuf;
-dmabuf->sysfs_entry = sysfs_entry;
+dmabuf->sysfs_entry = &work_entry->sysfs_entry;
-/*
-* The use of kobj as a work_struct is an ugly hack
-* to avoid an ABI break in this frozen kernel.
-*/
-work = (struct work_struct *)&dmabuf->sysfs_entry->kobj;
-INIT_WORK(work, sysfs_add_workfn);
+create_entry = &work_entry->create_entry;
+create_entry->dmabuf = dmabuf;
+INIT_WORK(&create_entry->work, sysfs_add_workfn);
get_dma_buf(dmabuf); /* This reference will be dropped in sysfs_add_workfn. */
-schedule_work(work);
+schedule_work(&create_entry->work);
return 0;
}


@@ -486,6 +486,7 @@ EXPORT_SYMBOL_GPL(is_dma_buf_file);
static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
{
static atomic64_t dmabuf_inode = ATOMIC64_INIT(0);
struct file *file;
struct inode *inode = alloc_anon_inode(dma_buf_mnt->mnt_sb);
@@ -495,6 +496,13 @@ static struct file *dma_buf_getfile(struct dma_buf *dmabuf, int flags)
inode->i_size = dmabuf->size;
inode_set_bytes(inode, dmabuf->size);
/*
* The ->i_ino acquired from get_next_ino() is not unique thus
* not suitable for using it as dentry name by dmabuf stats.
* Override ->i_ino with the unique and dmabuffs specific
* value.
*/
inode->i_ino = atomic64_add_return(1, &dmabuf_inode);
file = alloc_file_pseudo(inode, dma_buf_mnt, "dmabuf",
flags, &dma_buf_fops);
if (IS_ERR(file))
@@ -621,10 +629,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
file->f_mode |= FMODE_LSEEK;
dmabuf->file = file;
-ret = dma_buf_stats_setup(dmabuf);
-if (ret)
-goto err_sysfs;
mutex_init(&dmabuf->lock);
INIT_LIST_HEAD(&dmabuf->attachments);
@@ -632,6 +636,10 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
list_add(&dmabuf->list_node, &db_list.head);
mutex_unlock(&db_list.lock);
+ret = dma_buf_stats_setup(dmabuf);
+if (ret)
+goto err_sysfs;
return dmabuf;
err_sysfs:


@@ -126,10 +126,11 @@ static int cma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
struct cma_heap_buffer *buffer = dmabuf->priv;
struct dma_heap_attachment *a;
+mutex_lock(&buffer->lock);
if (buffer->vmap_cnt)
invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
-mutex_lock(&buffer->lock);
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;
@@ -146,10 +147,11 @@ static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
struct cma_heap_buffer *buffer = dmabuf->priv;
struct dma_heap_attachment *a;
+mutex_lock(&buffer->lock);
if (buffer->vmap_cnt)
flush_kernel_vmap_range(buffer->vaddr, buffer->len);
-mutex_lock(&buffer->lock);
list_for_each_entry(a, &buffer->attachments, list) {
if (!a->mapped)
continue;


@@ -469,17 +469,17 @@ void meson_viu_init(struct meson_drm *priv)
priv->io_base + _REG(VD2_IF0_LUMA_FIFO_SIZE));
if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
-writel_relaxed(VIU_OSD_BLEND_REORDER(0, 1) |
-VIU_OSD_BLEND_REORDER(1, 0) |
-VIU_OSD_BLEND_REORDER(2, 0) |
-VIU_OSD_BLEND_REORDER(3, 0) |
-VIU_OSD_BLEND_DIN_EN(1) |
-VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
-VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
-VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
-VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
-VIU_OSD_BLEND_HOLD_LINES(4),
-priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
+u32 val = (u32)VIU_OSD_BLEND_REORDER(0, 1) |
+(u32)VIU_OSD_BLEND_REORDER(1, 0) |
+(u32)VIU_OSD_BLEND_REORDER(2, 0) |
+(u32)VIU_OSD_BLEND_REORDER(3, 0) |
+(u32)VIU_OSD_BLEND_DIN_EN(1) |
+(u32)VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
+(u32)VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
+(u32)VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
+(u32)VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
+(u32)VIU_OSD_BLEND_HOLD_LINES(4);
+writel_relaxed(val, priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
writel_relaxed(OSD_BLEND_PATH_SEL_ENABLE,
priv->io_base + _REG(OSD1_BLEND_SRC_CTRL));


@@ -3002,18 +3002,12 @@ static int __init allocate_lpi_tables(void)
return 0;
}
-static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
+static u64 read_vpend_dirty_clear(void __iomem *vlpi_base)
{
u32 count = 1000000; /* 1s! */
bool clean;
u64 val;
-val = gicr_read_vpendbaser(vlpi_base + GICR_VPENDBASER);
-val &= ~GICR_VPENDBASER_Valid;
-val &= ~clr;
-val |= set;
-gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
do {
val = gicr_read_vpendbaser(vlpi_base + GICR_VPENDBASER);
clean = !(val & GICR_VPENDBASER_Dirty);
@@ -3024,10 +3018,26 @@ static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
}
} while (!clean && count);
-if (unlikely(val & GICR_VPENDBASER_Dirty)) {
+if (unlikely(!clean))
pr_err_ratelimited("ITS virtual pending table not cleaning\n");
+return val;
+}
+static u64 its_clear_vpend_valid(void __iomem *vlpi_base, u64 clr, u64 set)
+{
+u64 val;
+/* Make sure we wait until the RD is done with the initial scan */
+val = read_vpend_dirty_clear(vlpi_base);
+val &= ~GICR_VPENDBASER_Valid;
+val &= ~clr;
+val |= set;
+gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
+val = read_vpend_dirty_clear(vlpi_base);
+if (unlikely(val & GICR_VPENDBASER_Dirty))
val |= GICR_VPENDBASER_PendingLast;
-}
return val;
}


@@ -148,10 +148,10 @@ static int tegra_ictlr_suspend(void)
lic->cop_iep[i] = readl_relaxed(ictlr + ICTLR_COP_IEP_CLASS);
/* Disable COP interrupts */
-writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
+writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
/* Disable CPU interrupts */
-writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
+writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
/* Enable the wakeup sources of ictlr */
writel_relaxed(lic->ictlr_wake_mask[i], ictlr + ICTLR_CPU_IER_SET);
@@ -172,12 +172,12 @@ static void tegra_ictlr_resume(void)
writel_relaxed(lic->cpu_iep[i],
ictlr + ICTLR_CPU_IEP_CLASS);
-writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
+writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
writel_relaxed(lic->cpu_ier[i],
ictlr + ICTLR_CPU_IER_SET);
writel_relaxed(lic->cop_iep[i],
ictlr + ICTLR_COP_IEP_CLASS);
-writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
+writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
writel_relaxed(lic->cop_ier[i],
ictlr + ICTLR_COP_IER_SET);
}
@@ -312,7 +312,7 @@ static int __init tegra_ictlr_init(struct device_node *node,
lic->base[i] = base;
/* Disable all interrupts */
-writel_relaxed(~0UL, base + ICTLR_CPU_IER_CLR);
+writel_relaxed(GENMASK(31, 0), base + ICTLR_CPU_IER_CLR);
/* All interrupts target IRQ */
writel_relaxed(0, base + ICTLR_CPU_IEP_CLASS);
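
The ~0ul -> GENMASK(31, 0) swaps above are about operand width. A standalone userspace sketch of the truncation the compiler warns about, assuming an LP64 target (GENMASK32 is a stand-in for the kernel macro):

#include <stdint.h>
#include <stdio.h>

#define GENMASK32(h, l)	((~0u >> (31 - (h))) << (l))

int main(void)
{
	unsigned long all = ~0ul;	/* 64 bits wide on LP64 */
	uint32_t reg = (uint32_t)all;	/* the implicit truncation being flagged */

	printf("~0ul=%#lx truncated=%#x GENMASK(31,0)=%#x\n",
	       all, reg, GENMASK32(31, 0));
	return 0;
}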


@@ -188,7 +188,6 @@ static void message_kill(struct message *m, mempool_t *pool)
{
m->bio->bi_status = BLK_STS_IOERR;
bio_endio(m->bio);
-bio_put(m->bio);
mempool_free(m, pool);
}
@@ -989,7 +988,6 @@ static ssize_t dev_write(struct kiocb *iocb, struct iov_iter *from)
*/
WARN_ON(bio_size(c->cur_from_user->bio) != 0);
bio_endio(c->cur_from_user->bio);
-bio_put(c->cur_from_user->bio);
/*
* We don't actually need to take the target lock here, as all
@@ -1227,7 +1225,6 @@ static int user_map(struct dm_target *ti, struct bio *bio)
return DM_MAPIO_REQUEUE;
}
-bio_get(bio);
entry->msg.type = bio_type_to_user_type(bio);
entry->msg.flags = bio_flags_to_user_flags(bio);
entry->msg.sector = bio->bi_iter.bi_sector;


@@ -501,7 +501,7 @@ static void ufs_mtk_init_va09_pwr_ctrl(struct ufs_hba *hba)
struct ufs_mtk_host *host = ufshcd_get_variant(hba);
host->reg_va09 = regulator_get(hba->dev, "va09");
-if (!host->reg_va09)
+if (IS_ERR(host->reg_va09))
dev_info(hba->dev, "failed to get va09");
else
host->caps |= UFS_MTK_CAP_VA09_PWR_CTRL;


@@ -2093,12 +2093,13 @@ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag)
lrbp->issue_time_stamp = ktime_get();
lrbp->compl_time_stamp = ktime_set(0, 0);
-ufshcd_vops_setup_xfer_req(hba, task_tag, (lrbp->cmd ? true : false));
trace_android_vh_ufs_send_command(hba, lrbp);
ufshcd_add_command_trace(hba, task_tag, "send");
ufshcd_clk_scaling_start_busy(hba);
if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
ufshcd_start_monitor(hba, lrbp);
+if (hba->vops && hba->vops->setup_xfer_req)
+hba->vops->setup_xfer_req(hba, task_tag, !!lrbp->cmd);
if (ufshcd_has_utrlcnr(hba)) {
set_bit(task_tag, &hba->outstanding_reqs);
ufshcd_writel(hba, 1 << task_tag,


@@ -1304,18 +1304,6 @@ static inline int ufshcd_vops_pwr_change_notify(struct ufs_hba *hba,
return -ENOTSUPP;
}
-static inline void ufshcd_vops_setup_xfer_req(struct ufs_hba *hba, int tag,
-bool is_scsi_cmd)
-{
-if (hba->vops && hba->vops->setup_xfer_req) {
-unsigned long flags;
-spin_lock_irqsave(hba->host->host_lock, flags);
-hba->vops->setup_xfer_req(hba, tag, is_scsi_cmd);
-spin_unlock_irqrestore(hba->host->host_lock, flags);
-}
-}
static inline void ufshcd_vops_setup_task_mgmt(struct ufs_hba *hba,
int tag, u8 tm_function)
{


@@ -906,18 +906,6 @@ static void uvc_function_unbind(struct usb_configuration *c,
uvcg_dbg(f, "done waiting with ret: %ld\n", wait_ret);
}
-/* If we know we're connected via v4l2, then there should be a cleanup
-* of the device from userspace either via UVC_EVENT_DISCONNECT or
-* though the video device removal uevent. Allow some time for the
-* application to close out before things get deleted.
-*/
-if (uvc->func_connected) {
-uvcg_dbg(f, "waiting for clean disconnect\n");
-wait_ret = wait_event_interruptible_timeout(uvc->func_connected_queue,
-uvc->func_connected == false, msecs_to_jiffies(500));
-uvcg_dbg(f, "done waiting with ret: %ld\n", wait_ret);
-}
device_remove_file(&uvc->vdev.dev, &dev_attr_function_name);
video_unregister_device(&uvc->vdev);
v4l2_device_unregister(&uvc->v4l2_dev);


@@ -85,9 +85,25 @@ static int tcpci_write16(struct tcpci *tcpci, unsigned int reg, u16 val)
static int tcpci_set_cc(struct tcpc_dev *tcpc, enum typec_cc_status cc)
{
struct tcpci *tcpci = tcpc_to_tcpci(tcpc);
bool vconn_pres;
enum typec_cc_polarity polarity = TYPEC_POLARITY_CC1;
unsigned int reg;
int ret;
ret = regmap_read(tcpci->regmap, TCPC_POWER_STATUS, &reg);
if (ret < 0)
return ret;
vconn_pres = !!(reg & TCPC_POWER_STATUS_VCONN_PRES);
if (vconn_pres) {
ret = regmap_read(tcpci->regmap, TCPC_TCPC_CTRL, &reg);
if (ret < 0)
return ret;
if (reg & TCPC_TCPC_CTRL_ORIENTATION)
polarity = TYPEC_POLARITY_CC2;
}
switch (cc) {
case TYPEC_CC_RA:
reg = (TCPC_ROLE_CTRL_CC_RA << TCPC_ROLE_CTRL_CC1_SHIFT) |
@@ -122,6 +138,16 @@ static int tcpci_set_cc(struct tcpc_dev *tcpc, enum typec_cc_status cc)
break;
}
if (vconn_pres) {
if (polarity == TYPEC_POLARITY_CC2) {
reg &= ~(TCPC_ROLE_CTRL_CC1_MASK << TCPC_ROLE_CTRL_CC1_SHIFT);
reg |= (TCPC_ROLE_CTRL_CC_OPEN << TCPC_ROLE_CTRL_CC1_SHIFT);
} else {
reg &= ~(TCPC_ROLE_CTRL_CC2_MASK << TCPC_ROLE_CTRL_CC2_SHIFT);
reg |= (TCPC_ROLE_CTRL_CC_OPEN << TCPC_ROLE_CTRL_CC2_SHIFT);
}
}
ret = regmap_write(tcpci->regmap, TCPC_ROLE_CTRL, reg);
if (ret < 0)
return ret;


@@ -98,6 +98,7 @@
#define TCPC_POWER_STATUS_SOURCING_VBUS BIT(4)
#define TCPC_POWER_STATUS_VBUS_DET BIT(3)
#define TCPC_POWER_STATUS_VBUS_PRES BIT(2)
#define TCPC_POWER_STATUS_VCONN_PRES BIT(1)
#define TCPC_POWER_STATUS_SINKING_VBUS BIT(0)
#define TCPC_FAULT_STATUS 0x1f


@@ -5354,6 +5354,10 @@ static void _tcpm_pd_vbus_vsafe0v(struct tcpm_port *port)
case PR_SWAP_SNK_SRC_SOURCE_ON:
/* Do nothing, vsafe0v is expected during transition */
break;
case SNK_ATTACH_WAIT:
case SNK_DEBOUNCED:
/*Do nothing, still waiting for VSAFE5V for connect */
break;
default:
if (port->pwr_role == TYPEC_SINK && port->auto_vbus_discharge_enabled)
tcpm_set_state(port, SNK_UNATTACHED, 0);


@@ -381,7 +381,7 @@ pxa3xx_gcu_write(struct file *file, const char *buff,
struct pxa3xx_gcu_batch *buffer;
struct pxa3xx_gcu_priv *priv = to_pxa3xx_gcu_priv(file);
-int words = count / 4;
+size_t words = count / 4;
/* Does not need to be atomic. There's a lock in user space,
* but anyhow, this is just for statistics. */


@@ -1469,6 +1469,7 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
if (cluster_may_compress(cc)) {
err = f2fs_compress_pages(cc);
if (err == -EAGAIN) {
add_compr_block_stat(cc->inode, cc->cluster_size);
goto write;
} else if (err) {
f2fs_put_rpages_wbc(cc, wbc, true, 1);


@@ -618,6 +618,8 @@ static int f2fs_unlink(struct inode *dir, struct dentry *dentry)
goto fail;
}
f2fs_delete_entry(de, page, dir, inode);
+f2fs_unlock_op(sbi);
#ifdef CONFIG_UNICODE
/* VFS negative dentries are incompatible with Encoding and
* Case-insensitiveness. Eventually we'll want avoid
@@ -628,8 +630,6 @@
if (IS_CASEFOLDED(dir))
d_invalidate(dentry);
#endif
-f2fs_unlock_op(sbi);
if (IS_DIRSYNC(dir))
f2fs_sync_fs(sbi->sb, 1);
fail:


@@ -1417,6 +1417,11 @@ static inline unsigned int skb_end_offset(const struct sk_buff *skb)
{
return skb->end;
}
static inline void skb_set_end_offset(struct sk_buff *skb, unsigned int offset)
{
skb->end = offset;
}
#else
static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)
{
@@ -1427,6 +1432,11 @@ static inline unsigned int skb_end_offset(const struct sk_buff *skb)
{
return skb->end - skb->head;
}
static inline void skb_set_end_offset(struct sk_buff *skb, unsigned int offset)
{
skb->end = skb->head + offset;
}
#endif
/* Internal */
@@ -1646,19 +1656,19 @@ static inline int skb_unclone(struct sk_buff *skb, gfp_t pri)
return 0;
}
-/* This variant of skb_unclone() makes sure skb->truesize is not changed */
+/* This variant of skb_unclone() makes sure skb->truesize
+* and skb_end_offset() are not changed, whenever a new skb->head is needed.
+*
+* Indeed there is no guarantee that ksize(kmalloc(X)) == ksize(kmalloc(X))
+* when various debugging features are in place.
+*/
+int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri);
static inline int skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
{
might_sleep_if(gfpflags_allow_blocking(pri));
-if (skb_cloned(skb)) {
-unsigned int save = skb->truesize;
-int res;
-res = pskb_expand_head(skb, 0, 0, pri);
-skb->truesize = save;
-return res;
-}
+if (skb_cloned(skb))
+return __skb_unclone_keeptruesize(skb, pri);
return 0;
}
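
A userspace analogy for the ksize() caveat in the new comment, with malloc_usable_size() standing in for ksize(); this illustrates the idea, not the kernel mechanism:

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* The usable size of an allocation is >= the requested size and can
	 * differ when debugging allocators are in play; skb->end is derived
	 * from that size, hence the helper that restores it explicitly. */
	void *a = malloc(100), *b = malloc(100);

	printf("request 100 -> usable %zu and %zu\n",
	       malloc_usable_size(a), malloc_usable_size(b));
	free(a);
	free(b);
	return 0;
}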


@@ -426,7 +426,7 @@ struct sock {
#ifdef CONFIG_XFRM
struct xfrm_policy __rcu *sk_policy[2];
#endif
-struct dst_entry *sk_rx_dst;
+struct dst_entry __rcu *sk_rx_dst;
struct dst_entry __rcu *sk_dst_cache;
atomic_t sk_omem_alloc;
int sk_sndbuf;
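
The networking hunks that follow all apply one accessor discipline to the newly annotated field. A minimal kernel-style sketch of the pattern (helper names are hypothetical; the constructs are the ones used in the diff):

#include <net/dst.h>
#include <net/sock.h>

static struct dst_entry *rx_dst_peek(struct sock *sk)
{
	return rcu_dereference(sk->sk_rx_dst);		/* lockless reader */
}

static struct dst_entry *rx_dst_locked(struct sock *sk)
{
	return rcu_dereference_protected(sk->sk_rx_dst,
					 lockdep_sock_is_held(sk));
}

static void rx_dst_set(struct sock *sk, struct dst_entry *dst)
{
	rcu_assign_pointer(sk->sk_rx_dst, dst);		/* publish */
}

static void rx_dst_clear(struct sock *sk)
{
	/* swap out and release in one step, as tcp_disconnect() now does */
	dst_release(xchg((__force struct dst_entry **)&sk->sk_rx_dst, NULL));
}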


@@ -48,7 +48,9 @@
{(unsigned long)__GFP_WRITE, "__GFP_WRITE"}, \
{(unsigned long)__GFP_RECLAIM, "__GFP_RECLAIM"}, \
{(unsigned long)__GFP_DIRECT_RECLAIM, "__GFP_DIRECT_RECLAIM"},\
-{(unsigned long)__GFP_KSWAPD_RECLAIM, "__GFP_KSWAPD_RECLAIM"}\
+{(unsigned long)__GFP_KSWAPD_RECLAIM, "__GFP_KSWAPD_RECLAIM"},\
+{(unsigned long)__GFP_ZEROTAGS, "__GFP_ZEROTAGS"}, \
+{(unsigned long)__GFP_SKIP_KASAN_POISON,"__GFP_SKIP_KASAN_POISON"}\
#define show_gfp_flags(flags) \
(flags) ? __print_flags(flags, "|", \


@@ -11,13 +11,7 @@
#include <trace/hooks/vendor_hooks.h>
#ifdef __GENKSYMS__
struct dma_buf_sysfs_entry;
#else
/* struct dma_buf_sysfs_entry */
#include <linux/dma-buf.h>
#endif
DECLARE_RESTRICTED_HOOK(android_rvh_dma_buf_stats_teardown,
TP_PROTO(struct dma_buf_sysfs_entry *sysfs_entry, bool *skip_sysfs_release),
TP_ARGS(sysfs_entry, skip_sysfs_release), 1);


@@ -193,6 +193,9 @@ DECLARE_HOOK(android_vh_subpage_dma_contig_alloc,
DECLARE_HOOK(android_vh_ra_tuning_max_page,
TP_PROTO(struct readahead_control *ractl, unsigned long *max_page),
TP_ARGS(ractl, max_page));
DECLARE_RESTRICTED_HOOK(android_rvh_handle_pte_fault_end,
TP_PROTO(struct vm_fault *vmf, unsigned long highest_memmap_pfn),
TP_ARGS(vmf, highest_memmap_pfn), 1);
DECLARE_HOOK(android_vh_handle_pte_fault_end,
TP_PROTO(struct vm_fault *vmf, unsigned long highest_memmap_pfn),
TP_ARGS(vmf, highest_memmap_pfn));
@@ -223,16 +226,30 @@ DECLARE_HOOK(android_vh_count_swpout_vm_event,
DECLARE_HOOK(android_vh_swap_slot_cache_active,
TP_PROTO(bool swap_slot_cache_active),
TP_ARGS(swap_slot_cache_active));
DECLARE_RESTRICTED_HOOK(android_rvh_drain_slots_cache_cpu,
TP_PROTO(struct swap_slots_cache *cache, unsigned int type,
bool free_slots, bool *skip),
TP_ARGS(cache, type, free_slots, skip), 1);
DECLARE_HOOK(android_vh_drain_slots_cache_cpu,
TP_PROTO(struct swap_slots_cache *cache, unsigned int type,
bool free_slots, bool *skip),
TP_ARGS(cache, type, free_slots, skip));
DECLARE_RESTRICTED_HOOK(android_rvh_alloc_swap_slot_cache,
TP_PROTO(struct swap_slots_cache *cache, int *ret, bool *skip),
TP_ARGS(cache, ret, skip), 1);
DECLARE_HOOK(android_vh_alloc_swap_slot_cache,
TP_PROTO(struct swap_slots_cache *cache, int *ret, bool *skip),
TP_ARGS(cache, ret, skip));
DECLARE_RESTRICTED_HOOK(android_rvh_free_swap_slot,
TP_PROTO(swp_entry_t entry, struct swap_slots_cache *cache, bool *skip),
TP_ARGS(entry, cache, skip), 1);
DECLARE_HOOK(android_vh_free_swap_slot,
TP_PROTO(swp_entry_t entry, struct swap_slots_cache *cache, bool *skip),
TP_ARGS(entry, cache, skip));
DECLARE_RESTRICTED_HOOK(android_rvh_get_swap_page,
TP_PROTO(struct page *page, swp_entry_t *entry,
struct swap_slots_cache *cache, bool *found),
TP_ARGS(page, entry, cache, found), 1);
DECLARE_HOOK(android_vh_get_swap_page,
TP_PROTO(struct page *page, swp_entry_t *entry,
struct swap_slots_cache *cache, bool *found),
@@ -255,6 +272,9 @@ DECLARE_HOOK(android_vh_init_swap_info_struct,
DECLARE_HOOK(android_vh_si_swapinfo,
TP_PROTO(struct swap_info_struct *si, bool *skip),
TP_ARGS(si, skip));
DECLARE_RESTRICTED_HOOK(android_rvh_alloc_si,
TP_PROTO(struct swap_info_struct **p, bool *skip),
TP_ARGS(p, skip), 1);
DECLARE_HOOK(android_vh_alloc_si,
TP_PROTO(struct swap_info_struct **p, bool *skip),
TP_ARGS(p, skip));


@@ -59,6 +59,7 @@ choice
config KASAN_GENERIC
bool "Generic mode"
depends on HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC
depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS
select SLUB_DEBUG if SLUB
select CONSTRUCTORS
help
@@ -79,6 +80,7 @@ config KASAN_GENERIC
config KASAN_SW_TAGS
bool "Software tag-based mode"
depends on HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS
depends on CC_HAS_WORKING_NOSANITIZE_ADDRESS
select SLUB_DEBUG if SLUB
select CONSTRUCTORS
help


@@ -53,6 +53,10 @@
#include <linux/string_helpers.h>
#include "kstrtox.h"
+/* Disable pointer hashing if requested */
+bool no_hash_pointers __ro_after_init;
+EXPORT_SYMBOL_GPL(no_hash_pointers);
static unsigned long long simple_strntoull(const char *startp, size_t max_chars,
char **endp, unsigned int base)
{
@@ -849,6 +853,19 @@ static char *ptr_to_id(char *buf, char *end, const void *ptr,
return pointer_string(buf, end, (const void *)hashval, spec);
}
static char *default_pointer(char *buf, char *end, const void *ptr,
struct printf_spec spec)
{
/*
* default is to _not_ leak addresses, so hash before printing,
* unless no_hash_pointers is specified on the command line.
*/
if (unlikely(no_hash_pointers))
return pointer_string(buf, end, ptr, spec);
return ptr_to_id(buf, end, ptr, spec);
}
int kptr_restrict __read_mostly;
static noinline_for_stack
@@ -858,7 +875,7 @@ char *restricted_pointer(char *buf, char *end, const void *ptr,
switch (kptr_restrict) {
case 0:
/* Handle as %p, hash and do _not_ leak addresses. */
-return ptr_to_id(buf, end, ptr, spec);
+return default_pointer(buf, end, ptr, spec);
case 1: {
const struct cred *cred;
@@ -2118,10 +2135,6 @@ char *fwnode_string(char *buf, char *end, struct fwnode_handle *fwnode,
return widen_string(buf, buf - buf_start, end, spec);
}
-/* Disable pointer hashing if requested */
-bool no_hash_pointers __ro_after_init;
-EXPORT_SYMBOL_GPL(no_hash_pointers);
static int __init no_hash_pointers_enable(char *str)
{
no_hash_pointers = true;
@@ -2339,7 +2352,7 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
case 'e':
/* %pe with a non-ERR_PTR gets treated as plain %p */
if (!IS_ERR(ptr))
-break;
+return default_pointer(buf, end, ptr, spec);
return err_ptr(buf, end, ptr, spec);
case 'u':
case 'k':
@@ -2349,16 +2362,9 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
default:
return error_string(buf, end, "(einval)", spec);
}
+default:
+return default_pointer(buf, end, ptr, spec);
}
-/*
-* default is to _not_ leak addresses, so hash before printing,
-* unless no_hash_pointers is specified on the command line.
-*/
-if (unlikely(no_hash_pointers))
-return pointer_string(buf, end, ptr, spec);
-else
-return ptr_to_id(buf, end, ptr, spec);
}
/*


@@ -38,6 +38,7 @@
struct madvise_walk_private {
struct mmu_gather *tlb;
bool pageout;
bool can_pageout_file;
};
/*
@@ -313,6 +314,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
struct madvise_walk_private *private = walk->private;
struct mmu_gather *tlb = private->tlb;
bool pageout = private->pageout;
bool pageout_anon_only = pageout && !private->can_pageout_file;
struct mm_struct *mm = tlb->mm;
struct vm_area_struct *vma = walk->vma;
pte_t *orig_pte, *pte, ptent;
@@ -351,6 +353,9 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
if (page_mapcount(page) != 1)
goto huge_unlock;
if (pageout_anon_only && !PageAnon(page))
goto huge_unlock;
if (next - addr != HPAGE_PMD_SIZE) {
int err;
@@ -419,6 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
if (PageTransCompound(page)) {
if (page_mapcount(page) != 1)
break;
if (pageout_anon_only && !PageAnon(page))
break;
get_page(page);
if (!trylock_page(page)) {
put_page(page);
@@ -443,6 +450,9 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
if (!allow_shared && page_mapcount(page) != 1)
continue;
if (pageout_anon_only && !PageAnon(page))
continue;
VM_BUG_ON_PAGE(PageTransCompound(page), page);
if (pte_young(ptent)) {
@@ -524,11 +534,13 @@ static long madvise_cold(struct vm_area_struct *vma,
static void madvise_pageout_page_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
-unsigned long addr, unsigned long end)
+unsigned long addr, unsigned long end,
+bool can_pageout_file)
{
struct madvise_walk_private walk_private = {
.pageout = true,
.tlb = tlb,
.can_pageout_file = can_pageout_file,
};
vm_write_begin(vma);
@@ -538,10 +550,8 @@ static void madvise_pageout_page_range(struct mmu_gather *tlb,
vm_write_end(vma);
}
-static inline bool can_do_pageout(struct vm_area_struct *vma)
+static inline bool can_do_file_pageout(struct vm_area_struct *vma)
{
-if (vma_is_anonymous(vma))
-return true;
if (!vma->vm_file)
return false;
/*
@@ -560,17 +570,23 @@ static long madvise_pageout(struct vm_area_struct *vma,
{
struct mm_struct *mm = vma->vm_mm;
struct mmu_gather tlb;
bool can_pageout_file;
*prev = vma;
if (!can_madv_lru_vma(vma))
return -EINVAL;
-if (!can_do_pageout(vma))
-return 0;
/*
* If the VMA belongs to a private file mapping, there can be private
* dirty pages which can be paged out if even this process is neither
* owner nor write capable of the file. Cache the file access check
* here and use it later during page walk.
*/
can_pageout_file = can_do_file_pageout(vma);
lru_add_drain();
tlb_gather_mmu(&tlb, mm, start_addr, end_addr);
-madvise_pageout_page_range(&tlb, vma, start_addr, end_addr);
+madvise_pageout_page_range(&tlb, vma, start_addr, end_addr, can_pageout_file);
tlb_finish_mmu(&tlb, start_addr, end_addr);
return 0;


@@ -4777,6 +4777,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
if (vmf->flags & FAULT_FLAG_WRITE)
flush_tlb_fix_spurious_fault(vmf->vma, vmf->address);
}
trace_android_rvh_handle_pte_fault_end(vmf, highest_memmap_pfn);
trace_android_vh_handle_pte_fault_end(vmf, highest_memmap_pfn);
unlock:
pte_unmap_unlock(vmf->pte, vmf->ptl);


@@ -2765,11 +2765,28 @@ static void unmap_region(struct mm_struct *mm,
{
struct vm_area_struct *next = vma_next(mm, prev);
struct mmu_gather tlb;
struct vm_area_struct *cur_vma;
lru_add_drain();
tlb_gather_mmu(&tlb, mm, start, end);
update_hiwater_rss(mm);
unmap_vmas(&tlb, vma, start, end);
/*
* Ensure we have no stale TLB entries by the time this mapping is
* removed from the rmap.
* Note that we don't have to worry about nested flushes here because
* we're holding the mm semaphore for removing the mapping - so any
* concurrent flush in this region has to be coming through the rmap,
* and we synchronize against that using the rmap lock.
*/
for (cur_vma = vma; cur_vma; cur_vma = cur_vma->vm_next) {
if ((cur_vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) != 0) {
tlb_flush_mmu(&tlb);
break;
}
}
free_pgtables(&tlb, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
next ? next->vm_start : USER_PGTABLES_CEILING);
tlb_finish_mmu(&tlb, start, end);
@@ -3326,6 +3343,7 @@ void exit_mmap(struct mm_struct *mm)
vma = remove_vma(vma);
cond_resched();
}
mm->mmap = NULL;
mmap_write_unlock(mm);
vm_unacct_memory(nr_accounted);
}


@@ -6367,7 +6367,6 @@ static void __meminit zone_init_free_lists(struct zone *zone)
}
}
-#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
/*
* Only struct pages that correspond to ranges defined by memblock.memory
* are zeroed and initialized by going through __init_single_page() during
@@ -6412,13 +6411,6 @@ static void __init init_unavailable_range(unsigned long spfn,
pr_info("On node %d, zone %s: %lld pages in unavailable ranges",
node, zone_names[zone], pgcnt);
}
-#else
-static inline void init_unavailable_range(unsigned long spfn,
-unsigned long epfn,
-int zone, int node)
-{
-}
-#endif
static void __init memmap_init_zone_range(struct zone *zone,
unsigned long start_pfn,


@@ -3428,6 +3428,7 @@ static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp,
if (is_kfence_address(objp)) {
kmemleak_free_recursive(objp, cachep->flags);
memcg_slab_free_hook(cachep, &objp, 1);
__kfence_free(objp);
return;
}


@@ -133,6 +133,8 @@ static int alloc_swap_slot_cache(unsigned int cpu)
* as kvzalloc could trigger reclaim and get_swap_page,
* which can lock swap_slots_cache_mutex.
*/
trace_android_rvh_alloc_swap_slot_cache(&per_cpu(swp_slots, cpu),
&ret, &skip);
trace_android_vh_alloc_swap_slot_cache(&per_cpu(swp_slots, cpu),
&ret, &skip);
if (skip)
@@ -190,6 +192,8 @@ static void drain_slots_cache_cpu(unsigned int cpu, unsigned int type,
bool skip = false;
cache = &per_cpu(swp_slots, cpu);
trace_android_rvh_drain_slots_cache_cpu(cache, type,
free_slots, &skip);
trace_android_vh_drain_slots_cache_cpu(cache, type,
free_slots, &skip);
if (skip)
@@ -298,6 +302,7 @@ int free_swap_slot(swp_entry_t entry)
bool skip = false;
cache = raw_cpu_ptr(&swp_slots);
trace_android_rvh_free_swap_slot(entry, cache, &skip);
trace_android_vh_free_swap_slot(entry, cache, &skip);
if (skip)
return 0;
@@ -335,6 +340,7 @@ swp_entry_t get_swap_page(struct page *page)
bool found = false;
entry.val = 0;
trace_android_rvh_get_swap_page(page, &entry, raw_cpu_ptr(&swp_slots), &found);
trace_android_vh_get_swap_page(page, &entry, raw_cpu_ptr(&swp_slots), &found);
if (found)
goto out;


@@ -2908,6 +2908,7 @@ static struct swap_info_struct *alloc_swap_info(void)
int i;
bool skip = false;
trace_android_rvh_alloc_si(&p, &skip);
trace_android_vh_alloc_si(&p, &skip);
if (!skip)
p = kvzalloc(struct_size(p, avail_lists, nr_node_ids), GFP_KERNEL);


@@ -1352,8 +1352,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
if (unlikely(PageTransHuge(page)))
flags |= TTU_SPLIT_HUGE_PMD;
-trace_android_vh_page_trylock_set(page);
+if (!ignore_references)
+trace_android_vh_page_trylock_set(page);
if (!try_to_unmap(page, flags)) {
stat->nr_unmap_fail += nr_pages;
if (!was_swapbacked && PageSwapBacked(page))


@@ -1676,11 +1676,10 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
skb->head = data;
skb->head_frag = 0;
skb->data += off;
+skb_set_end_offset(skb, size);
#ifdef NET_SKBUFF_DATA_USES_OFFSET
-skb->end = size;
off = nhead;
-#else
-skb->end = skb->head + size;
#endif
skb->tail += off;
skb_headers_offset_update(skb, nhead);
@@ -1728,6 +1727,38 @@ struct sk_buff *skb_realloc_headroom(struct sk_buff *skb, unsigned int headroom)
}
EXPORT_SYMBOL(skb_realloc_headroom);
int __skb_unclone_keeptruesize(struct sk_buff *skb, gfp_t pri)
{
unsigned int saved_end_offset, saved_truesize;
struct skb_shared_info *shinfo;
int res;
saved_end_offset = skb_end_offset(skb);
saved_truesize = skb->truesize;
res = pskb_expand_head(skb, 0, 0, pri);
if (res)
return res;
skb->truesize = saved_truesize;
if (likely(skb_end_offset(skb) == saved_end_offset))
return 0;
shinfo = skb_shinfo(skb);
/* We are about to change back skb->end,
* we need to move skb_shinfo() to its new location.
*/
memmove(skb->head + saved_end_offset,
shinfo,
offsetof(struct skb_shared_info, frags[shinfo->nr_frags]));
skb_set_end_offset(skb, saved_end_offset);
return 0;
}
/**
* skb_copy_expand - copy and expand sk_buff
* @skb: buffer to copy
@@ -5975,11 +6006,7 @@ static int pskb_carve_inside_header(struct sk_buff *skb, const u32 off,
skb->head = data;
skb->data = data;
skb->head_frag = 0;
-#ifdef NET_SKBUFF_DATA_USES_OFFSET
-skb->end = size;
-#else
-skb->end = skb->head + size;
-#endif
+skb_set_end_offset(skb, size);
skb_set_tail_pointer(skb, skb_headlen(skb));
skb_headers_offset_update(skb, 0);
skb->cloned = 0;
@@ -6117,11 +6144,7 @@ static int pskb_carve_inside_nonlinear(struct sk_buff *skb, const u32 off,
skb->head = data;
skb->head_frag = 0;
skb->data = data;
-#ifdef NET_SKBUFF_DATA_USES_OFFSET
-skb->end = size;
-#else
-skb->end = skb->head + size;
-#endif
+skb_set_end_offset(skb, size);
skb_reset_tail_pointer(skb);
skb_headers_offset_update(skb, 0);
skb->cloned = 0;


@@ -158,7 +158,7 @@ void inet_sock_destruct(struct sock *sk)
kfree(rcu_dereference_protected(inet->inet_opt, 1));
dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
-dst_release(sk->sk_rx_dst);
+dst_release(rcu_dereference_protected(sk->sk_rx_dst, 1));
sk_refcnt_debug_dec(sk);
}
EXPORT_SYMBOL(inet_sock_destruct);


@@ -2793,8 +2793,7 @@ int tcp_disconnect(struct sock *sk, int flags)
icsk->icsk_ack.rcv_mss = TCP_MIN_MSS;
memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));
__sk_dst_reset(sk);
-dst_release(sk->sk_rx_dst);
-sk->sk_rx_dst = NULL;
+dst_release(xchg((__force struct dst_entry **)&sk->sk_rx_dst, NULL));
tcp_saved_syn_free(tp);
tp->compressed_ack = 0;
tp->segs_in = 0;


@@ -5745,7 +5745,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
trace_tcp_probe(sk, skb);
tcp_mstamp_refresh(tp);
-if (unlikely(!sk->sk_rx_dst))
+if (unlikely(!rcu_access_pointer(sk->sk_rx_dst)))
inet_csk(sk)->icsk_af_ops->sk_rx_dst_set(sk, skb);
/*
* Header prediction.


@@ -1670,15 +1670,18 @@ int tcp_v4_do_rcv(struct sock *sk, struct sk_buff *skb)
struct sock *rsk;
if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */
-struct dst_entry *dst = sk->sk_rx_dst;
+struct dst_entry *dst;
+dst = rcu_dereference_protected(sk->sk_rx_dst,
+lockdep_sock_is_held(sk));
sock_rps_save_rxhash(sk, skb);
sk_mark_napi_id(sk, skb);
if (dst) {
if (inet_sk(sk)->rx_dst_ifindex != skb->skb_iif ||
!dst->ops->check(dst, 0)) {
+RCU_INIT_POINTER(sk->sk_rx_dst, NULL);
dst_release(dst);
-sk->sk_rx_dst = NULL;
}
}
tcp_rcv_established(sk, skb);
@@ -1753,7 +1756,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
skb->sk = sk;
skb->destructor = sock_edemux;
if (sk_fullsock(sk)) {
-struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst);
+struct dst_entry *dst = rcu_dereference(sk->sk_rx_dst);
if (dst)
dst = dst_check(dst, 0);
@@ -2162,7 +2165,7 @@ void inet_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
struct dst_entry *dst = skb_dst(skb);
if (dst && dst_hold_safe(dst)) {
-sk->sk_rx_dst = dst;
+rcu_assign_pointer(sk->sk_rx_dst, dst);
inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
}
}


@@ -2196,7 +2196,7 @@ bool udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst)
struct dst_entry *old;
if (dst_hold_safe(dst)) {
-old = xchg(&sk->sk_rx_dst, dst);
+old = xchg((__force struct dst_entry **)&sk->sk_rx_dst, dst);
dst_release(old);
return old != dst;
}
@@ -2386,7 +2386,7 @@ int __udp4_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
struct dst_entry *dst = skb_dst(skb);
int ret;
-if (unlikely(sk->sk_rx_dst != dst))
+if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst))
udp_sk_rx_dst_set(sk, dst);
ret = udp_unicast_rcv_skb(sk, skb, uh);
@@ -2545,7 +2545,7 @@ int udp_v4_early_demux(struct sk_buff *skb)
skb->sk = sk;
skb->destructor = sock_efree;
-dst = READ_ONCE(sk->sk_rx_dst);
+dst = rcu_dereference(sk->sk_rx_dst);
if (dst)
dst = dst_check(dst, 0);


@@ -107,7 +107,7 @@ static void inet6_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb)
if (dst && dst_hold_safe(dst)) {
const struct rt6_info *rt = (const struct rt6_info *)dst;
-sk->sk_rx_dst = dst;
+rcu_assign_pointer(sk->sk_rx_dst, dst);
inet_sk(sk)->rx_dst_ifindex = skb->skb_iif;
tcp_inet6_sk(sk)->rx_dst_cookie = rt6_get_cookie(rt);
}
@@ -1482,15 +1482,18 @@ static int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
opt_skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC));
if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */
-struct dst_entry *dst = sk->sk_rx_dst;
+struct dst_entry *dst;
+dst = rcu_dereference_protected(sk->sk_rx_dst,
+lockdep_sock_is_held(sk));
sock_rps_save_rxhash(sk, skb);
sk_mark_napi_id(sk, skb);
if (dst) {
if (inet_sk(sk)->rx_dst_ifindex != skb->skb_iif ||
dst->ops->check(dst, np->rx_dst_cookie) == NULL) {
+RCU_INIT_POINTER(sk->sk_rx_dst, NULL);
dst_release(dst);
-sk->sk_rx_dst = NULL;
}
}
@@ -1842,7 +1845,7 @@ INDIRECT_CALLABLE_SCOPE void tcp_v6_early_demux(struct sk_buff *skb)
skb->sk = sk;
skb->destructor = sock_edemux;
if (sk_fullsock(sk)) {
-struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst);
+struct dst_entry *dst = rcu_dereference(sk->sk_rx_dst);
if (dst)
dst = dst_check(dst, tcp_inet6_sk(sk)->rx_dst_cookie);


@@ -941,7 +941,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
struct dst_entry *dst = skb_dst(skb);
int ret;
-if (unlikely(sk->sk_rx_dst != dst))
+if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst))
udp6_sk_rx_dst_set(sk, dst);
if (!uh->check && !udp_sk(sk)->no_check6_rx) {
@@ -1055,7 +1055,7 @@ INDIRECT_CALLABLE_SCOPE void udp_v6_early_demux(struct sk_buff *skb)
skb->sk = sk;
skb->destructor = sock_efree;
-dst = READ_ONCE(sk->sk_rx_dst);
+dst = rcu_dereference(sk->sk_rx_dst);
if (dst)
dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);


@@ -1701,9 +1701,12 @@ static int pfkey_register(struct sock *sk, struct sk_buff *skb, const struct sad
pfk->registered |= (1<<hdr->sadb_msg_satype);
}
+mutex_lock(&pfkey_mutex);
xfrm_probe_algs();
supp_skb = compose_sadb_supported(hdr, GFP_KERNEL | __GFP_ZERO);
+mutex_unlock(&pfkey_mutex);
if (!supp_skb) {
if (hdr->sadb_msg_satype != SADB_SATYPE_UNSPEC)
pfk->registered &= ~(1<<hdr->sadb_msg_satype);


@@ -393,7 +393,7 @@ ifeq ($(CONFIG_LTO_CLANG) $(CONFIG_MODVERSIONS),y y)
cmd_update_lto_symversions = \
rm -f $@.symversions \
$(foreach n, $(filter-out FORCE,$^), \
-$(if $(wildcard $(n).symversions), \
+$(if $(shell test -s $(n).symversions && echo y), \
; cat $(n).symversions >> $@.symversions))
else
cmd_update_lto_symversions = echo >/dev/null


@@ -34,9 +34,6 @@ case "$KBUILD_VERBOSE" in
;;
esac
-# We need access to CONFIG_ symbols
-. include/config/auto.conf
# Generate a new symbol list file
$CONFIG_SHELL $srctree/scripts/gen_autoksyms.sh "$new_ksyms_file"